Dec 13 14:06:43.064344 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Dec 13 11:52:04 -00 2024
Dec 13 14:06:43.064403 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=7e85177266c631d417c820ba09a3204c451316d6fcf9e4e21017322aee9df3f4
Dec 13 14:06:43.064419 kernel: BIOS-provided physical RAM map:
Dec 13 14:06:43.064437 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 13 14:06:43.064448 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 13 14:06:43.064459 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 13 14:06:43.064472 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Dec 13 14:06:43.064484 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Dec 13 14:06:43.064495 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Dec 13 14:06:43.064506 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Dec 13 14:06:43.064517 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 13 14:06:43.064528 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 13 14:06:43.064545 kernel: NX (Execute Disable) protection: active
Dec 13 14:06:43.064557 kernel: APIC: Static calls initialized
Dec 13 14:06:43.064570 kernel: SMBIOS 2.8 present.
Dec 13 14:06:43.064582 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
Dec 13 14:06:43.064595 kernel: Hypervisor detected: KVM
Dec 13 14:06:43.064611 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 13 14:06:43.064624 kernel: kvm-clock: using sched offset of 4521628617 cycles
Dec 13 14:06:43.064637 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 13 14:06:43.064650 kernel: tsc: Detected 2499.998 MHz processor
Dec 13 14:06:43.064662 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 14:06:43.064675 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 14:06:43.064687 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Dec 13 14:06:43.064700 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec 13 14:06:43.064712 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 14:06:43.064730 kernel: Using GB pages for direct mapping
Dec 13 14:06:43.064743 kernel: ACPI: Early table checksum verification disabled
Dec 13 14:06:43.064755 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Dec 13 14:06:43.064768 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 14:06:43.064780 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 14:06:43.064792 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 14:06:43.064805 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Dec 13 14:06:43.064817 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 14:06:43.064830 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 14:06:43.064847 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 14:06:43.064860 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 14:06:43.064872 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Dec 13 14:06:43.064885 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Dec 13 14:06:43.064897 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Dec 13 14:06:43.064916 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Dec 13 14:06:43.064929 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Dec 13 14:06:43.064947 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Dec 13 14:06:43.064960 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Dec 13 14:06:43.064973 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Dec 13 14:06:43.064999 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Dec 13 14:06:43.065012 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Dec 13 14:06:43.065025 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0
Dec 13 14:06:43.065038 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Dec 13 14:06:43.065050 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0
Dec 13 14:06:43.065069 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Dec 13 14:06:43.065082 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0
Dec 13 14:06:43.065095 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Dec 13 14:06:43.065108 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0
Dec 13 14:06:43.065121 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Dec 13 14:06:43.065133 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0
Dec 13 14:06:43.065146 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Dec 13 14:06:43.065159 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0
Dec 13 14:06:43.065172 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Dec 13 14:06:43.065184 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0
Dec 13 14:06:43.065202 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Dec 13 14:06:43.065215 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Dec 13 14:06:43.065251 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Dec 13 14:06:43.065264 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff]
Dec 13 14:06:43.065278 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff]
Dec 13 14:06:43.065291 kernel: Zone ranges:
Dec 13 14:06:43.065304 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 14:06:43.065317 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Dec 13 14:06:43.065329 kernel: Normal empty
Dec 13 14:06:43.065349 kernel: Movable zone start for each node
Dec 13 14:06:43.065362 kernel: Early memory node ranges
Dec 13 14:06:43.065375 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Dec 13 14:06:43.065388 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Dec 13 14:06:43.065409 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Dec 13 14:06:43.065433 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 14:06:43.065459 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 13 14:06:43.065481 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Dec 13 14:06:43.065494 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 13 14:06:43.065513 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 13 14:06:43.065527 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 13 14:06:43.065540 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 13 14:06:43.065553 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 13 14:06:43.065566 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 14:06:43.065579 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 13 14:06:43.065591 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 13 14:06:43.065604 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 14:06:43.065617 kernel: TSC deadline timer available
Dec 13 14:06:43.065635 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs
Dec 13 14:06:43.065648 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 13 14:06:43.065661 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Dec 13 14:06:43.065674 kernel: Booting paravirtualized kernel on KVM
Dec 13 14:06:43.065687 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 14:06:43.065700 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Dec 13 14:06:43.065713 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144
Dec 13 14:06:43.065726 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152
Dec 13 14:06:43.065739 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Dec 13 14:06:43.065757 kernel: kvm-guest: PV spinlocks enabled
Dec 13 14:06:43.065770 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 13 14:06:43.065785 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=7e85177266c631d417c820ba09a3204c451316d6fcf9e4e21017322aee9df3f4
Dec 13 14:06:43.065798 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 14:06:43.065811 kernel: random: crng init done
Dec 13 14:06:43.065824 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 14:06:43.065836 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 13 14:06:43.065849 kernel: Fallback order for Node 0: 0
Dec 13 14:06:43.065867 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804
Dec 13 14:06:43.065880 kernel: Policy zone: DMA32
Dec 13 14:06:43.065893 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 14:06:43.065906 kernel: software IO TLB: area num 16.
Dec 13 14:06:43.065919 kernel: Memory: 1899484K/2096616K available (14336K kernel code, 2299K rwdata, 22800K rodata, 43328K init, 1748K bss, 196872K reserved, 0K cma-reserved)
Dec 13 14:06:43.065932 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Dec 13 14:06:43.065945 kernel: Kernel/User page tables isolation: enabled
Dec 13 14:06:43.065958 kernel: ftrace: allocating 37874 entries in 148 pages
Dec 13 14:06:43.065971 kernel: ftrace: allocated 148 pages with 3 groups
Dec 13 14:06:43.066002 kernel: Dynamic Preempt: voluntary
Dec 13 14:06:43.066016 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 14:06:43.066030 kernel: rcu: RCU event tracing is enabled.
Dec 13 14:06:43.066043 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Dec 13 14:06:43.066057 kernel: Trampoline variant of Tasks RCU enabled.
Dec 13 14:06:43.066084 kernel: Rude variant of Tasks RCU enabled.
Dec 13 14:06:43.066102 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 14:06:43.066116 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 14:06:43.066129 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Dec 13 14:06:43.066143 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Dec 13 14:06:43.066156 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 13 14:06:43.066169 kernel: Console: colour VGA+ 80x25
Dec 13 14:06:43.066188 kernel: printk: console [tty0] enabled
Dec 13 14:06:43.066202 kernel: printk: console [ttyS0] enabled
Dec 13 14:06:43.066215 kernel: ACPI: Core revision 20230628
Dec 13 14:06:43.066243 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 14:06:43.066256 kernel: x2apic enabled
Dec 13 14:06:43.066276 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 13 14:06:43.066290 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Dec 13 14:06:43.066304 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Dec 13 14:06:43.066318 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 13 14:06:43.066331 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Dec 13 14:06:43.066344 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Dec 13 14:06:43.066358 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 14:06:43.066371 kernel: Spectre V2 : Mitigation: Retpolines
Dec 13 14:06:43.066384 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 14:06:43.066398 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 13 14:06:43.066417 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Dec 13 14:06:43.066430 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 13 14:06:43.066444 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 13 14:06:43.066457 kernel: MDS: Mitigation: Clear CPU buffers
Dec 13 14:06:43.066470 kernel: MMIO Stale Data: Unknown: No mitigations
Dec 13 14:06:43.066483 kernel: SRBDS: Unknown: Dependent on hypervisor status
Dec 13 14:06:43.066497 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 14:06:43.066510 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 14:06:43.066524 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 14:06:43.066537 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 13 14:06:43.066556 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Dec 13 14:06:43.066569 kernel: Freeing SMP alternatives memory: 32K
Dec 13 14:06:43.066583 kernel: pid_max: default: 32768 minimum: 301
Dec 13 14:06:43.066596 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Dec 13 14:06:43.066609 kernel: landlock: Up and running.
Dec 13 14:06:43.066623 kernel: SELinux: Initializing.
Dec 13 14:06:43.066636 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 14:06:43.066649 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 14:06:43.066663 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9)
Dec 13 14:06:43.066677 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Dec 13 14:06:43.066690 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Dec 13 14:06:43.066709 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Dec 13 14:06:43.066723 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only.
Dec 13 14:06:43.066737 kernel: signal: max sigframe size: 1776
Dec 13 14:06:43.066750 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 14:06:43.066764 kernel: rcu: Max phase no-delay instances is 400.
Dec 13 14:06:43.066778 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Dec 13 14:06:43.066791 kernel: smp: Bringing up secondary CPUs ...
Dec 13 14:06:43.066805 kernel: smpboot: x86: Booting SMP configuration:
Dec 13 14:06:43.066818 kernel: .... node #0, CPUs: #1
Dec 13 14:06:43.066836 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Dec 13 14:06:43.066850 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 14:06:43.066864 kernel: smpboot: Max logical packages: 16
Dec 13 14:06:43.066877 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Dec 13 14:06:43.066891 kernel: devtmpfs: initialized
Dec 13 14:06:43.066904 kernel: x86/mm: Memory block size: 128MB
Dec 13 14:06:43.066918 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 14:06:43.066932 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Dec 13 14:06:43.066945 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 14:06:43.066964 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 14:06:43.066977 kernel: audit: initializing netlink subsys (disabled)
Dec 13 14:06:43.067003 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 14:06:43.067017 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 14:06:43.067030 kernel: audit: type=2000 audit(1734098801.245:1): state=initialized audit_enabled=0 res=1
Dec 13 14:06:43.067043 kernel: cpuidle: using governor menu
Dec 13 14:06:43.067057 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 14:06:43.067070 kernel: dca service started, version 1.12.1
Dec 13 14:06:43.067083 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Dec 13 14:06:43.067103 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Dec 13 14:06:43.067117 kernel: PCI: Using configuration type 1 for base access
Dec 13 14:06:43.067131 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 14:06:43.067145 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 14:06:43.067158 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 13 14:06:43.067172 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 14:06:43.067185 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 13 14:06:43.067199 kernel: ACPI: Added _OSI(Module Device)
Dec 13 14:06:43.067212 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 14:06:43.067245 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 14:06:43.067259 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 14:06:43.067273 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 14:06:43.067298 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Dec 13 14:06:43.067313 kernel: ACPI: Interpreter enabled
Dec 13 14:06:43.067327 kernel: ACPI: PM: (supports S0 S5)
Dec 13 14:06:43.067340 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 14:06:43.067354 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 14:06:43.067368 kernel: PCI: Using E820 reservations for host bridge windows
Dec 13 14:06:43.067387 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Dec 13 14:06:43.067401 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 14:06:43.067689 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 14:06:43.067880 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Dec 13 14:06:43.068070 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Dec 13 14:06:43.068091 kernel: PCI host bridge to bus 0000:00
Dec 13 14:06:43.068349 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 13 14:06:43.068523 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 13 14:06:43.070141 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 14:06:43.070634 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Dec 13 14:06:43.070793 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec 13 14:06:43.070946 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Dec 13 14:06:43.071115 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 14:06:43.071331 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Dec 13 14:06:43.071529 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000
Dec 13 14:06:43.071715 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref]
Dec 13 14:06:43.071894 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff]
Dec 13 14:06:43.072089 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref]
Dec 13 14:06:43.074543 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 13 14:06:43.074765 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Dec 13 14:06:43.074958 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff]
Dec 13 14:06:43.075175 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Dec 13 14:06:43.076804 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff]
Dec 13 14:06:43.077013 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Dec 13 14:06:43.077188 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff]
Dec 13 14:06:43.077393 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Dec 13 14:06:43.077563 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff]
Dec 13 14:06:43.080322 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Dec 13 14:06:43.080518 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff]
Dec 13 14:06:43.080712 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Dec 13 14:06:43.080894 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff]
Dec 13 14:06:43.081099 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Dec 13 14:06:43.081321 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff]
Dec 13 14:06:43.081514 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Dec 13 14:06:43.081716 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff]
Dec 13 14:06:43.081926 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Dec 13 14:06:43.082127 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df]
Dec 13 14:06:43.083372 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff]
Dec 13 14:06:43.083550 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Dec 13 14:06:43.083732 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref]
Dec 13 14:06:43.083924 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Dec 13 14:06:43.084111 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Dec 13 14:06:43.084304 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff]
Dec 13 14:06:43.084475 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref]
Dec 13 14:06:43.084657 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Dec 13 14:06:43.084827 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Dec 13 14:06:43.085034 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Dec 13 14:06:43.085205 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff]
Dec 13 14:06:43.087458 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff]
Dec 13 14:06:43.087665 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Dec 13 14:06:43.087839 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Dec 13 14:06:43.088049 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400
Dec 13 14:06:43.088246 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit]
Dec 13 14:06:43.088435 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Dec 13 14:06:43.088604 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Dec 13 14:06:43.088773 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Dec 13 14:06:43.088959 kernel: pci_bus 0000:02: extended config space not accessible
Dec 13 14:06:43.089172 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000
Dec 13 14:06:43.091423 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f]
Dec 13 14:06:43.091617 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Dec 13 14:06:43.091815 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Dec 13 14:06:43.092053 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330
Dec 13 14:06:43.093289 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit]
Dec 13 14:06:43.093475 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Dec 13 14:06:43.093647 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Dec 13 14:06:43.093816 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Dec 13 14:06:43.094036 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00
Dec 13 14:06:43.094216 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Dec 13 14:06:43.096441 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Dec 13 14:06:43.096617 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Dec 13 14:06:43.096788 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Dec 13 14:06:43.096962 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Dec 13 14:06:43.097145 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Dec 13 14:06:43.097358 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Dec 13 14:06:43.097533 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Dec 13 14:06:43.097701 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Dec 13 14:06:43.097868 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Dec 13 14:06:43.098055 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Dec 13 14:06:43.103318 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Dec 13 14:06:43.103642 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Dec 13 14:06:43.103830 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Dec 13 14:06:43.104041 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Dec 13 14:06:43.104230 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Dec 13 14:06:43.104413 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Dec 13 14:06:43.104584 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Dec 13 14:06:43.104753 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Dec 13 14:06:43.104774 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 13 14:06:43.104789 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 13 14:06:43.104803 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 14:06:43.104817 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 13 14:06:43.104839 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Dec 13 14:06:43.104853 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Dec 13 14:06:43.104867 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Dec 13 14:06:43.104881 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Dec 13 14:06:43.104894 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Dec 13 14:06:43.104908 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Dec 13 14:06:43.104922 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Dec 13 14:06:43.104936 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Dec 13 14:06:43.104949 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Dec 13 14:06:43.104968 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Dec 13 14:06:43.104993 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Dec 13 14:06:43.105008 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Dec 13 14:06:43.105022 kernel: iommu: Default domain type: Translated
Dec 13 14:06:43.105036 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 14:06:43.105049 kernel: PCI: Using ACPI for IRQ routing
Dec 13 14:06:43.105063 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 14:06:43.105076 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 13 14:06:43.105090 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Dec 13 14:06:43.105285 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Dec 13 14:06:43.105457 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Dec 13 14:06:43.105640 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 13 14:06:43.105662 kernel: vgaarb: loaded
Dec 13 14:06:43.105676 kernel: clocksource: Switched to clocksource kvm-clock
Dec 13 14:06:43.105690 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 14:06:43.105704 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 14:06:43.105718 kernel: pnp: PnP ACPI init
Dec 13 14:06:43.105926 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Dec 13 14:06:43.105949 kernel: pnp: PnP ACPI: found 5 devices
Dec 13 14:06:43.105963 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 14:06:43.105977 kernel: NET: Registered PF_INET protocol family
Dec 13 14:06:43.106003 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 14:06:43.106017 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Dec 13 14:06:43.106031 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 14:06:43.106045 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 14:06:43.106067 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Dec 13 14:06:43.106081 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Dec 13 14:06:43.106094 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 14:06:43.106108 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 14:06:43.106122 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 14:06:43.106136 kernel: NET: Registered PF_XDP protocol family
Dec 13 14:06:43.106331 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000
Dec 13 14:06:43.106515 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Dec 13 14:06:43.106701 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Dec 13 14:06:43.106878 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Dec 13 14:06:43.107072 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Dec 13 14:06:43.108330 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Dec 13 14:06:43.108508 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Dec 13 14:06:43.108678 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Dec 13 14:06:43.108857 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Dec 13 14:06:43.109047 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Dec 13 14:06:43.111250 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Dec 13 14:06:43.111459 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Dec 13 14:06:43.111632 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Dec 13 14:06:43.111824 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Dec 13 14:06:43.112025 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Dec 13 14:06:43.112199 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Dec 13 14:06:43.112466 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Dec 13 14:06:43.112650 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Dec 13 14:06:43.112844 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Dec 13 14:06:43.113086 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Dec 13 14:06:43.113289 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Dec 13 14:06:43.113469 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Dec 13 14:06:43.113695 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Dec 13 14:06:43.113866 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Dec 13 14:06:43.114079 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Dec 13 14:06:43.116302 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Dec 13 14:06:43.116583 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Dec 13 14:06:43.116879 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Dec 13 14:06:43.117092 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Dec 13 14:06:43.119474 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Dec 13 14:06:43.119685 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Dec 13 14:06:43.119861 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Dec 13 14:06:43.120051 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Dec 13 14:06:43.120256 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Dec 13 14:06:43.120436 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Dec 13 14:06:43.120620 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Dec 13 14:06:43.120803 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Dec 13 14:06:43.120998 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Dec 13 14:06:43.121188 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Dec 13 14:06:43.123450 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Dec 13 14:06:43.123624 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Dec 13 14:06:43.123792 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Dec 13 14:06:43.123969 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Dec 13 14:06:43.124159 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Dec 13 14:06:43.124377 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Dec 13 14:06:43.124548 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Dec 13 14:06:43.124719 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Dec 13 14:06:43.124889 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Dec 13 14:06:43.125076 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Dec 13 14:06:43.127300 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Dec 13 14:06:43.127482 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 13 14:06:43.127641 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 13 14:06:43.127797 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 14:06:43.127970 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Dec 13 14:06:43.128147 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Dec 13 14:06:43.128375 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Dec 13 14:06:43.128554 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Dec 13 14:06:43.128715 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Dec 13 14:06:43.128875 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Dec 13 14:06:43.129065 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Dec 13 14:06:43.129267 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff]
Dec 13 14:06:43.129431 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Dec 13 14:06:43.129589 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Dec 13 14:06:43.129765 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff]
Dec 13 14:06:43.129927 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Dec 13 14:06:43.130102 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Dec 13 14:06:43.130308 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Dec 13 14:06:43.130477 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Dec 13 14:06:43.130644 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Dec 13 14:06:43.130830 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff]
Dec 13 14:06:43.131009 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Dec 13 14:06:43.131175 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Dec 13 14:06:43.131389 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff]
Dec 13 14:06:43.131560 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Dec 13 14:06:43.131720 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Dec 13 14:06:43.132045 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff]
Dec 13 14:06:43.132214 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Dec 13 14:06:43.132418 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Dec 13 14:06:43.132611 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff]
Dec 13 14:06:43.132777 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff]
Dec 13 14:06:43.132951 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Dec 13 14:06:43.132974 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Dec 13 14:06:43.133002 kernel: PCI: CLS 0 bytes, default 64
Dec 13 14:06:43.133017 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Dec
13 14:06:43.133032 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Dec 13 14:06:43.133046 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Dec 13 14:06:43.133061 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Dec 13 14:06:43.133076 kernel: Initialise system trusted keyrings Dec 13 14:06:43.133100 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Dec 13 14:06:43.133115 kernel: Key type asymmetric registered Dec 13 14:06:43.133130 kernel: Asymmetric key parser 'x509' registered Dec 13 14:06:43.133144 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Dec 13 14:06:43.133159 kernel: io scheduler mq-deadline registered Dec 13 14:06:43.133173 kernel: io scheduler kyber registered Dec 13 14:06:43.133188 kernel: io scheduler bfq registered Dec 13 14:06:43.133457 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Dec 13 14:06:43.133632 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Dec 13 14:06:43.133813 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 14:06:43.134000 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Dec 13 14:06:43.134176 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Dec 13 14:06:43.134372 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 14:06:43.134546 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Dec 13 14:06:43.134716 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Dec 13 14:06:43.134901 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 14:06:43.135090 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Dec 13 
14:06:43.135298 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Dec 13 14:06:43.135469 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 14:06:43.135640 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Dec 13 14:06:43.135808 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Dec 13 14:06:43.135996 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 14:06:43.136175 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Dec 13 14:06:43.136368 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Dec 13 14:06:43.136543 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 14:06:43.136718 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Dec 13 14:06:43.136889 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Dec 13 14:06:43.137099 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 14:06:43.137300 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Dec 13 14:06:43.137473 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Dec 13 14:06:43.137645 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 14:06:43.137667 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 13 14:06:43.137683 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Dec 13 14:06:43.137706 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Dec 13 14:06:43.137721 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 14:06:43.137736 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 13 14:06:43.137751 kernel: 
i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 13 14:06:43.137766 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 13 14:06:43.137780 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 13 14:06:43.137795 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 13 14:06:43.138016 kernel: rtc_cmos 00:03: RTC can wake from S4 Dec 13 14:06:43.138195 kernel: rtc_cmos 00:03: registered as rtc0 Dec 13 14:06:43.138458 kernel: rtc_cmos 00:03: setting system clock to 2024-12-13T14:06:42 UTC (1734098802) Dec 13 14:06:43.138617 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Dec 13 14:06:43.138638 kernel: intel_pstate: CPU model not supported Dec 13 14:06:43.138653 kernel: NET: Registered PF_INET6 protocol family Dec 13 14:06:43.138667 kernel: Segment Routing with IPv6 Dec 13 14:06:43.138682 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 14:06:43.138697 kernel: NET: Registered PF_PACKET protocol family Dec 13 14:06:43.138711 kernel: Key type dns_resolver registered Dec 13 14:06:43.138733 kernel: IPI shorthand broadcast: enabled Dec 13 14:06:43.138748 kernel: sched_clock: Marking stable (1294004512, 238801987)->(1663789153, -130982654) Dec 13 14:06:43.138763 kernel: registered taskstats version 1 Dec 13 14:06:43.138777 kernel: Loading compiled-in X.509 certificates Dec 13 14:06:43.138792 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: 87a680e70013684f1bdd04e047addefc714bd162' Dec 13 14:06:43.138806 kernel: Key type .fscrypt registered Dec 13 14:06:43.138820 kernel: Key type fscrypt-provisioning registered Dec 13 14:06:43.138835 kernel: ima: No TPM chip found, activating TPM-bypass! 
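The rtc_cmos entry above reports the same instant in two forms, a UTC timestamp and Unix epoch seconds: "setting system clock to 2024-12-13T14:06:42 UTC (1734098802)". A minimal check, using only the values from the log, confirms the two agree:

```python
from datetime import datetime, timezone

# Values taken verbatim from the rtc_cmos log line.
epoch_seconds = 1734098802
ts = datetime.fromtimestamp(epoch_seconds, tz=timezone.utc)

assert ts == datetime(2024, 12, 13, 14, 6, 42, tzinfo=timezone.utc)
print(ts.isoformat())  # → 2024-12-13T14:06:42+00:00
```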
Dec 13 14:06:43.138849 kernel: ima: Allocated hash algorithm: sha1 Dec 13 14:06:43.138873 kernel: ima: No architecture policies found Dec 13 14:06:43.138887 kernel: clk: Disabling unused clocks Dec 13 14:06:43.138902 kernel: Freeing unused kernel image (initmem) memory: 43328K Dec 13 14:06:43.138917 kernel: Write protecting the kernel read-only data: 38912k Dec 13 14:06:43.138931 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K Dec 13 14:06:43.138946 kernel: Run /init as init process Dec 13 14:06:43.138960 kernel: with arguments: Dec 13 14:06:43.138974 kernel: /init Dec 13 14:06:43.139002 kernel: with environment: Dec 13 14:06:43.139022 kernel: HOME=/ Dec 13 14:06:43.139036 kernel: TERM=linux Dec 13 14:06:43.139051 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 14:06:43.139076 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 14:06:43.139096 systemd[1]: Detected virtualization kvm. Dec 13 14:06:43.139112 systemd[1]: Detected architecture x86-64. Dec 13 14:06:43.139127 systemd[1]: Running in initrd. Dec 13 14:06:43.139142 systemd[1]: No hostname configured, using default hostname. Dec 13 14:06:43.139163 systemd[1]: Hostname set to . Dec 13 14:06:43.139179 systemd[1]: Initializing machine ID from VM UUID. Dec 13 14:06:43.139194 systemd[1]: Queued start job for default target initrd.target. Dec 13 14:06:43.139209 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 14:06:43.139241 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Dec 13 14:06:43.139258 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 13 14:06:43.139274 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 14:06:43.139289 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 13 14:06:43.139311 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 13 14:06:43.139329 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Dec 13 14:06:43.139345 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Dec 13 14:06:43.139360 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 14:06:43.139376 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 14:06:43.139391 systemd[1]: Reached target paths.target - Path Units. Dec 13 14:06:43.139412 systemd[1]: Reached target slices.target - Slice Units. Dec 13 14:06:43.139427 systemd[1]: Reached target swap.target - Swaps. Dec 13 14:06:43.139442 systemd[1]: Reached target timers.target - Timer Units. Dec 13 14:06:43.139458 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 14:06:43.139474 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 14:06:43.139489 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 13 14:06:43.139505 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Dec 13 14:06:43.139520 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 14:06:43.139535 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 14:06:43.139556 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Dec 13 14:06:43.139571 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 14:06:43.139587 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 13 14:06:43.139603 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 14:06:43.139618 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 13 14:06:43.139633 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 14:06:43.139649 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 14:06:43.139664 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 14:06:43.139679 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 14:06:43.139749 systemd-journald[201]: Collecting audit messages is disabled. Dec 13 14:06:43.139785 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 13 14:06:43.139801 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 14:06:43.139816 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 14:06:43.139839 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 14:06:43.139855 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 14:06:43.139870 kernel: Bridge firewalling registered Dec 13 14:06:43.139893 systemd-journald[201]: Journal started Dec 13 14:06:43.139935 systemd-journald[201]: Runtime Journal (/run/log/journal/3234635479e54268a7e080b386056a28) is 4.7M, max 37.9M, 33.2M free. Dec 13 14:06:43.072306 systemd-modules-load[202]: Inserted module 'overlay' Dec 13 14:06:43.122170 systemd-modules-load[202]: Inserted module 'br_netfilter' Dec 13 14:06:43.189241 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 14:06:43.190172 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
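The journald line above reports "Runtime Journal ... is 4.7M, max 37.9M, 33.2M free". A one-line sanity check (assuming the reported headroom is simply cap minus current usage) shows the three figures are consistent:

```python
# Figures taken verbatim from the systemd-journald log line.
current_mib, max_mib = 4.7, 37.9
free_mib = round(max_mib - current_mib, 1)
assert free_mib == 33.2  # matches "33.2M free"
```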
Dec 13 14:06:43.192378 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 14:06:43.194295 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 14:06:43.202558 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 14:06:43.207493 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 14:06:43.209075 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 14:06:43.215440 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 14:06:43.234144 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 14:06:43.246556 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 14:06:43.247741 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 14:06:43.250814 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 14:06:43.257449 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 13 14:06:43.260435 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 14:06:43.280762 dracut-cmdline[237]: dracut-dracut-053 Dec 13 14:06:43.284938 dracut-cmdline[237]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=7e85177266c631d417c820ba09a3204c451316d6fcf9e4e21017322aee9df3f4 Dec 13 14:06:43.317462 systemd-resolved[238]: Positive Trust Anchors: Dec 13 14:06:43.317494 systemd-resolved[238]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 14:06:43.317540 systemd-resolved[238]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 14:06:43.322095 systemd-resolved[238]: Defaulting to hostname 'linux'. Dec 13 14:06:43.324177 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 14:06:43.327603 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 14:06:43.406285 kernel: SCSI subsystem initialized Dec 13 14:06:43.420340 kernel: Loading iSCSI transport class v2.0-870. Dec 13 14:06:43.434283 kernel: iscsi: registered transport (tcp) Dec 13 14:06:43.461455 kernel: iscsi: registered transport (qla4xxx) Dec 13 14:06:43.461543 kernel: QLogic iSCSI HBA Driver Dec 13 14:06:43.523042 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 13 14:06:43.531524 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 13 14:06:43.567365 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Dec 13 14:06:43.567480 kernel: device-mapper: uevent: version 1.0.3 Dec 13 14:06:43.569903 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Dec 13 14:06:43.619304 kernel: raid6: sse2x4 gen() 13599 MB/s Dec 13 14:06:43.637288 kernel: raid6: sse2x2 gen() 9216 MB/s Dec 13 14:06:43.655998 kernel: raid6: sse2x1 gen() 9590 MB/s Dec 13 14:06:43.656086 kernel: raid6: using algorithm sse2x4 gen() 13599 MB/s Dec 13 14:06:43.675100 kernel: raid6: .... xor() 7436 MB/s, rmw enabled Dec 13 14:06:43.675189 kernel: raid6: using ssse3x2 recovery algorithm Dec 13 14:06:43.701272 kernel: xor: automatically using best checksumming function avx Dec 13 14:06:43.874268 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 13 14:06:43.889346 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 13 14:06:43.897541 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 14:06:43.924345 systemd-udevd[421]: Using default interface naming scheme 'v255'. Dec 13 14:06:43.932567 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 14:06:43.940384 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 13 14:06:43.964515 dracut-pre-trigger[431]: rd.md=0: removing MD RAID activation Dec 13 14:06:44.007165 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 14:06:44.018826 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 14:06:44.130864 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 14:06:44.142214 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 13 14:06:44.175272 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 13 14:06:44.180020 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
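The raid6 lines above are the kernel benchmarking each available parity-generation implementation and keeping the fastest; the selection can be reproduced from the reported throughputs:

```python
# Throughputs taken verbatim from the raid6 benchmark lines.
gen_mb_s = {"sse2x4": 13599, "sse2x2": 9216, "sse2x1": 9590}

best = max(gen_mb_s, key=gen_mb_s.get)
# Matches "raid6: using algorithm sse2x4 gen() 13599 MB/s".
assert best == "sse2x4" and gen_mb_s[best] == 13599
```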
Dec 13 14:06:44.182002 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 14:06:44.183636 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 14:06:44.193079 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 13 14:06:44.212872 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 13 14:06:44.279186 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues Dec 13 14:06:44.349161 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Dec 13 14:06:44.349396 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 14:06:44.349420 kernel: libata version 3.00 loaded. Dec 13 14:06:44.349448 kernel: ahci 0000:00:1f.2: version 3.0 Dec 13 14:06:44.370686 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Dec 13 14:06:44.370725 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Dec 13 14:06:44.372686 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Dec 13 14:06:44.372907 kernel: scsi host0: ahci Dec 13 14:06:44.373139 kernel: scsi host1: ahci Dec 13 14:06:44.373380 kernel: scsi host2: ahci Dec 13 14:06:44.373593 kernel: scsi host3: ahci Dec 13 14:06:44.374629 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 14:06:44.374661 kernel: scsi host4: ahci Dec 13 14:06:44.375718 kernel: GPT:17805311 != 125829119 Dec 13 14:06:44.375742 kernel: scsi host5: ahci Dec 13 14:06:44.375972 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 14:06:44.375998 kernel: GPT:17805311 != 125829119 Dec 13 14:06:44.376017 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 38 Dec 13 14:06:44.376045 kernel: GPT: Use GNU Parted to correct GPT errors. 
Dec 13 14:06:44.376064 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 14:06:44.376083 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 38 Dec 13 14:06:44.376102 kernel: AVX version of gcm_enc/dec engaged. Dec 13 14:06:44.376121 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 38 Dec 13 14:06:44.376139 kernel: AES CTR mode by8 optimization enabled Dec 13 14:06:44.376158 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 38 Dec 13 14:06:44.376176 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 38 Dec 13 14:06:44.376195 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 38 Dec 13 14:06:44.376240 kernel: ACPI: bus type USB registered Dec 13 14:06:44.376264 kernel: usbcore: registered new interface driver usbfs Dec 13 14:06:44.376283 kernel: usbcore: registered new interface driver hub Dec 13 14:06:44.376302 kernel: usbcore: registered new device driver usb Dec 13 14:06:44.308636 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 14:06:44.308828 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 14:06:44.311470 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 14:06:44.312276 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 14:06:44.312476 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 14:06:44.314894 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 14:06:44.326541 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
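The virtio_blk and GPT warnings above describe a grown disk: in a standard GPT layout the backup header occupies the last LBA, but here it sits at LBA 17805311, where the original (smaller) image ended, rather than at the new last LBA 125829119. A short sketch of the arithmetic, using only values from the log:

```python
# Values taken verbatim from the virtio_blk/GPT log lines.
sector = 512
total_sectors = 125_829_120            # "[vda] 125829120 512-byte logical blocks"
assert total_sectors * sector == 64_424_509_440    # "64.4 GB"
assert total_sectors * sector / 2**30 == 60.0      # "60.0 GiB"

# The backup GPT header belongs on the last LBA of the disk.
expected_alt_lba = total_sectors - 1
found_alt_lba = 17_805_311             # "GPT:17805311 != 125829119"
assert expected_alt_lba == 125_829_119

# The image was therefore built for a (found_alt_lba + 1)-sector disk, ~8.5 GiB,
# and the volume was enlarged afterwards -- hence "Use GNU Parted to correct
# GPT errors" (i.e. relocate the backup header to the new end of the disk).
original_gib = (found_alt_lba + 1) * sector / 2**30
assert round(original_gib, 2) == 8.49
```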
Dec 13 14:06:44.439254 kernel: BTRFS: device fsid 79c74448-2326-4c98-b9ff-09542b30ea52 devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (482) Dec 13 14:06:44.441241 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (473) Dec 13 14:06:44.469875 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Dec 13 14:06:44.489250 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 14:06:44.497878 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Dec 13 14:06:44.504330 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Dec 13 14:06:44.505204 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Dec 13 14:06:44.513719 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 13 14:06:44.520406 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 13 14:06:44.525616 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 14:06:44.531501 disk-uuid[560]: Primary Header is updated. Dec 13 14:06:44.531501 disk-uuid[560]: Secondary Entries is updated. Dec 13 14:06:44.531501 disk-uuid[560]: Secondary Header is updated. Dec 13 14:06:44.537494 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 14:06:44.559657 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Dec 13 14:06:44.673257 kernel: ata4: SATA link down (SStatus 0 SControl 300) Dec 13 14:06:44.673326 kernel: ata3: SATA link down (SStatus 0 SControl 300) Dec 13 14:06:44.675387 kernel: ata1: SATA link down (SStatus 0 SControl 300) Dec 13 14:06:44.678614 kernel: ata5: SATA link down (SStatus 0 SControl 300) Dec 13 14:06:44.679253 kernel: ata2: SATA link down (SStatus 0 SControl 300) Dec 13 14:06:44.684243 kernel: ata6: SATA link down (SStatus 0 SControl 300) Dec 13 14:06:44.718980 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Dec 13 14:06:44.737399 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Dec 13 14:06:44.737655 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Dec 13 14:06:44.737876 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Dec 13 14:06:44.738134 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Dec 13 14:06:44.738382 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Dec 13 14:06:44.738605 kernel: hub 1-0:1.0: USB hub found Dec 13 14:06:44.738854 kernel: hub 1-0:1.0: 4 ports detected Dec 13 14:06:44.739090 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. 
Dec 13 14:06:44.739372 kernel: hub 2-0:1.0: USB hub found Dec 13 14:06:44.739612 kernel: hub 2-0:1.0: 4 ports detected Dec 13 14:06:44.973268 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Dec 13 14:06:45.114249 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 13 14:06:45.121345 kernel: usbcore: registered new interface driver usbhid Dec 13 14:06:45.121428 kernel: usbhid: USB HID core driver Dec 13 14:06:45.128812 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Dec 13 14:06:45.128880 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Dec 13 14:06:45.552418 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 14:06:45.553812 disk-uuid[562]: The operation has completed successfully. Dec 13 14:06:45.608385 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 14:06:45.608581 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 13 14:06:45.643557 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 13 14:06:45.647955 sh[586]: Success Dec 13 14:06:45.667344 kernel: device-mapper: verity: sha256 using implementation "sha256-avx" Dec 13 14:06:45.729909 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 13 14:06:45.732379 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 13 14:06:45.734778 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Dec 13 14:06:45.771398 kernel: BTRFS info (device dm-0): first mount of filesystem 79c74448-2326-4c98-b9ff-09542b30ea52 Dec 13 14:06:45.771494 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Dec 13 14:06:45.771517 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Dec 13 14:06:45.774404 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 13 14:06:45.777113 kernel: BTRFS info (device dm-0): using free space tree Dec 13 14:06:45.788245 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Dec 13 14:06:45.789683 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 13 14:06:45.800480 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 13 14:06:45.805439 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 13 14:06:45.825475 kernel: BTRFS info (device vda6): first mount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0 Dec 13 14:06:45.825543 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 14:06:45.825566 kernel: BTRFS info (device vda6): using free space tree Dec 13 14:06:45.831257 kernel: BTRFS info (device vda6): auto enabling async discard Dec 13 14:06:45.847920 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 14:06:45.851344 kernel: BTRFS info (device vda6): last unmount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0 Dec 13 14:06:45.858847 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 13 14:06:45.869443 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 13 14:06:46.012549 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Dec 13 14:06:46.013341 ignition[675]: Ignition 2.20.0 Dec 13 14:06:46.016976 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 14:06:46.013356 ignition[675]: Stage: fetch-offline Dec 13 14:06:46.013444 ignition[675]: no configs at "/usr/lib/ignition/base.d" Dec 13 14:06:46.013464 ignition[675]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 14:06:46.013651 ignition[675]: parsed url from cmdline: "" Dec 13 14:06:46.013659 ignition[675]: no config URL provided Dec 13 14:06:46.013668 ignition[675]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 14:06:46.013684 ignition[675]: no config at "/usr/lib/ignition/user.ign" Dec 13 14:06:46.013702 ignition[675]: failed to fetch config: resource requires networking Dec 13 14:06:46.014049 ignition[675]: Ignition finished successfully Dec 13 14:06:46.027540 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 14:06:46.059641 systemd-networkd[775]: lo: Link UP Dec 13 14:06:46.059657 systemd-networkd[775]: lo: Gained carrier Dec 13 14:06:46.062403 systemd-networkd[775]: Enumeration completed Dec 13 14:06:46.062970 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 14:06:46.063166 systemd-networkd[775]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 14:06:46.063173 systemd-networkd[775]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 14:06:46.063858 systemd[1]: Reached target network.target - Network. Dec 13 14:06:46.065164 systemd-networkd[775]: eth0: Link UP Dec 13 14:06:46.065171 systemd-networkd[775]: eth0: Gained carrier Dec 13 14:06:46.065183 systemd-networkd[775]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 14:06:46.073436 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Dec 13 14:06:46.094285 ignition[777]: Ignition 2.20.0
Dec 13 14:06:46.094307 ignition[777]: Stage: fetch
Dec 13 14:06:46.094589 ignition[777]: no configs at "/usr/lib/ignition/base.d"
Dec 13 14:06:46.094610 ignition[777]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 13 14:06:46.094763 ignition[777]: parsed url from cmdline: ""
Dec 13 14:06:46.094771 ignition[777]: no config URL provided
Dec 13 14:06:46.094780 ignition[777]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 14:06:46.094797 ignition[777]: no config at "/usr/lib/ignition/user.ign"
Dec 13 14:06:46.094987 ignition[777]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Dec 13 14:06:46.095007 ignition[777]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Dec 13 14:06:46.095051 ignition[777]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Dec 13 14:06:46.095388 ignition[777]: GET error: Get "http://169.254.169.254/openstack/latest/user_data": dial tcp 169.254.169.254:80: connect: network is unreachable
Dec 13 14:06:46.119382 systemd-networkd[775]: eth0: DHCPv4 address 10.244.26.14/30, gateway 10.244.26.13 acquired from 10.244.26.13
Dec 13 14:06:46.295648 ignition[777]: GET http://169.254.169.254/openstack/latest/user_data: attempt #2
Dec 13 14:06:46.312798 ignition[777]: GET result: OK
Dec 13 14:06:46.312981 ignition[777]: parsing config with SHA512: efa0ea84e379e8c1ae09b0a03877522a76cbccf09d2010981795890b8c93b475fcf9bd5fc77c72d84e03720efb3f62dc8dc2a41b437d3398f5a38ff901a51b76
Dec 13 14:06:46.319097 unknown[777]: fetched base config from "system"
Dec 13 14:06:46.319115 unknown[777]: fetched base config from "system"
Dec 13 14:06:46.319905 ignition[777]: fetch: fetch complete
Dec 13 14:06:46.319124 unknown[777]: fetched user config from "openstack"
Dec 13 14:06:46.319915 ignition[777]: fetch: fetch passed
Dec 13 14:06:46.322493 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Dec 13 14:06:46.320023 ignition[777]: Ignition finished successfully
Dec 13 14:06:46.335494 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 13 14:06:46.367452 ignition[784]: Ignition 2.20.0
Dec 13 14:06:46.367476 ignition[784]: Stage: kargs
Dec 13 14:06:46.367753 ignition[784]: no configs at "/usr/lib/ignition/base.d"
Dec 13 14:06:46.367774 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 13 14:06:46.370876 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 13 14:06:46.369553 ignition[784]: kargs: kargs passed
Dec 13 14:06:46.369630 ignition[784]: Ignition finished successfully
Dec 13 14:06:46.390261 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 13 14:06:46.409789 ignition[790]: Ignition 2.20.0
Dec 13 14:06:46.409812 ignition[790]: Stage: disks
Dec 13 14:06:46.412989 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 13 14:06:46.410136 ignition[790]: no configs at "/usr/lib/ignition/base.d"
Dec 13 14:06:46.410158 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 13 14:06:46.414676 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 13 14:06:46.411535 ignition[790]: disks: disks passed
Dec 13 14:06:46.416134 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 13 14:06:46.411648 ignition[790]: Ignition finished successfully
Dec 13 14:06:46.417731 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 14:06:46.419191 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 14:06:46.420453 systemd[1]: Reached target basic.target - Basic System.
Dec 13 14:06:46.431511 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 13 14:06:46.453466 systemd-fsck[798]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Dec 13 14:06:46.478676 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 13 14:06:46.490379 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 13 14:06:46.615284 kernel: EXT4-fs (vda9): mounted filesystem 8801d4fe-2f40-4e12-9140-c192f2e7d668 r/w with ordered data mode. Quota mode: none.
Dec 13 14:06:46.616750 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 13 14:06:46.618213 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 13 14:06:46.625384 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 14:06:46.629385 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 13 14:06:46.630564 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Dec 13 14:06:46.633976 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Dec 13 14:06:46.634825 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 14:06:46.634879 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 14:06:46.647239 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (806)
Dec 13 14:06:46.648017 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 13 14:06:46.649314 kernel: BTRFS info (device vda6): first mount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0
Dec 13 14:06:46.649352 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 14:06:46.649373 kernel: BTRFS info (device vda6): using free space tree
Dec 13 14:06:46.672101 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 14:06:46.671253 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 13 14:06:46.678352 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 14:06:46.754503 initrd-setup-root[835]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 14:06:46.762729 initrd-setup-root[842]: cut: /sysroot/etc/group: No such file or directory
Dec 13 14:06:46.772860 initrd-setup-root[849]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 14:06:46.783166 initrd-setup-root[856]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 14:06:46.899827 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 13 14:06:46.906358 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 13 14:06:46.908986 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 13 14:06:46.920135 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 13 14:06:46.922580 kernel: BTRFS info (device vda6): last unmount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0
Dec 13 14:06:46.952491 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 13 14:06:46.957592 ignition[924]: INFO : Ignition 2.20.0
Dec 13 14:06:46.958717 ignition[924]: INFO : Stage: mount
Dec 13 14:06:46.958717 ignition[924]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 14:06:46.958717 ignition[924]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 13 14:06:46.961107 ignition[924]: INFO : mount: mount passed
Dec 13 14:06:46.961107 ignition[924]: INFO : Ignition finished successfully
Dec 13 14:06:46.961035 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 13 14:06:47.209639 systemd-networkd[775]: eth0: Gained IPv6LL
Dec 13 14:06:48.719452 systemd-networkd[775]: eth0: Ignoring DHCPv6 address 2a02:1348:17d:683:24:19ff:fef4:1a0e/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17d:683:24:19ff:fef4:1a0e/64 assigned by NDisc.
Dec 13 14:06:48.719469 systemd-networkd[775]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Dec 13 14:06:53.827711 coreos-metadata[808]: Dec 13 14:06:53.827 WARN failed to locate config-drive, using the metadata service API instead
Dec 13 14:06:53.851282 coreos-metadata[808]: Dec 13 14:06:53.851 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Dec 13 14:06:53.866375 coreos-metadata[808]: Dec 13 14:06:53.866 INFO Fetch successful
Dec 13 14:06:53.867739 coreos-metadata[808]: Dec 13 14:06:53.867 INFO wrote hostname srv-p3tlm.gb1.brightbox.com to /sysroot/etc/hostname
Dec 13 14:06:53.872273 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Dec 13 14:06:53.872469 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Dec 13 14:06:53.880340 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 13 14:06:53.904573 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 14:06:53.917259 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (941)
Dec 13 14:06:53.920971 kernel: BTRFS info (device vda6): first mount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0
Dec 13 14:06:53.921010 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 14:06:53.922681 kernel: BTRFS info (device vda6): using free space tree
Dec 13 14:06:53.929295 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 14:06:53.933271 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 14:06:53.965234 ignition[959]: INFO : Ignition 2.20.0
Dec 13 14:06:53.965234 ignition[959]: INFO : Stage: files
Dec 13 14:06:53.967007 ignition[959]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 14:06:53.967007 ignition[959]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 13 14:06:53.967007 ignition[959]: DEBUG : files: compiled without relabeling support, skipping
Dec 13 14:06:53.969834 ignition[959]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 13 14:06:53.969834 ignition[959]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 14:06:53.971866 ignition[959]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 14:06:53.971866 ignition[959]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 13 14:06:53.973860 ignition[959]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 14:06:53.972358 unknown[959]: wrote ssh authorized keys file for user: core
Dec 13 14:06:53.976030 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 13 14:06:53.976030 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Dec 13 14:06:54.185196 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 13 14:06:54.458744 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 13 14:06:54.458744 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 13 14:06:54.461414 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Dec 13 14:06:55.052477 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Dec 13 14:06:55.471330 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 13 14:06:55.471330 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Dec 13 14:06:55.471330 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 14:06:55.471330 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 14:06:55.471330 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 14:06:55.471330 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 14:06:55.471330 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 14:06:55.471330 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 14:06:55.471330 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 14:06:55.482119 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 14:06:55.482119 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 14:06:55.482119 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Dec 13 14:06:55.482119 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Dec 13 14:06:55.482119 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Dec 13 14:06:55.482119 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Dec 13 14:06:55.970078 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Dec 13 14:06:57.484156 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Dec 13 14:06:57.486433 ignition[959]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Dec 13 14:06:57.486433 ignition[959]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 14:06:57.486433 ignition[959]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 14:06:57.486433 ignition[959]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Dec 13 14:06:57.486433 ignition[959]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Dec 13 14:06:57.494103 ignition[959]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Dec 13 14:06:57.494103 ignition[959]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 14:06:57.494103 ignition[959]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 14:06:57.494103 ignition[959]: INFO : files: files passed
Dec 13 14:06:57.494103 ignition[959]: INFO : Ignition finished successfully
Dec 13 14:06:57.488809 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 13 14:06:57.503631 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 13 14:06:57.509949 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 13 14:06:57.511462 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 14:06:57.511646 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 13 14:06:57.537802 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 14:06:57.537802 initrd-setup-root-after-ignition[988]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 14:06:57.541636 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 14:06:57.543039 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 14:06:57.544522 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 13 14:06:57.548441 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 13 14:06:57.608918 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 14:06:57.609103 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 13 14:06:57.611096 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 13 14:06:57.612329 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 13 14:06:57.613876 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 13 14:06:57.625560 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 13 14:06:57.642986 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 14:06:57.647471 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 13 14:06:57.665373 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 13 14:06:57.667495 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 14:06:57.668455 systemd[1]: Stopped target timers.target - Timer Units.
Dec 13 14:06:57.669963 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 14:06:57.670156 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 14:06:57.672077 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 13 14:06:57.673013 systemd[1]: Stopped target basic.target - Basic System.
Dec 13 14:06:57.674471 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 13 14:06:57.675777 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 14:06:57.677352 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 13 14:06:57.679069 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 13 14:06:57.680648 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 14:06:57.682207 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 13 14:06:57.683651 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 13 14:06:57.685171 systemd[1]: Stopped target swap.target - Swaps.
Dec 13 14:06:57.686576 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 14:06:57.686835 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 14:06:57.688522 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 13 14:06:57.689505 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 14:06:57.690962 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 13 14:06:57.691158 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 14:06:57.692623 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 14:06:57.692834 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 13 14:06:57.695136 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 14:06:57.695401 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 14:06:57.697095 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 14:06:57.697309 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 13 14:06:57.710199 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 13 14:06:57.713560 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 13 14:06:57.714327 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 14:06:57.714591 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 14:06:57.717868 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 14:06:57.718409 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 14:06:57.733676 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 14:06:57.734988 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 13 14:06:57.740587 ignition[1012]: INFO : Ignition 2.20.0
Dec 13 14:06:57.740587 ignition[1012]: INFO : Stage: umount
Dec 13 14:06:57.742257 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 14:06:57.742257 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 13 14:06:57.742257 ignition[1012]: INFO : umount: umount passed
Dec 13 14:06:57.749125 ignition[1012]: INFO : Ignition finished successfully
Dec 13 14:06:57.744617 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 14:06:57.744822 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 13 14:06:57.745798 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 14:06:57.745871 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 13 14:06:57.748369 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 14:06:57.748455 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 13 14:06:57.750045 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 13 14:06:57.750130 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Dec 13 14:06:57.751818 systemd[1]: Stopped target network.target - Network.
Dec 13 14:06:57.753190 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 14:06:57.753302 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 14:06:57.754843 systemd[1]: Stopped target paths.target - Path Units.
Dec 13 14:06:57.757528 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 14:06:57.761327 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 14:06:57.762093 systemd[1]: Stopped target slices.target - Slice Units.
Dec 13 14:06:57.762777 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 13 14:06:57.764462 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 14:06:57.764558 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 14:06:57.765746 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 14:06:57.765820 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 14:06:57.767178 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 14:06:57.767276 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 13 14:06:57.768802 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 13 14:06:57.768885 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 13 14:06:57.770424 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 13 14:06:57.772858 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 13 14:06:57.775826 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 14:06:57.776371 systemd-networkd[775]: eth0: DHCPv6 lease lost
Dec 13 14:06:57.777879 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 14:06:57.778027 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 13 14:06:57.779680 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 14:06:57.779882 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 13 14:06:57.783039 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 14:06:57.784174 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 14:06:57.785666 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 14:06:57.785776 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 13 14:06:57.794475 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 13 14:06:57.795231 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 14:06:57.795327 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 14:06:57.799271 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 14:06:57.801653 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 14:06:57.802334 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 13 14:06:57.815836 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 14:06:57.817303 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 14:06:57.820109 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 14:06:57.820293 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 13 14:06:57.824119 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 14:06:57.824232 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 13 14:06:57.825886 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 14:06:57.825949 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 14:06:57.827376 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 14:06:57.827457 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 14:06:57.830035 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 14:06:57.830111 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 13 14:06:57.831415 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 14:06:57.831503 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 14:06:57.840483 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 13 14:06:57.841345 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 14:06:57.841437 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 13 14:06:57.846107 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 14:06:57.846199 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 13 14:06:57.847026 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 13 14:06:57.847095 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 14:06:57.850411 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Dec 13 14:06:57.850516 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 14:06:57.851599 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 14:06:57.851676 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 14:06:57.852493 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 13 14:06:57.852559 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 14:06:57.854430 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 14:06:57.854507 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 14:06:57.856920 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 14:06:57.857098 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 13 14:06:57.858569 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 13 14:06:57.865553 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 13 14:06:57.879714 systemd[1]: Switching root.
Dec 13 14:06:57.918101 systemd-journald[201]: Journal stopped
Dec 13 14:06:59.474275 systemd-journald[201]: Received SIGTERM from PID 1 (systemd).
Dec 13 14:06:59.474462 kernel: SELinux:  policy capability network_peer_controls=1
Dec 13 14:06:59.474514 kernel: SELinux:  policy capability open_perms=1
Dec 13 14:06:59.474544 kernel: SELinux:  policy capability extended_socket_class=1
Dec 13 14:06:59.474579 kernel: SELinux:  policy capability always_check_network=0
Dec 13 14:06:59.474606 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 13 14:06:59.474632 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 13 14:06:59.474672 kernel: SELinux:  policy capability genfs_seclabel_symlinks=0
Dec 13 14:06:59.474693 kernel: SELinux:  policy capability ioctl_skip_cloexec=0
Dec 13 14:06:59.474732 kernel: audit: type=1403 audit(1734098818.154:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 13 14:06:59.474771 systemd[1]: Successfully loaded SELinux policy in 60.625ms.
Dec 13 14:06:59.474826 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.391ms.
Dec 13 14:06:59.474857 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 14:06:59.474897 systemd[1]: Detected virtualization kvm.
Dec 13 14:06:59.474920 systemd[1]: Detected architecture x86-64.
Dec 13 14:06:59.474941 systemd[1]: Detected first boot.
Dec 13 14:06:59.474961 systemd[1]: Hostname set to .
Dec 13 14:06:59.475000 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 14:06:59.475023 zram_generator::config[1055]: No configuration found.
Dec 13 14:06:59.475051 systemd[1]: Populated /etc with preset unit settings.
Dec 13 14:06:59.475088 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 13 14:06:59.475111 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Dec 13 14:06:59.475150 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 13 14:06:59.475174 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Dec 13 14:06:59.475201 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Dec 13 14:06:59.477715 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Dec 13 14:06:59.477760 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Dec 13 14:06:59.477792 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Dec 13 14:06:59.477821 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Dec 13 14:06:59.477859 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Dec 13 14:06:59.477889 systemd[1]: Created slice user.slice - User and Session Slice.
Dec 13 14:06:59.477911 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 14:06:59.477933 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 14:06:59.477961 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Dec 13 14:06:59.477996 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Dec 13 14:06:59.478019 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Dec 13 14:06:59.478041 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 14:06:59.478062 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Dec 13 14:06:59.478096 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 14:06:59.478119 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Dec 13 14:06:59.478147 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Dec 13 14:06:59.478170 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Dec 13 14:06:59.478191 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Dec 13 14:06:59.479240 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 14:06:59.479296 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 14:06:59.479333 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 14:06:59.479396 systemd[1]: Reached target swap.target - Swaps.
Dec 13 14:06:59.479421 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Dec 13 14:06:59.479443 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Dec 13 14:06:59.479471 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 14:06:59.479505 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 14:06:59.479536 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 14:06:59.479565 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Dec 13 14:06:59.479587 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Dec 13 14:06:59.479614 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Dec 13 14:06:59.479636 systemd[1]: Mounting media.mount - External Media Directory...
Dec 13 14:06:59.479658 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 14:06:59.479679 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Dec 13 14:06:59.479714 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Dec 13 14:06:59.479754 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Dec 13 14:06:59.479778 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 13 14:06:59.479799 systemd[1]: Reached target machines.target - Containers.
Dec 13 14:06:59.479827 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Dec 13 14:06:59.479850 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 14:06:59.479872 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 14:06:59.479893 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Dec 13 14:06:59.479914 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 14:06:59.479947 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 14:06:59.479971 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 14:06:59.480002 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Dec 13 14:06:59.480025 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 14:06:59.480054 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 13 14:06:59.480086 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 13 14:06:59.480109 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Dec 13 14:06:59.480130 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 13 14:06:59.480151 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 13 14:06:59.480188 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 14:06:59.481282 kernel: loop: module loaded
Dec 13 14:06:59.481317 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 14:06:59.481340 kernel: fuse: init (API version 7.39)
Dec 13 14:06:59.481361 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 13 14:06:59.481393 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Dec 13 14:06:59.481416 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 14:06:59.481443 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 13 14:06:59.481466 systemd[1]: Stopped verity-setup.service.
Dec 13 14:06:59.481502 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 14:06:59.481525 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Dec 13 14:06:59.481547 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Dec 13 14:06:59.481576 systemd[1]: Mounted media.mount - External Media Directory.
Dec 13 14:06:59.481599 kernel: ACPI: bus type drm_connector registered
Dec 13 14:06:59.481632 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Dec 13 14:06:59.481656 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Dec 13 14:06:59.481683 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Dec 13 14:06:59.481717 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 14:06:59.481742 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 14:06:59.481763 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Dec 13 14:06:59.481799 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 14:06:59.481822 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 14:06:59.481856 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 14:06:59.481887 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 14:06:59.481921 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 14:06:59.481945 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 14:06:59.481968 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 13 14:06:59.482001 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Dec 13 14:06:59.482025 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 14:06:59.482085 systemd-journald[1144]: Collecting audit messages is disabled.
Dec 13 14:06:59.482151 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 14:06:59.482175 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 14:06:59.482197 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 13 14:06:59.482232 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Dec 13 14:06:59.482270 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Dec 13 14:06:59.482300 systemd-journald[1144]: Journal started
Dec 13 14:06:59.483263 systemd-journald[1144]: Runtime Journal (/run/log/journal/3234635479e54268a7e080b386056a28) is 4.7M, max 37.9M, 33.2M free.
Dec 13 14:06:59.002729 systemd[1]: Queued start job for default target multi-user.target.
Dec 13 14:06:59.022060 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Dec 13 14:06:59.022881 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 13 14:06:59.486290 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 14:06:59.501250 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 13 14:06:59.511347 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Dec 13 14:06:59.518204 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Dec 13 14:06:59.519741 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 13 14:06:59.519806 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 14:06:59.522244 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Dec 13 14:06:59.530497 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Dec 13 14:06:59.539497 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 13 14:06:59.541512 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 14:06:59.554544 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Dec 13 14:06:59.566312 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Dec 13 14:06:59.567563 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 14:06:59.570498 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Dec 13 14:06:59.571367 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 14:06:59.574435 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 14:06:59.583647 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Dec 13 14:06:59.586366 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 14:06:59.592080 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Dec 13 14:06:59.593391 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Dec 13 14:06:59.594530 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Dec 13 14:06:59.626920 systemd-journald[1144]: Time spent on flushing to /var/log/journal/3234635479e54268a7e080b386056a28 is 161.468ms for 1145 entries.
Dec 13 14:06:59.626920 systemd-journald[1144]: System Journal (/var/log/journal/3234635479e54268a7e080b386056a28) is 8.0M, max 584.8M, 576.8M free.
Dec 13 14:06:59.807543 systemd-journald[1144]: Received client request to flush runtime journal.
Dec 13 14:06:59.807626 kernel: loop0: detected capacity change from 0 to 8
Dec 13 14:06:59.807657 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 13 14:06:59.807721 kernel: loop1: detected capacity change from 0 to 138184
Dec 13 14:06:59.701540 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Dec 13 14:06:59.704572 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Dec 13 14:06:59.714516 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Dec 13 14:06:59.716281 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 14:06:59.762933 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 14:06:59.775651 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Dec 13 14:06:59.780607 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 13 14:06:59.782004 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Dec 13 14:06:59.798699 systemd-tmpfiles[1189]: ACLs are not supported, ignoring.
Dec 13 14:06:59.798722 systemd-tmpfiles[1189]: ACLs are not supported, ignoring.
Dec 13 14:06:59.819140 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Dec 13 14:06:59.825814 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 14:06:59.839278 kernel: loop2: detected capacity change from 0 to 141000
Dec 13 14:06:59.837600 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Dec 13 14:06:59.845728 udevadm[1202]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Dec 13 14:06:59.899335 kernel: loop3: detected capacity change from 0 to 210664
Dec 13 14:06:59.924267 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Dec 13 14:06:59.934721 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 14:06:59.983258 kernel: loop4: detected capacity change from 0 to 8
Dec 13 14:06:59.990472 kernel: loop5: detected capacity change from 0 to 138184
Dec 13 14:07:00.012768 systemd-tmpfiles[1213]: ACLs are not supported, ignoring.
Dec 13 14:07:00.015448 systemd-tmpfiles[1213]: ACLs are not supported, ignoring.
Dec 13 14:07:00.022265 kernel: loop6: detected capacity change from 0 to 141000
Dec 13 14:07:00.033367 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 14:07:00.063285 kernel: loop7: detected capacity change from 0 to 210664
Dec 13 14:07:00.083270 (sd-merge)[1216]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
Dec 13 14:07:00.084345 (sd-merge)[1216]: Merged extensions into '/usr'.
Dec 13 14:07:00.094368 systemd[1]: Reloading requested from client PID 1188 ('systemd-sysext') (unit systemd-sysext.service)...
Dec 13 14:07:00.094421 systemd[1]: Reloading...
Dec 13 14:07:00.283263 zram_generator::config[1242]: No configuration found.
Dec 13 14:07:00.487614 ldconfig[1183]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 13 14:07:00.531648 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 14:07:00.608959 systemd[1]: Reloading finished in 513 ms.
Dec 13 14:07:00.640752 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Dec 13 14:07:00.642610 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Dec 13 14:07:00.657631 systemd[1]: Starting ensure-sysext.service...
Dec 13 14:07:00.664257 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 14:07:00.688421 systemd[1]: Reloading requested from client PID 1300 ('systemctl') (unit ensure-sysext.service)...
Dec 13 14:07:00.688458 systemd[1]: Reloading...
Dec 13 14:07:00.744619 systemd-tmpfiles[1301]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 13 14:07:00.747752 systemd-tmpfiles[1301]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Dec 13 14:07:00.749387 systemd-tmpfiles[1301]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 13 14:07:00.749976 systemd-tmpfiles[1301]: ACLs are not supported, ignoring.
Dec 13 14:07:00.752496 systemd-tmpfiles[1301]: ACLs are not supported, ignoring.
Dec 13 14:07:00.758497 systemd-tmpfiles[1301]: Detected autofs mount point /boot during canonicalization of boot.
Dec 13 14:07:00.760470 systemd-tmpfiles[1301]: Skipping /boot
Dec 13 14:07:00.789252 zram_generator::config[1323]: No configuration found.
Dec 13 14:07:00.814880 systemd-tmpfiles[1301]: Detected autofs mount point /boot during canonicalization of boot.
Dec 13 14:07:00.814901 systemd-tmpfiles[1301]: Skipping /boot
Dec 13 14:07:01.006545 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 14:07:01.080054 systemd[1]: Reloading finished in 390 ms.
Dec 13 14:07:01.103810 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Dec 13 14:07:01.112001 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 14:07:01.130504 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 13 14:07:01.138474 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Dec 13 14:07:01.141289 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Dec 13 14:07:01.153544 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 14:07:01.157742 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 14:07:01.172454 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Dec 13 14:07:01.183007 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 14:07:01.183365 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 14:07:01.188858 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 14:07:01.203300 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 14:07:01.207406 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 14:07:01.209009 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 14:07:01.209200 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 14:07:01.217159 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 14:07:01.217621 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 14:07:01.217963 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 14:07:01.228821 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Dec 13 14:07:01.229612 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 14:07:01.236311 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Dec 13 14:07:01.240475 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 14:07:01.240870 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 14:07:01.248699 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 14:07:01.249744 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 14:07:01.249856 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 14:07:01.250538 systemd[1]: Finished ensure-sysext.service.
Dec 13 14:07:01.267478 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Dec 13 14:07:01.276710 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Dec 13 14:07:01.278177 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 14:07:01.278596 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 14:07:01.280684 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 14:07:01.288923 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Dec 13 14:07:01.322575 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 14:07:01.322977 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 14:07:01.326743 systemd-udevd[1395]: Using default interface naming scheme 'v255'.
Dec 13 14:07:01.327827 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 14:07:01.328524 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 14:07:01.330710 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 14:07:01.344458 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 14:07:01.344776 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 14:07:01.354158 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Dec 13 14:07:01.355620 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 14:07:01.364567 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Dec 13 14:07:01.371285 augenrules[1427]: No rules
Dec 13 14:07:01.371516 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 13 14:07:01.371838 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 13 14:07:01.375880 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 14:07:01.385480 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 14:07:01.417437 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Dec 13 14:07:01.521525 systemd-networkd[1437]: lo: Link UP
Dec 13 14:07:01.522184 systemd-networkd[1437]: lo: Gained carrier
Dec 13 14:07:01.536161 systemd-networkd[1437]: Enumeration completed
Dec 13 14:07:01.536821 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 14:07:01.547485 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Dec 13 14:07:01.577246 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1454)
Dec 13 14:07:01.589275 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1454)
Dec 13 14:07:01.623875 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Dec 13 14:07:01.639924 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Dec 13 14:07:01.640860 systemd[1]: Reached target time-set.target - System Time Set.
Dec 13 14:07:01.650078 systemd-resolved[1389]: Positive Trust Anchors:
Dec 13 14:07:01.650606 systemd-resolved[1389]: .
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 14:07:01.650657 systemd-resolved[1389]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 14:07:01.658234 systemd-resolved[1389]: Using system hostname 'srv-p3tlm.gb1.brightbox.com'.
Dec 13 14:07:01.661305 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 14:07:01.662453 systemd[1]: Reached target network.target - Network.
Dec 13 14:07:01.663099 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 14:07:01.675251 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1446)
Dec 13 14:07:01.759275 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Dec 13 14:07:01.768537 systemd-networkd[1437]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 14:07:01.768743 systemd-networkd[1437]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 14:07:01.772566 systemd-networkd[1437]: eth0: Link UP
Dec 13 14:07:01.772579 systemd-networkd[1437]: eth0: Gained carrier
Dec 13 14:07:01.772606 systemd-networkd[1437]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 14:07:01.775263 kernel: ACPI: button: Power Button [PWRF]
Dec 13 14:07:01.779306 kernel: mousedev: PS/2 mouse device common for all mice
Dec 13 14:07:01.810370 systemd-networkd[1437]: eth0: DHCPv4 address 10.244.26.14/30, gateway 10.244.26.13 acquired from 10.244.26.13
Dec 13 14:07:01.812853 systemd-timesyncd[1411]: Network configuration changed, trying to establish connection.
Dec 13 14:07:01.840261 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Dec 13 14:07:01.846543 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Dec 13 14:07:01.846853 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Dec 13 14:07:01.851363 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 13 14:07:01.861254 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Dec 13 14:07:01.864392 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Dec 13 14:07:01.894301 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Dec 13 14:07:01.959379 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 14:07:02.096049 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Dec 13 14:07:02.106703 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Dec 13 14:07:02.173779 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 14:07:02.196320 lvm[1476]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 14:07:02.210776 systemd-timesyncd[1411]: Contacted time server 162.159.200.123:123 (0.flatcar.pool.ntp.org).
Dec 13 14:07:02.210931 systemd-timesyncd[1411]: Initial clock synchronization to Fri 2024-12-13 14:07:02.505940 UTC.
Dec 13 14:07:02.234402 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Dec 13 14:07:02.242114 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 14:07:02.242950 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 14:07:02.243836 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Dec 13 14:07:02.244713 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Dec 13 14:07:02.246042 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Dec 13 14:07:02.246919 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Dec 13 14:07:02.247738 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Dec 13 14:07:02.248544 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 13 14:07:02.248604 systemd[1]: Reached target paths.target - Path Units.
Dec 13 14:07:02.249315 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 14:07:02.252297 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Dec 13 14:07:02.255324 systemd[1]: Starting docker.socket - Docker Socket for the API...
Dec 13 14:07:02.261412 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Dec 13 14:07:02.264400 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Dec 13 14:07:02.265890 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Dec 13 14:07:02.266798 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 14:07:02.267580 systemd[1]: Reached target basic.target - Basic System.
Dec 13 14:07:02.268441 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Dec 13 14:07:02.268491 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Dec 13 14:07:02.275503 systemd[1]: Starting containerd.service - containerd container runtime...
Dec 13 14:07:02.281727 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Dec 13 14:07:02.286250 lvm[1481]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 14:07:02.289498 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Dec 13 14:07:02.292020 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Dec 13 14:07:02.301509 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Dec 13 14:07:02.302333 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Dec 13 14:07:02.305489 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Dec 13 14:07:02.312109 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Dec 13 14:07:02.315674 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Dec 13 14:07:02.322521 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Dec 13 14:07:02.336474 systemd[1]: Starting systemd-logind.service - User Login Management...
Dec 13 14:07:02.338146 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 13 14:07:02.338973 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 13 14:07:02.341131 jq[1485]: false
Dec 13 14:07:02.344305 systemd[1]: Starting update-engine.service - Update Engine...
Dec 13 14:07:02.355449 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Dec 13 14:07:02.369287 jq[1495]: true
Dec 13 14:07:02.360288 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Dec 13 14:07:02.370942 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 13 14:07:02.372352 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Dec 13 14:07:02.400776 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 13 14:07:02.402474 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Dec 13 14:07:02.433825 jq[1497]: true
Dec 13 14:07:02.454338 update_engine[1494]: I20241213 14:07:02.452329  1494 main.cc:92] Flatcar Update Engine starting
Dec 13 14:07:02.455686 (ntainerd)[1509]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Dec 13 14:07:02.479426 systemd[1]: motdgen.service: Deactivated successfully.
Dec 13 14:07:02.482190 extend-filesystems[1486]: Found loop4
Dec 13 14:07:02.482190 extend-filesystems[1486]: Found loop5
Dec 13 14:07:02.482190 extend-filesystems[1486]: Found loop6
Dec 13 14:07:02.482190 extend-filesystems[1486]: Found loop7
Dec 13 14:07:02.482190 extend-filesystems[1486]: Found vda
Dec 13 14:07:02.482190 extend-filesystems[1486]: Found vda1
Dec 13 14:07:02.482190 extend-filesystems[1486]: Found vda2
Dec 13 14:07:02.482190 extend-filesystems[1486]: Found vda3
Dec 13 14:07:02.482190 extend-filesystems[1486]: Found usr
Dec 13 14:07:02.482190 extend-filesystems[1486]: Found vda4
Dec 13 14:07:02.482190 extend-filesystems[1486]: Found vda6
Dec 13 14:07:02.482190 extend-filesystems[1486]: Found vda7
Dec 13 14:07:02.482190 extend-filesystems[1486]: Found vda9
Dec 13 14:07:02.482190 extend-filesystems[1486]: Checking size of /dev/vda9
Dec 13 14:07:02.479761 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Dec 13 14:07:02.509009 dbus-daemon[1484]: [system] SELinux support is enabled
Dec 13 14:07:02.515892 tar[1504]: linux-amd64/helm
Dec 13 14:07:02.509364 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Dec 13 14:07:02.520679 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 13 14:07:02.520722 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Dec 13 14:07:02.523384 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 13 14:07:02.523417 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Dec 13 14:07:02.535790 dbus-daemon[1484]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1437 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Dec 13 14:07:02.541924 dbus-daemon[1484]: [system] Successfully activated service 'org.freedesktop.systemd1'
Dec 13 14:07:02.547476 update_engine[1494]: I20241213 14:07:02.547230 1494 update_check_scheduler.cc:74] Next update check in 2m40s
Dec 13 14:07:02.548031 systemd[1]: Started update-engine.service - Update Engine.
Dec 13 14:07:02.559495 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Dec 13 14:07:02.569510 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Dec 13 14:07:02.576662 extend-filesystems[1486]: Resized partition /dev/vda9
Dec 13 14:07:02.586041 extend-filesystems[1534]: resize2fs 1.47.1 (20-May-2024)
Dec 13 14:07:02.606597 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks
Dec 13 14:07:02.606736 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1448)
Dec 13 14:07:02.724694 systemd-logind[1493]: Watching system buttons on /dev/input/event2 (Power Button)
Dec 13 14:07:02.724747 systemd-logind[1493]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Dec 13 14:07:02.727919 systemd-logind[1493]: New seat seat0.
Dec 13 14:07:02.730917 systemd[1]: Started systemd-logind.service - User Login Management.
Dec 13 14:07:02.763556 bash[1541]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 14:07:02.763930 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Dec 13 14:07:02.776078 systemd[1]: Starting sshkeys.service...
Dec 13 14:07:02.884855 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Dec 13 14:07:02.893486 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Dec 13 14:07:02.918299 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Dec 13 14:07:02.956724 extend-filesystems[1534]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Dec 13 14:07:02.956724 extend-filesystems[1534]: old_desc_blocks = 1, new_desc_blocks = 8
Dec 13 14:07:02.956724 extend-filesystems[1534]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Dec 13 14:07:02.950815 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 13 14:07:02.966447 extend-filesystems[1486]: Resized filesystem in /dev/vda9
Dec 13 14:07:02.953356 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Dec 13 14:07:02.992128 dbus-daemon[1484]: [system] Successfully activated service 'org.freedesktop.hostname1'
Dec 13 14:07:02.992377 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Dec 13 14:07:02.998342 dbus-daemon[1484]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1529 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Dec 13 14:07:03.011744 systemd[1]: Starting polkit.service - Authorization Manager...
Dec 13 14:07:03.060055 polkitd[1557]: Started polkitd version 121
Dec 13 14:07:03.075072 polkitd[1557]: Loading rules from directory /etc/polkit-1/rules.d
Dec 13 14:07:03.076579 polkitd[1557]: Loading rules from directory /usr/share/polkit-1/rules.d
Dec 13 14:07:03.082002 containerd[1509]: time="2024-12-13T14:07:03.078689456Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Dec 13 14:07:03.081110 polkitd[1557]: Finished loading, compiling and executing 2 rules
Dec 13 14:07:03.084015 dbus-daemon[1484]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Dec 13 14:07:03.084372 systemd[1]: Started polkit.service - Authorization Manager.
Dec 13 14:07:03.085620 polkitd[1557]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Dec 13 14:07:03.120850 systemd-hostnamed[1529]: Hostname set to (static)
Dec 13 14:07:03.160126 containerd[1509]: time="2024-12-13T14:07:03.160015710Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Dec 13 14:07:03.166497 locksmithd[1530]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 13 14:07:03.172531 containerd[1509]: time="2024-12-13T14:07:03.172464177Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Dec 13 14:07:03.172531 containerd[1509]: time="2024-12-13T14:07:03.172529543Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Dec 13 14:07:03.172660 containerd[1509]: time="2024-12-13T14:07:03.172560020Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Dec 13 14:07:03.172888 containerd[1509]: time="2024-12-13T14:07:03.172855599Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Dec 13 14:07:03.172955 containerd[1509]: time="2024-12-13T14:07:03.172899064Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Dec 13 14:07:03.173037 containerd[1509]: time="2024-12-13T14:07:03.173006449Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 14:07:03.173101 containerd[1509]: time="2024-12-13T14:07:03.173038866Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Dec 13 14:07:03.173750 containerd[1509]: time="2024-12-13T14:07:03.173318543Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 14:07:03.173750 containerd[1509]: time="2024-12-13T14:07:03.173351457Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Dec 13 14:07:03.173750 containerd[1509]: time="2024-12-13T14:07:03.173375107Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 14:07:03.173750 containerd[1509]: time="2024-12-13T14:07:03.173392335Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Dec 13 14:07:03.173750 containerd[1509]: time="2024-12-13T14:07:03.173537155Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Dec 13 14:07:03.173983 containerd[1509]: time="2024-12-13T14:07:03.173940971Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Dec 13 14:07:03.174306 containerd[1509]: time="2024-12-13T14:07:03.174084356Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 14:07:03.174306 containerd[1509]: time="2024-12-13T14:07:03.174116793Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Dec 13 14:07:03.176503 containerd[1509]: time="2024-12-13T14:07:03.176363033Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Dec 13 14:07:03.176503 containerd[1509]: time="2024-12-13T14:07:03.176458375Z" level=info msg="metadata content store policy set" policy=shared
Dec 13 14:07:03.182593 containerd[1509]: time="2024-12-13T14:07:03.182450635Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Dec 13 14:07:03.182844 containerd[1509]: time="2024-12-13T14:07:03.182598581Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Dec 13 14:07:03.182844 containerd[1509]: time="2024-12-13T14:07:03.182635164Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Dec 13 14:07:03.182844 containerd[1509]: time="2024-12-13T14:07:03.182671416Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Dec 13 14:07:03.182844 containerd[1509]: time="2024-12-13T14:07:03.182694508Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Dec 13 14:07:03.183020 containerd[1509]: time="2024-12-13T14:07:03.182964439Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Dec 13 14:07:03.185733 containerd[1509]: time="2024-12-13T14:07:03.185373715Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Dec 13 14:07:03.185733 containerd[1509]: time="2024-12-13T14:07:03.185576125Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Dec 13 14:07:03.185733 containerd[1509]: time="2024-12-13T14:07:03.185603100Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Dec 13 14:07:03.185733 containerd[1509]: time="2024-12-13T14:07:03.185626117Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Dec 13 14:07:03.185733 containerd[1509]: time="2024-12-13T14:07:03.185647821Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Dec 13 14:07:03.185733 containerd[1509]: time="2024-12-13T14:07:03.185667698Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Dec 13 14:07:03.185733 containerd[1509]: time="2024-12-13T14:07:03.185686554Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Dec 13 14:07:03.185733 containerd[1509]: time="2024-12-13T14:07:03.185707727Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Dec 13 14:07:03.185733 containerd[1509]: time="2024-12-13T14:07:03.185732013Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Dec 13 14:07:03.186064 containerd[1509]: time="2024-12-13T14:07:03.185770617Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Dec 13 14:07:03.186064 containerd[1509]: time="2024-12-13T14:07:03.185796589Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Dec 13 14:07:03.186064 containerd[1509]: time="2024-12-13T14:07:03.185815480Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Dec 13 14:07:03.186064 containerd[1509]: time="2024-12-13T14:07:03.185850967Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Dec 13 14:07:03.186064 containerd[1509]: time="2024-12-13T14:07:03.185877237Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Dec 13 14:07:03.186064 containerd[1509]: time="2024-12-13T14:07:03.185898067Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Dec 13 14:07:03.186064 containerd[1509]: time="2024-12-13T14:07:03.185919054Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Dec 13 14:07:03.186064 containerd[1509]: time="2024-12-13T14:07:03.185938141Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Dec 13 14:07:03.186064 containerd[1509]: time="2024-12-13T14:07:03.185958038Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Dec 13 14:07:03.186064 containerd[1509]: time="2024-12-13T14:07:03.185976727Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Dec 13 14:07:03.186064 containerd[1509]: time="2024-12-13T14:07:03.185998333Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Dec 13 14:07:03.186064 containerd[1509]: time="2024-12-13T14:07:03.186030723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Dec 13 14:07:03.186064 containerd[1509]: time="2024-12-13T14:07:03.186055434Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Dec 13 14:07:03.186683 containerd[1509]: time="2024-12-13T14:07:03.186080084Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Dec 13 14:07:03.186683 containerd[1509]: time="2024-12-13T14:07:03.186101081Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Dec 13 14:07:03.186683 containerd[1509]: time="2024-12-13T14:07:03.186120700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Dec 13 14:07:03.186683 containerd[1509]: time="2024-12-13T14:07:03.186150059Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Dec 13 14:07:03.186683 containerd[1509]: time="2024-12-13T14:07:03.186192609Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Dec 13 14:07:03.186683 containerd[1509]: time="2024-12-13T14:07:03.186230051Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Dec 13 14:07:03.189857 containerd[1509]: time="2024-12-13T14:07:03.186249949Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Dec 13 14:07:03.189857 containerd[1509]: time="2024-12-13T14:07:03.188437436Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Dec 13 14:07:03.189857 containerd[1509]: time="2024-12-13T14:07:03.188471240Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Dec 13 14:07:03.189857 containerd[1509]: time="2024-12-13T14:07:03.188492982Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Dec 13 14:07:03.189857 containerd[1509]: time="2024-12-13T14:07:03.188514695Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Dec 13 14:07:03.189857 containerd[1509]: time="2024-12-13T14:07:03.188531609Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Dec 13 14:07:03.189857 containerd[1509]: time="2024-12-13T14:07:03.188557596Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Dec 13 14:07:03.189857 containerd[1509]: time="2024-12-13T14:07:03.188586154Z" level=info msg="NRI interface is disabled by configuration."
Dec 13 14:07:03.189857 containerd[1509]: time="2024-12-13T14:07:03.188607398Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Dec 13 14:07:03.190368 containerd[1509]: time="2024-12-13T14:07:03.189069368Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Dec 13 14:07:03.190368 containerd[1509]: time="2024-12-13T14:07:03.189168613Z" level=info msg="Connect containerd service"
Dec 13 14:07:03.190368 containerd[1509]: time="2024-12-13T14:07:03.189267990Z" level=info msg="using legacy CRI server"
Dec 13 14:07:03.190368 containerd[1509]: time="2024-12-13T14:07:03.189286445Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Dec 13 14:07:03.190368 containerd[1509]: time="2024-12-13T14:07:03.189489707Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Dec 13 14:07:03.198044 containerd[1509]: time="2024-12-13T14:07:03.194603882Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 14:07:03.198044 containerd[1509]: time="2024-12-13T14:07:03.195060875Z" level=info msg="Start subscribing containerd event"
Dec 13 14:07:03.198044 containerd[1509]: time="2024-12-13T14:07:03.195157307Z" level=info msg="Start recovering state"
Dec 13 14:07:03.198044 containerd[1509]: time="2024-12-13T14:07:03.195354858Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 13 14:07:03.198044 containerd[1509]: time="2024-12-13T14:07:03.195359974Z" level=info msg="Start event monitor"
Dec 13 14:07:03.198044 containerd[1509]: time="2024-12-13T14:07:03.195447932Z" level=info msg="Start snapshots syncer"
Dec 13 14:07:03.198044 containerd[1509]: time="2024-12-13T14:07:03.195469671Z" level=info msg="Start cni network conf syncer for default"
Dec 13 14:07:03.198044 containerd[1509]: time="2024-12-13T14:07:03.195482890Z" level=info msg="Start streaming server"
Dec 13 14:07:03.198044 containerd[1509]: time="2024-12-13T14:07:03.195494090Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 13 14:07:03.195788 systemd[1]: Started containerd.service - containerd container runtime.
Dec 13 14:07:03.204241 containerd[1509]: time="2024-12-13T14:07:03.202919568Z" level=info msg="containerd successfully booted in 0.130716s"
Dec 13 14:07:03.329033 sshd_keygen[1517]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Dec 13 14:07:03.337479 systemd-networkd[1437]: eth0: Gained IPv6LL
Dec 13 14:07:03.347373 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Dec 13 14:07:03.350327 systemd[1]: Reached target network-online.target - Network is Online.
Dec 13 14:07:03.360751 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 14:07:03.368830 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Dec 13 14:07:03.395512 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Dec 13 14:07:03.408541 systemd[1]: Starting issuegen.service - Generate /run/issue...
Dec 13 14:07:03.420683 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Dec 13 14:07:03.454119 systemd[1]: issuegen.service: Deactivated successfully.
Dec 13 14:07:03.454994 systemd[1]: Finished issuegen.service - Generate /run/issue.
Dec 13 14:07:03.470463 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Dec 13 14:07:03.492197 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Dec 13 14:07:03.503071 systemd[1]: Started getty@tty1.service - Getty on tty1.
Dec 13 14:07:03.512853 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Dec 13 14:07:03.515093 systemd[1]: Reached target getty.target - Login Prompts.
Dec 13 14:07:03.620927 tar[1504]: linux-amd64/LICENSE
Dec 13 14:07:03.620927 tar[1504]: linux-amd64/README.md
Dec 13 14:07:03.639502 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Dec 13 14:07:04.362617 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 14:07:04.387041 (kubelet)[1609]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 14:07:04.850016 systemd-networkd[1437]: eth0: Ignoring DHCPv6 address 2a02:1348:17d:683:24:19ff:fef4:1a0e/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17d:683:24:19ff:fef4:1a0e/64 assigned by NDisc.
Dec 13 14:07:04.850028 systemd-networkd[1437]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Dec 13 14:07:05.110614 kubelet[1609]: E1213 14:07:05.110384 1609 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 14:07:05.113546 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 14:07:05.113850 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 14:07:05.115268 systemd[1]: kubelet.service: Consumed 1.110s CPU time.
Dec 13 14:07:07.985908 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Dec 13 14:07:07.993880 systemd[1]: Started sshd@0-10.244.26.14:22-139.178.68.195:54556.service - OpenSSH per-connection server daemon (139.178.68.195:54556).
Dec 13 14:07:08.572670 agetty[1598]: failed to open credentials directory
Dec 13 14:07:08.573445 agetty[1599]: failed to open credentials directory
Dec 13 14:07:08.605996 login[1598]: pam_lastlog(login:session): file /var/log/lastlog is locked/read, retrying
Dec 13 14:07:08.608954 login[1599]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Dec 13 14:07:08.628387 systemd-logind[1493]: New session 1 of user core.
Dec 13 14:07:08.633481 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Dec 13 14:07:08.640896 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Dec 13 14:07:08.666791 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Dec 13 14:07:08.682047 systemd[1]: Starting user@500.service - User Manager for UID 500...
Dec 13 14:07:08.688352 (systemd)[1629]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:07:08.840507 systemd[1629]: Queued start job for default target default.target.
Dec 13 14:07:08.852713 systemd[1629]: Created slice app.slice - User Application Slice.
Dec 13 14:07:08.852988 systemd[1629]: Reached target paths.target - Paths.
Dec 13 14:07:08.853137 systemd[1629]: Reached target timers.target - Timers.
Dec 13 14:07:08.855585 systemd[1629]: Starting dbus.socket - D-Bus User Message Bus Socket...
Dec 13 14:07:08.874562 systemd[1629]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Dec 13 14:07:08.874803 systemd[1629]: Reached target sockets.target - Sockets.
Dec 13 14:07:08.874832 systemd[1629]: Reached target basic.target - Basic System.
Dec 13 14:07:08.874954 systemd[1629]: Reached target default.target - Main User Target.
Dec 13 14:07:08.875019 systemd[1]: Started user@500.service - User Manager for UID 500.
Dec 13 14:07:08.875028 systemd[1629]: Startup finished in 175ms.
Dec 13 14:07:08.885672 systemd[1]: Started session-1.scope - Session 1 of User core.
Dec 13 14:07:08.932418 sshd[1621]: Accepted publickey for core from 139.178.68.195 port 54556 ssh2: RSA SHA256:gikLJyEmpnCHkoekB3AFhFPt08JJAv/T+84MF6KEB0A
Dec 13 14:07:08.934655 sshd-session[1621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 14:07:08.944167 systemd-logind[1493]: New session 3 of user core.
Dec 13 14:07:08.950791 systemd[1]: Started session-3.scope - Session 3 of User core.
Dec 13 14:07:09.470972 coreos-metadata[1483]: Dec 13 14:07:09.470 WARN failed to locate config-drive, using the metadata service API instead
Dec 13 14:07:09.497332 coreos-metadata[1483]: Dec 13 14:07:09.497 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1
Dec 13 14:07:09.503854 coreos-metadata[1483]: Dec 13 14:07:09.503 INFO Fetch failed with 404: resource not found
Dec 13 14:07:09.503854 coreos-metadata[1483]: Dec 13 14:07:09.503 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Dec 13 14:07:09.505209 coreos-metadata[1483]: Dec 13 14:07:09.505 INFO Fetch successful
Dec 13 14:07:09.505425 coreos-metadata[1483]: Dec 13 14:07:09.505 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1
Dec 13 14:07:09.520456 coreos-metadata[1483]: Dec 13 14:07:09.520 INFO Fetch successful
Dec 13 14:07:09.520685 coreos-metadata[1483]: Dec 13 14:07:09.520 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1
Dec 13 14:07:09.541189 coreos-metadata[1483]: Dec 13 14:07:09.541 INFO Fetch successful
Dec 13 14:07:09.541473 coreos-metadata[1483]: Dec 13 14:07:09.541 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1
Dec 13 14:07:09.562204 coreos-metadata[1483]: Dec 13 14:07:09.562 INFO Fetch successful
Dec 13 14:07:09.562425 coreos-metadata[1483]: Dec 13 14:07:09.562 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1
Dec 13 14:07:09.583805 coreos-metadata[1483]: Dec 13 14:07:09.583 INFO Fetch successful
Dec 13 14:07:09.607395 login[1598]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Dec 13 14:07:09.618744 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Dec 13 14:07:09.622675 systemd-logind[1493]: New session 2 of user core.
Dec 13 14:07:09.635715 systemd[1]: Started session-2.scope - Session 2 of User core.
Dec 13 14:07:09.636314 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Dec 13 14:07:09.711647 systemd[1]: Started sshd@1-10.244.26.14:22-139.178.68.195:54570.service - OpenSSH per-connection server daemon (139.178.68.195:54570).
Dec 13 14:07:10.048510 coreos-metadata[1551]: Dec 13 14:07:10.048 WARN failed to locate config-drive, using the metadata service API instead
Dec 13 14:07:10.074499 coreos-metadata[1551]: Dec 13 14:07:10.074 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
Dec 13 14:07:10.108869 coreos-metadata[1551]: Dec 13 14:07:10.108 INFO Fetch successful
Dec 13 14:07:10.109222 coreos-metadata[1551]: Dec 13 14:07:10.109 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
Dec 13 14:07:10.147914 coreos-metadata[1551]: Dec 13 14:07:10.147 INFO Fetch successful
Dec 13 14:07:10.152777 unknown[1551]: wrote ssh authorized keys file for user: core
Dec 13 14:07:10.176722 update-ssh-keys[1671]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 14:07:10.178075 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Dec 13 14:07:10.182202 systemd[1]: Finished sshkeys.service.
Dec 13 14:07:10.183595 systemd[1]: Reached target multi-user.target - Multi-User System.
Dec 13 14:07:10.185345 systemd[1]: Startup finished in 1.478s (kernel) + 15.382s (initrd) + 12.091s (userspace) = 28.952s.
Dec 13 14:07:10.629347 sshd[1667]: Accepted publickey for core from 139.178.68.195 port 54570 ssh2: RSA SHA256:gikLJyEmpnCHkoekB3AFhFPt08JJAv/T+84MF6KEB0A
Dec 13 14:07:10.631396 sshd-session[1667]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 14:07:10.638998 systemd-logind[1493]: New session 4 of user core.
Dec 13 14:07:10.656651 systemd[1]: Started session-4.scope - Session 4 of User core.
Dec 13 14:07:11.254383 sshd[1675]: Connection closed by 139.178.68.195 port 54570
Dec 13 14:07:11.255435 sshd-session[1667]: pam_unix(sshd:session): session closed for user core
Dec 13 14:07:11.260811 systemd[1]: sshd@1-10.244.26.14:22-139.178.68.195:54570.service: Deactivated successfully.
Dec 13 14:07:11.262906 systemd[1]: session-4.scope: Deactivated successfully.
Dec 13 14:07:11.263818 systemd-logind[1493]: Session 4 logged out. Waiting for processes to exit.
Dec 13 14:07:11.265530 systemd-logind[1493]: Removed session 4.
Dec 13 14:07:11.419912 systemd[1]: Started sshd@2-10.244.26.14:22-139.178.68.195:54572.service - OpenSSH per-connection server daemon (139.178.68.195:54572).
Dec 13 14:07:12.316204 sshd[1680]: Accepted publickey for core from 139.178.68.195 port 54572 ssh2: RSA SHA256:gikLJyEmpnCHkoekB3AFhFPt08JJAv/T+84MF6KEB0A
Dec 13 14:07:12.318276 sshd-session[1680]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 14:07:12.324538 systemd-logind[1493]: New session 5 of user core.
Dec 13 14:07:12.336623 systemd[1]: Started session-5.scope - Session 5 of User core.
Dec 13 14:07:12.932905 sshd[1682]: Connection closed by 139.178.68.195 port 54572
Dec 13 14:07:12.932717 sshd-session[1680]: pam_unix(sshd:session): session closed for user core
Dec 13 14:07:12.938645 systemd-logind[1493]: Session 5 logged out. Waiting for processes to exit.
Dec 13 14:07:12.939746 systemd[1]: sshd@2-10.244.26.14:22-139.178.68.195:54572.service: Deactivated successfully. Dec 13 14:07:12.942216 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 14:07:12.943720 systemd-logind[1493]: Removed session 5. Dec 13 14:07:13.095942 systemd[1]: Started sshd@3-10.244.26.14:22-139.178.68.195:54584.service - OpenSSH per-connection server daemon (139.178.68.195:54584). Dec 13 14:07:13.996998 sshd[1687]: Accepted publickey for core from 139.178.68.195 port 54584 ssh2: RSA SHA256:gikLJyEmpnCHkoekB3AFhFPt08JJAv/T+84MF6KEB0A Dec 13 14:07:13.999342 sshd-session[1687]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 14:07:14.007049 systemd-logind[1493]: New session 6 of user core. Dec 13 14:07:14.013491 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 13 14:07:14.621669 sshd[1689]: Connection closed by 139.178.68.195 port 54584 Dec 13 14:07:14.621477 sshd-session[1687]: pam_unix(sshd:session): session closed for user core Dec 13 14:07:14.626684 systemd[1]: sshd@3-10.244.26.14:22-139.178.68.195:54584.service: Deactivated successfully. Dec 13 14:07:14.629196 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 14:07:14.630901 systemd-logind[1493]: Session 6 logged out. Waiting for processes to exit. Dec 13 14:07:14.632504 systemd-logind[1493]: Removed session 6. Dec 13 14:07:14.784593 systemd[1]: Started sshd@4-10.244.26.14:22-139.178.68.195:54596.service - OpenSSH per-connection server daemon (139.178.68.195:54596). Dec 13 14:07:15.364591 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 14:07:15.376522 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 14:07:15.536616 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 13 14:07:15.545678 (kubelet)[1704]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 14:07:15.621688 kubelet[1704]: E1213 14:07:15.621381 1704 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:07:15.627090 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:07:15.627530 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:07:15.699482 sshd[1694]: Accepted publickey for core from 139.178.68.195 port 54596 ssh2: RSA SHA256:gikLJyEmpnCHkoekB3AFhFPt08JJAv/T+84MF6KEB0A Dec 13 14:07:15.701749 sshd-session[1694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 14:07:15.709654 systemd-logind[1493]: New session 7 of user core. Dec 13 14:07:15.720548 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 13 14:07:16.195537 sudo[1713]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 13 14:07:16.196726 sudo[1713]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 14:07:16.212040 sudo[1713]: pam_unix(sudo:session): session closed for user root Dec 13 14:07:16.358257 sshd[1712]: Connection closed by 139.178.68.195 port 54596 Dec 13 14:07:16.357582 sshd-session[1694]: pam_unix(sshd:session): session closed for user core Dec 13 14:07:16.363001 systemd[1]: sshd@4-10.244.26.14:22-139.178.68.195:54596.service: Deactivated successfully. Dec 13 14:07:16.365746 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 14:07:16.367802 systemd-logind[1493]: Session 7 logged out. Waiting for processes to exit. 
Dec 13 14:07:16.369929 systemd-logind[1493]: Removed session 7. Dec 13 14:07:16.523666 systemd[1]: Started sshd@5-10.244.26.14:22-139.178.68.195:38500.service - OpenSSH per-connection server daemon (139.178.68.195:38500). Dec 13 14:07:17.425694 sshd[1718]: Accepted publickey for core from 139.178.68.195 port 38500 ssh2: RSA SHA256:gikLJyEmpnCHkoekB3AFhFPt08JJAv/T+84MF6KEB0A Dec 13 14:07:17.427979 sshd-session[1718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 14:07:17.435460 systemd-logind[1493]: New session 8 of user core. Dec 13 14:07:17.442473 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 13 14:07:17.907040 sudo[1722]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 13 14:07:17.908301 sudo[1722]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 14:07:17.914672 sudo[1722]: pam_unix(sudo:session): session closed for user root Dec 13 14:07:17.924061 sudo[1721]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 13 14:07:17.924615 sudo[1721]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 14:07:17.945962 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 13 14:07:17.992965 augenrules[1744]: No rules Dec 13 14:07:17.993952 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 14:07:17.994277 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 13 14:07:17.996718 sudo[1721]: pam_unix(sudo:session): session closed for user root Dec 13 14:07:18.141319 sshd[1720]: Connection closed by 139.178.68.195 port 38500 Dec 13 14:07:18.142327 sshd-session[1718]: pam_unix(sshd:session): session closed for user core Dec 13 14:07:18.147013 systemd-logind[1493]: Session 8 logged out. Waiting for processes to exit. 
Dec 13 14:07:18.148345 systemd[1]: sshd@5-10.244.26.14:22-139.178.68.195:38500.service: Deactivated successfully. Dec 13 14:07:18.150775 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 14:07:18.152903 systemd-logind[1493]: Removed session 8. Dec 13 14:07:18.312686 systemd[1]: Started sshd@6-10.244.26.14:22-139.178.68.195:38514.service - OpenSSH per-connection server daemon (139.178.68.195:38514). Dec 13 14:07:19.209431 sshd[1752]: Accepted publickey for core from 139.178.68.195 port 38514 ssh2: RSA SHA256:gikLJyEmpnCHkoekB3AFhFPt08JJAv/T+84MF6KEB0A Dec 13 14:07:19.212441 sshd-session[1752]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 14:07:19.218842 systemd-logind[1493]: New session 9 of user core. Dec 13 14:07:19.227524 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 13 14:07:19.689678 sudo[1755]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 14:07:19.690184 sudo[1755]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 14:07:20.154962 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 13 14:07:20.167908 (dockerd)[1773]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 13 14:07:20.575115 dockerd[1773]: time="2024-12-13T14:07:20.574973004Z" level=info msg="Starting up" Dec 13 14:07:20.696331 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1129916875-merged.mount: Deactivated successfully. Dec 13 14:07:20.722909 dockerd[1773]: time="2024-12-13T14:07:20.721965588Z" level=info msg="Loading containers: start." Dec 13 14:07:20.967377 kernel: Initializing XFRM netlink socket Dec 13 14:07:21.090434 systemd-networkd[1437]: docker0: Link UP Dec 13 14:07:21.136784 dockerd[1773]: time="2024-12-13T14:07:21.136586012Z" level=info msg="Loading containers: done." 
Dec 13 14:07:21.163624 dockerd[1773]: time="2024-12-13T14:07:21.162840396Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 14:07:21.163624 dockerd[1773]: time="2024-12-13T14:07:21.163018391Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Dec 13 14:07:21.163624 dockerd[1773]: time="2024-12-13T14:07:21.163203751Z" level=info msg="Daemon has completed initialization" Dec 13 14:07:21.209540 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 13 14:07:21.210407 dockerd[1773]: time="2024-12-13T14:07:21.210296196Z" level=info msg="API listen on /run/docker.sock" Dec 13 14:07:22.655117 containerd[1509]: time="2024-12-13T14:07:22.654990036Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\"" Dec 13 14:07:23.445941 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1551565777.mount: Deactivated successfully. Dec 13 14:07:25.759521 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 14:07:25.771780 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 14:07:25.956595 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 13 14:07:25.967766 (kubelet)[2033]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 14:07:26.017415 containerd[1509]: time="2024-12-13T14:07:26.016000418Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 14:07:26.018876 containerd[1509]: time="2024-12-13T14:07:26.018802083Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.8: active requests=0, bytes read=32675650"
Dec 13 14:07:26.021459 containerd[1509]: time="2024-12-13T14:07:26.021384583Z" level=info msg="ImageCreate event name:\"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 14:07:26.029164 containerd[1509]: time="2024-12-13T14:07:26.029021324Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 14:07:26.033614 containerd[1509]: time="2024-12-13T14:07:26.032811214Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.8\" with image id \"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\", size \"32672442\" in 3.377710446s"
Dec 13 14:07:26.033614 containerd[1509]: time="2024-12-13T14:07:26.032918911Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\" returns image reference \"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\""
Dec 13 14:07:26.054850 kubelet[2033]: E1213 14:07:26.054750 2033 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 14:07:26.059105 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 14:07:26.059448 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 14:07:26.082839 containerd[1509]: time="2024-12-13T14:07:26.082665268Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\""
Dec 13 14:07:28.891847 containerd[1509]: time="2024-12-13T14:07:28.890051605Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 14:07:28.894061 containerd[1509]: time="2024-12-13T14:07:28.893992909Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.8: active requests=0, bytes read=29606417"
Dec 13 14:07:28.895387 containerd[1509]: time="2024-12-13T14:07:28.895342066Z" level=info msg="ImageCreate event name:\"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 14:07:28.899478 containerd[1509]: time="2024-12-13T14:07:28.899430748Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 14:07:28.901212 containerd[1509]: time="2024-12-13T14:07:28.901172383Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.8\" with image id \"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\", size \"31051521\" in 2.81808882s"
Dec 13 14:07:28.901378 containerd[1509]: time="2024-12-13T14:07:28.901348022Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\" returns image reference \"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\""
Dec 13 14:07:28.941173 containerd[1509]: time="2024-12-13T14:07:28.940341734Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\""
Dec 13 14:07:30.719191 containerd[1509]: time="2024-12-13T14:07:30.719038473Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 14:07:30.721314 containerd[1509]: time="2024-12-13T14:07:30.721238767Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.8: active requests=0, bytes read=17783043"
Dec 13 14:07:30.724256 containerd[1509]: time="2024-12-13T14:07:30.722152577Z" level=info msg="ImageCreate event name:\"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 14:07:30.729406 containerd[1509]: time="2024-12-13T14:07:30.729340676Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 14:07:30.731148 containerd[1509]: time="2024-12-13T14:07:30.731103189Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.8\" with image id \"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\", size \"19228165\" in 1.789622643s"
Dec 13 14:07:30.731381 containerd[1509]: time="2024-12-13T14:07:30.731351472Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\" returns image reference \"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\""
Dec 13 14:07:30.768486 containerd[1509]: time="2024-12-13T14:07:30.768390472Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\""
Dec 13 14:07:32.651138 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1907032454.mount: Deactivated successfully.
Dec 13 14:07:33.336416 containerd[1509]: time="2024-12-13T14:07:33.336111259Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 14:07:33.337720 containerd[1509]: time="2024-12-13T14:07:33.337665598Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.8: active requests=0, bytes read=29057478"
Dec 13 14:07:33.339036 containerd[1509]: time="2024-12-13T14:07:33.338978398Z" level=info msg="ImageCreate event name:\"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 14:07:33.341785 containerd[1509]: time="2024-12-13T14:07:33.341724858Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 14:07:33.343334 containerd[1509]: time="2024-12-13T14:07:33.343094046Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.8\" with image id \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\", repo tag \"registry.k8s.io/kube-proxy:v1.30.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\", size \"29056489\" in 2.574630044s"
Dec 13 14:07:33.343334 containerd[1509]: time="2024-12-13T14:07:33.343147045Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\""
Dec 13 14:07:33.376928 containerd[1509]: time="2024-12-13T14:07:33.376599772Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Dec 13 14:07:33.980137 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2287760587.mount: Deactivated successfully.
Dec 13 14:07:34.878720 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 13 14:07:35.225409 containerd[1509]: time="2024-12-13T14:07:35.225129802Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 14:07:35.227750 containerd[1509]: time="2024-12-13T14:07:35.227641826Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769"
Dec 13 14:07:35.228838 containerd[1509]: time="2024-12-13T14:07:35.228762946Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 14:07:35.234823 containerd[1509]: time="2024-12-13T14:07:35.234767640Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.858115287s"
Dec 13 14:07:35.234933 containerd[1509]: time="2024-12-13T14:07:35.234854974Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Dec 13 14:07:35.235695 containerd[1509]: time="2024-12-13T14:07:35.235628548Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 14:07:35.281609 containerd[1509]: time="2024-12-13T14:07:35.281554848Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Dec 13 14:07:35.885749 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3271778546.mount: Deactivated successfully.
Dec 13 14:07:35.892932 containerd[1509]: time="2024-12-13T14:07:35.892817627Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 14:07:35.894152 containerd[1509]: time="2024-12-13T14:07:35.894075467Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298"
Dec 13 14:07:35.895336 containerd[1509]: time="2024-12-13T14:07:35.895268552Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 14:07:35.898563 containerd[1509]: time="2024-12-13T14:07:35.898479003Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 14:07:35.900256 containerd[1509]: time="2024-12-13T14:07:35.899790179Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 618.176696ms"
Dec 13 14:07:35.900256 containerd[1509]: time="2024-12-13T14:07:35.899833601Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Dec 13 14:07:35.933050 containerd[1509]: time="2024-12-13T14:07:35.932985500Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Dec 13 14:07:36.257020 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Dec 13 14:07:36.271603 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 14:07:36.525493 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 14:07:36.539104 (kubelet)[2142]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 14:07:36.621488 kubelet[2142]: E1213 14:07:36.621315 2142 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 14:07:36.627532 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 14:07:36.627799 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 14:07:36.695827 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1474384149.mount: Deactivated successfully.
Dec 13 14:07:40.987348 containerd[1509]: time="2024-12-13T14:07:40.986864679Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 14:07:40.989652 containerd[1509]: time="2024-12-13T14:07:40.989573862Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238579"
Dec 13 14:07:40.990537 containerd[1509]: time="2024-12-13T14:07:40.990497996Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 14:07:40.996851 containerd[1509]: time="2024-12-13T14:07:40.996762659Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 14:07:40.998809 containerd[1509]: time="2024-12-13T14:07:40.998569211Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 5.065522848s"
Dec 13 14:07:40.998809 containerd[1509]: time="2024-12-13T14:07:40.998641268Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\""
Dec 13 14:07:45.526526 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 14:07:45.541670 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 14:07:45.577840 systemd[1]: Reloading requested from client PID 2258 ('systemctl') (unit session-9.scope)...
Dec 13 14:07:45.578113 systemd[1]: Reloading...
Dec 13 14:07:45.754349 zram_generator::config[2297]: No configuration found.
Dec 13 14:07:45.943196 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 14:07:46.064397 systemd[1]: Reloading finished in 485 ms.
Dec 13 14:07:46.142834 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Dec 13 14:07:46.142982 systemd[1]: kubelet.service: Failed with result 'signal'.
Dec 13 14:07:46.143486 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 14:07:46.153906 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 14:07:46.301533 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 14:07:46.305583 (kubelet)[2364]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 13 14:07:46.384768 kubelet[2364]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 14:07:46.384768 kubelet[2364]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 14:07:46.384768 kubelet[2364]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 14:07:46.387136 kubelet[2364]: I1213 14:07:46.387014 2364 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 13 14:07:47.259744 kubelet[2364]: I1213 14:07:47.259631 2364 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Dec 13 14:07:47.259744 kubelet[2364]: I1213 14:07:47.259705 2364 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 13 14:07:47.260261 kubelet[2364]: I1213 14:07:47.260001 2364 server.go:927] "Client rotation is on, will bootstrap in background"
Dec 13 14:07:47.284344 kubelet[2364]: I1213 14:07:47.283735 2364 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 14:07:47.287781 kubelet[2364]: E1213 14:07:47.287613 2364 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.244.26.14:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.244.26.14:6443: connect: connection refused
Dec 13 14:07:47.306829 kubelet[2364]: I1213 14:07:47.306749 2364 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 13 14:07:47.308687 kubelet[2364]: I1213 14:07:47.308610 2364 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 14:07:47.309008 kubelet[2364]: I1213 14:07:47.308681 2364 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-p3tlm.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Dec 13 14:07:47.309262 kubelet[2364]: I1213 14:07:47.309037 2364 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 14:07:47.309262 kubelet[2364]: I1213 14:07:47.309055 2364 container_manager_linux.go:301] "Creating device plugin manager"
Dec 13 14:07:47.309372 kubelet[2364]: I1213 14:07:47.309333 2364 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 14:07:47.310401 kubelet[2364]: I1213 14:07:47.310368 2364 kubelet.go:400] "Attempting to sync node with API server"
Dec 13 14:07:47.310401 kubelet[2364]: I1213 14:07:47.310399 2364 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 13 14:07:47.312166 kubelet[2364]: I1213 14:07:47.310454 2364 kubelet.go:312] "Adding apiserver pod source"
Dec 13 14:07:47.312166 kubelet[2364]: I1213 14:07:47.310497 2364 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 13 14:07:47.314027 kubelet[2364]: W1213 14:07:47.313948 2364 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.244.26.14:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.244.26.14:6443: connect: connection refused
Dec 13 14:07:47.314792 kubelet[2364]: E1213 14:07:47.314311 2364 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.244.26.14:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.244.26.14:6443: connect: connection refused
Dec 13 14:07:47.314792 kubelet[2364]: I1213 14:07:47.314466 2364 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Dec 13 14:07:47.317247 kubelet[2364]: I1213 14:07:47.316591 2364 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 14:07:47.317247 kubelet[2364]: W1213 14:07:47.316714 2364 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Dec 13 14:07:47.318054 kubelet[2364]: I1213 14:07:47.318031 2364 server.go:1264] "Started kubelet"
Dec 13 14:07:47.321687 kubelet[2364]: W1213 14:07:47.321084 2364 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.244.26.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-p3tlm.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.244.26.14:6443: connect: connection refused
Dec 13 14:07:47.321687 kubelet[2364]: E1213 14:07:47.321149 2364 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.244.26.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-p3tlm.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.244.26.14:6443: connect: connection refused
Dec 13 14:07:47.321687 kubelet[2364]: I1213 14:07:47.321346 2364 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Dec 13 14:07:47.324061 kubelet[2364]: I1213 14:07:47.324023 2364 server.go:455] "Adding debug handlers to kubelet server"
Dec 13 14:07:47.326248 kubelet[2364]: I1213 14:07:47.325139 2364 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 13 14:07:47.326248 kubelet[2364]: I1213 14:07:47.325595 2364 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 13 14:07:47.326248 kubelet[2364]: E1213 14:07:47.325810 2364 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.244.26.14:6443/api/v1/namespaces/default/events\": dial tcp 10.244.26.14:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-p3tlm.gb1.brightbox.com.1810c1bb6fe55652 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-p3tlm.gb1.brightbox.com,UID:srv-p3tlm.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-p3tlm.gb1.brightbox.com,},FirstTimestamp:2024-12-13 14:07:47.317986898 +0000 UTC m=+1.006863018,LastTimestamp:2024-12-13 14:07:47.317986898 +0000 UTC m=+1.006863018,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-p3tlm.gb1.brightbox.com,}"
Dec 13 14:07:47.328107 kubelet[2364]: I1213 14:07:47.328082 2364 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 14:07:47.334074 kubelet[2364]: I1213 14:07:47.334041 2364 volume_manager.go:291] "Starting Kubelet Volume Manager"
Dec 13 14:07:47.334463 kubelet[2364]: I1213 14:07:47.334438 2364 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Dec 13 14:07:47.334692 kubelet[2364]: I1213 14:07:47.334672 2364 reconciler.go:26] "Reconciler: start to sync state"
Dec 13 14:07:47.335348 kubelet[2364]: W1213 14:07:47.335286 2364 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.244.26.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.244.26.14:6443: connect: connection refused
Dec 13 14:07:47.335537 kubelet[2364]: E1213 14:07:47.335496 2364 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.244.26.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.244.26.14:6443: connect: connection refused
Dec 13 14:07:47.336582 kubelet[2364]: E1213 14:07:47.336508 2364 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.26.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-p3tlm.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.26.14:6443: connect: connection refused" interval="200ms"
Dec 13 14:07:47.340777 kubelet[2364]: E1213 14:07:47.340739 2364 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 13 14:07:47.341392 kubelet[2364]: I1213 14:07:47.341358 2364 factory.go:221] Registration of the containerd container factory successfully
Dec 13 14:07:47.341392 kubelet[2364]: I1213 14:07:47.341382 2364 factory.go:221] Registration of the systemd container factory successfully
Dec 13 14:07:47.341539 kubelet[2364]: I1213 14:07:47.341467 2364 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 13 14:07:47.364155 kubelet[2364]: I1213 14:07:47.364088 2364 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 13 14:07:47.366034 kubelet[2364]: I1213 14:07:47.366008 2364 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 13 14:07:47.366706 kubelet[2364]: I1213 14:07:47.366200 2364 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 13 14:07:47.366706 kubelet[2364]: I1213 14:07:47.366287 2364 kubelet.go:2337] "Starting kubelet main sync loop"
Dec 13 14:07:47.366706 kubelet[2364]: E1213 14:07:47.366377 2364 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 13 14:07:47.381512 kubelet[2364]: W1213 14:07:47.381341 2364 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.244.26.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.244.26.14:6443: connect: connection refused
Dec 13 14:07:47.381512 kubelet[2364]: E1213 14:07:47.381418 2364 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get
"https://10.244.26.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.244.26.14:6443: connect: connection refused Dec 13 14:07:47.394112 kubelet[2364]: I1213 14:07:47.393684 2364 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 14:07:47.394112 kubelet[2364]: I1213 14:07:47.393714 2364 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 14:07:47.394112 kubelet[2364]: I1213 14:07:47.393743 2364 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:07:47.396692 kubelet[2364]: I1213 14:07:47.396667 2364 policy_none.go:49] "None policy: Start" Dec 13 14:07:47.397835 kubelet[2364]: I1213 14:07:47.397796 2364 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 14:07:47.397835 kubelet[2364]: I1213 14:07:47.397837 2364 state_mem.go:35] "Initializing new in-memory state store" Dec 13 14:07:47.410470 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 13 14:07:47.428369 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 13 14:07:47.434774 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Dec 13 14:07:47.438813 kubelet[2364]: I1213 14:07:47.438740 2364 kubelet_node_status.go:73] "Attempting to register node" node="srv-p3tlm.gb1.brightbox.com" Dec 13 14:07:47.439297 kubelet[2364]: E1213 14:07:47.439222 2364 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.244.26.14:6443/api/v1/nodes\": dial tcp 10.244.26.14:6443: connect: connection refused" node="srv-p3tlm.gb1.brightbox.com" Dec 13 14:07:47.443368 kubelet[2364]: I1213 14:07:47.442749 2364 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 14:07:47.443368 kubelet[2364]: I1213 14:07:47.443073 2364 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 14:07:47.443368 kubelet[2364]: I1213 14:07:47.443278 2364 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 14:07:47.445843 kubelet[2364]: E1213 14:07:47.445809 2364 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-p3tlm.gb1.brightbox.com\" not found" Dec 13 14:07:47.467212 kubelet[2364]: I1213 14:07:47.467067 2364 topology_manager.go:215] "Topology Admit Handler" podUID="f283c139080cdef142a1cd76049832f7" podNamespace="kube-system" podName="kube-controller-manager-srv-p3tlm.gb1.brightbox.com" Dec 13 14:07:47.471527 kubelet[2364]: I1213 14:07:47.471212 2364 topology_manager.go:215] "Topology Admit Handler" podUID="8c5dfce2c2bc4c9ca64447aa61181212" podNamespace="kube-system" podName="kube-scheduler-srv-p3tlm.gb1.brightbox.com" Dec 13 14:07:47.474634 kubelet[2364]: I1213 14:07:47.474546 2364 topology_manager.go:215] "Topology Admit Handler" podUID="b219eb99c3b38fde4015241a7ae79738" podNamespace="kube-system" podName="kube-apiserver-srv-p3tlm.gb1.brightbox.com" Dec 13 14:07:47.485053 systemd[1]: Created slice kubepods-burstable-podf283c139080cdef142a1cd76049832f7.slice - libcontainer container 
kubepods-burstable-podf283c139080cdef142a1cd76049832f7.slice. Dec 13 14:07:47.504337 systemd[1]: Created slice kubepods-burstable-pod8c5dfce2c2bc4c9ca64447aa61181212.slice - libcontainer container kubepods-burstable-pod8c5dfce2c2bc4c9ca64447aa61181212.slice. Dec 13 14:07:47.512054 systemd[1]: Created slice kubepods-burstable-podb219eb99c3b38fde4015241a7ae79738.slice - libcontainer container kubepods-burstable-podb219eb99c3b38fde4015241a7ae79738.slice. Dec 13 14:07:47.537913 kubelet[2364]: E1213 14:07:47.537825 2364 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.26.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-p3tlm.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.26.14:6443: connect: connection refused" interval="400ms" Dec 13 14:07:47.635492 kubelet[2364]: I1213 14:07:47.635357 2364 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b219eb99c3b38fde4015241a7ae79738-ca-certs\") pod \"kube-apiserver-srv-p3tlm.gb1.brightbox.com\" (UID: \"b219eb99c3b38fde4015241a7ae79738\") " pod="kube-system/kube-apiserver-srv-p3tlm.gb1.brightbox.com" Dec 13 14:07:47.635492 kubelet[2364]: I1213 14:07:47.635422 2364 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b219eb99c3b38fde4015241a7ae79738-k8s-certs\") pod \"kube-apiserver-srv-p3tlm.gb1.brightbox.com\" (UID: \"b219eb99c3b38fde4015241a7ae79738\") " pod="kube-system/kube-apiserver-srv-p3tlm.gb1.brightbox.com" Dec 13 14:07:47.635492 kubelet[2364]: I1213 14:07:47.635453 2364 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b219eb99c3b38fde4015241a7ae79738-usr-share-ca-certificates\") pod \"kube-apiserver-srv-p3tlm.gb1.brightbox.com\" (UID: 
\"b219eb99c3b38fde4015241a7ae79738\") " pod="kube-system/kube-apiserver-srv-p3tlm.gb1.brightbox.com" Dec 13 14:07:47.635492 kubelet[2364]: I1213 14:07:47.635497 2364 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f283c139080cdef142a1cd76049832f7-ca-certs\") pod \"kube-controller-manager-srv-p3tlm.gb1.brightbox.com\" (UID: \"f283c139080cdef142a1cd76049832f7\") " pod="kube-system/kube-controller-manager-srv-p3tlm.gb1.brightbox.com" Dec 13 14:07:47.636116 kubelet[2364]: I1213 14:07:47.635544 2364 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f283c139080cdef142a1cd76049832f7-flexvolume-dir\") pod \"kube-controller-manager-srv-p3tlm.gb1.brightbox.com\" (UID: \"f283c139080cdef142a1cd76049832f7\") " pod="kube-system/kube-controller-manager-srv-p3tlm.gb1.brightbox.com" Dec 13 14:07:47.636116 kubelet[2364]: I1213 14:07:47.635580 2364 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f283c139080cdef142a1cd76049832f7-k8s-certs\") pod \"kube-controller-manager-srv-p3tlm.gb1.brightbox.com\" (UID: \"f283c139080cdef142a1cd76049832f7\") " pod="kube-system/kube-controller-manager-srv-p3tlm.gb1.brightbox.com" Dec 13 14:07:47.636116 kubelet[2364]: I1213 14:07:47.635622 2364 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f283c139080cdef142a1cd76049832f7-kubeconfig\") pod \"kube-controller-manager-srv-p3tlm.gb1.brightbox.com\" (UID: \"f283c139080cdef142a1cd76049832f7\") " pod="kube-system/kube-controller-manager-srv-p3tlm.gb1.brightbox.com" Dec 13 14:07:47.636116 kubelet[2364]: I1213 14:07:47.635665 2364 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f283c139080cdef142a1cd76049832f7-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-p3tlm.gb1.brightbox.com\" (UID: \"f283c139080cdef142a1cd76049832f7\") " pod="kube-system/kube-controller-manager-srv-p3tlm.gb1.brightbox.com" Dec 13 14:07:47.636116 kubelet[2364]: I1213 14:07:47.635695 2364 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8c5dfce2c2bc4c9ca64447aa61181212-kubeconfig\") pod \"kube-scheduler-srv-p3tlm.gb1.brightbox.com\" (UID: \"8c5dfce2c2bc4c9ca64447aa61181212\") " pod="kube-system/kube-scheduler-srv-p3tlm.gb1.brightbox.com" Dec 13 14:07:47.643103 kubelet[2364]: I1213 14:07:47.643010 2364 kubelet_node_status.go:73] "Attempting to register node" node="srv-p3tlm.gb1.brightbox.com" Dec 13 14:07:47.643489 kubelet[2364]: E1213 14:07:47.643444 2364 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.244.26.14:6443/api/v1/nodes\": dial tcp 10.244.26.14:6443: connect: connection refused" node="srv-p3tlm.gb1.brightbox.com" Dec 13 14:07:47.802411 containerd[1509]: time="2024-12-13T14:07:47.801847165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-p3tlm.gb1.brightbox.com,Uid:f283c139080cdef142a1cd76049832f7,Namespace:kube-system,Attempt:0,}" Dec 13 14:07:47.814260 containerd[1509]: time="2024-12-13T14:07:47.814044504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-p3tlm.gb1.brightbox.com,Uid:8c5dfce2c2bc4c9ca64447aa61181212,Namespace:kube-system,Attempt:0,}" Dec 13 14:07:47.818957 containerd[1509]: time="2024-12-13T14:07:47.818920690Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-p3tlm.gb1.brightbox.com,Uid:b219eb99c3b38fde4015241a7ae79738,Namespace:kube-system,Attempt:0,}" Dec 13 14:07:47.939293 kubelet[2364]: E1213 
14:07:47.939181 2364 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.26.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-p3tlm.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.26.14:6443: connect: connection refused" interval="800ms" Dec 13 14:07:48.046782 kubelet[2364]: I1213 14:07:48.046733 2364 kubelet_node_status.go:73] "Attempting to register node" node="srv-p3tlm.gb1.brightbox.com" Dec 13 14:07:48.047247 kubelet[2364]: E1213 14:07:48.047190 2364 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.244.26.14:6443/api/v1/nodes\": dial tcp 10.244.26.14:6443: connect: connection refused" node="srv-p3tlm.gb1.brightbox.com" Dec 13 14:07:48.196763 kubelet[2364]: W1213 14:07:48.196528 2364 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.244.26.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-p3tlm.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.244.26.14:6443: connect: connection refused Dec 13 14:07:48.196763 kubelet[2364]: E1213 14:07:48.196613 2364 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.244.26.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-p3tlm.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.244.26.14:6443: connect: connection refused Dec 13 14:07:48.234507 update_engine[1494]: I20241213 14:07:48.234380 1494 update_attempter.cc:509] Updating boot flags... 
Dec 13 14:07:48.283571 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2403) Dec 13 14:07:48.402005 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2403) Dec 13 14:07:48.464552 kubelet[2364]: W1213 14:07:48.464422 2364 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.244.26.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.244.26.14:6443: connect: connection refused Dec 13 14:07:48.465976 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1623739597.mount: Deactivated successfully. Dec 13 14:07:48.467493 kubelet[2364]: E1213 14:07:48.466400 2364 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.244.26.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.244.26.14:6443: connect: connection refused Dec 13 14:07:48.489370 containerd[1509]: time="2024-12-13T14:07:48.489276598Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 14:07:48.491550 containerd[1509]: time="2024-12-13T14:07:48.491494223Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 14:07:48.493590 containerd[1509]: time="2024-12-13T14:07:48.493475811Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Dec 13 14:07:48.494555 containerd[1509]: time="2024-12-13T14:07:48.494478259Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 14:07:48.498247 containerd[1509]: 
time="2024-12-13T14:07:48.497404340Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 14:07:48.500266 containerd[1509]: time="2024-12-13T14:07:48.500163394Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 14:07:48.501837 containerd[1509]: time="2024-12-13T14:07:48.501698535Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 14:07:48.509912 containerd[1509]: time="2024-12-13T14:07:48.509852964Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 14:07:48.511355 containerd[1509]: time="2024-12-13T14:07:48.511318852Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 692.295866ms" Dec 13 14:07:48.514749 containerd[1509]: time="2024-12-13T14:07:48.514710809Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 712.542176ms" Dec 13 14:07:48.518695 containerd[1509]: time="2024-12-13T14:07:48.517882906Z" level=info msg="Pulled image 
\"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 703.66287ms" Dec 13 14:07:48.553941 kubelet[2364]: W1213 14:07:48.553832 2364 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.244.26.14:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.244.26.14:6443: connect: connection refused Dec 13 14:07:48.553941 kubelet[2364]: E1213 14:07:48.553907 2364 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.244.26.14:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.244.26.14:6443: connect: connection refused Dec 13 14:07:48.705963 containerd[1509]: time="2024-12-13T14:07:48.704763641Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:07:48.705963 containerd[1509]: time="2024-12-13T14:07:48.704906354Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:07:48.705963 containerd[1509]: time="2024-12-13T14:07:48.704930805Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:07:48.705963 containerd[1509]: time="2024-12-13T14:07:48.705061764Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:07:48.714900 containerd[1509]: time="2024-12-13T14:07:48.714064189Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:07:48.714900 containerd[1509]: time="2024-12-13T14:07:48.714156072Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:07:48.714900 containerd[1509]: time="2024-12-13T14:07:48.714179964Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:07:48.714900 containerd[1509]: time="2024-12-13T14:07:48.714335565Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:07:48.716204 containerd[1509]: time="2024-12-13T14:07:48.715187630Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:07:48.716719 containerd[1509]: time="2024-12-13T14:07:48.715297197Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:07:48.716719 containerd[1509]: time="2024-12-13T14:07:48.716307000Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:07:48.716719 containerd[1509]: time="2024-12-13T14:07:48.716425705Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:07:48.744521 kubelet[2364]: E1213 14:07:48.743445 2364 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.26.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-p3tlm.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.26.14:6443: connect: connection refused" interval="1.6s" Dec 13 14:07:48.758328 systemd[1]: Started cri-containerd-c17f2c6d27522203cb08f0d90e9c8ac4ddcad70e7a1e168efb8a495cdc8bdd2e.scope - libcontainer container c17f2c6d27522203cb08f0d90e9c8ac4ddcad70e7a1e168efb8a495cdc8bdd2e. Dec 13 14:07:48.772651 systemd[1]: Started cri-containerd-eb9875775f9c217c235ae5a062d242a0529736e1858aaac72fa5401eee5c1a86.scope - libcontainer container eb9875775f9c217c235ae5a062d242a0529736e1858aaac72fa5401eee5c1a86. Dec 13 14:07:48.790285 systemd[1]: Started cri-containerd-c168321d73e9cbdbf815d51e158cb64ced4142295bd1f34c6a64d43e192d9a3f.scope - libcontainer container c168321d73e9cbdbf815d51e158cb64ced4142295bd1f34c6a64d43e192d9a3f. 
Dec 13 14:07:48.858620 kubelet[2364]: I1213 14:07:48.858560 2364 kubelet_node_status.go:73] "Attempting to register node" node="srv-p3tlm.gb1.brightbox.com" Dec 13 14:07:48.862354 kubelet[2364]: E1213 14:07:48.860120 2364 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.244.26.14:6443/api/v1/nodes\": dial tcp 10.244.26.14:6443: connect: connection refused" node="srv-p3tlm.gb1.brightbox.com" Dec 13 14:07:48.871651 containerd[1509]: time="2024-12-13T14:07:48.871545875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-p3tlm.gb1.brightbox.com,Uid:f283c139080cdef142a1cd76049832f7,Namespace:kube-system,Attempt:0,} returns sandbox id \"c17f2c6d27522203cb08f0d90e9c8ac4ddcad70e7a1e168efb8a495cdc8bdd2e\"" Dec 13 14:07:48.894245 containerd[1509]: time="2024-12-13T14:07:48.892961071Z" level=info msg="CreateContainer within sandbox \"c17f2c6d27522203cb08f0d90e9c8ac4ddcad70e7a1e168efb8a495cdc8bdd2e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 14:07:48.917686 containerd[1509]: time="2024-12-13T14:07:48.917387372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-p3tlm.gb1.brightbox.com,Uid:b219eb99c3b38fde4015241a7ae79738,Namespace:kube-system,Attempt:0,} returns sandbox id \"c168321d73e9cbdbf815d51e158cb64ced4142295bd1f34c6a64d43e192d9a3f\"" Dec 13 14:07:48.923493 kubelet[2364]: W1213 14:07:48.922768 2364 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.244.26.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.244.26.14:6443: connect: connection refused Dec 13 14:07:48.923493 kubelet[2364]: E1213 14:07:48.922931 2364 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.244.26.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 
10.244.26.14:6443: connect: connection refused Dec 13 14:07:48.923839 containerd[1509]: time="2024-12-13T14:07:48.923740353Z" level=info msg="CreateContainer within sandbox \"c17f2c6d27522203cb08f0d90e9c8ac4ddcad70e7a1e168efb8a495cdc8bdd2e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f5b896cf8900c1e0eb63fd075adc0720a41ed6cc75a8350f1ccb915adb8dc356\"" Dec 13 14:07:48.926245 containerd[1509]: time="2024-12-13T14:07:48.925010711Z" level=info msg="StartContainer for \"f5b896cf8900c1e0eb63fd075adc0720a41ed6cc75a8350f1ccb915adb8dc356\"" Dec 13 14:07:48.929598 containerd[1509]: time="2024-12-13T14:07:48.929552902Z" level=info msg="CreateContainer within sandbox \"c168321d73e9cbdbf815d51e158cb64ced4142295bd1f34c6a64d43e192d9a3f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 14:07:48.943616 containerd[1509]: time="2024-12-13T14:07:48.943564669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-p3tlm.gb1.brightbox.com,Uid:8c5dfce2c2bc4c9ca64447aa61181212,Namespace:kube-system,Attempt:0,} returns sandbox id \"eb9875775f9c217c235ae5a062d242a0529736e1858aaac72fa5401eee5c1a86\"" Dec 13 14:07:48.950600 containerd[1509]: time="2024-12-13T14:07:48.950546105Z" level=info msg="CreateContainer within sandbox \"eb9875775f9c217c235ae5a062d242a0529736e1858aaac72fa5401eee5c1a86\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 14:07:48.959331 containerd[1509]: time="2024-12-13T14:07:48.959274871Z" level=info msg="CreateContainer within sandbox \"c168321d73e9cbdbf815d51e158cb64ced4142295bd1f34c6a64d43e192d9a3f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0a02c5ab34810d418a5b51b97f5f301825c2a47c7322c163033a108e7372f910\"" Dec 13 14:07:48.959908 containerd[1509]: time="2024-12-13T14:07:48.959847438Z" level=info msg="StartContainer for \"0a02c5ab34810d418a5b51b97f5f301825c2a47c7322c163033a108e7372f910\"" Dec 13 14:07:48.980618 
systemd[1]: Started cri-containerd-f5b896cf8900c1e0eb63fd075adc0720a41ed6cc75a8350f1ccb915adb8dc356.scope - libcontainer container f5b896cf8900c1e0eb63fd075adc0720a41ed6cc75a8350f1ccb915adb8dc356. Dec 13 14:07:48.991275 containerd[1509]: time="2024-12-13T14:07:48.990117403Z" level=info msg="CreateContainer within sandbox \"eb9875775f9c217c235ae5a062d242a0529736e1858aaac72fa5401eee5c1a86\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d6689cf59547316de1c417c61aca88be8b09141c00905e162c41127d227404df\"" Dec 13 14:07:48.993055 containerd[1509]: time="2024-12-13T14:07:48.992794681Z" level=info msg="StartContainer for \"d6689cf59547316de1c417c61aca88be8b09141c00905e162c41127d227404df\"" Dec 13 14:07:49.036466 systemd[1]: Started cri-containerd-0a02c5ab34810d418a5b51b97f5f301825c2a47c7322c163033a108e7372f910.scope - libcontainer container 0a02c5ab34810d418a5b51b97f5f301825c2a47c7322c163033a108e7372f910. Dec 13 14:07:49.068941 systemd[1]: Started cri-containerd-d6689cf59547316de1c417c61aca88be8b09141c00905e162c41127d227404df.scope - libcontainer container d6689cf59547316de1c417c61aca88be8b09141c00905e162c41127d227404df. 
Dec 13 14:07:49.112015 containerd[1509]: time="2024-12-13T14:07:49.111768692Z" level=info msg="StartContainer for \"f5b896cf8900c1e0eb63fd075adc0720a41ed6cc75a8350f1ccb915adb8dc356\" returns successfully" Dec 13 14:07:49.163642 containerd[1509]: time="2024-12-13T14:07:49.163582566Z" level=info msg="StartContainer for \"0a02c5ab34810d418a5b51b97f5f301825c2a47c7322c163033a108e7372f910\" returns successfully" Dec 13 14:07:49.201869 containerd[1509]: time="2024-12-13T14:07:49.201811392Z" level=info msg="StartContainer for \"d6689cf59547316de1c417c61aca88be8b09141c00905e162c41127d227404df\" returns successfully" Dec 13 14:07:49.414856 kubelet[2364]: E1213 14:07:49.414782 2364 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.244.26.14:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.244.26.14:6443: connect: connection refused Dec 13 14:07:50.468077 kubelet[2364]: I1213 14:07:50.467251 2364 kubelet_node_status.go:73] "Attempting to register node" node="srv-p3tlm.gb1.brightbox.com" Dec 13 14:07:52.121198 kubelet[2364]: E1213 14:07:52.121127 2364 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-p3tlm.gb1.brightbox.com\" not found" node="srv-p3tlm.gb1.brightbox.com" Dec 13 14:07:52.300566 kubelet[2364]: I1213 14:07:52.300495 2364 kubelet_node_status.go:76] "Successfully registered node" node="srv-p3tlm.gb1.brightbox.com" Dec 13 14:07:52.318950 kubelet[2364]: I1213 14:07:52.318603 2364 apiserver.go:52] "Watching apiserver" Dec 13 14:07:52.335124 kubelet[2364]: I1213 14:07:52.335040 2364 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Dec 13 14:07:54.322675 systemd[1]: Reloading requested from client PID 2658 ('systemctl') (unit session-9.scope)... Dec 13 14:07:54.322739 systemd[1]: Reloading... 
Dec 13 14:07:54.469285 zram_generator::config[2697]: No configuration found.
Dec 13 14:07:54.679419 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 14:07:54.822690 systemd[1]: Reloading finished in 499 ms.
Dec 13 14:07:54.890211 kubelet[2364]: I1213 14:07:54.889566 2364 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 14:07:54.889873 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 14:07:54.902799 systemd[1]: kubelet.service: Deactivated successfully.
Dec 13 14:07:54.903336 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 14:07:54.903459 systemd[1]: kubelet.service: Consumed 1.545s CPU time, 112.9M memory peak, 0B memory swap peak.
Dec 13 14:07:54.914752 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 14:07:55.107121 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 14:07:55.123800 (kubelet)[2761]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 13 14:07:55.225544 kubelet[2761]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 14:07:55.225544 kubelet[2761]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 14:07:55.225544 kubelet[2761]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 14:07:55.226346 kubelet[2761]: I1213 14:07:55.225657 2761 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 13 14:07:55.234274 kubelet[2761]: I1213 14:07:55.234156 2761 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Dec 13 14:07:55.234274 kubelet[2761]: I1213 14:07:55.234193 2761 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 13 14:07:55.234660 kubelet[2761]: I1213 14:07:55.234628 2761 server.go:927] "Client rotation is on, will bootstrap in background"
Dec 13 14:07:55.239915 kubelet[2761]: I1213 14:07:55.239004 2761 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Dec 13 14:07:55.242046 kubelet[2761]: I1213 14:07:55.241777 2761 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 14:07:55.254575 kubelet[2761]: I1213 14:07:55.254519 2761 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 13 14:07:55.255016 kubelet[2761]: I1213 14:07:55.254948 2761 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 14:07:55.255336 kubelet[2761]: I1213 14:07:55.255011 2761 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-p3tlm.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Dec 13 14:07:55.255336 kubelet[2761]: I1213 14:07:55.255341 2761 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 14:07:55.255336 kubelet[2761]: I1213 14:07:55.255360 2761 container_manager_linux.go:301] "Creating device plugin manager"
Dec 13 14:07:55.255809 kubelet[2761]: I1213 14:07:55.255451 2761 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 14:07:55.255809 kubelet[2761]: I1213 14:07:55.255613 2761 kubelet.go:400] "Attempting to sync node with API server"
Dec 13 14:07:55.255809 kubelet[2761]: I1213 14:07:55.255653 2761 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 13 14:07:55.255809 kubelet[2761]: I1213 14:07:55.255701 2761 kubelet.go:312] "Adding apiserver pod source"
Dec 13 14:07:55.255809 kubelet[2761]: I1213 14:07:55.255731 2761 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 13 14:07:55.259256 kubelet[2761]: I1213 14:07:55.258070 2761 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Dec 13 14:07:55.259256 kubelet[2761]: I1213 14:07:55.258344 2761 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 14:07:55.259256 kubelet[2761]: I1213 14:07:55.259000 2761 server.go:1264] "Started kubelet"
Dec 13 14:07:55.266885 kubelet[2761]: I1213 14:07:55.263666 2761 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 14:07:55.273488 kubelet[2761]: I1213 14:07:55.273431 2761 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Dec 13 14:07:55.276300 kubelet[2761]: I1213 14:07:55.276273 2761 server.go:455] "Adding debug handlers to kubelet server"
Dec 13 14:07:55.287273 kubelet[2761]: I1213 14:07:55.276486 2761 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 13 14:07:55.299346 kubelet[2761]: I1213 14:07:55.299305 2761 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 13 14:07:55.303591 kubelet[2761]: I1213 14:07:55.280672 2761 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Dec 13 14:07:55.303880 kubelet[2761]: I1213 14:07:55.280649 2761 volume_manager.go:291] "Starting Kubelet Volume Manager"
Dec 13 14:07:55.307558 kubelet[2761]: I1213 14:07:55.307530 2761 reconciler.go:26] "Reconciler: start to sync state"
Dec 13 14:07:55.316440 kubelet[2761]: I1213 14:07:55.308057 2761 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 13 14:07:55.332854 kubelet[2761]: I1213 14:07:55.332767 2761 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 13 14:07:55.336250 kubelet[2761]: I1213 14:07:55.335920 2761 factory.go:221] Registration of the containerd container factory successfully
Dec 13 14:07:55.336250 kubelet[2761]: I1213 14:07:55.335952 2761 factory.go:221] Registration of the systemd container factory successfully
Dec 13 14:07:55.338836 kubelet[2761]: E1213 14:07:55.338354 2761 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 13 14:07:55.340304 kubelet[2761]: I1213 14:07:55.339484 2761 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 13 14:07:55.340304 kubelet[2761]: I1213 14:07:55.339652 2761 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 13 14:07:55.340304 kubelet[2761]: I1213 14:07:55.339814 2761 kubelet.go:2337] "Starting kubelet main sync loop"
Dec 13 14:07:55.340304 kubelet[2761]: E1213 14:07:55.340019 2761 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 13 14:07:55.368617 sudo[2784]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Dec 13 14:07:55.371196 sudo[2784]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Dec 13 14:07:55.414527 kubelet[2761]: I1213 14:07:55.414481 2761 kubelet_node_status.go:73] "Attempting to register node" node="srv-p3tlm.gb1.brightbox.com"
Dec 13 14:07:55.429210 kubelet[2761]: I1213 14:07:55.429157 2761 kubelet_node_status.go:112] "Node was previously registered" node="srv-p3tlm.gb1.brightbox.com"
Dec 13 14:07:55.429462 kubelet[2761]: I1213 14:07:55.429316 2761 kubelet_node_status.go:76] "Successfully registered node" node="srv-p3tlm.gb1.brightbox.com"
Dec 13 14:07:55.446604 kubelet[2761]: E1213 14:07:55.446522 2761 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Dec 13 14:07:55.483295 kubelet[2761]: I1213 14:07:55.482994 2761 cpu_manager.go:214] "Starting CPU manager" policy="none"
Dec 13 14:07:55.483295 kubelet[2761]: I1213 14:07:55.483025 2761 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Dec 13 14:07:55.483295 kubelet[2761]: I1213 14:07:55.483065 2761 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 14:07:55.484389 kubelet[2761]: I1213 14:07:55.484167 2761 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Dec 13 14:07:55.484389 kubelet[2761]: I1213 14:07:55.484194 2761 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Dec 13 14:07:55.484389 kubelet[2761]: I1213 14:07:55.484265 2761 policy_none.go:49] "None policy: Start"
Dec 13 14:07:55.486811 kubelet[2761]: I1213 14:07:55.485587 2761 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 13 14:07:55.486811 kubelet[2761]: I1213 14:07:55.485632 2761 state_mem.go:35] "Initializing new in-memory state store"
Dec 13 14:07:55.486811 kubelet[2761]: I1213 14:07:55.485916 2761 state_mem.go:75] "Updated machine memory state"
Dec 13 14:07:55.496516 kubelet[2761]: I1213 14:07:55.496479 2761 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 13 14:07:55.497123 kubelet[2761]: I1213 14:07:55.497050 2761 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Dec 13 14:07:55.498858 kubelet[2761]: I1213 14:07:55.498067 2761 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 13 14:07:55.647207 kubelet[2761]: I1213 14:07:55.647047 2761 topology_manager.go:215] "Topology Admit Handler" podUID="b219eb99c3b38fde4015241a7ae79738" podNamespace="kube-system" podName="kube-apiserver-srv-p3tlm.gb1.brightbox.com"
Dec 13 14:07:55.647968 kubelet[2761]: I1213 14:07:55.647937 2761 topology_manager.go:215] "Topology Admit Handler" podUID="f283c139080cdef142a1cd76049832f7" podNamespace="kube-system" podName="kube-controller-manager-srv-p3tlm.gb1.brightbox.com"
Dec 13 14:07:55.648963 kubelet[2761]: I1213 14:07:55.648172 2761 topology_manager.go:215] "Topology Admit Handler" podUID="8c5dfce2c2bc4c9ca64447aa61181212" podNamespace="kube-system" podName="kube-scheduler-srv-p3tlm.gb1.brightbox.com"
Dec 13 14:07:55.665885 kubelet[2761]: W1213 14:07:55.665842 2761 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Dec 13 14:07:55.666501 kubelet[2761]: W1213 14:07:55.665841 2761 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Dec 13 14:07:55.667306 kubelet[2761]: W1213 14:07:55.665922 2761 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Dec 13 14:07:55.724879 kubelet[2761]: I1213 14:07:55.724795 2761 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f283c139080cdef142a1cd76049832f7-flexvolume-dir\") pod \"kube-controller-manager-srv-p3tlm.gb1.brightbox.com\" (UID: \"f283c139080cdef142a1cd76049832f7\") " pod="kube-system/kube-controller-manager-srv-p3tlm.gb1.brightbox.com"
Dec 13 14:07:55.725885 kubelet[2761]: I1213 14:07:55.725653 2761 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f283c139080cdef142a1cd76049832f7-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-p3tlm.gb1.brightbox.com\" (UID: \"f283c139080cdef142a1cd76049832f7\") " pod="kube-system/kube-controller-manager-srv-p3tlm.gb1.brightbox.com"
Dec 13 14:07:55.725885 kubelet[2761]: I1213 14:07:55.725832 2761 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b219eb99c3b38fde4015241a7ae79738-usr-share-ca-certificates\") pod \"kube-apiserver-srv-p3tlm.gb1.brightbox.com\" (UID: \"b219eb99c3b38fde4015241a7ae79738\") " pod="kube-system/kube-apiserver-srv-p3tlm.gb1.brightbox.com"
Dec 13 14:07:55.726808 kubelet[2761]: I1213 14:07:55.726676 2761 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f283c139080cdef142a1cd76049832f7-ca-certs\") pod \"kube-controller-manager-srv-p3tlm.gb1.brightbox.com\" (UID: \"f283c139080cdef142a1cd76049832f7\") " pod="kube-system/kube-controller-manager-srv-p3tlm.gb1.brightbox.com"
Dec 13 14:07:55.726808 kubelet[2761]: I1213 14:07:55.726759 2761 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f283c139080cdef142a1cd76049832f7-k8s-certs\") pod \"kube-controller-manager-srv-p3tlm.gb1.brightbox.com\" (UID: \"f283c139080cdef142a1cd76049832f7\") " pod="kube-system/kube-controller-manager-srv-p3tlm.gb1.brightbox.com"
Dec 13 14:07:55.727269 kubelet[2761]: I1213 14:07:55.727050 2761 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f283c139080cdef142a1cd76049832f7-kubeconfig\") pod \"kube-controller-manager-srv-p3tlm.gb1.brightbox.com\" (UID: \"f283c139080cdef142a1cd76049832f7\") " pod="kube-system/kube-controller-manager-srv-p3tlm.gb1.brightbox.com"
Dec 13 14:07:55.727269 kubelet[2761]: I1213 14:07:55.727092 2761 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8c5dfce2c2bc4c9ca64447aa61181212-kubeconfig\") pod \"kube-scheduler-srv-p3tlm.gb1.brightbox.com\" (UID: \"8c5dfce2c2bc4c9ca64447aa61181212\") " pod="kube-system/kube-scheduler-srv-p3tlm.gb1.brightbox.com"
Dec 13 14:07:55.727269 kubelet[2761]: I1213 14:07:55.727141 2761 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b219eb99c3b38fde4015241a7ae79738-ca-certs\") pod \"kube-apiserver-srv-p3tlm.gb1.brightbox.com\" (UID: \"b219eb99c3b38fde4015241a7ae79738\") " pod="kube-system/kube-apiserver-srv-p3tlm.gb1.brightbox.com"
Dec 13 14:07:55.727269 kubelet[2761]: I1213 14:07:55.727173 2761 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b219eb99c3b38fde4015241a7ae79738-k8s-certs\") pod \"kube-apiserver-srv-p3tlm.gb1.brightbox.com\" (UID: \"b219eb99c3b38fde4015241a7ae79738\") " pod="kube-system/kube-apiserver-srv-p3tlm.gb1.brightbox.com"
Dec 13 14:07:56.259248 kubelet[2761]: I1213 14:07:56.257741 2761 apiserver.go:52] "Watching apiserver"
Dec 13 14:07:56.258071 sudo[2784]: pam_unix(sudo:session): session closed for user root
Dec 13 14:07:56.304442 kubelet[2761]: I1213 14:07:56.304348 2761 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Dec 13 14:07:56.406806 kubelet[2761]: W1213 14:07:56.406310 2761 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Dec 13 14:07:56.406806 kubelet[2761]: E1213 14:07:56.406442 2761 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-srv-p3tlm.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-p3tlm.gb1.brightbox.com"
Dec 13 14:07:56.449868 kubelet[2761]: I1213 14:07:56.449734 2761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-p3tlm.gb1.brightbox.com" podStartSLOduration=1.449656696 podStartE2EDuration="1.449656696s" podCreationTimestamp="2024-12-13 14:07:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:07:56.448155297 +0000 UTC m=+1.313174441" watchObservedRunningTime="2024-12-13 14:07:56.449656696 +0000 UTC m=+1.314675837"
Dec 13 14:07:56.493958 kubelet[2761]: I1213 14:07:56.493336 2761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-p3tlm.gb1.brightbox.com" podStartSLOduration=1.493311937 podStartE2EDuration="1.493311937s" podCreationTimestamp="2024-12-13 14:07:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:07:56.48978823 +0000 UTC m=+1.354807376" watchObservedRunningTime="2024-12-13 14:07:56.493311937 +0000 UTC m=+1.358331074"
Dec 13 14:07:56.493958 kubelet[2761]: I1213 14:07:56.493476 2761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-p3tlm.gb1.brightbox.com" podStartSLOduration=1.493466801 podStartE2EDuration="1.493466801s" podCreationTimestamp="2024-12-13 14:07:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:07:56.462928355 +0000 UTC m=+1.327947512" watchObservedRunningTime="2024-12-13 14:07:56.493466801 +0000 UTC m=+1.358485963"
Dec 13 14:07:58.074075 sudo[1755]: pam_unix(sudo:session): session closed for user root
Dec 13 14:07:58.217795 sshd[1754]: Connection closed by 139.178.68.195 port 38514
Dec 13 14:07:58.220240 sshd-session[1752]: pam_unix(sshd:session): session closed for user core
Dec 13 14:07:58.225543 systemd[1]: sshd@6-10.244.26.14:22-139.178.68.195:38514.service: Deactivated successfully.
Dec 13 14:07:58.228849 systemd[1]: session-9.scope: Deactivated successfully.
Dec 13 14:07:58.229165 systemd[1]: session-9.scope: Consumed 6.978s CPU time, 184.5M memory peak, 0B memory swap peak.
Dec 13 14:07:58.231061 systemd-logind[1493]: Session 9 logged out. Waiting for processes to exit.
Dec 13 14:07:58.232994 systemd-logind[1493]: Removed session 9.
Dec 13 14:08:09.495174 kubelet[2761]: I1213 14:08:09.495038 2761 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Dec 13 14:08:09.497021 kubelet[2761]: I1213 14:08:09.496988 2761 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Dec 13 14:08:09.497145 containerd[1509]: time="2024-12-13T14:08:09.496690908Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Dec 13 14:08:10.368803 kubelet[2761]: I1213 14:08:10.368671 2761 topology_manager.go:215] "Topology Admit Handler" podUID="7deab058-b47c-4d13-94be-e5a2e7f61f66" podNamespace="kube-system" podName="kube-proxy-xcqz5"
Dec 13 14:08:10.391290 systemd[1]: Created slice kubepods-besteffort-pod7deab058_b47c_4d13_94be_e5a2e7f61f66.slice - libcontainer container kubepods-besteffort-pod7deab058_b47c_4d13_94be_e5a2e7f61f66.slice.
Dec 13 14:08:10.428340 kubelet[2761]: I1213 14:08:10.428272 2761 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7deab058-b47c-4d13-94be-e5a2e7f61f66-kube-proxy\") pod \"kube-proxy-xcqz5\" (UID: \"7deab058-b47c-4d13-94be-e5a2e7f61f66\") " pod="kube-system/kube-proxy-xcqz5"
Dec 13 14:08:10.428340 kubelet[2761]: I1213 14:08:10.428357 2761 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7deab058-b47c-4d13-94be-e5a2e7f61f66-lib-modules\") pod \"kube-proxy-xcqz5\" (UID: \"7deab058-b47c-4d13-94be-e5a2e7f61f66\") " pod="kube-system/kube-proxy-xcqz5"
Dec 13 14:08:10.428612 kubelet[2761]: I1213 14:08:10.428414 2761 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nslwn\" (UniqueName: \"kubernetes.io/projected/7deab058-b47c-4d13-94be-e5a2e7f61f66-kube-api-access-nslwn\") pod \"kube-proxy-xcqz5\" (UID: \"7deab058-b47c-4d13-94be-e5a2e7f61f66\") " pod="kube-system/kube-proxy-xcqz5"
Dec 13 14:08:10.428612 kubelet[2761]: I1213 14:08:10.428458 2761 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7deab058-b47c-4d13-94be-e5a2e7f61f66-xtables-lock\") pod \"kube-proxy-xcqz5\" (UID: \"7deab058-b47c-4d13-94be-e5a2e7f61f66\") " pod="kube-system/kube-proxy-xcqz5"
Dec 13 14:08:10.433305 kubelet[2761]: I1213 14:08:10.431782 2761 topology_manager.go:215] "Topology Admit Handler" podUID="a4774e33-b6c5-4900-b58b-65e19e36d863" podNamespace="kube-system" podName="cilium-lmznr"
Dec 13 14:08:10.447512 systemd[1]: Created slice kubepods-burstable-poda4774e33_b6c5_4900_b58b_65e19e36d863.slice - libcontainer container kubepods-burstable-poda4774e33_b6c5_4900_b58b_65e19e36d863.slice.
Dec 13 14:08:10.529700 kubelet[2761]: I1213 14:08:10.529644 2761 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a4774e33-b6c5-4900-b58b-65e19e36d863-hubble-tls\") pod \"cilium-lmznr\" (UID: \"a4774e33-b6c5-4900-b58b-65e19e36d863\") " pod="kube-system/cilium-lmznr"
Dec 13 14:08:10.531916 kubelet[2761]: I1213 14:08:10.531368 2761 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a4774e33-b6c5-4900-b58b-65e19e36d863-xtables-lock\") pod \"cilium-lmznr\" (UID: \"a4774e33-b6c5-4900-b58b-65e19e36d863\") " pod="kube-system/cilium-lmznr"
Dec 13 14:08:10.531916 kubelet[2761]: I1213 14:08:10.531480 2761 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a4774e33-b6c5-4900-b58b-65e19e36d863-etc-cni-netd\") pod \"cilium-lmznr\" (UID: \"a4774e33-b6c5-4900-b58b-65e19e36d863\") " pod="kube-system/cilium-lmznr"
Dec 13 14:08:10.531916 kubelet[2761]: I1213 14:08:10.531537 2761 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a4774e33-b6c5-4900-b58b-65e19e36d863-host-proc-sys-net\") pod \"cilium-lmznr\" (UID: \"a4774e33-b6c5-4900-b58b-65e19e36d863\") " pod="kube-system/cilium-lmznr"
Dec 13 14:08:10.531916 kubelet[2761]: I1213 14:08:10.531569 2761 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a4774e33-b6c5-4900-b58b-65e19e36d863-host-proc-sys-kernel\") pod \"cilium-lmznr\" (UID: \"a4774e33-b6c5-4900-b58b-65e19e36d863\") " pod="kube-system/cilium-lmznr"
Dec 13 14:08:10.531916 kubelet[2761]: I1213 14:08:10.531621 2761 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a4774e33-b6c5-4900-b58b-65e19e36d863-hostproc\") pod \"cilium-lmznr\" (UID: \"a4774e33-b6c5-4900-b58b-65e19e36d863\") " pod="kube-system/cilium-lmznr"
Dec 13 14:08:10.531916 kubelet[2761]: I1213 14:08:10.531646 2761 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a4774e33-b6c5-4900-b58b-65e19e36d863-cni-path\") pod \"cilium-lmznr\" (UID: \"a4774e33-b6c5-4900-b58b-65e19e36d863\") " pod="kube-system/cilium-lmznr"
Dec 13 14:08:10.532303 kubelet[2761]: I1213 14:08:10.531702 2761 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lm8nh\" (UniqueName: \"kubernetes.io/projected/a4774e33-b6c5-4900-b58b-65e19e36d863-kube-api-access-lm8nh\") pod \"cilium-lmznr\" (UID: \"a4774e33-b6c5-4900-b58b-65e19e36d863\") " pod="kube-system/cilium-lmznr"
Dec 13 14:08:10.532303 kubelet[2761]: I1213 14:08:10.531779 2761 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a4774e33-b6c5-4900-b58b-65e19e36d863-bpf-maps\") pod \"cilium-lmznr\" (UID: \"a4774e33-b6c5-4900-b58b-65e19e36d863\") " pod="kube-system/cilium-lmznr"
Dec 13 14:08:10.532303 kubelet[2761]: I1213 14:08:10.531852 2761 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a4774e33-b6c5-4900-b58b-65e19e36d863-cilium-run\") pod \"cilium-lmznr\" (UID: \"a4774e33-b6c5-4900-b58b-65e19e36d863\") " pod="kube-system/cilium-lmznr"
Dec 13 14:08:10.533200 kubelet[2761]: I1213 14:08:10.532528 2761 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a4774e33-b6c5-4900-b58b-65e19e36d863-lib-modules\") pod \"cilium-lmznr\" (UID: \"a4774e33-b6c5-4900-b58b-65e19e36d863\") " pod="kube-system/cilium-lmznr"
Dec 13 14:08:10.533200 kubelet[2761]: I1213 14:08:10.532572 2761 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a4774e33-b6c5-4900-b58b-65e19e36d863-cilium-cgroup\") pod \"cilium-lmznr\" (UID: \"a4774e33-b6c5-4900-b58b-65e19e36d863\") " pod="kube-system/cilium-lmznr"
Dec 13 14:08:10.533200 kubelet[2761]: I1213 14:08:10.532623 2761 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a4774e33-b6c5-4900-b58b-65e19e36d863-clustermesh-secrets\") pod \"cilium-lmznr\" (UID: \"a4774e33-b6c5-4900-b58b-65e19e36d863\") " pod="kube-system/cilium-lmznr"
Dec 13 14:08:10.533200 kubelet[2761]: I1213 14:08:10.532661 2761 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a4774e33-b6c5-4900-b58b-65e19e36d863-cilium-config-path\") pod \"cilium-lmznr\" (UID: \"a4774e33-b6c5-4900-b58b-65e19e36d863\") " pod="kube-system/cilium-lmznr"
Dec 13 14:08:10.574280 kubelet[2761]: I1213 14:08:10.573081 2761 topology_manager.go:215] "Topology Admit Handler" podUID="907e6629-cdc8-4c9d-bbc5-5b1517c14ba6" podNamespace="kube-system" podName="cilium-operator-599987898-tvjwj"
Dec 13 14:08:10.591938 systemd[1]: Created slice kubepods-besteffort-pod907e6629_cdc8_4c9d_bbc5_5b1517c14ba6.slice - libcontainer container kubepods-besteffort-pod907e6629_cdc8_4c9d_bbc5_5b1517c14ba6.slice.
Dec 13 14:08:10.634665 kubelet[2761]: I1213 14:08:10.633196 2761 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2hdg\" (UniqueName: \"kubernetes.io/projected/907e6629-cdc8-4c9d-bbc5-5b1517c14ba6-kube-api-access-j2hdg\") pod \"cilium-operator-599987898-tvjwj\" (UID: \"907e6629-cdc8-4c9d-bbc5-5b1517c14ba6\") " pod="kube-system/cilium-operator-599987898-tvjwj"
Dec 13 14:08:10.634665 kubelet[2761]: I1213 14:08:10.633283 2761 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/907e6629-cdc8-4c9d-bbc5-5b1517c14ba6-cilium-config-path\") pod \"cilium-operator-599987898-tvjwj\" (UID: \"907e6629-cdc8-4c9d-bbc5-5b1517c14ba6\") " pod="kube-system/cilium-operator-599987898-tvjwj"
Dec 13 14:08:10.710005 containerd[1509]: time="2024-12-13T14:08:10.709935066Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xcqz5,Uid:7deab058-b47c-4d13-94be-e5a2e7f61f66,Namespace:kube-system,Attempt:0,}"
Dec 13 14:08:10.753385 containerd[1509]: time="2024-12-13T14:08:10.753085617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lmznr,Uid:a4774e33-b6c5-4900-b58b-65e19e36d863,Namespace:kube-system,Attempt:0,}"
Dec 13 14:08:10.764783 containerd[1509]: time="2024-12-13T14:08:10.764606214Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:08:10.764783 containerd[1509]: time="2024-12-13T14:08:10.764737461Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:08:10.766254 containerd[1509]: time="2024-12-13T14:08:10.764757524Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:08:10.766254 containerd[1509]: time="2024-12-13T14:08:10.764912418Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:08:10.808015 containerd[1509]: time="2024-12-13T14:08:10.807252302Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:08:10.808015 containerd[1509]: time="2024-12-13T14:08:10.807542495Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:08:10.807598 systemd[1]: Started cri-containerd-fa7701799b2ad3783cab4e21e84bdbd11c1ce6394b5d7f4275083c5ff3b6adb0.scope - libcontainer container fa7701799b2ad3783cab4e21e84bdbd11c1ce6394b5d7f4275083c5ff3b6adb0.
Dec 13 14:08:10.808422 containerd[1509]: time="2024-12-13T14:08:10.808076336Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:08:10.808422 containerd[1509]: time="2024-12-13T14:08:10.808356396Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:08:10.843561 systemd[1]: Started cri-containerd-6a799eca18c43b7f74c4d6693fae7f91a2acdfdeb45ee4eb2175d76dc015b5b7.scope - libcontainer container 6a799eca18c43b7f74c4d6693fae7f91a2acdfdeb45ee4eb2175d76dc015b5b7.
Dec 13 14:08:10.893137 containerd[1509]: time="2024-12-13T14:08:10.892959345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xcqz5,Uid:7deab058-b47c-4d13-94be-e5a2e7f61f66,Namespace:kube-system,Attempt:0,} returns sandbox id \"fa7701799b2ad3783cab4e21e84bdbd11c1ce6394b5d7f4275083c5ff3b6adb0\""
Dec 13 14:08:10.898720 containerd[1509]: time="2024-12-13T14:08:10.898675794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-tvjwj,Uid:907e6629-cdc8-4c9d-bbc5-5b1517c14ba6,Namespace:kube-system,Attempt:0,}"
Dec 13 14:08:10.902039 containerd[1509]: time="2024-12-13T14:08:10.901997598Z" level=info msg="CreateContainer within sandbox \"fa7701799b2ad3783cab4e21e84bdbd11c1ce6394b5d7f4275083c5ff3b6adb0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Dec 13 14:08:10.911929 containerd[1509]: time="2024-12-13T14:08:10.911865221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lmznr,Uid:a4774e33-b6c5-4900-b58b-65e19e36d863,Namespace:kube-system,Attempt:0,} returns sandbox id \"6a799eca18c43b7f74c4d6693fae7f91a2acdfdeb45ee4eb2175d76dc015b5b7\""
Dec 13 14:08:10.916445 containerd[1509]: time="2024-12-13T14:08:10.916349029Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Dec 13 14:08:10.935369 containerd[1509]: time="2024-12-13T14:08:10.935001042Z" level=info msg="CreateContainer within sandbox \"fa7701799b2ad3783cab4e21e84bdbd11c1ce6394b5d7f4275083c5ff3b6adb0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5354164cd49d25136ccd3cc22819e32aace98f3453c9ef71e41b0b5d340442a4\""
Dec 13 14:08:10.936737 containerd[1509]: time="2024-12-13T14:08:10.936702689Z" level=info msg="StartContainer for \"5354164cd49d25136ccd3cc22819e32aace98f3453c9ef71e41b0b5d340442a4\""
Dec 13 14:08:10.956179 containerd[1509]: time="2024-12-13T14:08:10.955935024Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:08:10.956502 containerd[1509]: time="2024-12-13T14:08:10.956277966Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:08:10.957252 containerd[1509]: time="2024-12-13T14:08:10.956472020Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:08:10.957252 containerd[1509]: time="2024-12-13T14:08:10.956918497Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:08:11.000510 systemd[1]: Started cri-containerd-5354164cd49d25136ccd3cc22819e32aace98f3453c9ef71e41b0b5d340442a4.scope - libcontainer container 5354164cd49d25136ccd3cc22819e32aace98f3453c9ef71e41b0b5d340442a4.
Dec 13 14:08:11.003857 systemd[1]: Started cri-containerd-6e11fc7fc78e3c9354834d7212345a84c9c3d8fe2bf982c6cefbbec54ea3c68c.scope - libcontainer container 6e11fc7fc78e3c9354834d7212345a84c9c3d8fe2bf982c6cefbbec54ea3c68c.
Dec 13 14:08:11.078201 containerd[1509]: time="2024-12-13T14:08:11.078149926Z" level=info msg="StartContainer for \"5354164cd49d25136ccd3cc22819e32aace98f3453c9ef71e41b0b5d340442a4\" returns successfully" Dec 13 14:08:11.105597 containerd[1509]: time="2024-12-13T14:08:11.105492199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-tvjwj,Uid:907e6629-cdc8-4c9d-bbc5-5b1517c14ba6,Namespace:kube-system,Attempt:0,} returns sandbox id \"6e11fc7fc78e3c9354834d7212345a84c9c3d8fe2bf982c6cefbbec54ea3c68c\"" Dec 13 14:08:11.467448 kubelet[2761]: I1213 14:08:11.467323 2761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-xcqz5" podStartSLOduration=1.467277455 podStartE2EDuration="1.467277455s" podCreationTimestamp="2024-12-13 14:08:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:08:11.466609114 +0000 UTC m=+16.331628255" watchObservedRunningTime="2024-12-13 14:08:11.467277455 +0000 UTC m=+16.332296615" Dec 13 14:08:18.390012 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount73672125.mount: Deactivated successfully. 
Dec 13 14:08:21.821777 containerd[1509]: time="2024-12-13T14:08:21.821560155Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 14:08:21.824105 containerd[1509]: time="2024-12-13T14:08:21.822214824Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166735363" Dec 13 14:08:21.838929 containerd[1509]: time="2024-12-13T14:08:21.838879825Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 14:08:21.865268 containerd[1509]: time="2024-12-13T14:08:21.864091305Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 10.94768194s" Dec 13 14:08:21.865268 containerd[1509]: time="2024-12-13T14:08:21.864175778Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Dec 13 14:08:21.869619 containerd[1509]: time="2024-12-13T14:08:21.869582329Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 14:08:21.871874 containerd[1509]: time="2024-12-13T14:08:21.871825382Z" level=info msg="CreateContainer within sandbox \"6a799eca18c43b7f74c4d6693fae7f91a2acdfdeb45ee4eb2175d76dc015b5b7\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 14:08:21.931303 containerd[1509]: time="2024-12-13T14:08:21.931196329Z" level=info msg="CreateContainer within sandbox \"6a799eca18c43b7f74c4d6693fae7f91a2acdfdeb45ee4eb2175d76dc015b5b7\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7b4faca5e336b57fec106d2a0059f8d4a7efb51548289f5e68aea1c724ed99d8\"" Dec 13 14:08:21.932534 containerd[1509]: time="2024-12-13T14:08:21.932490623Z" level=info msg="StartContainer for \"7b4faca5e336b57fec106d2a0059f8d4a7efb51548289f5e68aea1c724ed99d8\"" Dec 13 14:08:22.192550 systemd[1]: Started cri-containerd-7b4faca5e336b57fec106d2a0059f8d4a7efb51548289f5e68aea1c724ed99d8.scope - libcontainer container 7b4faca5e336b57fec106d2a0059f8d4a7efb51548289f5e68aea1c724ed99d8. Dec 13 14:08:22.251733 containerd[1509]: time="2024-12-13T14:08:22.251659575Z" level=info msg="StartContainer for \"7b4faca5e336b57fec106d2a0059f8d4a7efb51548289f5e68aea1c724ed99d8\" returns successfully" Dec 13 14:08:22.271551 systemd[1]: cri-containerd-7b4faca5e336b57fec106d2a0059f8d4a7efb51548289f5e68aea1c724ed99d8.scope: Deactivated successfully. Dec 13 14:08:22.356571 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7b4faca5e336b57fec106d2a0059f8d4a7efb51548289f5e68aea1c724ed99d8-rootfs.mount: Deactivated successfully. 
Dec 13 14:08:22.486883 containerd[1509]: time="2024-12-13T14:08:22.460116584Z" level=info msg="shim disconnected" id=7b4faca5e336b57fec106d2a0059f8d4a7efb51548289f5e68aea1c724ed99d8 namespace=k8s.io Dec 13 14:08:22.486883 containerd[1509]: time="2024-12-13T14:08:22.486740553Z" level=warning msg="cleaning up after shim disconnected" id=7b4faca5e336b57fec106d2a0059f8d4a7efb51548289f5e68aea1c724ed99d8 namespace=k8s.io Dec 13 14:08:22.486883 containerd[1509]: time="2024-12-13T14:08:22.486775933Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 14:08:22.604264 containerd[1509]: time="2024-12-13T14:08:22.602070136Z" level=info msg="CreateContainer within sandbox \"6a799eca18c43b7f74c4d6693fae7f91a2acdfdeb45ee4eb2175d76dc015b5b7\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 14:08:22.643883 containerd[1509]: time="2024-12-13T14:08:22.643788070Z" level=info msg="CreateContainer within sandbox \"6a799eca18c43b7f74c4d6693fae7f91a2acdfdeb45ee4eb2175d76dc015b5b7\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8ea14b448abc62ebf275c62213372be1434cabf08dd9dbae9c020ea48c9ead2c\"" Dec 13 14:08:22.644773 containerd[1509]: time="2024-12-13T14:08:22.644705769Z" level=info msg="StartContainer for \"8ea14b448abc62ebf275c62213372be1434cabf08dd9dbae9c020ea48c9ead2c\"" Dec 13 14:08:22.690509 systemd[1]: Started cri-containerd-8ea14b448abc62ebf275c62213372be1434cabf08dd9dbae9c020ea48c9ead2c.scope - libcontainer container 8ea14b448abc62ebf275c62213372be1434cabf08dd9dbae9c020ea48c9ead2c. Dec 13 14:08:22.748596 containerd[1509]: time="2024-12-13T14:08:22.748377176Z" level=info msg="StartContainer for \"8ea14b448abc62ebf275c62213372be1434cabf08dd9dbae9c020ea48c9ead2c\" returns successfully" Dec 13 14:08:22.775736 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 14:08:22.776182 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Dec 13 14:08:22.776388 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Dec 13 14:08:22.786770 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 14:08:22.787195 systemd[1]: cri-containerd-8ea14b448abc62ebf275c62213372be1434cabf08dd9dbae9c020ea48c9ead2c.scope: Deactivated successfully. Dec 13 14:08:22.862407 containerd[1509]: time="2024-12-13T14:08:22.862287871Z" level=info msg="shim disconnected" id=8ea14b448abc62ebf275c62213372be1434cabf08dd9dbae9c020ea48c9ead2c namespace=k8s.io Dec 13 14:08:22.863270 containerd[1509]: time="2024-12-13T14:08:22.862555013Z" level=warning msg="cleaning up after shim disconnected" id=8ea14b448abc62ebf275c62213372be1434cabf08dd9dbae9c020ea48c9ead2c namespace=k8s.io Dec 13 14:08:22.863270 containerd[1509]: time="2024-12-13T14:08:22.862579908Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 14:08:22.876047 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 14:08:23.607873 containerd[1509]: time="2024-12-13T14:08:23.607558248Z" level=info msg="CreateContainer within sandbox \"6a799eca18c43b7f74c4d6693fae7f91a2acdfdeb45ee4eb2175d76dc015b5b7\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 14:08:23.645738 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2092468437.mount: Deactivated successfully. 
Dec 13 14:08:23.649051 containerd[1509]: time="2024-12-13T14:08:23.648913330Z" level=info msg="CreateContainer within sandbox \"6a799eca18c43b7f74c4d6693fae7f91a2acdfdeb45ee4eb2175d76dc015b5b7\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"518597a4ab6d6579aa7d5a4e95bc6a4eacf4a9943762218514b3f7cdd8c7758a\"" Dec 13 14:08:23.652603 containerd[1509]: time="2024-12-13T14:08:23.651436200Z" level=info msg="StartContainer for \"518597a4ab6d6579aa7d5a4e95bc6a4eacf4a9943762218514b3f7cdd8c7758a\"" Dec 13 14:08:23.699594 systemd[1]: Started cri-containerd-518597a4ab6d6579aa7d5a4e95bc6a4eacf4a9943762218514b3f7cdd8c7758a.scope - libcontainer container 518597a4ab6d6579aa7d5a4e95bc6a4eacf4a9943762218514b3f7cdd8c7758a. Dec 13 14:08:23.757670 containerd[1509]: time="2024-12-13T14:08:23.757586119Z" level=info msg="StartContainer for \"518597a4ab6d6579aa7d5a4e95bc6a4eacf4a9943762218514b3f7cdd8c7758a\" returns successfully" Dec 13 14:08:23.765028 systemd[1]: cri-containerd-518597a4ab6d6579aa7d5a4e95bc6a4eacf4a9943762218514b3f7cdd8c7758a.scope: Deactivated successfully. Dec 13 14:08:23.813754 containerd[1509]: time="2024-12-13T14:08:23.813623093Z" level=info msg="shim disconnected" id=518597a4ab6d6579aa7d5a4e95bc6a4eacf4a9943762218514b3f7cdd8c7758a namespace=k8s.io Dec 13 14:08:23.813754 containerd[1509]: time="2024-12-13T14:08:23.813720499Z" level=warning msg="cleaning up after shim disconnected" id=518597a4ab6d6579aa7d5a4e95bc6a4eacf4a9943762218514b3f7cdd8c7758a namespace=k8s.io Dec 13 14:08:23.813754 containerd[1509]: time="2024-12-13T14:08:23.813737163Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 14:08:23.924068 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-518597a4ab6d6579aa7d5a4e95bc6a4eacf4a9943762218514b3f7cdd8c7758a-rootfs.mount: Deactivated successfully. 
Dec 13 14:08:24.614000 containerd[1509]: time="2024-12-13T14:08:24.613930683Z" level=info msg="CreateContainer within sandbox \"6a799eca18c43b7f74c4d6693fae7f91a2acdfdeb45ee4eb2175d76dc015b5b7\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 14:08:24.661244 containerd[1509]: time="2024-12-13T14:08:24.661089773Z" level=info msg="CreateContainer within sandbox \"6a799eca18c43b7f74c4d6693fae7f91a2acdfdeb45ee4eb2175d76dc015b5b7\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d08ba1405d19b31e6e0c4abe5eed14094f12a47e2bdf8291c6f8eafab637956c\"" Dec 13 14:08:24.662186 containerd[1509]: time="2024-12-13T14:08:24.662105960Z" level=info msg="StartContainer for \"d08ba1405d19b31e6e0c4abe5eed14094f12a47e2bdf8291c6f8eafab637956c\"" Dec 13 14:08:24.716500 systemd[1]: Started cri-containerd-d08ba1405d19b31e6e0c4abe5eed14094f12a47e2bdf8291c6f8eafab637956c.scope - libcontainer container d08ba1405d19b31e6e0c4abe5eed14094f12a47e2bdf8291c6f8eafab637956c. Dec 13 14:08:24.770268 systemd[1]: cri-containerd-d08ba1405d19b31e6e0c4abe5eed14094f12a47e2bdf8291c6f8eafab637956c.scope: Deactivated successfully. 
Dec 13 14:08:24.772023 containerd[1509]: time="2024-12-13T14:08:24.771372921Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda4774e33_b6c5_4900_b58b_65e19e36d863.slice/cri-containerd-d08ba1405d19b31e6e0c4abe5eed14094f12a47e2bdf8291c6f8eafab637956c.scope/memory.events\": no such file or directory" Dec 13 14:08:24.775126 containerd[1509]: time="2024-12-13T14:08:24.775017205Z" level=info msg="StartContainer for \"d08ba1405d19b31e6e0c4abe5eed14094f12a47e2bdf8291c6f8eafab637956c\" returns successfully" Dec 13 14:08:24.806654 containerd[1509]: time="2024-12-13T14:08:24.806528097Z" level=info msg="shim disconnected" id=d08ba1405d19b31e6e0c4abe5eed14094f12a47e2bdf8291c6f8eafab637956c namespace=k8s.io Dec 13 14:08:24.807364 containerd[1509]: time="2024-12-13T14:08:24.807057451Z" level=warning msg="cleaning up after shim disconnected" id=d08ba1405d19b31e6e0c4abe5eed14094f12a47e2bdf8291c6f8eafab637956c namespace=k8s.io Dec 13 14:08:24.807364 containerd[1509]: time="2024-12-13T14:08:24.807085786Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 14:08:24.925540 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d08ba1405d19b31e6e0c4abe5eed14094f12a47e2bdf8291c6f8eafab637956c-rootfs.mount: Deactivated successfully. Dec 13 14:08:25.494294 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1611410061.mount: Deactivated successfully. 
Dec 13 14:08:25.618243 containerd[1509]: time="2024-12-13T14:08:25.618156114Z" level=info msg="CreateContainer within sandbox \"6a799eca18c43b7f74c4d6693fae7f91a2acdfdeb45ee4eb2175d76dc015b5b7\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 14:08:25.651549 containerd[1509]: time="2024-12-13T14:08:25.651495948Z" level=info msg="CreateContainer within sandbox \"6a799eca18c43b7f74c4d6693fae7f91a2acdfdeb45ee4eb2175d76dc015b5b7\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"8cd49edfb922e60c5c1139dddd4a0bd9455fb4ed5a0723e831136c266cf555d6\"" Dec 13 14:08:25.654051 containerd[1509]: time="2024-12-13T14:08:25.653990128Z" level=info msg="StartContainer for \"8cd49edfb922e60c5c1139dddd4a0bd9455fb4ed5a0723e831136c266cf555d6\"" Dec 13 14:08:25.724618 systemd[1]: Started cri-containerd-8cd49edfb922e60c5c1139dddd4a0bd9455fb4ed5a0723e831136c266cf555d6.scope - libcontainer container 8cd49edfb922e60c5c1139dddd4a0bd9455fb4ed5a0723e831136c266cf555d6. 
Dec 13 14:08:25.787901 containerd[1509]: time="2024-12-13T14:08:25.787841564Z" level=info msg="StartContainer for \"8cd49edfb922e60c5c1139dddd4a0bd9455fb4ed5a0723e831136c266cf555d6\" returns successfully" Dec 13 14:08:26.034700 kubelet[2761]: I1213 14:08:26.034629 2761 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 14:08:26.088160 kubelet[2761]: I1213 14:08:26.087959 2761 topology_manager.go:215] "Topology Admit Handler" podUID="f8fbf120-55e3-47a8-a3e8-d5067495f269" podNamespace="kube-system" podName="coredns-7db6d8ff4d-rjtkp" Dec 13 14:08:26.093314 kubelet[2761]: I1213 14:08:26.093265 2761 topology_manager.go:215] "Topology Admit Handler" podUID="41202a59-4f0a-496f-ad38-58c116dbbb08" podNamespace="kube-system" podName="coredns-7db6d8ff4d-h52ss" Dec 13 14:08:26.109930 systemd[1]: Created slice kubepods-burstable-podf8fbf120_55e3_47a8_a3e8_d5067495f269.slice - libcontainer container kubepods-burstable-podf8fbf120_55e3_47a8_a3e8_d5067495f269.slice. Dec 13 14:08:26.131736 systemd[1]: Created slice kubepods-burstable-pod41202a59_4f0a_496f_ad38_58c116dbbb08.slice - libcontainer container kubepods-burstable-pod41202a59_4f0a_496f_ad38_58c116dbbb08.slice. 
Dec 13 14:08:26.158576 kubelet[2761]: I1213 14:08:26.158515 2761 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/41202a59-4f0a-496f-ad38-58c116dbbb08-config-volume\") pod \"coredns-7db6d8ff4d-h52ss\" (UID: \"41202a59-4f0a-496f-ad38-58c116dbbb08\") " pod="kube-system/coredns-7db6d8ff4d-h52ss" Dec 13 14:08:26.159124 kubelet[2761]: I1213 14:08:26.159075 2761 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8z4hd\" (UniqueName: \"kubernetes.io/projected/41202a59-4f0a-496f-ad38-58c116dbbb08-kube-api-access-8z4hd\") pod \"coredns-7db6d8ff4d-h52ss\" (UID: \"41202a59-4f0a-496f-ad38-58c116dbbb08\") " pod="kube-system/coredns-7db6d8ff4d-h52ss" Dec 13 14:08:26.159458 kubelet[2761]: I1213 14:08:26.159426 2761 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f8fbf120-55e3-47a8-a3e8-d5067495f269-config-volume\") pod \"coredns-7db6d8ff4d-rjtkp\" (UID: \"f8fbf120-55e3-47a8-a3e8-d5067495f269\") " pod="kube-system/coredns-7db6d8ff4d-rjtkp" Dec 13 14:08:26.160215 kubelet[2761]: I1213 14:08:26.159776 2761 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7rpq\" (UniqueName: \"kubernetes.io/projected/f8fbf120-55e3-47a8-a3e8-d5067495f269-kube-api-access-n7rpq\") pod \"coredns-7db6d8ff4d-rjtkp\" (UID: \"f8fbf120-55e3-47a8-a3e8-d5067495f269\") " pod="kube-system/coredns-7db6d8ff4d-rjtkp" Dec 13 14:08:26.426735 containerd[1509]: time="2024-12-13T14:08:26.425467294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-rjtkp,Uid:f8fbf120-55e3-47a8-a3e8-d5067495f269,Namespace:kube-system,Attempt:0,}" Dec 13 14:08:26.454260 containerd[1509]: time="2024-12-13T14:08:26.453094318Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7db6d8ff4d-h52ss,Uid:41202a59-4f0a-496f-ad38-58c116dbbb08,Namespace:kube-system,Attempt:0,}" Dec 13 14:08:26.660032 kubelet[2761]: I1213 14:08:26.656663 2761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-lmznr" podStartSLOduration=5.702981592 podStartE2EDuration="16.656620567s" podCreationTimestamp="2024-12-13 14:08:10 +0000 UTC" firstStartedPulling="2024-12-13 14:08:10.914726895 +0000 UTC m=+15.779746026" lastFinishedPulling="2024-12-13 14:08:21.868365868 +0000 UTC m=+26.733385001" observedRunningTime="2024-12-13 14:08:26.65399603 +0000 UTC m=+31.519015181" watchObservedRunningTime="2024-12-13 14:08:26.656620567 +0000 UTC m=+31.521639711" Dec 13 14:08:27.838096 containerd[1509]: time="2024-12-13T14:08:27.837991485Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 14:08:27.839607 containerd[1509]: time="2024-12-13T14:08:27.839537670Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18906633" Dec 13 14:08:27.840665 containerd[1509]: time="2024-12-13T14:08:27.840295638Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 14:08:27.845761 containerd[1509]: time="2024-12-13T14:08:27.845723069Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 
5.975873041s" Dec 13 14:08:27.846015 containerd[1509]: time="2024-12-13T14:08:27.845983237Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Dec 13 14:08:27.850669 containerd[1509]: time="2024-12-13T14:08:27.850632136Z" level=info msg="CreateContainer within sandbox \"6e11fc7fc78e3c9354834d7212345a84c9c3d8fe2bf982c6cefbbec54ea3c68c\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 13 14:08:27.873363 containerd[1509]: time="2024-12-13T14:08:27.873308387Z" level=info msg="CreateContainer within sandbox \"6e11fc7fc78e3c9354834d7212345a84c9c3d8fe2bf982c6cefbbec54ea3c68c\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"5da60f00c0f95f49d7af907afb716baa4aeacaed106a1afae4c00b6b9b704bdd\"" Dec 13 14:08:27.874682 containerd[1509]: time="2024-12-13T14:08:27.874642470Z" level=info msg="StartContainer for \"5da60f00c0f95f49d7af907afb716baa4aeacaed106a1afae4c00b6b9b704bdd\"" Dec 13 14:08:27.932458 systemd[1]: run-containerd-runc-k8s.io-5da60f00c0f95f49d7af907afb716baa4aeacaed106a1afae4c00b6b9b704bdd-runc.IIaDmn.mount: Deactivated successfully. Dec 13 14:08:27.945527 systemd[1]: Started cri-containerd-5da60f00c0f95f49d7af907afb716baa4aeacaed106a1afae4c00b6b9b704bdd.scope - libcontainer container 5da60f00c0f95f49d7af907afb716baa4aeacaed106a1afae4c00b6b9b704bdd. 
Dec 13 14:08:28.025767 containerd[1509]: time="2024-12-13T14:08:28.025694947Z" level=info msg="StartContainer for \"5da60f00c0f95f49d7af907afb716baa4aeacaed106a1afae4c00b6b9b704bdd\" returns successfully" Dec 13 14:08:28.708191 kubelet[2761]: I1213 14:08:28.703431 2761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-tvjwj" podStartSLOduration=1.963413451 podStartE2EDuration="18.703325505s" podCreationTimestamp="2024-12-13 14:08:10 +0000 UTC" firstStartedPulling="2024-12-13 14:08:11.107351194 +0000 UTC m=+15.972370330" lastFinishedPulling="2024-12-13 14:08:27.847263246 +0000 UTC m=+32.712282384" observedRunningTime="2024-12-13 14:08:28.700858045 +0000 UTC m=+33.565877181" watchObservedRunningTime="2024-12-13 14:08:28.703325505 +0000 UTC m=+33.568344644" Dec 13 14:08:31.706177 systemd-networkd[1437]: cilium_host: Link UP Dec 13 14:08:31.707843 systemd-networkd[1437]: cilium_net: Link UP Dec 13 14:08:31.708282 systemd-networkd[1437]: cilium_net: Gained carrier Dec 13 14:08:31.708616 systemd-networkd[1437]: cilium_host: Gained carrier Dec 13 14:08:31.733373 systemd-networkd[1437]: cilium_host: Gained IPv6LL Dec 13 14:08:31.907540 systemd-networkd[1437]: cilium_vxlan: Link UP Dec 13 14:08:31.907561 systemd-networkd[1437]: cilium_vxlan: Gained carrier Dec 13 14:08:32.556414 kernel: NET: Registered PF_ALG protocol family Dec 13 14:08:32.681539 systemd-networkd[1437]: cilium_net: Gained IPv6LL Dec 13 14:08:33.385492 systemd-networkd[1437]: cilium_vxlan: Gained IPv6LL Dec 13 14:08:33.707853 systemd-networkd[1437]: lxc_health: Link UP Dec 13 14:08:33.718395 systemd-networkd[1437]: lxc_health: Gained carrier Dec 13 14:08:34.087270 kernel: eth0: renamed from tmp465c4 Dec 13 14:08:34.092285 systemd-networkd[1437]: lxcdf2a79cdef38: Link UP Dec 13 14:08:34.099351 systemd-networkd[1437]: lxcdf2a79cdef38: Gained carrier Dec 13 14:08:34.129728 kernel: eth0: renamed from tmp58f1c Dec 13 14:08:34.127866 
systemd-networkd[1437]: lxca14634739eb8: Link UP Dec 13 14:08:34.142296 systemd-networkd[1437]: lxca14634739eb8: Gained carrier Dec 13 14:08:35.689651 systemd-networkd[1437]: lxc_health: Gained IPv6LL Dec 13 14:08:35.882559 systemd-networkd[1437]: lxcdf2a79cdef38: Gained IPv6LL Dec 13 14:08:36.073543 systemd-networkd[1437]: lxca14634739eb8: Gained IPv6LL Dec 13 14:08:40.392038 containerd[1509]: time="2024-12-13T14:08:40.390504599Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:08:40.392038 containerd[1509]: time="2024-12-13T14:08:40.390663476Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:08:40.392038 containerd[1509]: time="2024-12-13T14:08:40.390689196Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:08:40.392038 containerd[1509]: time="2024-12-13T14:08:40.390862230Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:08:40.405354 containerd[1509]: time="2024-12-13T14:08:40.404380385Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:08:40.405354 containerd[1509]: time="2024-12-13T14:08:40.404493038Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:08:40.405354 containerd[1509]: time="2024-12-13T14:08:40.404515844Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:08:40.410085 containerd[1509]: time="2024-12-13T14:08:40.407125383Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:08:40.466966 systemd[1]: Started cri-containerd-58f1ca515499035dc43d09f3abdcff739ea18f34e8ef3c0a63b50c7ad69ede0b.scope - libcontainer container 58f1ca515499035dc43d09f3abdcff739ea18f34e8ef3c0a63b50c7ad69ede0b. Dec 13 14:08:40.500066 systemd[1]: Started cri-containerd-465c4e838f543ffd85c08bfe82e0948cb2f09f974580df835cdc1722867d3020.scope - libcontainer container 465c4e838f543ffd85c08bfe82e0948cb2f09f974580df835cdc1722867d3020. Dec 13 14:08:40.618167 containerd[1509]: time="2024-12-13T14:08:40.617973845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-rjtkp,Uid:f8fbf120-55e3-47a8-a3e8-d5067495f269,Namespace:kube-system,Attempt:0,} returns sandbox id \"58f1ca515499035dc43d09f3abdcff739ea18f34e8ef3c0a63b50c7ad69ede0b\"" Dec 13 14:08:40.628261 containerd[1509]: time="2024-12-13T14:08:40.626962235Z" level=info msg="CreateContainer within sandbox \"58f1ca515499035dc43d09f3abdcff739ea18f34e8ef3c0a63b50c7ad69ede0b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 14:08:40.661284 containerd[1509]: time="2024-12-13T14:08:40.660052556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-h52ss,Uid:41202a59-4f0a-496f-ad38-58c116dbbb08,Namespace:kube-system,Attempt:0,} returns sandbox id \"465c4e838f543ffd85c08bfe82e0948cb2f09f974580df835cdc1722867d3020\"" Dec 13 14:08:40.667101 containerd[1509]: time="2024-12-13T14:08:40.666960455Z" level=info msg="CreateContainer within sandbox \"58f1ca515499035dc43d09f3abdcff739ea18f34e8ef3c0a63b50c7ad69ede0b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"04c3ee702511426bb9d9289c0b2f5a012e61fdccc80fa2faca2ced1dc098145f\"" Dec 13 14:08:40.668394 containerd[1509]: time="2024-12-13T14:08:40.667598828Z" level=info msg="CreateContainer within sandbox \"465c4e838f543ffd85c08bfe82e0948cb2f09f974580df835cdc1722867d3020\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 
13 14:08:40.668700 containerd[1509]: time="2024-12-13T14:08:40.668235883Z" level=info msg="StartContainer for \"04c3ee702511426bb9d9289c0b2f5a012e61fdccc80fa2faca2ced1dc098145f\"" Dec 13 14:08:40.696939 containerd[1509]: time="2024-12-13T14:08:40.696865117Z" level=info msg="CreateContainer within sandbox \"465c4e838f543ffd85c08bfe82e0948cb2f09f974580df835cdc1722867d3020\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"712df361a215ca9ae3366b5d5117a8bcb1e05d960c2ff3f2b2b0433b2880a72f\"" Dec 13 14:08:40.698769 containerd[1509]: time="2024-12-13T14:08:40.698716319Z" level=info msg="StartContainer for \"712df361a215ca9ae3366b5d5117a8bcb1e05d960c2ff3f2b2b0433b2880a72f\"" Dec 13 14:08:40.721514 systemd[1]: Started cri-containerd-04c3ee702511426bb9d9289c0b2f5a012e61fdccc80fa2faca2ced1dc098145f.scope - libcontainer container 04c3ee702511426bb9d9289c0b2f5a012e61fdccc80fa2faca2ced1dc098145f. Dec 13 14:08:40.763553 systemd[1]: Started cri-containerd-712df361a215ca9ae3366b5d5117a8bcb1e05d960c2ff3f2b2b0433b2880a72f.scope - libcontainer container 712df361a215ca9ae3366b5d5117a8bcb1e05d960c2ff3f2b2b0433b2880a72f. Dec 13 14:08:40.796674 containerd[1509]: time="2024-12-13T14:08:40.796625835Z" level=info msg="StartContainer for \"04c3ee702511426bb9d9289c0b2f5a012e61fdccc80fa2faca2ced1dc098145f\" returns successfully" Dec 13 14:08:40.830760 containerd[1509]: time="2024-12-13T14:08:40.830630549Z" level=info msg="StartContainer for \"712df361a215ca9ae3366b5d5117a8bcb1e05d960c2ff3f2b2b0433b2880a72f\" returns successfully" Dec 13 14:08:41.404628 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1650388905.mount: Deactivated successfully. 
Dec 13 14:08:41.749723 kubelet[2761]: I1213 14:08:41.749482 2761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-h52ss" podStartSLOduration=31.749393845 podStartE2EDuration="31.749393845s" podCreationTimestamp="2024-12-13 14:08:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:08:41.72894245 +0000 UTC m=+46.593961594" watchObservedRunningTime="2024-12-13 14:08:41.749393845 +0000 UTC m=+46.614412987" Dec 13 14:08:41.750667 kubelet[2761]: I1213 14:08:41.750014 2761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-rjtkp" podStartSLOduration=31.749996633 podStartE2EDuration="31.749996633s" podCreationTimestamp="2024-12-13 14:08:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:08:41.747752158 +0000 UTC m=+46.612771306" watchObservedRunningTime="2024-12-13 14:08:41.749996633 +0000 UTC m=+46.615015784" Dec 13 14:09:06.532821 systemd[1]: Started sshd@7-10.244.26.14:22-139.178.68.195:46544.service - OpenSSH per-connection server daemon (139.178.68.195:46544). Dec 13 14:09:07.507442 sshd[4141]: Accepted publickey for core from 139.178.68.195 port 46544 ssh2: RSA SHA256:gikLJyEmpnCHkoekB3AFhFPt08JJAv/T+84MF6KEB0A Dec 13 14:09:07.510545 sshd-session[4141]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 14:09:07.521331 systemd-logind[1493]: New session 10 of user core. Dec 13 14:09:07.530613 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 13 14:09:08.683181 sshd[4143]: Connection closed by 139.178.68.195 port 46544 Dec 13 14:09:08.685529 sshd-session[4141]: pam_unix(sshd:session): session closed for user core Dec 13 14:09:08.690108 systemd-logind[1493]: Session 10 logged out. Waiting for processes to exit. 
Dec 13 14:09:08.690812 systemd[1]: sshd@7-10.244.26.14:22-139.178.68.195:46544.service: Deactivated successfully. Dec 13 14:09:08.694525 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 14:09:08.697544 systemd-logind[1493]: Removed session 10. Dec 13 14:09:13.854928 systemd[1]: Started sshd@8-10.244.26.14:22-139.178.68.195:46554.service - OpenSSH per-connection server daemon (139.178.68.195:46554). Dec 13 14:09:14.778022 sshd[4159]: Accepted publickey for core from 139.178.68.195 port 46554 ssh2: RSA SHA256:gikLJyEmpnCHkoekB3AFhFPt08JJAv/T+84MF6KEB0A Dec 13 14:09:14.780414 sshd-session[4159]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 14:09:14.787893 systemd-logind[1493]: New session 11 of user core. Dec 13 14:09:14.796466 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 13 14:09:15.518836 sshd[4161]: Connection closed by 139.178.68.195 port 46554 Dec 13 14:09:15.518315 sshd-session[4159]: pam_unix(sshd:session): session closed for user core Dec 13 14:09:15.525666 systemd[1]: sshd@8-10.244.26.14:22-139.178.68.195:46554.service: Deactivated successfully. Dec 13 14:09:15.526069 systemd-logind[1493]: Session 11 logged out. Waiting for processes to exit. Dec 13 14:09:15.529911 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 14:09:15.532709 systemd-logind[1493]: Removed session 11. Dec 13 14:09:20.681811 systemd[1]: Started sshd@9-10.244.26.14:22-139.178.68.195:54034.service - OpenSSH per-connection server daemon (139.178.68.195:54034). Dec 13 14:09:21.596872 sshd[4173]: Accepted publickey for core from 139.178.68.195 port 54034 ssh2: RSA SHA256:gikLJyEmpnCHkoekB3AFhFPt08JJAv/T+84MF6KEB0A Dec 13 14:09:21.599341 sshd-session[4173]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 14:09:21.608958 systemd-logind[1493]: New session 12 of user core. Dec 13 14:09:21.613510 systemd[1]: Started session-12.scope - Session 12 of User core. 
Dec 13 14:09:22.317581 sshd[4175]: Connection closed by 139.178.68.195 port 54034 Dec 13 14:09:22.318559 sshd-session[4173]: pam_unix(sshd:session): session closed for user core Dec 13 14:09:22.324982 systemd-logind[1493]: Session 12 logged out. Waiting for processes to exit. Dec 13 14:09:22.325565 systemd[1]: sshd@9-10.244.26.14:22-139.178.68.195:54034.service: Deactivated successfully. Dec 13 14:09:22.330513 systemd[1]: session-12.scope: Deactivated successfully. Dec 13 14:09:22.332709 systemd-logind[1493]: Removed session 12. Dec 13 14:09:27.478640 systemd[1]: Started sshd@10-10.244.26.14:22-139.178.68.195:34544.service - OpenSSH per-connection server daemon (139.178.68.195:34544). Dec 13 14:09:28.391256 sshd[4186]: Accepted publickey for core from 139.178.68.195 port 34544 ssh2: RSA SHA256:gikLJyEmpnCHkoekB3AFhFPt08JJAv/T+84MF6KEB0A Dec 13 14:09:28.394058 sshd-session[4186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 14:09:28.402424 systemd-logind[1493]: New session 13 of user core. Dec 13 14:09:28.409468 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 13 14:09:29.120470 sshd[4188]: Connection closed by 139.178.68.195 port 34544 Dec 13 14:09:29.120291 sshd-session[4186]: pam_unix(sshd:session): session closed for user core Dec 13 14:09:29.124720 systemd-logind[1493]: Session 13 logged out. Waiting for processes to exit. Dec 13 14:09:29.125904 systemd[1]: sshd@10-10.244.26.14:22-139.178.68.195:34544.service: Deactivated successfully. Dec 13 14:09:29.129529 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 14:09:29.132491 systemd-logind[1493]: Removed session 13. Dec 13 14:09:29.284753 systemd[1]: Started sshd@11-10.244.26.14:22-139.178.68.195:34554.service - OpenSSH per-connection server daemon (139.178.68.195:34554). 
Dec 13 14:09:30.192784 sshd[4200]: Accepted publickey for core from 139.178.68.195 port 34554 ssh2: RSA SHA256:gikLJyEmpnCHkoekB3AFhFPt08JJAv/T+84MF6KEB0A Dec 13 14:09:30.195185 sshd-session[4200]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 14:09:30.203862 systemd-logind[1493]: New session 14 of user core. Dec 13 14:09:30.209518 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 13 14:09:30.982692 sshd[4203]: Connection closed by 139.178.68.195 port 34554 Dec 13 14:09:30.983821 sshd-session[4200]: pam_unix(sshd:session): session closed for user core Dec 13 14:09:30.988938 systemd[1]: sshd@11-10.244.26.14:22-139.178.68.195:34554.service: Deactivated successfully. Dec 13 14:09:30.991392 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 14:09:30.993858 systemd-logind[1493]: Session 14 logged out. Waiting for processes to exit. Dec 13 14:09:30.996002 systemd-logind[1493]: Removed session 14. Dec 13 14:09:31.142329 systemd[1]: Started sshd@12-10.244.26.14:22-139.178.68.195:34558.service - OpenSSH per-connection server daemon (139.178.68.195:34558). Dec 13 14:09:32.064573 sshd[4211]: Accepted publickey for core from 139.178.68.195 port 34558 ssh2: RSA SHA256:gikLJyEmpnCHkoekB3AFhFPt08JJAv/T+84MF6KEB0A Dec 13 14:09:32.066684 sshd-session[4211]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 14:09:32.074550 systemd-logind[1493]: New session 15 of user core. Dec 13 14:09:32.080524 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 13 14:09:32.783929 sshd[4213]: Connection closed by 139.178.68.195 port 34558 Dec 13 14:09:32.785331 sshd-session[4211]: pam_unix(sshd:session): session closed for user core Dec 13 14:09:32.789852 systemd-logind[1493]: Session 15 logged out. Waiting for processes to exit. Dec 13 14:09:32.792665 systemd[1]: sshd@12-10.244.26.14:22-139.178.68.195:34558.service: Deactivated successfully. 
Dec 13 14:09:32.795952 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 14:09:32.798391 systemd-logind[1493]: Removed session 15. Dec 13 14:09:37.956499 systemd[1]: Started sshd@13-10.244.26.14:22-139.178.68.195:55312.service - OpenSSH per-connection server daemon (139.178.68.195:55312). Dec 13 14:09:38.853754 sshd[4224]: Accepted publickey for core from 139.178.68.195 port 55312 ssh2: RSA SHA256:gikLJyEmpnCHkoekB3AFhFPt08JJAv/T+84MF6KEB0A Dec 13 14:09:38.856341 sshd-session[4224]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 14:09:38.866780 systemd-logind[1493]: New session 16 of user core. Dec 13 14:09:38.875535 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 13 14:09:39.583968 sshd[4226]: Connection closed by 139.178.68.195 port 55312 Dec 13 14:09:39.585215 sshd-session[4224]: pam_unix(sshd:session): session closed for user core Dec 13 14:09:39.590167 systemd-logind[1493]: Session 16 logged out. Waiting for processes to exit. Dec 13 14:09:39.591417 systemd[1]: sshd@13-10.244.26.14:22-139.178.68.195:55312.service: Deactivated successfully. Dec 13 14:09:39.594181 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 14:09:39.596461 systemd-logind[1493]: Removed session 16. 
Dec 13 14:09:43.279346 update_engine[1494]: I20241213 14:09:43.279152 1494 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Dec 13 14:09:43.279346 update_engine[1494]: I20241213 14:09:43.279331 1494 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Dec 13 14:09:43.280622 update_engine[1494]: I20241213 14:09:43.280534 1494 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Dec 13 14:09:43.281727 update_engine[1494]: I20241213 14:09:43.281659 1494 omaha_request_params.cc:62] Current group set to alpha Dec 13 14:09:43.282444 update_engine[1494]: I20241213 14:09:43.282399 1494 update_attempter.cc:499] Already updated boot flags. Skipping. Dec 13 14:09:43.282444 update_engine[1494]: I20241213 14:09:43.282433 1494 update_attempter.cc:643] Scheduling an action processor start. Dec 13 14:09:43.283254 update_engine[1494]: I20241213 14:09:43.282485 1494 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Dec 13 14:09:43.283254 update_engine[1494]: I20241213 14:09:43.282600 1494 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Dec 13 14:09:43.283254 update_engine[1494]: I20241213 14:09:43.282718 1494 omaha_request_action.cc:271] Posting an Omaha request to disabled Dec 13 14:09:43.283254 update_engine[1494]: I20241213 14:09:43.282739 1494 omaha_request_action.cc:272] Request: Dec 13 14:09:43.283254 update_engine[1494]: Dec 13 14:09:43.283254 update_engine[1494]: Dec 13 14:09:43.283254 update_engine[1494]: Dec 13 14:09:43.283254 update_engine[1494]: Dec 13 14:09:43.283254 update_engine[1494]: Dec 13 14:09:43.283254 update_engine[1494]: Dec 13 14:09:43.283254 update_engine[1494]: Dec 13 14:09:43.283254 update_engine[1494]: Dec 13 14:09:43.283254 update_engine[1494]: I20241213 14:09:43.282759 1494 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 13 14:09:43.294889 update_engine[1494]: I20241213 14:09:43.293206 1494 
libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 13 14:09:43.294889 update_engine[1494]: I20241213 14:09:43.294000 1494 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Dec 13 14:09:43.306281 update_engine[1494]: E20241213 14:09:43.303007 1494 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 13 14:09:43.306281 update_engine[1494]: I20241213 14:09:43.303164 1494 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Dec 13 14:09:43.307580 locksmithd[1530]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Dec 13 14:09:44.743743 systemd[1]: Started sshd@14-10.244.26.14:22-139.178.68.195:55326.service - OpenSSH per-connection server daemon (139.178.68.195:55326). Dec 13 14:09:45.658166 sshd[4239]: Accepted publickey for core from 139.178.68.195 port 55326 ssh2: RSA SHA256:gikLJyEmpnCHkoekB3AFhFPt08JJAv/T+84MF6KEB0A Dec 13 14:09:45.660674 sshd-session[4239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 14:09:45.669404 systemd-logind[1493]: New session 17 of user core. Dec 13 14:09:45.674693 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 13 14:09:46.379095 sshd[4241]: Connection closed by 139.178.68.195 port 55326 Dec 13 14:09:46.379599 sshd-session[4239]: pam_unix(sshd:session): session closed for user core Dec 13 14:09:46.388653 systemd[1]: sshd@14-10.244.26.14:22-139.178.68.195:55326.service: Deactivated successfully. Dec 13 14:09:46.392182 systemd[1]: session-17.scope: Deactivated successfully. Dec 13 14:09:46.395786 systemd-logind[1493]: Session 17 logged out. Waiting for processes to exit. Dec 13 14:09:46.398153 systemd-logind[1493]: Removed session 17. Dec 13 14:09:51.539732 systemd[1]: Started sshd@15-10.244.26.14:22-139.178.68.195:48798.service - OpenSSH per-connection server daemon (139.178.68.195:48798). 
Dec 13 14:09:52.478695 sshd[4252]: Accepted publickey for core from 139.178.68.195 port 48798 ssh2: RSA SHA256:gikLJyEmpnCHkoekB3AFhFPt08JJAv/T+84MF6KEB0A Dec 13 14:09:52.481451 sshd-session[4252]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 14:09:52.491216 systemd-logind[1493]: New session 18 of user core. Dec 13 14:09:52.503419 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 13 14:09:53.215620 sshd[4254]: Connection closed by 139.178.68.195 port 48798 Dec 13 14:09:53.216960 sshd-session[4252]: pam_unix(sshd:session): session closed for user core Dec 13 14:09:53.223156 systemd[1]: sshd@15-10.244.26.14:22-139.178.68.195:48798.service: Deactivated successfully. Dec 13 14:09:53.226150 systemd[1]: session-18.scope: Deactivated successfully. Dec 13 14:09:53.227356 systemd-logind[1493]: Session 18 logged out. Waiting for processes to exit. Dec 13 14:09:53.229768 systemd-logind[1493]: Removed session 18. Dec 13 14:09:53.233712 update_engine[1494]: I20241213 14:09:53.233588 1494 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 13 14:09:53.234358 update_engine[1494]: I20241213 14:09:53.234283 1494 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 13 14:09:53.234847 update_engine[1494]: I20241213 14:09:53.234769 1494 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Dec 13 14:09:53.235318 update_engine[1494]: E20241213 14:09:53.235260 1494 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 13 14:09:53.235403 update_engine[1494]: I20241213 14:09:53.235357 1494 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Dec 13 14:09:53.376813 systemd[1]: Started sshd@16-10.244.26.14:22-139.178.68.195:48810.service - OpenSSH per-connection server daemon (139.178.68.195:48810). 
Dec 13 14:09:54.286370 sshd[4265]: Accepted publickey for core from 139.178.68.195 port 48810 ssh2: RSA SHA256:gikLJyEmpnCHkoekB3AFhFPt08JJAv/T+84MF6KEB0A Dec 13 14:09:54.288560 sshd-session[4265]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 14:09:54.295995 systemd-logind[1493]: New session 19 of user core. Dec 13 14:09:54.300454 systemd[1]: Started session-19.scope - Session 19 of User core. Dec 13 14:09:55.350791 sshd[4267]: Connection closed by 139.178.68.195 port 48810 Dec 13 14:09:55.352182 sshd-session[4265]: pam_unix(sshd:session): session closed for user core Dec 13 14:09:55.362957 systemd[1]: sshd@16-10.244.26.14:22-139.178.68.195:48810.service: Deactivated successfully. Dec 13 14:09:55.365828 systemd[1]: session-19.scope: Deactivated successfully. Dec 13 14:09:55.367379 systemd-logind[1493]: Session 19 logged out. Waiting for processes to exit. Dec 13 14:09:55.370251 systemd-logind[1493]: Removed session 19. Dec 13 14:09:55.510779 systemd[1]: Started sshd@17-10.244.26.14:22-139.178.68.195:48822.service - OpenSSH per-connection server daemon (139.178.68.195:48822). Dec 13 14:09:56.434128 sshd[4280]: Accepted publickey for core from 139.178.68.195 port 48822 ssh2: RSA SHA256:gikLJyEmpnCHkoekB3AFhFPt08JJAv/T+84MF6KEB0A Dec 13 14:09:56.435079 sshd-session[4280]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 14:09:56.443887 systemd-logind[1493]: New session 20 of user core. Dec 13 14:09:56.447527 systemd[1]: Started session-20.scope - Session 20 of User core. Dec 13 14:09:59.650271 sshd[4282]: Connection closed by 139.178.68.195 port 48822 Dec 13 14:09:59.652469 sshd-session[4280]: pam_unix(sshd:session): session closed for user core Dec 13 14:09:59.659159 systemd[1]: sshd@17-10.244.26.14:22-139.178.68.195:48822.service: Deactivated successfully. Dec 13 14:09:59.663519 systemd[1]: session-20.scope: Deactivated successfully. 
Dec 13 14:09:59.664069 systemd[1]: session-20.scope: Consumed 1.057s CPU time. Dec 13 14:09:59.666865 systemd-logind[1493]: Session 20 logged out. Waiting for processes to exit. Dec 13 14:09:59.668991 systemd-logind[1493]: Removed session 20. Dec 13 14:09:59.808745 systemd[1]: Started sshd@18-10.244.26.14:22-139.178.68.195:38552.service - OpenSSH per-connection server daemon (139.178.68.195:38552). Dec 13 14:10:00.719531 sshd[4298]: Accepted publickey for core from 139.178.68.195 port 38552 ssh2: RSA SHA256:gikLJyEmpnCHkoekB3AFhFPt08JJAv/T+84MF6KEB0A Dec 13 14:10:00.725135 sshd-session[4298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 14:10:00.734872 systemd-logind[1493]: New session 21 of user core. Dec 13 14:10:00.740577 systemd[1]: Started session-21.scope - Session 21 of User core. Dec 13 14:10:01.865833 sshd[4300]: Connection closed by 139.178.68.195 port 38552 Dec 13 14:10:01.866455 sshd-session[4298]: pam_unix(sshd:session): session closed for user core Dec 13 14:10:01.873460 systemd[1]: sshd@18-10.244.26.14:22-139.178.68.195:38552.service: Deactivated successfully. Dec 13 14:10:01.876897 systemd[1]: session-21.scope: Deactivated successfully. Dec 13 14:10:01.879364 systemd-logind[1493]: Session 21 logged out. Waiting for processes to exit. Dec 13 14:10:01.887853 systemd-logind[1493]: Removed session 21. Dec 13 14:10:02.028424 systemd[1]: Started sshd@19-10.244.26.14:22-139.178.68.195:38556.service - OpenSSH per-connection server daemon (139.178.68.195:38556). Dec 13 14:10:02.947568 sshd[4309]: Accepted publickey for core from 139.178.68.195 port 38556 ssh2: RSA SHA256:gikLJyEmpnCHkoekB3AFhFPt08JJAv/T+84MF6KEB0A Dec 13 14:10:02.949888 sshd-session[4309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 14:10:02.956946 systemd-logind[1493]: New session 22 of user core. Dec 13 14:10:02.965542 systemd[1]: Started session-22.scope - Session 22 of User core. 
Dec 13 14:10:03.234111 update_engine[1494]: I20241213 14:10:03.233775 1494 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 13 14:10:03.235801 update_engine[1494]: I20241213 14:10:03.235754 1494 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 13 14:10:03.236331 update_engine[1494]: I20241213 14:10:03.236282 1494 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Dec 13 14:10:03.237203 update_engine[1494]: E20241213 14:10:03.237100 1494 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 13 14:10:03.237309 update_engine[1494]: I20241213 14:10:03.237196 1494 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Dec 13 14:10:03.648284 sshd[4311]: Connection closed by 139.178.68.195 port 38556 Dec 13 14:10:03.649373 sshd-session[4309]: pam_unix(sshd:session): session closed for user core Dec 13 14:10:03.654745 systemd[1]: sshd@19-10.244.26.14:22-139.178.68.195:38556.service: Deactivated successfully. Dec 13 14:10:03.657553 systemd[1]: session-22.scope: Deactivated successfully. Dec 13 14:10:03.659674 systemd-logind[1493]: Session 22 logged out. Waiting for processes to exit. Dec 13 14:10:03.661696 systemd-logind[1493]: Removed session 22. Dec 13 14:10:08.812913 systemd[1]: Started sshd@20-10.244.26.14:22-139.178.68.195:45986.service - OpenSSH per-connection server daemon (139.178.68.195:45986). Dec 13 14:10:09.723969 sshd[4325]: Accepted publickey for core from 139.178.68.195 port 45986 ssh2: RSA SHA256:gikLJyEmpnCHkoekB3AFhFPt08JJAv/T+84MF6KEB0A Dec 13 14:10:09.727396 sshd-session[4325]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 14:10:09.737372 systemd-logind[1493]: New session 23 of user core. Dec 13 14:10:09.743505 systemd[1]: Started session-23.scope - Session 23 of User core. 
Dec 13 14:10:10.450554 sshd[4327]: Connection closed by 139.178.68.195 port 45986 Dec 13 14:10:10.452593 sshd-session[4325]: pam_unix(sshd:session): session closed for user core Dec 13 14:10:10.458987 systemd[1]: sshd@20-10.244.26.14:22-139.178.68.195:45986.service: Deactivated successfully. Dec 13 14:10:10.463617 systemd[1]: session-23.scope: Deactivated successfully. Dec 13 14:10:10.465154 systemd-logind[1493]: Session 23 logged out. Waiting for processes to exit. Dec 13 14:10:10.466842 systemd-logind[1493]: Removed session 23. Dec 13 14:10:13.236288 update_engine[1494]: I20241213 14:10:13.236020 1494 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 13 14:10:13.237480 update_engine[1494]: I20241213 14:10:13.236935 1494 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 13 14:10:13.237844 update_engine[1494]: I20241213 14:10:13.237797 1494 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Dec 13 14:10:13.238547 update_engine[1494]: E20241213 14:10:13.238500 1494 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 13 14:10:13.238681 update_engine[1494]: I20241213 14:10:13.238590 1494 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Dec 13 14:10:13.238681 update_engine[1494]: I20241213 14:10:13.238623 1494 omaha_request_action.cc:617] Omaha request response: Dec 13 14:10:13.238857 update_engine[1494]: E20241213 14:10:13.238819 1494 omaha_request_action.cc:636] Omaha request network transfer failed. Dec 13 14:10:13.247711 update_engine[1494]: I20241213 14:10:13.247649 1494 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. 
Dec 13 14:10:13.247711 update_engine[1494]: I20241213 14:10:13.247685 1494 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Dec 13 14:10:13.247711 update_engine[1494]: I20241213 14:10:13.247705 1494 update_attempter.cc:306] Processing Done. Dec 13 14:10:13.253511 update_engine[1494]: E20241213 14:10:13.253373 1494 update_attempter.cc:619] Update failed. Dec 13 14:10:13.253511 update_engine[1494]: I20241213 14:10:13.253427 1494 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Dec 13 14:10:13.253511 update_engine[1494]: I20241213 14:10:13.253463 1494 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Dec 13 14:10:13.253511 update_engine[1494]: I20241213 14:10:13.253477 1494 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Dec 13 14:10:13.253774 update_engine[1494]: I20241213 14:10:13.253643 1494 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Dec 13 14:10:13.253774 update_engine[1494]: I20241213 14:10:13.253720 1494 omaha_request_action.cc:271] Posting an Omaha request to disabled Dec 13 14:10:13.253774 update_engine[1494]: I20241213 14:10:13.253737 1494 omaha_request_action.cc:272] Request: Dec 13 14:10:13.253774 update_engine[1494]: Dec 13 14:10:13.253774 update_engine[1494]: Dec 13 14:10:13.253774 update_engine[1494]: Dec 13 14:10:13.253774 update_engine[1494]: Dec 13 14:10:13.253774 update_engine[1494]: Dec 13 14:10:13.253774 update_engine[1494]: Dec 13 14:10:13.253774 update_engine[1494]: I20241213 14:10:13.253755 1494 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 13 14:10:13.254267 update_engine[1494]: I20241213 14:10:13.254083 1494 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 13 14:10:13.255105 update_engine[1494]: I20241213 14:10:13.254540 1494 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Dec 13 14:10:13.255105 update_engine[1494]: E20241213 14:10:13.254828 1494 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 13 14:10:13.255105 update_engine[1494]: I20241213 14:10:13.254887 1494 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Dec 13 14:10:13.255105 update_engine[1494]: I20241213 14:10:13.254904 1494 omaha_request_action.cc:617] Omaha request response: Dec 13 14:10:13.255105 update_engine[1494]: I20241213 14:10:13.254917 1494 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Dec 13 14:10:13.255105 update_engine[1494]: I20241213 14:10:13.254928 1494 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Dec 13 14:10:13.255105 update_engine[1494]: I20241213 14:10:13.254940 1494 update_attempter.cc:306] Processing Done. Dec 13 14:10:13.255105 update_engine[1494]: I20241213 14:10:13.254952 1494 update_attempter.cc:310] Error event sent. Dec 13 14:10:13.255105 update_engine[1494]: I20241213 14:10:13.254978 1494 update_check_scheduler.cc:74] Next update check in 42m23s Dec 13 14:10:13.255936 locksmithd[1530]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Dec 13 14:10:13.255936 locksmithd[1530]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Dec 13 14:10:15.619415 systemd[1]: Started sshd@21-10.244.26.14:22-139.178.68.195:45996.service - OpenSSH per-connection server daemon (139.178.68.195:45996). Dec 13 14:10:16.540585 sshd[4340]: Accepted publickey for core from 139.178.68.195 port 45996 ssh2: RSA SHA256:gikLJyEmpnCHkoekB3AFhFPt08JJAv/T+84MF6KEB0A Dec 13 14:10:16.543093 sshd-session[4340]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 14:10:16.552010 systemd-logind[1493]: New session 24 of user core. 
Dec 13 14:10:16.563896 systemd[1]: Started session-24.scope - Session 24 of User core. Dec 13 14:10:17.268513 sshd[4342]: Connection closed by 139.178.68.195 port 45996 Dec 13 14:10:17.269749 sshd-session[4340]: pam_unix(sshd:session): session closed for user core Dec 13 14:10:17.275509 systemd[1]: sshd@21-10.244.26.14:22-139.178.68.195:45996.service: Deactivated successfully. Dec 13 14:10:17.278137 systemd[1]: session-24.scope: Deactivated successfully. Dec 13 14:10:17.279555 systemd-logind[1493]: Session 24 logged out. Waiting for processes to exit. Dec 13 14:10:17.281057 systemd-logind[1493]: Removed session 24. Dec 13 14:10:22.434836 systemd[1]: Started sshd@22-10.244.26.14:22-139.178.68.195:42970.service - OpenSSH per-connection server daemon (139.178.68.195:42970). Dec 13 14:10:23.358519 sshd[4353]: Accepted publickey for core from 139.178.68.195 port 42970 ssh2: RSA SHA256:gikLJyEmpnCHkoekB3AFhFPt08JJAv/T+84MF6KEB0A Dec 13 14:10:23.360999 sshd-session[4353]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 14:10:23.369537 systemd-logind[1493]: New session 25 of user core. Dec 13 14:10:23.378510 systemd[1]: Started session-25.scope - Session 25 of User core. Dec 13 14:10:24.090568 sshd[4355]: Connection closed by 139.178.68.195 port 42970 Dec 13 14:10:24.091623 sshd-session[4353]: pam_unix(sshd:session): session closed for user core Dec 13 14:10:24.096133 systemd-logind[1493]: Session 25 logged out. Waiting for processes to exit. Dec 13 14:10:24.097198 systemd[1]: sshd@22-10.244.26.14:22-139.178.68.195:42970.service: Deactivated successfully. Dec 13 14:10:24.100141 systemd[1]: session-25.scope: Deactivated successfully. Dec 13 14:10:24.102167 systemd-logind[1493]: Removed session 25. Dec 13 14:10:24.251886 systemd[1]: Started sshd@23-10.244.26.14:22-139.178.68.195:42974.service - OpenSSH per-connection server daemon (139.178.68.195:42974). 
Dec 13 14:10:25.150488 sshd[4365]: Accepted publickey for core from 139.178.68.195 port 42974 ssh2: RSA SHA256:gikLJyEmpnCHkoekB3AFhFPt08JJAv/T+84MF6KEB0A Dec 13 14:10:25.152705 sshd-session[4365]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 14:10:25.159965 systemd-logind[1493]: New session 26 of user core. Dec 13 14:10:25.167594 systemd[1]: Started session-26.scope - Session 26 of User core. Dec 13 14:10:27.397166 containerd[1509]: time="2024-12-13T14:10:27.396632921Z" level=info msg="StopContainer for \"5da60f00c0f95f49d7af907afb716baa4aeacaed106a1afae4c00b6b9b704bdd\" with timeout 30 (s)" Dec 13 14:10:27.402698 containerd[1509]: time="2024-12-13T14:10:27.402653433Z" level=info msg="Stop container \"5da60f00c0f95f49d7af907afb716baa4aeacaed106a1afae4c00b6b9b704bdd\" with signal terminated" Dec 13 14:10:27.444754 systemd[1]: cri-containerd-5da60f00c0f95f49d7af907afb716baa4aeacaed106a1afae4c00b6b9b704bdd.scope: Deactivated successfully. Dec 13 14:10:27.460171 systemd[1]: run-containerd-runc-k8s.io-8cd49edfb922e60c5c1139dddd4a0bd9455fb4ed5a0723e831136c266cf555d6-runc.PlGEPy.mount: Deactivated successfully. 
Dec 13 14:10:27.492446 containerd[1509]: time="2024-12-13T14:10:27.492286702Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 14:10:27.501769 containerd[1509]: time="2024-12-13T14:10:27.501586848Z" level=info msg="StopContainer for \"8cd49edfb922e60c5c1139dddd4a0bd9455fb4ed5a0723e831136c266cf555d6\" with timeout 2 (s)" Dec 13 14:10:27.503285 containerd[1509]: time="2024-12-13T14:10:27.503254987Z" level=info msg="Stop container \"8cd49edfb922e60c5c1139dddd4a0bd9455fb4ed5a0723e831136c266cf555d6\" with signal terminated" Dec 13 14:10:27.527578 systemd-networkd[1437]: lxc_health: Link DOWN Dec 13 14:10:27.528878 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5da60f00c0f95f49d7af907afb716baa4aeacaed106a1afae4c00b6b9b704bdd-rootfs.mount: Deactivated successfully. Dec 13 14:10:27.530061 systemd-networkd[1437]: lxc_health: Lost carrier Dec 13 14:10:27.536003 containerd[1509]: time="2024-12-13T14:10:27.535827273Z" level=info msg="shim disconnected" id=5da60f00c0f95f49d7af907afb716baa4aeacaed106a1afae4c00b6b9b704bdd namespace=k8s.io Dec 13 14:10:27.536162 containerd[1509]: time="2024-12-13T14:10:27.536003443Z" level=warning msg="cleaning up after shim disconnected" id=5da60f00c0f95f49d7af907afb716baa4aeacaed106a1afae4c00b6b9b704bdd namespace=k8s.io Dec 13 14:10:27.536162 containerd[1509]: time="2024-12-13T14:10:27.536050935Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 14:10:27.557521 systemd[1]: cri-containerd-8cd49edfb922e60c5c1139dddd4a0bd9455fb4ed5a0723e831136c266cf555d6.scope: Deactivated successfully. Dec 13 14:10:27.557966 systemd[1]: cri-containerd-8cd49edfb922e60c5c1139dddd4a0bd9455fb4ed5a0723e831136c266cf555d6.scope: Consumed 11.247s CPU time. 
Dec 13 14:10:27.582650 containerd[1509]: time="2024-12-13T14:10:27.582485721Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:10:27Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Dec 13 14:10:27.587705 containerd[1509]: time="2024-12-13T14:10:27.587548580Z" level=info msg="StopContainer for \"5da60f00c0f95f49d7af907afb716baa4aeacaed106a1afae4c00b6b9b704bdd\" returns successfully" Dec 13 14:10:27.588682 containerd[1509]: time="2024-12-13T14:10:27.588563392Z" level=info msg="StopPodSandbox for \"6e11fc7fc78e3c9354834d7212345a84c9c3d8fe2bf982c6cefbbec54ea3c68c\"" Dec 13 14:10:27.588790 containerd[1509]: time="2024-12-13T14:10:27.588619473Z" level=info msg="Container to stop \"5da60f00c0f95f49d7af907afb716baa4aeacaed106a1afae4c00b6b9b704bdd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:10:27.593531 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6e11fc7fc78e3c9354834d7212345a84c9c3d8fe2bf982c6cefbbec54ea3c68c-shm.mount: Deactivated successfully. Dec 13 14:10:27.609308 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8cd49edfb922e60c5c1139dddd4a0bd9455fb4ed5a0723e831136c266cf555d6-rootfs.mount: Deactivated successfully. 
Dec 13 14:10:27.619250 containerd[1509]: time="2024-12-13T14:10:27.617085507Z" level=info msg="shim disconnected" id=8cd49edfb922e60c5c1139dddd4a0bd9455fb4ed5a0723e831136c266cf555d6 namespace=k8s.io Dec 13 14:10:27.619250 containerd[1509]: time="2024-12-13T14:10:27.617713222Z" level=warning msg="cleaning up after shim disconnected" id=8cd49edfb922e60c5c1139dddd4a0bd9455fb4ed5a0723e831136c266cf555d6 namespace=k8s.io Dec 13 14:10:27.619250 containerd[1509]: time="2024-12-13T14:10:27.617730028Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 14:10:27.617800 systemd[1]: cri-containerd-6e11fc7fc78e3c9354834d7212345a84c9c3d8fe2bf982c6cefbbec54ea3c68c.scope: Deactivated successfully. Dec 13 14:10:27.656782 containerd[1509]: time="2024-12-13T14:10:27.654599739Z" level=info msg="StopContainer for \"8cd49edfb922e60c5c1139dddd4a0bd9455fb4ed5a0723e831136c266cf555d6\" returns successfully" Dec 13 14:10:27.658021 containerd[1509]: time="2024-12-13T14:10:27.657777575Z" level=info msg="StopPodSandbox for \"6a799eca18c43b7f74c4d6693fae7f91a2acdfdeb45ee4eb2175d76dc015b5b7\"" Dec 13 14:10:27.658021 containerd[1509]: time="2024-12-13T14:10:27.657830057Z" level=info msg="Container to stop \"8ea14b448abc62ebf275c62213372be1434cabf08dd9dbae9c020ea48c9ead2c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:10:27.658021 containerd[1509]: time="2024-12-13T14:10:27.657920641Z" level=info msg="Container to stop \"518597a4ab6d6579aa7d5a4e95bc6a4eacf4a9943762218514b3f7cdd8c7758a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:10:27.658808 containerd[1509]: time="2024-12-13T14:10:27.657945510Z" level=info msg="Container to stop \"d08ba1405d19b31e6e0c4abe5eed14094f12a47e2bdf8291c6f8eafab637956c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:10:27.658808 containerd[1509]: time="2024-12-13T14:10:27.658613458Z" level=info msg="Container to stop 
\"7b4faca5e336b57fec106d2a0059f8d4a7efb51548289f5e68aea1c724ed99d8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:10:27.658808 containerd[1509]: time="2024-12-13T14:10:27.658642503Z" level=info msg="Container to stop \"8cd49edfb922e60c5c1139dddd4a0bd9455fb4ed5a0723e831136c266cf555d6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:10:27.671249 containerd[1509]: time="2024-12-13T14:10:27.670902528Z" level=info msg="shim disconnected" id=6e11fc7fc78e3c9354834d7212345a84c9c3d8fe2bf982c6cefbbec54ea3c68c namespace=k8s.io Dec 13 14:10:27.671249 containerd[1509]: time="2024-12-13T14:10:27.670985116Z" level=warning msg="cleaning up after shim disconnected" id=6e11fc7fc78e3c9354834d7212345a84c9c3d8fe2bf982c6cefbbec54ea3c68c namespace=k8s.io Dec 13 14:10:27.671249 containerd[1509]: time="2024-12-13T14:10:27.671056805Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 14:10:27.676269 systemd[1]: cri-containerd-6a799eca18c43b7f74c4d6693fae7f91a2acdfdeb45ee4eb2175d76dc015b5b7.scope: Deactivated successfully. 
Dec 13 14:10:27.702991 containerd[1509]: time="2024-12-13T14:10:27.702763076Z" level=info msg="TearDown network for sandbox \"6e11fc7fc78e3c9354834d7212345a84c9c3d8fe2bf982c6cefbbec54ea3c68c\" successfully" Dec 13 14:10:27.702991 containerd[1509]: time="2024-12-13T14:10:27.702825463Z" level=info msg="StopPodSandbox for \"6e11fc7fc78e3c9354834d7212345a84c9c3d8fe2bf982c6cefbbec54ea3c68c\" returns successfully" Dec 13 14:10:27.728613 containerd[1509]: time="2024-12-13T14:10:27.728283935Z" level=info msg="shim disconnected" id=6a799eca18c43b7f74c4d6693fae7f91a2acdfdeb45ee4eb2175d76dc015b5b7 namespace=k8s.io Dec 13 14:10:27.728613 containerd[1509]: time="2024-12-13T14:10:27.728560334Z" level=warning msg="cleaning up after shim disconnected" id=6a799eca18c43b7f74c4d6693fae7f91a2acdfdeb45ee4eb2175d76dc015b5b7 namespace=k8s.io Dec 13 14:10:27.728613 containerd[1509]: time="2024-12-13T14:10:27.728587987Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 14:10:27.760356 containerd[1509]: time="2024-12-13T14:10:27.760260854Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:10:27Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Dec 13 14:10:27.762461 containerd[1509]: time="2024-12-13T14:10:27.762405413Z" level=info msg="TearDown network for sandbox \"6a799eca18c43b7f74c4d6693fae7f91a2acdfdeb45ee4eb2175d76dc015b5b7\" successfully" Dec 13 14:10:27.762552 containerd[1509]: time="2024-12-13T14:10:27.762462140Z" level=info msg="StopPodSandbox for \"6a799eca18c43b7f74c4d6693fae7f91a2acdfdeb45ee4eb2175d76dc015b5b7\" returns successfully" Dec 13 14:10:27.781343 kubelet[2761]: I1213 14:10:27.780583 2761 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/907e6629-cdc8-4c9d-bbc5-5b1517c14ba6-cilium-config-path\") pod 
\"907e6629-cdc8-4c9d-bbc5-5b1517c14ba6\" (UID: \"907e6629-cdc8-4c9d-bbc5-5b1517c14ba6\") " Dec 13 14:10:27.781343 kubelet[2761]: I1213 14:10:27.780724 2761 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j2hdg\" (UniqueName: \"kubernetes.io/projected/907e6629-cdc8-4c9d-bbc5-5b1517c14ba6-kube-api-access-j2hdg\") pod \"907e6629-cdc8-4c9d-bbc5-5b1517c14ba6\" (UID: \"907e6629-cdc8-4c9d-bbc5-5b1517c14ba6\") " Dec 13 14:10:27.789817 kubelet[2761]: I1213 14:10:27.787975 2761 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/907e6629-cdc8-4c9d-bbc5-5b1517c14ba6-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "907e6629-cdc8-4c9d-bbc5-5b1517c14ba6" (UID: "907e6629-cdc8-4c9d-bbc5-5b1517c14ba6"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 14:10:27.791436 kubelet[2761]: I1213 14:10:27.791278 2761 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/907e6629-cdc8-4c9d-bbc5-5b1517c14ba6-kube-api-access-j2hdg" (OuterVolumeSpecName: "kube-api-access-j2hdg") pod "907e6629-cdc8-4c9d-bbc5-5b1517c14ba6" (UID: "907e6629-cdc8-4c9d-bbc5-5b1517c14ba6"). InnerVolumeSpecName "kube-api-access-j2hdg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:10:27.881724 kubelet[2761]: I1213 14:10:27.881672 2761 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a4774e33-b6c5-4900-b58b-65e19e36d863-cilium-config-path\") pod \"a4774e33-b6c5-4900-b58b-65e19e36d863\" (UID: \"a4774e33-b6c5-4900-b58b-65e19e36d863\") " Dec 13 14:10:27.881724 kubelet[2761]: I1213 14:10:27.881724 2761 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a4774e33-b6c5-4900-b58b-65e19e36d863-etc-cni-netd\") pod \"a4774e33-b6c5-4900-b58b-65e19e36d863\" (UID: \"a4774e33-b6c5-4900-b58b-65e19e36d863\") " Dec 13 14:10:27.882139 kubelet[2761]: I1213 14:10:27.881776 2761 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lm8nh\" (UniqueName: \"kubernetes.io/projected/a4774e33-b6c5-4900-b58b-65e19e36d863-kube-api-access-lm8nh\") pod \"a4774e33-b6c5-4900-b58b-65e19e36d863\" (UID: \"a4774e33-b6c5-4900-b58b-65e19e36d863\") " Dec 13 14:10:27.882139 kubelet[2761]: I1213 14:10:27.881810 2761 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a4774e33-b6c5-4900-b58b-65e19e36d863-cni-path\") pod \"a4774e33-b6c5-4900-b58b-65e19e36d863\" (UID: \"a4774e33-b6c5-4900-b58b-65e19e36d863\") " Dec 13 14:10:27.882139 kubelet[2761]: I1213 14:10:27.881844 2761 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a4774e33-b6c5-4900-b58b-65e19e36d863-clustermesh-secrets\") pod \"a4774e33-b6c5-4900-b58b-65e19e36d863\" (UID: \"a4774e33-b6c5-4900-b58b-65e19e36d863\") " Dec 13 14:10:27.882139 kubelet[2761]: I1213 14:10:27.881880 2761 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/a4774e33-b6c5-4900-b58b-65e19e36d863-xtables-lock\") pod \"a4774e33-b6c5-4900-b58b-65e19e36d863\" (UID: \"a4774e33-b6c5-4900-b58b-65e19e36d863\") " Dec 13 14:10:27.882139 kubelet[2761]: I1213 14:10:27.881902 2761 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a4774e33-b6c5-4900-b58b-65e19e36d863-lib-modules\") pod \"a4774e33-b6c5-4900-b58b-65e19e36d863\" (UID: \"a4774e33-b6c5-4900-b58b-65e19e36d863\") " Dec 13 14:10:27.882139 kubelet[2761]: I1213 14:10:27.881935 2761 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a4774e33-b6c5-4900-b58b-65e19e36d863-bpf-maps\") pod \"a4774e33-b6c5-4900-b58b-65e19e36d863\" (UID: \"a4774e33-b6c5-4900-b58b-65e19e36d863\") " Dec 13 14:10:27.882931 kubelet[2761]: I1213 14:10:27.881962 2761 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a4774e33-b6c5-4900-b58b-65e19e36d863-host-proc-sys-net\") pod \"a4774e33-b6c5-4900-b58b-65e19e36d863\" (UID: \"a4774e33-b6c5-4900-b58b-65e19e36d863\") " Dec 13 14:10:27.882931 kubelet[2761]: I1213 14:10:27.882018 2761 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a4774e33-b6c5-4900-b58b-65e19e36d863-cilium-cgroup\") pod \"a4774e33-b6c5-4900-b58b-65e19e36d863\" (UID: \"a4774e33-b6c5-4900-b58b-65e19e36d863\") " Dec 13 14:10:27.882931 kubelet[2761]: I1213 14:10:27.882045 2761 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a4774e33-b6c5-4900-b58b-65e19e36d863-host-proc-sys-kernel\") pod \"a4774e33-b6c5-4900-b58b-65e19e36d863\" (UID: \"a4774e33-b6c5-4900-b58b-65e19e36d863\") " Dec 13 14:10:27.882931 kubelet[2761]: I1213 14:10:27.882068 2761 
reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a4774e33-b6c5-4900-b58b-65e19e36d863-cilium-run\") pod \"a4774e33-b6c5-4900-b58b-65e19e36d863\" (UID: \"a4774e33-b6c5-4900-b58b-65e19e36d863\") " Dec 13 14:10:27.882931 kubelet[2761]: I1213 14:10:27.882120 2761 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a4774e33-b6c5-4900-b58b-65e19e36d863-hubble-tls\") pod \"a4774e33-b6c5-4900-b58b-65e19e36d863\" (UID: \"a4774e33-b6c5-4900-b58b-65e19e36d863\") " Dec 13 14:10:27.882931 kubelet[2761]: I1213 14:10:27.882146 2761 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a4774e33-b6c5-4900-b58b-65e19e36d863-hostproc\") pod \"a4774e33-b6c5-4900-b58b-65e19e36d863\" (UID: \"a4774e33-b6c5-4900-b58b-65e19e36d863\") " Dec 13 14:10:27.883286 kubelet[2761]: I1213 14:10:27.882502 2761 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a4774e33-b6c5-4900-b58b-65e19e36d863-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "a4774e33-b6c5-4900-b58b-65e19e36d863" (UID: "a4774e33-b6c5-4900-b58b-65e19e36d863"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:10:27.884809 kubelet[2761]: I1213 14:10:27.884730 2761 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a4774e33-b6c5-4900-b58b-65e19e36d863-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "a4774e33-b6c5-4900-b58b-65e19e36d863" (UID: "a4774e33-b6c5-4900-b58b-65e19e36d863"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:10:27.886100 kubelet[2761]: I1213 14:10:27.886058 2761 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/907e6629-cdc8-4c9d-bbc5-5b1517c14ba6-cilium-config-path\") on node \"srv-p3tlm.gb1.brightbox.com\" DevicePath \"\"" Dec 13 14:10:27.886204 kubelet[2761]: I1213 14:10:27.886102 2761 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-j2hdg\" (UniqueName: \"kubernetes.io/projected/907e6629-cdc8-4c9d-bbc5-5b1517c14ba6-kube-api-access-j2hdg\") on node \"srv-p3tlm.gb1.brightbox.com\" DevicePath \"\"" Dec 13 14:10:27.886204 kubelet[2761]: I1213 14:10:27.886142 2761 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a4774e33-b6c5-4900-b58b-65e19e36d863-hostproc" (OuterVolumeSpecName: "hostproc") pod "a4774e33-b6c5-4900-b58b-65e19e36d863" (UID: "a4774e33-b6c5-4900-b58b-65e19e36d863"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:10:27.886204 kubelet[2761]: I1213 14:10:27.886176 2761 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a4774e33-b6c5-4900-b58b-65e19e36d863-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "a4774e33-b6c5-4900-b58b-65e19e36d863" (UID: "a4774e33-b6c5-4900-b58b-65e19e36d863"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:10:27.886397 kubelet[2761]: I1213 14:10:27.886204 2761 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a4774e33-b6c5-4900-b58b-65e19e36d863-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "a4774e33-b6c5-4900-b58b-65e19e36d863" (UID: "a4774e33-b6c5-4900-b58b-65e19e36d863"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:10:27.886397 kubelet[2761]: I1213 14:10:27.886250 2761 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a4774e33-b6c5-4900-b58b-65e19e36d863-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "a4774e33-b6c5-4900-b58b-65e19e36d863" (UID: "a4774e33-b6c5-4900-b58b-65e19e36d863"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:10:27.886397 kubelet[2761]: I1213 14:10:27.886279 2761 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a4774e33-b6c5-4900-b58b-65e19e36d863-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "a4774e33-b6c5-4900-b58b-65e19e36d863" (UID: "a4774e33-b6c5-4900-b58b-65e19e36d863"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:10:27.886397 kubelet[2761]: I1213 14:10:27.886329 2761 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a4774e33-b6c5-4900-b58b-65e19e36d863-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "a4774e33-b6c5-4900-b58b-65e19e36d863" (UID: "a4774e33-b6c5-4900-b58b-65e19e36d863"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:10:27.889710 kubelet[2761]: I1213 14:10:27.889291 2761 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a4774e33-b6c5-4900-b58b-65e19e36d863-cni-path" (OuterVolumeSpecName: "cni-path") pod "a4774e33-b6c5-4900-b58b-65e19e36d863" (UID: "a4774e33-b6c5-4900-b58b-65e19e36d863"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:10:27.890302 kubelet[2761]: I1213 14:10:27.890204 2761 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a4774e33-b6c5-4900-b58b-65e19e36d863-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "a4774e33-b6c5-4900-b58b-65e19e36d863" (UID: "a4774e33-b6c5-4900-b58b-65e19e36d863"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:10:27.893782 kubelet[2761]: I1213 14:10:27.893738 2761 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4774e33-b6c5-4900-b58b-65e19e36d863-kube-api-access-lm8nh" (OuterVolumeSpecName: "kube-api-access-lm8nh") pod "a4774e33-b6c5-4900-b58b-65e19e36d863" (UID: "a4774e33-b6c5-4900-b58b-65e19e36d863"). InnerVolumeSpecName "kube-api-access-lm8nh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:10:27.894498 kubelet[2761]: I1213 14:10:27.894469 2761 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4774e33-b6c5-4900-b58b-65e19e36d863-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "a4774e33-b6c5-4900-b58b-65e19e36d863" (UID: "a4774e33-b6c5-4900-b58b-65e19e36d863"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:10:27.894639 kubelet[2761]: I1213 14:10:27.894598 2761 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4774e33-b6c5-4900-b58b-65e19e36d863-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "a4774e33-b6c5-4900-b58b-65e19e36d863" (UID: "a4774e33-b6c5-4900-b58b-65e19e36d863"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 14:10:27.896830 kubelet[2761]: I1213 14:10:27.896759 2761 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4774e33-b6c5-4900-b58b-65e19e36d863-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a4774e33-b6c5-4900-b58b-65e19e36d863" (UID: "a4774e33-b6c5-4900-b58b-65e19e36d863"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 14:10:27.986555 kubelet[2761]: I1213 14:10:27.986381 2761 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-lm8nh\" (UniqueName: \"kubernetes.io/projected/a4774e33-b6c5-4900-b58b-65e19e36d863-kube-api-access-lm8nh\") on node \"srv-p3tlm.gb1.brightbox.com\" DevicePath \"\"" Dec 13 14:10:27.986555 kubelet[2761]: I1213 14:10:27.986440 2761 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a4774e33-b6c5-4900-b58b-65e19e36d863-cni-path\") on node \"srv-p3tlm.gb1.brightbox.com\" DevicePath \"\"" Dec 13 14:10:27.986555 kubelet[2761]: I1213 14:10:27.986460 2761 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a4774e33-b6c5-4900-b58b-65e19e36d863-clustermesh-secrets\") on node \"srv-p3tlm.gb1.brightbox.com\" DevicePath \"\"" Dec 13 14:10:27.986555 kubelet[2761]: I1213 14:10:27.986479 2761 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a4774e33-b6c5-4900-b58b-65e19e36d863-xtables-lock\") on node \"srv-p3tlm.gb1.brightbox.com\" DevicePath \"\"" Dec 13 14:10:27.986555 kubelet[2761]: I1213 14:10:27.986494 2761 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a4774e33-b6c5-4900-b58b-65e19e36d863-lib-modules\") on node \"srv-p3tlm.gb1.brightbox.com\" DevicePath \"\"" Dec 13 14:10:27.986555 kubelet[2761]: I1213 
14:10:27.986509 2761 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a4774e33-b6c5-4900-b58b-65e19e36d863-bpf-maps\") on node \"srv-p3tlm.gb1.brightbox.com\" DevicePath \"\"" Dec 13 14:10:27.986555 kubelet[2761]: I1213 14:10:27.986523 2761 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a4774e33-b6c5-4900-b58b-65e19e36d863-host-proc-sys-net\") on node \"srv-p3tlm.gb1.brightbox.com\" DevicePath \"\"" Dec 13 14:10:27.986555 kubelet[2761]: I1213 14:10:27.986546 2761 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a4774e33-b6c5-4900-b58b-65e19e36d863-cilium-cgroup\") on node \"srv-p3tlm.gb1.brightbox.com\" DevicePath \"\"" Dec 13 14:10:27.987148 kubelet[2761]: I1213 14:10:27.986570 2761 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a4774e33-b6c5-4900-b58b-65e19e36d863-host-proc-sys-kernel\") on node \"srv-p3tlm.gb1.brightbox.com\" DevicePath \"\"" Dec 13 14:10:27.987148 kubelet[2761]: I1213 14:10:27.986584 2761 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a4774e33-b6c5-4900-b58b-65e19e36d863-cilium-run\") on node \"srv-p3tlm.gb1.brightbox.com\" DevicePath \"\"" Dec 13 14:10:27.987148 kubelet[2761]: I1213 14:10:27.986598 2761 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a4774e33-b6c5-4900-b58b-65e19e36d863-hubble-tls\") on node \"srv-p3tlm.gb1.brightbox.com\" DevicePath \"\"" Dec 13 14:10:27.987148 kubelet[2761]: I1213 14:10:27.986611 2761 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a4774e33-b6c5-4900-b58b-65e19e36d863-hostproc\") on node \"srv-p3tlm.gb1.brightbox.com\" DevicePath \"\"" Dec 13 14:10:27.987148 kubelet[2761]: I1213 
14:10:27.986634 2761 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a4774e33-b6c5-4900-b58b-65e19e36d863-cilium-config-path\") on node \"srv-p3tlm.gb1.brightbox.com\" DevicePath \"\"" Dec 13 14:10:27.987148 kubelet[2761]: I1213 14:10:27.986670 2761 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a4774e33-b6c5-4900-b58b-65e19e36d863-etc-cni-netd\") on node \"srv-p3tlm.gb1.brightbox.com\" DevicePath \"\"" Dec 13 14:10:28.006624 kubelet[2761]: I1213 14:10:28.005430 2761 scope.go:117] "RemoveContainer" containerID="5da60f00c0f95f49d7af907afb716baa4aeacaed106a1afae4c00b6b9b704bdd" Dec 13 14:10:28.018203 systemd[1]: Removed slice kubepods-besteffort-pod907e6629_cdc8_4c9d_bbc5_5b1517c14ba6.slice - libcontainer container kubepods-besteffort-pod907e6629_cdc8_4c9d_bbc5_5b1517c14ba6.slice. Dec 13 14:10:28.023121 containerd[1509]: time="2024-12-13T14:10:28.021990952Z" level=info msg="RemoveContainer for \"5da60f00c0f95f49d7af907afb716baa4aeacaed106a1afae4c00b6b9b704bdd\"" Dec 13 14:10:28.032550 containerd[1509]: time="2024-12-13T14:10:28.032044931Z" level=info msg="RemoveContainer for \"5da60f00c0f95f49d7af907afb716baa4aeacaed106a1afae4c00b6b9b704bdd\" returns successfully" Dec 13 14:10:28.032745 kubelet[2761]: I1213 14:10:28.032592 2761 scope.go:117] "RemoveContainer" containerID="5da60f00c0f95f49d7af907afb716baa4aeacaed106a1afae4c00b6b9b704bdd" Dec 13 14:10:28.033763 containerd[1509]: time="2024-12-13T14:10:28.033681907Z" level=error msg="ContainerStatus for \"5da60f00c0f95f49d7af907afb716baa4aeacaed106a1afae4c00b6b9b704bdd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5da60f00c0f95f49d7af907afb716baa4aeacaed106a1afae4c00b6b9b704bdd\": not found" Dec 13 14:10:28.062235 systemd[1]: Removed slice kubepods-burstable-poda4774e33_b6c5_4900_b58b_65e19e36d863.slice - libcontainer container 
kubepods-burstable-poda4774e33_b6c5_4900_b58b_65e19e36d863.slice. Dec 13 14:10:28.062399 systemd[1]: kubepods-burstable-poda4774e33_b6c5_4900_b58b_65e19e36d863.slice: Consumed 11.389s CPU time. Dec 13 14:10:28.063907 kubelet[2761]: E1213 14:10:28.063836 2761 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5da60f00c0f95f49d7af907afb716baa4aeacaed106a1afae4c00b6b9b704bdd\": not found" containerID="5da60f00c0f95f49d7af907afb716baa4aeacaed106a1afae4c00b6b9b704bdd" Dec 13 14:10:28.065436 kubelet[2761]: I1213 14:10:28.065290 2761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5da60f00c0f95f49d7af907afb716baa4aeacaed106a1afae4c00b6b9b704bdd"} err="failed to get container status \"5da60f00c0f95f49d7af907afb716baa4aeacaed106a1afae4c00b6b9b704bdd\": rpc error: code = NotFound desc = an error occurred when try to find container \"5da60f00c0f95f49d7af907afb716baa4aeacaed106a1afae4c00b6b9b704bdd\": not found" Dec 13 14:10:28.065519 kubelet[2761]: I1213 14:10:28.065437 2761 scope.go:117] "RemoveContainer" containerID="8cd49edfb922e60c5c1139dddd4a0bd9455fb4ed5a0723e831136c266cf555d6" Dec 13 14:10:28.074693 containerd[1509]: time="2024-12-13T14:10:28.074534393Z" level=info msg="RemoveContainer for \"8cd49edfb922e60c5c1139dddd4a0bd9455fb4ed5a0723e831136c266cf555d6\"" Dec 13 14:10:28.083990 containerd[1509]: time="2024-12-13T14:10:28.083524196Z" level=info msg="RemoveContainer for \"8cd49edfb922e60c5c1139dddd4a0bd9455fb4ed5a0723e831136c266cf555d6\" returns successfully" Dec 13 14:10:28.084177 kubelet[2761]: I1213 14:10:28.083815 2761 scope.go:117] "RemoveContainer" containerID="d08ba1405d19b31e6e0c4abe5eed14094f12a47e2bdf8291c6f8eafab637956c" Dec 13 14:10:28.088651 containerd[1509]: time="2024-12-13T14:10:28.087542465Z" level=info msg="RemoveContainer for \"d08ba1405d19b31e6e0c4abe5eed14094f12a47e2bdf8291c6f8eafab637956c\"" 
Dec 13 14:10:28.092878 containerd[1509]: time="2024-12-13T14:10:28.092291142Z" level=info msg="RemoveContainer for \"d08ba1405d19b31e6e0c4abe5eed14094f12a47e2bdf8291c6f8eafab637956c\" returns successfully" Dec 13 14:10:28.093190 kubelet[2761]: I1213 14:10:28.092662 2761 scope.go:117] "RemoveContainer" containerID="518597a4ab6d6579aa7d5a4e95bc6a4eacf4a9943762218514b3f7cdd8c7758a" Dec 13 14:10:28.095244 containerd[1509]: time="2024-12-13T14:10:28.095135088Z" level=info msg="RemoveContainer for \"518597a4ab6d6579aa7d5a4e95bc6a4eacf4a9943762218514b3f7cdd8c7758a\"" Dec 13 14:10:28.100605 containerd[1509]: time="2024-12-13T14:10:28.100558502Z" level=info msg="RemoveContainer for \"518597a4ab6d6579aa7d5a4e95bc6a4eacf4a9943762218514b3f7cdd8c7758a\" returns successfully" Dec 13 14:10:28.101098 kubelet[2761]: I1213 14:10:28.100845 2761 scope.go:117] "RemoveContainer" containerID="8ea14b448abc62ebf275c62213372be1434cabf08dd9dbae9c020ea48c9ead2c" Dec 13 14:10:28.106264 containerd[1509]: time="2024-12-13T14:10:28.105353178Z" level=info msg="RemoveContainer for \"8ea14b448abc62ebf275c62213372be1434cabf08dd9dbae9c020ea48c9ead2c\"" Dec 13 14:10:28.118067 containerd[1509]: time="2024-12-13T14:10:28.117980682Z" level=info msg="RemoveContainer for \"8ea14b448abc62ebf275c62213372be1434cabf08dd9dbae9c020ea48c9ead2c\" returns successfully" Dec 13 14:10:28.118426 kubelet[2761]: I1213 14:10:28.118396 2761 scope.go:117] "RemoveContainer" containerID="7b4faca5e336b57fec106d2a0059f8d4a7efb51548289f5e68aea1c724ed99d8" Dec 13 14:10:28.120332 containerd[1509]: time="2024-12-13T14:10:28.119917338Z" level=info msg="RemoveContainer for \"7b4faca5e336b57fec106d2a0059f8d4a7efb51548289f5e68aea1c724ed99d8\"" Dec 13 14:10:28.123745 containerd[1509]: time="2024-12-13T14:10:28.123684677Z" level=info msg="RemoveContainer for \"7b4faca5e336b57fec106d2a0059f8d4a7efb51548289f5e68aea1c724ed99d8\" returns successfully" Dec 13 14:10:28.124367 kubelet[2761]: I1213 14:10:28.124337 2761 scope.go:117] 
"RemoveContainer" containerID="8cd49edfb922e60c5c1139dddd4a0bd9455fb4ed5a0723e831136c266cf555d6" Dec 13 14:10:28.124690 containerd[1509]: time="2024-12-13T14:10:28.124594358Z" level=error msg="ContainerStatus for \"8cd49edfb922e60c5c1139dddd4a0bd9455fb4ed5a0723e831136c266cf555d6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8cd49edfb922e60c5c1139dddd4a0bd9455fb4ed5a0723e831136c266cf555d6\": not found" Dec 13 14:10:28.124790 kubelet[2761]: E1213 14:10:28.124758 2761 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8cd49edfb922e60c5c1139dddd4a0bd9455fb4ed5a0723e831136c266cf555d6\": not found" containerID="8cd49edfb922e60c5c1139dddd4a0bd9455fb4ed5a0723e831136c266cf555d6" Dec 13 14:10:28.124842 kubelet[2761]: I1213 14:10:28.124791 2761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8cd49edfb922e60c5c1139dddd4a0bd9455fb4ed5a0723e831136c266cf555d6"} err="failed to get container status \"8cd49edfb922e60c5c1139dddd4a0bd9455fb4ed5a0723e831136c266cf555d6\": rpc error: code = NotFound desc = an error occurred when try to find container \"8cd49edfb922e60c5c1139dddd4a0bd9455fb4ed5a0723e831136c266cf555d6\": not found" Dec 13 14:10:28.124842 kubelet[2761]: I1213 14:10:28.124820 2761 scope.go:117] "RemoveContainer" containerID="d08ba1405d19b31e6e0c4abe5eed14094f12a47e2bdf8291c6f8eafab637956c" Dec 13 14:10:28.125202 containerd[1509]: time="2024-12-13T14:10:28.125111908Z" level=error msg="ContainerStatus for \"d08ba1405d19b31e6e0c4abe5eed14094f12a47e2bdf8291c6f8eafab637956c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d08ba1405d19b31e6e0c4abe5eed14094f12a47e2bdf8291c6f8eafab637956c\": not found" Dec 13 14:10:28.125591 kubelet[2761]: E1213 14:10:28.125562 2761 remote_runtime.go:432] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d08ba1405d19b31e6e0c4abe5eed14094f12a47e2bdf8291c6f8eafab637956c\": not found" containerID="d08ba1405d19b31e6e0c4abe5eed14094f12a47e2bdf8291c6f8eafab637956c" Dec 13 14:10:28.125696 kubelet[2761]: I1213 14:10:28.125596 2761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d08ba1405d19b31e6e0c4abe5eed14094f12a47e2bdf8291c6f8eafab637956c"} err="failed to get container status \"d08ba1405d19b31e6e0c4abe5eed14094f12a47e2bdf8291c6f8eafab637956c\": rpc error: code = NotFound desc = an error occurred when try to find container \"d08ba1405d19b31e6e0c4abe5eed14094f12a47e2bdf8291c6f8eafab637956c\": not found" Dec 13 14:10:28.125696 kubelet[2761]: I1213 14:10:28.125621 2761 scope.go:117] "RemoveContainer" containerID="518597a4ab6d6579aa7d5a4e95bc6a4eacf4a9943762218514b3f7cdd8c7758a" Dec 13 14:10:28.126189 containerd[1509]: time="2024-12-13T14:10:28.126082050Z" level=error msg="ContainerStatus for \"518597a4ab6d6579aa7d5a4e95bc6a4eacf4a9943762218514b3f7cdd8c7758a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"518597a4ab6d6579aa7d5a4e95bc6a4eacf4a9943762218514b3f7cdd8c7758a\": not found" Dec 13 14:10:28.126672 kubelet[2761]: E1213 14:10:28.126287 2761 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"518597a4ab6d6579aa7d5a4e95bc6a4eacf4a9943762218514b3f7cdd8c7758a\": not found" containerID="518597a4ab6d6579aa7d5a4e95bc6a4eacf4a9943762218514b3f7cdd8c7758a" Dec 13 14:10:28.126672 kubelet[2761]: I1213 14:10:28.126346 2761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"518597a4ab6d6579aa7d5a4e95bc6a4eacf4a9943762218514b3f7cdd8c7758a"} err="failed to get container status 
\"518597a4ab6d6579aa7d5a4e95bc6a4eacf4a9943762218514b3f7cdd8c7758a\": rpc error: code = NotFound desc = an error occurred when try to find container \"518597a4ab6d6579aa7d5a4e95bc6a4eacf4a9943762218514b3f7cdd8c7758a\": not found" Dec 13 14:10:28.126672 kubelet[2761]: I1213 14:10:28.126370 2761 scope.go:117] "RemoveContainer" containerID="8ea14b448abc62ebf275c62213372be1434cabf08dd9dbae9c020ea48c9ead2c" Dec 13 14:10:28.127339 containerd[1509]: time="2024-12-13T14:10:28.126581129Z" level=error msg="ContainerStatus for \"8ea14b448abc62ebf275c62213372be1434cabf08dd9dbae9c020ea48c9ead2c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8ea14b448abc62ebf275c62213372be1434cabf08dd9dbae9c020ea48c9ead2c\": not found" Dec 13 14:10:28.127396 kubelet[2761]: E1213 14:10:28.127336 2761 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8ea14b448abc62ebf275c62213372be1434cabf08dd9dbae9c020ea48c9ead2c\": not found" containerID="8ea14b448abc62ebf275c62213372be1434cabf08dd9dbae9c020ea48c9ead2c" Dec 13 14:10:28.127396 kubelet[2761]: I1213 14:10:28.127365 2761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8ea14b448abc62ebf275c62213372be1434cabf08dd9dbae9c020ea48c9ead2c"} err="failed to get container status \"8ea14b448abc62ebf275c62213372be1434cabf08dd9dbae9c020ea48c9ead2c\": rpc error: code = NotFound desc = an error occurred when try to find container \"8ea14b448abc62ebf275c62213372be1434cabf08dd9dbae9c020ea48c9ead2c\": not found" Dec 13 14:10:28.127396 kubelet[2761]: I1213 14:10:28.127386 2761 scope.go:117] "RemoveContainer" containerID="7b4faca5e336b57fec106d2a0059f8d4a7efb51548289f5e68aea1c724ed99d8" Dec 13 14:10:28.127788 containerd[1509]: time="2024-12-13T14:10:28.127674168Z" level=error msg="ContainerStatus for 
\"7b4faca5e336b57fec106d2a0059f8d4a7efb51548289f5e68aea1c724ed99d8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7b4faca5e336b57fec106d2a0059f8d4a7efb51548289f5e68aea1c724ed99d8\": not found" Dec 13 14:10:28.127996 kubelet[2761]: E1213 14:10:28.127954 2761 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7b4faca5e336b57fec106d2a0059f8d4a7efb51548289f5e68aea1c724ed99d8\": not found" containerID="7b4faca5e336b57fec106d2a0059f8d4a7efb51548289f5e68aea1c724ed99d8" Dec 13 14:10:28.128131 kubelet[2761]: I1213 14:10:28.128081 2761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7b4faca5e336b57fec106d2a0059f8d4a7efb51548289f5e68aea1c724ed99d8"} err="failed to get container status \"7b4faca5e336b57fec106d2a0059f8d4a7efb51548289f5e68aea1c724ed99d8\": rpc error: code = NotFound desc = an error occurred when try to find container \"7b4faca5e336b57fec106d2a0059f8d4a7efb51548289f5e68aea1c724ed99d8\": not found" Dec 13 14:10:28.449072 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6e11fc7fc78e3c9354834d7212345a84c9c3d8fe2bf982c6cefbbec54ea3c68c-rootfs.mount: Deactivated successfully. Dec 13 14:10:28.449268 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6a799eca18c43b7f74c4d6693fae7f91a2acdfdeb45ee4eb2175d76dc015b5b7-rootfs.mount: Deactivated successfully. Dec 13 14:10:28.449406 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6a799eca18c43b7f74c4d6693fae7f91a2acdfdeb45ee4eb2175d76dc015b5b7-shm.mount: Deactivated successfully. Dec 13 14:10:28.449554 systemd[1]: var-lib-kubelet-pods-907e6629\x2dcdc8\x2d4c9d\x2dbbc5\x2d5b1517c14ba6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dj2hdg.mount: Deactivated successfully. 
Dec 13 14:10:28.449753 systemd[1]: var-lib-kubelet-pods-a4774e33\x2db6c5\x2d4900\x2db58b\x2d65e19e36d863-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlm8nh.mount: Deactivated successfully. Dec 13 14:10:28.449972 systemd[1]: var-lib-kubelet-pods-a4774e33\x2db6c5\x2d4900\x2db58b\x2d65e19e36d863-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 14:10:28.450208 systemd[1]: var-lib-kubelet-pods-a4774e33\x2db6c5\x2d4900\x2db58b\x2d65e19e36d863-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 14:10:29.346258 kubelet[2761]: I1213 14:10:29.344699 2761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="907e6629-cdc8-4c9d-bbc5-5b1517c14ba6" path="/var/lib/kubelet/pods/907e6629-cdc8-4c9d-bbc5-5b1517c14ba6/volumes" Dec 13 14:10:29.346258 kubelet[2761]: I1213 14:10:29.345659 2761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4774e33-b6c5-4900-b58b-65e19e36d863" path="/var/lib/kubelet/pods/a4774e33-b6c5-4900-b58b-65e19e36d863/volumes" Dec 13 14:10:29.372030 sshd[4367]: Connection closed by 139.178.68.195 port 42974 Dec 13 14:10:29.374150 sshd-session[4365]: pam_unix(sshd:session): session closed for user core Dec 13 14:10:29.380199 systemd[1]: sshd@23-10.244.26.14:22-139.178.68.195:42974.service: Deactivated successfully. Dec 13 14:10:29.384195 systemd[1]: session-26.scope: Deactivated successfully. Dec 13 14:10:29.384687 systemd[1]: session-26.scope: Consumed 1.029s CPU time. Dec 13 14:10:29.386786 systemd-logind[1493]: Session 26 logged out. Waiting for processes to exit. Dec 13 14:10:29.388583 systemd-logind[1493]: Removed session 26. Dec 13 14:10:29.531676 systemd[1]: Started sshd@24-10.244.26.14:22-139.178.68.195:33268.service - OpenSSH per-connection server daemon (139.178.68.195:33268). 
Dec 13 14:10:30.446971 sshd[4528]: Accepted publickey for core from 139.178.68.195 port 33268 ssh2: RSA SHA256:gikLJyEmpnCHkoekB3AFhFPt08JJAv/T+84MF6KEB0A
Dec 13 14:10:30.449207 sshd-session[4528]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 14:10:30.457349 systemd-logind[1493]: New session 27 of user core.
Dec 13 14:10:30.462652 systemd[1]: Started session-27.scope - Session 27 of User core.
Dec 13 14:10:30.557824 kubelet[2761]: E1213 14:10:30.557696 2761 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 13 14:10:31.883833 kubelet[2761]: I1213 14:10:31.879799 2761 topology_manager.go:215] "Topology Admit Handler" podUID="11ceca6f-d70f-4464-a9ff-4b44e4e81443" podNamespace="kube-system" podName="cilium-5g8x5"
Dec 13 14:10:31.890257 kubelet[2761]: E1213 14:10:31.889413 2761 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a4774e33-b6c5-4900-b58b-65e19e36d863" containerName="apply-sysctl-overwrites"
Dec 13 14:10:31.890257 kubelet[2761]: E1213 14:10:31.889461 2761 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a4774e33-b6c5-4900-b58b-65e19e36d863" containerName="mount-cgroup"
Dec 13 14:10:31.890257 kubelet[2761]: E1213 14:10:31.889478 2761 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a4774e33-b6c5-4900-b58b-65e19e36d863" containerName="mount-bpf-fs"
Dec 13 14:10:31.890257 kubelet[2761]: E1213 14:10:31.889489 2761 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a4774e33-b6c5-4900-b58b-65e19e36d863" containerName="clean-cilium-state"
Dec 13 14:10:31.890257 kubelet[2761]: E1213 14:10:31.889500 2761 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a4774e33-b6c5-4900-b58b-65e19e36d863" containerName="cilium-agent"
Dec 13 14:10:31.890257 kubelet[2761]: E1213 14:10:31.889512 2761 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="907e6629-cdc8-4c9d-bbc5-5b1517c14ba6" containerName="cilium-operator"
Dec 13 14:10:31.890257 kubelet[2761]: I1213 14:10:31.889591 2761 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4774e33-b6c5-4900-b58b-65e19e36d863" containerName="cilium-agent"
Dec 13 14:10:31.890257 kubelet[2761]: I1213 14:10:31.889611 2761 memory_manager.go:354] "RemoveStaleState removing state" podUID="907e6629-cdc8-4c9d-bbc5-5b1517c14ba6" containerName="cilium-operator"
Dec 13 14:10:31.947806 systemd[1]: Created slice kubepods-burstable-pod11ceca6f_d70f_4464_a9ff_4b44e4e81443.slice - libcontainer container kubepods-burstable-pod11ceca6f_d70f_4464_a9ff_4b44e4e81443.slice.
Dec 13 14:10:31.977466 sshd[4530]: Connection closed by 139.178.68.195 port 33268
Dec 13 14:10:31.978895 sshd-session[4528]: pam_unix(sshd:session): session closed for user core
Dec 13 14:10:31.988198 systemd[1]: sshd@24-10.244.26.14:22-139.178.68.195:33268.service: Deactivated successfully.
Dec 13 14:10:31.993112 systemd[1]: session-27.scope: Deactivated successfully.
Dec 13 14:10:31.996655 systemd-logind[1493]: Session 27 logged out. Waiting for processes to exit.
Dec 13 14:10:32.001521 systemd-logind[1493]: Removed session 27.
Dec 13 14:10:32.023869 kubelet[2761]: I1213 14:10:32.023700 2761 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/11ceca6f-d70f-4464-a9ff-4b44e4e81443-bpf-maps\") pod \"cilium-5g8x5\" (UID: \"11ceca6f-d70f-4464-a9ff-4b44e4e81443\") " pod="kube-system/cilium-5g8x5"
Dec 13 14:10:32.023869 kubelet[2761]: I1213 14:10:32.023781 2761 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/11ceca6f-d70f-4464-a9ff-4b44e4e81443-host-proc-sys-kernel\") pod \"cilium-5g8x5\" (UID: \"11ceca6f-d70f-4464-a9ff-4b44e4e81443\") " pod="kube-system/cilium-5g8x5"
Dec 13 14:10:32.024100 kubelet[2761]: I1213 14:10:32.023940 2761 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/11ceca6f-d70f-4464-a9ff-4b44e4e81443-cilium-run\") pod \"cilium-5g8x5\" (UID: \"11ceca6f-d70f-4464-a9ff-4b44e4e81443\") " pod="kube-system/cilium-5g8x5"
Dec 13 14:10:32.024100 kubelet[2761]: I1213 14:10:32.024013 2761 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/11ceca6f-d70f-4464-a9ff-4b44e4e81443-cilium-cgroup\") pod \"cilium-5g8x5\" (UID: \"11ceca6f-d70f-4464-a9ff-4b44e4e81443\") " pod="kube-system/cilium-5g8x5"
Dec 13 14:10:32.024100 kubelet[2761]: I1213 14:10:32.024056 2761 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/11ceca6f-d70f-4464-a9ff-4b44e4e81443-hubble-tls\") pod \"cilium-5g8x5\" (UID: \"11ceca6f-d70f-4464-a9ff-4b44e4e81443\") " pod="kube-system/cilium-5g8x5"
Dec 13 14:10:32.024318 kubelet[2761]: I1213 14:10:32.024116 2761 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/11ceca6f-d70f-4464-a9ff-4b44e4e81443-xtables-lock\") pod \"cilium-5g8x5\" (UID: \"11ceca6f-d70f-4464-a9ff-4b44e4e81443\") " pod="kube-system/cilium-5g8x5"
Dec 13 14:10:32.024318 kubelet[2761]: I1213 14:10:32.024159 2761 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/11ceca6f-d70f-4464-a9ff-4b44e4e81443-host-proc-sys-net\") pod \"cilium-5g8x5\" (UID: \"11ceca6f-d70f-4464-a9ff-4b44e4e81443\") " pod="kube-system/cilium-5g8x5"
Dec 13 14:10:32.024318 kubelet[2761]: I1213 14:10:32.024245 2761 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/11ceca6f-d70f-4464-a9ff-4b44e4e81443-hostproc\") pod \"cilium-5g8x5\" (UID: \"11ceca6f-d70f-4464-a9ff-4b44e4e81443\") " pod="kube-system/cilium-5g8x5"
Dec 13 14:10:32.024318 kubelet[2761]: I1213 14:10:32.024284 2761 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/11ceca6f-d70f-4464-a9ff-4b44e4e81443-lib-modules\") pod \"cilium-5g8x5\" (UID: \"11ceca6f-d70f-4464-a9ff-4b44e4e81443\") " pod="kube-system/cilium-5g8x5"
Dec 13 14:10:32.024492 kubelet[2761]: I1213 14:10:32.024331 2761 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/11ceca6f-d70f-4464-a9ff-4b44e4e81443-cni-path\") pod \"cilium-5g8x5\" (UID: \"11ceca6f-d70f-4464-a9ff-4b44e4e81443\") " pod="kube-system/cilium-5g8x5"
Dec 13 14:10:32.024492 kubelet[2761]: I1213 14:10:32.024359 2761 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/11ceca6f-d70f-4464-a9ff-4b44e4e81443-etc-cni-netd\") pod \"cilium-5g8x5\" (UID: \"11ceca6f-d70f-4464-a9ff-4b44e4e81443\") " pod="kube-system/cilium-5g8x5"
Dec 13 14:10:32.024492 kubelet[2761]: I1213 14:10:32.024385 2761 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/11ceca6f-d70f-4464-a9ff-4b44e4e81443-cilium-config-path\") pod \"cilium-5g8x5\" (UID: \"11ceca6f-d70f-4464-a9ff-4b44e4e81443\") " pod="kube-system/cilium-5g8x5"
Dec 13 14:10:32.024492 kubelet[2761]: I1213 14:10:32.024419 2761 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/11ceca6f-d70f-4464-a9ff-4b44e4e81443-cilium-ipsec-secrets\") pod \"cilium-5g8x5\" (UID: \"11ceca6f-d70f-4464-a9ff-4b44e4e81443\") " pod="kube-system/cilium-5g8x5"
Dec 13 14:10:32.024492 kubelet[2761]: I1213 14:10:32.024447 2761 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96bp2\" (UniqueName: \"kubernetes.io/projected/11ceca6f-d70f-4464-a9ff-4b44e4e81443-kube-api-access-96bp2\") pod \"cilium-5g8x5\" (UID: \"11ceca6f-d70f-4464-a9ff-4b44e4e81443\") " pod="kube-system/cilium-5g8x5"
Dec 13 14:10:32.024723 kubelet[2761]: I1213 14:10:32.024485 2761 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/11ceca6f-d70f-4464-a9ff-4b44e4e81443-clustermesh-secrets\") pod \"cilium-5g8x5\" (UID: \"11ceca6f-d70f-4464-a9ff-4b44e4e81443\") " pod="kube-system/cilium-5g8x5"
Dec 13 14:10:32.184703 systemd[1]: Started sshd@25-10.244.26.14:22-139.178.68.195:33278.service - OpenSSH per-connection server daemon (139.178.68.195:33278).
Dec 13 14:10:32.261856 containerd[1509]: time="2024-12-13T14:10:32.261716037Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5g8x5,Uid:11ceca6f-d70f-4464-a9ff-4b44e4e81443,Namespace:kube-system,Attempt:0,}"
Dec 13 14:10:32.302720 containerd[1509]: time="2024-12-13T14:10:32.302051613Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:10:32.302720 containerd[1509]: time="2024-12-13T14:10:32.302173512Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:10:32.302720 containerd[1509]: time="2024-12-13T14:10:32.302278734Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:10:32.302720 containerd[1509]: time="2024-12-13T14:10:32.302474206Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:10:32.332515 systemd[1]: Started cri-containerd-7a6bcdd5a579bdb0e2112bbdefd04803b559c3fe0e871ed963081aa38617465b.scope - libcontainer container 7a6bcdd5a579bdb0e2112bbdefd04803b559c3fe0e871ed963081aa38617465b.
Dec 13 14:10:32.376606 containerd[1509]: time="2024-12-13T14:10:32.376539647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5g8x5,Uid:11ceca6f-d70f-4464-a9ff-4b44e4e81443,Namespace:kube-system,Attempt:0,} returns sandbox id \"7a6bcdd5a579bdb0e2112bbdefd04803b559c3fe0e871ed963081aa38617465b\""
Dec 13 14:10:32.386823 containerd[1509]: time="2024-12-13T14:10:32.386758135Z" level=info msg="CreateContainer within sandbox \"7a6bcdd5a579bdb0e2112bbdefd04803b559c3fe0e871ed963081aa38617465b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 14:10:32.402300 containerd[1509]: time="2024-12-13T14:10:32.402059343Z" level=info msg="CreateContainer within sandbox \"7a6bcdd5a579bdb0e2112bbdefd04803b559c3fe0e871ed963081aa38617465b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"35d3504d050d06a442b5f8af87b431a41cd5cf474877108ad12bd5de49e4d3f8\""
Dec 13 14:10:32.404644 containerd[1509]: time="2024-12-13T14:10:32.403358308Z" level=info msg="StartContainer for \"35d3504d050d06a442b5f8af87b431a41cd5cf474877108ad12bd5de49e4d3f8\""
Dec 13 14:10:32.448503 systemd[1]: Started cri-containerd-35d3504d050d06a442b5f8af87b431a41cd5cf474877108ad12bd5de49e4d3f8.scope - libcontainer container 35d3504d050d06a442b5f8af87b431a41cd5cf474877108ad12bd5de49e4d3f8.
Dec 13 14:10:32.500616 containerd[1509]: time="2024-12-13T14:10:32.500523439Z" level=info msg="StartContainer for \"35d3504d050d06a442b5f8af87b431a41cd5cf474877108ad12bd5de49e4d3f8\" returns successfully"
Dec 13 14:10:32.529602 systemd[1]: cri-containerd-35d3504d050d06a442b5f8af87b431a41cd5cf474877108ad12bd5de49e4d3f8.scope: Deactivated successfully.
Dec 13 14:10:32.574051 containerd[1509]: time="2024-12-13T14:10:32.573816994Z" level=info msg="shim disconnected" id=35d3504d050d06a442b5f8af87b431a41cd5cf474877108ad12bd5de49e4d3f8 namespace=k8s.io
Dec 13 14:10:32.574051 containerd[1509]: time="2024-12-13T14:10:32.574045238Z" level=warning msg="cleaning up after shim disconnected" id=35d3504d050d06a442b5f8af87b431a41cd5cf474877108ad12bd5de49e4d3f8 namespace=k8s.io
Dec 13 14:10:32.574470 containerd[1509]: time="2024-12-13T14:10:32.574094231Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 14:10:33.059339 containerd[1509]: time="2024-12-13T14:10:33.059039361Z" level=info msg="CreateContainer within sandbox \"7a6bcdd5a579bdb0e2112bbdefd04803b559c3fe0e871ed963081aa38617465b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 13 14:10:33.076655 containerd[1509]: time="2024-12-13T14:10:33.076471469Z" level=info msg="CreateContainer within sandbox \"7a6bcdd5a579bdb0e2112bbdefd04803b559c3fe0e871ed963081aa38617465b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"26c5250bb4aba0420ceb04689a1015b171feb997922e9c3a60b0cb6994dbc40c\""
Dec 13 14:10:33.080268 containerd[1509]: time="2024-12-13T14:10:33.079104158Z" level=info msg="StartContainer for \"26c5250bb4aba0420ceb04689a1015b171feb997922e9c3a60b0cb6994dbc40c\""
Dec 13 14:10:33.095955 sshd[4543]: Accepted publickey for core from 139.178.68.195 port 33278 ssh2: RSA SHA256:gikLJyEmpnCHkoekB3AFhFPt08JJAv/T+84MF6KEB0A
Dec 13 14:10:33.101820 sshd-session[4543]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 14:10:33.112421 systemd-logind[1493]: New session 28 of user core.
Dec 13 14:10:33.117481 systemd[1]: Started session-28.scope - Session 28 of User core.
Dec 13 14:10:33.136894 systemd[1]: Started cri-containerd-26c5250bb4aba0420ceb04689a1015b171feb997922e9c3a60b0cb6994dbc40c.scope - libcontainer container 26c5250bb4aba0420ceb04689a1015b171feb997922e9c3a60b0cb6994dbc40c.
Dec 13 14:10:33.190295 containerd[1509]: time="2024-12-13T14:10:33.189418292Z" level=info msg="StartContainer for \"26c5250bb4aba0420ceb04689a1015b171feb997922e9c3a60b0cb6994dbc40c\" returns successfully"
Dec 13 14:10:33.209970 systemd[1]: cri-containerd-26c5250bb4aba0420ceb04689a1015b171feb997922e9c3a60b0cb6994dbc40c.scope: Deactivated successfully.
Dec 13 14:10:33.250715 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-26c5250bb4aba0420ceb04689a1015b171feb997922e9c3a60b0cb6994dbc40c-rootfs.mount: Deactivated successfully.
Dec 13 14:10:33.256142 containerd[1509]: time="2024-12-13T14:10:33.256003722Z" level=info msg="shim disconnected" id=26c5250bb4aba0420ceb04689a1015b171feb997922e9c3a60b0cb6994dbc40c namespace=k8s.io
Dec 13 14:10:33.256142 containerd[1509]: time="2024-12-13T14:10:33.256103149Z" level=warning msg="cleaning up after shim disconnected" id=26c5250bb4aba0420ceb04689a1015b171feb997922e9c3a60b0cb6994dbc40c namespace=k8s.io
Dec 13 14:10:33.256142 containerd[1509]: time="2024-12-13T14:10:33.256119875Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 14:10:33.713085 sshd[4664]: Connection closed by 139.178.68.195 port 33278
Dec 13 14:10:33.714528 sshd-session[4543]: pam_unix(sshd:session): session closed for user core
Dec 13 14:10:33.719400 systemd[1]: sshd@25-10.244.26.14:22-139.178.68.195:33278.service: Deactivated successfully.
Dec 13 14:10:33.723484 systemd[1]: session-28.scope: Deactivated successfully.
Dec 13 14:10:33.726207 systemd-logind[1493]: Session 28 logged out. Waiting for processes to exit.
Dec 13 14:10:33.727968 systemd-logind[1493]: Removed session 28.
Dec 13 14:10:33.873671 systemd[1]: Started sshd@26-10.244.26.14:22-139.178.68.195:33280.service - OpenSSH per-connection server daemon (139.178.68.195:33280).
Dec 13 14:10:34.089596 containerd[1509]: time="2024-12-13T14:10:34.089359978Z" level=info msg="CreateContainer within sandbox \"7a6bcdd5a579bdb0e2112bbdefd04803b559c3fe0e871ed963081aa38617465b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 13 14:10:34.114277 containerd[1509]: time="2024-12-13T14:10:34.114044173Z" level=info msg="CreateContainer within sandbox \"7a6bcdd5a579bdb0e2112bbdefd04803b559c3fe0e871ed963081aa38617465b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"aeba2828f53f30b797f1368a987498c250395696c1e77993b7b8e9e1ea5403c3\""
Dec 13 14:10:34.114969 containerd[1509]: time="2024-12-13T14:10:34.114832389Z" level=info msg="StartContainer for \"aeba2828f53f30b797f1368a987498c250395696c1e77993b7b8e9e1ea5403c3\""
Dec 13 14:10:34.178544 systemd[1]: Started cri-containerd-aeba2828f53f30b797f1368a987498c250395696c1e77993b7b8e9e1ea5403c3.scope - libcontainer container aeba2828f53f30b797f1368a987498c250395696c1e77993b7b8e9e1ea5403c3.
Dec 13 14:10:34.232260 containerd[1509]: time="2024-12-13T14:10:34.232025140Z" level=info msg="StartContainer for \"aeba2828f53f30b797f1368a987498c250395696c1e77993b7b8e9e1ea5403c3\" returns successfully"
Dec 13 14:10:34.240517 systemd[1]: cri-containerd-aeba2828f53f30b797f1368a987498c250395696c1e77993b7b8e9e1ea5403c3.scope: Deactivated successfully.
Dec 13 14:10:34.273380 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aeba2828f53f30b797f1368a987498c250395696c1e77993b7b8e9e1ea5403c3-rootfs.mount: Deactivated successfully.
Dec 13 14:10:34.277577 containerd[1509]: time="2024-12-13T14:10:34.277251380Z" level=info msg="shim disconnected" id=aeba2828f53f30b797f1368a987498c250395696c1e77993b7b8e9e1ea5403c3 namespace=k8s.io
Dec 13 14:10:34.277577 containerd[1509]: time="2024-12-13T14:10:34.277359117Z" level=warning msg="cleaning up after shim disconnected" id=aeba2828f53f30b797f1368a987498c250395696c1e77993b7b8e9e1ea5403c3 namespace=k8s.io
Dec 13 14:10:34.277577 containerd[1509]: time="2024-12-13T14:10:34.277376131Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 14:10:34.782250 sshd[4713]: Accepted publickey for core from 139.178.68.195 port 33280 ssh2: RSA SHA256:gikLJyEmpnCHkoekB3AFhFPt08JJAv/T+84MF6KEB0A
Dec 13 14:10:34.784811 sshd-session[4713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 14:10:34.793570 systemd-logind[1493]: New session 29 of user core.
Dec 13 14:10:34.799748 systemd[1]: Started session-29.scope - Session 29 of User core.
Dec 13 14:10:35.069431 containerd[1509]: time="2024-12-13T14:10:35.068938404Z" level=info msg="CreateContainer within sandbox \"7a6bcdd5a579bdb0e2112bbdefd04803b559c3fe0e871ed963081aa38617465b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 14:10:35.113909 containerd[1509]: time="2024-12-13T14:10:35.113195506Z" level=info msg="CreateContainer within sandbox \"7a6bcdd5a579bdb0e2112bbdefd04803b559c3fe0e871ed963081aa38617465b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"8c30374aed1531a943b4bb3f4569b84ee98707965af852c8dff4f9d3923de921\""
Dec 13 14:10:35.115888 containerd[1509]: time="2024-12-13T14:10:35.115806313Z" level=info msg="StartContainer for \"8c30374aed1531a943b4bb3f4569b84ee98707965af852c8dff4f9d3923de921\""
Dec 13 14:10:35.173565 systemd[1]: Started cri-containerd-8c30374aed1531a943b4bb3f4569b84ee98707965af852c8dff4f9d3923de921.scope - libcontainer container 8c30374aed1531a943b4bb3f4569b84ee98707965af852c8dff4f9d3923de921.
Dec 13 14:10:35.215643 systemd[1]: cri-containerd-8c30374aed1531a943b4bb3f4569b84ee98707965af852c8dff4f9d3923de921.scope: Deactivated successfully.
Dec 13 14:10:35.220657 containerd[1509]: time="2024-12-13T14:10:35.220342379Z" level=info msg="StartContainer for \"8c30374aed1531a943b4bb3f4569b84ee98707965af852c8dff4f9d3923de921\" returns successfully"
Dec 13 14:10:35.265417 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8c30374aed1531a943b4bb3f4569b84ee98707965af852c8dff4f9d3923de921-rootfs.mount: Deactivated successfully.
Dec 13 14:10:35.269836 containerd[1509]: time="2024-12-13T14:10:35.269481445Z" level=info msg="shim disconnected" id=8c30374aed1531a943b4bb3f4569b84ee98707965af852c8dff4f9d3923de921 namespace=k8s.io
Dec 13 14:10:35.269836 containerd[1509]: time="2024-12-13T14:10:35.269728401Z" level=warning msg="cleaning up after shim disconnected" id=8c30374aed1531a943b4bb3f4569b84ee98707965af852c8dff4f9d3923de921 namespace=k8s.io
Dec 13 14:10:35.269836 containerd[1509]: time="2024-12-13T14:10:35.269745959Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 14:10:35.560023 kubelet[2761]: E1213 14:10:35.559864 2761 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 13 14:10:36.077616 containerd[1509]: time="2024-12-13T14:10:36.077491721Z" level=info msg="CreateContainer within sandbox \"7a6bcdd5a579bdb0e2112bbdefd04803b559c3fe0e871ed963081aa38617465b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 14:10:36.110129 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount262208844.mount: Deactivated successfully.
Dec 13 14:10:36.111785 containerd[1509]: time="2024-12-13T14:10:36.111707115Z" level=info msg="CreateContainer within sandbox \"7a6bcdd5a579bdb0e2112bbdefd04803b559c3fe0e871ed963081aa38617465b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b9522685105c5d97bff7fae07f61619755398e530c8f146644b83ae43a820e97\""
Dec 13 14:10:36.113464 containerd[1509]: time="2024-12-13T14:10:36.113421829Z" level=info msg="StartContainer for \"b9522685105c5d97bff7fae07f61619755398e530c8f146644b83ae43a820e97\""
Dec 13 14:10:36.176542 systemd[1]: Started cri-containerd-b9522685105c5d97bff7fae07f61619755398e530c8f146644b83ae43a820e97.scope - libcontainer container b9522685105c5d97bff7fae07f61619755398e530c8f146644b83ae43a820e97.
Dec 13 14:10:36.231945 containerd[1509]: time="2024-12-13T14:10:36.231880051Z" level=info msg="StartContainer for \"b9522685105c5d97bff7fae07f61619755398e530c8f146644b83ae43a820e97\" returns successfully"
Dec 13 14:10:37.013302 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Dec 13 14:10:37.112897 kubelet[2761]: I1213 14:10:37.112594 2761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-5g8x5" podStartSLOduration=6.112546067 podStartE2EDuration="6.112546067s" podCreationTimestamp="2024-12-13 14:10:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:10:37.109699111 +0000 UTC m=+161.974718264" watchObservedRunningTime="2024-12-13 14:10:37.112546067 +0000 UTC m=+161.977565210"
Dec 13 14:10:39.390189 kubelet[2761]: I1213 14:10:39.384584 2761 setters.go:580] "Node became not ready" node="srv-p3tlm.gb1.brightbox.com" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T14:10:39Z","lastTransitionTime":"2024-12-13T14:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Dec 13 14:10:40.024421 kubelet[2761]: E1213 14:10:40.024269 2761 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:41338->127.0.0.1:38473: write tcp 127.0.0.1:41338->127.0.0.1:38473: write: broken pipe
Dec 13 14:10:40.978422 systemd-networkd[1437]: lxc_health: Link UP
Dec 13 14:10:40.987772 systemd-networkd[1437]: lxc_health: Gained carrier
Dec 13 14:10:42.729569 systemd-networkd[1437]: lxc_health: Gained IPv6LL
Dec 13 14:10:44.632448 systemd[1]: run-containerd-runc-k8s.io-b9522685105c5d97bff7fae07f61619755398e530c8f146644b83ae43a820e97-runc.7cAlqq.mount: Deactivated successfully.
Dec 13 14:10:47.103193 sshd[4775]: Connection closed by 139.178.68.195 port 33280
Dec 13 14:10:47.106177 sshd-session[4713]: pam_unix(sshd:session): session closed for user core
Dec 13 14:10:47.113995 systemd-logind[1493]: Session 29 logged out. Waiting for processes to exit.
Dec 13 14:10:47.114972 systemd[1]: sshd@26-10.244.26.14:22-139.178.68.195:33280.service: Deactivated successfully.
Dec 13 14:10:47.122698 systemd[1]: session-29.scope: Deactivated successfully.
Dec 13 14:10:47.133041 systemd-logind[1493]: Removed session 29.