Dec 13 06:47:57.938963 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Thu Dec 12 23:50:37 -00 2024
Dec 13 06:47:57.939005 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c
Dec 13 06:47:57.939025 kernel: BIOS-provided physical RAM map:
Dec 13 06:47:57.939036 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 13 06:47:57.939045 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 13 06:47:57.939066 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 13 06:47:57.939077 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Dec 13 06:47:57.939088 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Dec 13 06:47:57.939098 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Dec 13 06:47:57.939108 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Dec 13 06:47:57.939122 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 13 06:47:57.939132 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 13 06:47:57.939142 kernel: NX (Execute Disable) protection: active
Dec 13 06:47:57.939152 kernel: SMBIOS 2.8 present.
Dec 13 06:47:57.939165 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
Dec 13 06:47:57.939176 kernel: Hypervisor detected: KVM
Dec 13 06:47:57.939190 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 13 06:47:57.939201 kernel: kvm-clock: cpu 0, msr 7919b001, primary cpu clock
Dec 13 06:47:57.939212 kernel: kvm-clock: using sched offset of 4903260218 cycles
Dec 13 06:47:57.939223 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 13 06:47:57.939234 kernel: tsc: Detected 2499.998 MHz processor
Dec 13 06:47:57.939245 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 06:47:57.939257 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 06:47:57.939268 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Dec 13 06:47:57.939279 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 06:47:57.939293 kernel: Using GB pages for direct mapping
Dec 13 06:47:57.939304 kernel: ACPI: Early table checksum verification disabled
Dec 13 06:47:57.939325 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Dec 13 06:47:57.939338 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 06:47:57.939349 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 06:47:57.939360 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 06:47:57.939371 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Dec 13 06:47:57.939382 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 06:47:57.939393 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 06:47:57.939408 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 06:47:57.939420 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 06:47:57.939430 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Dec 13 06:47:57.939441 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Dec 13 06:47:57.939452 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Dec 13 06:47:57.939463 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Dec 13 06:47:57.939479 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Dec 13 06:47:57.939495 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Dec 13 06:47:57.939506 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Dec 13 06:47:57.939518 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Dec 13 06:47:57.939530 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Dec 13 06:47:57.939541 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Dec 13 06:47:57.939553 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0
Dec 13 06:47:57.939564 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Dec 13 06:47:57.939579 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0
Dec 13 06:47:57.939591 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Dec 13 06:47:57.939602 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0
Dec 13 06:47:57.939614 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Dec 13 06:47:57.939625 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0
Dec 13 06:47:57.939637 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Dec 13 06:47:57.939648 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0
Dec 13 06:47:57.939659 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Dec 13 06:47:57.939671 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0
Dec 13 06:47:57.939682 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Dec 13 06:47:57.939697 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0
Dec 13 06:47:57.939709 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Dec 13 06:47:57.939721 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Dec 13 06:47:57.939732 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Dec 13 06:47:57.939744 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff]
Dec 13 06:47:57.939756 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff]
Dec 13 06:47:57.939768 kernel: Zone ranges:
Dec 13 06:47:57.939779 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 06:47:57.939791 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Dec 13 06:47:57.939806 kernel: Normal empty
Dec 13 06:47:57.939818 kernel: Movable zone start for each node
Dec 13 06:47:57.939829 kernel: Early memory node ranges
Dec 13 06:47:57.939841 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Dec 13 06:47:57.939852 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Dec 13 06:47:57.939864 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Dec 13 06:47:57.939875 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 06:47:57.939887 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 13 06:47:57.939898 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Dec 13 06:47:57.939913 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 13 06:47:57.939925 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 13 06:47:57.939937 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 13 06:47:57.939948 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 13 06:47:57.939960 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 13 06:47:57.939971 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 06:47:57.939983 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 13 06:47:57.939994 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 13 06:47:57.940006 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 06:47:57.940021 kernel: TSC deadline timer available
Dec 13 06:47:57.940032 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs
Dec 13 06:47:57.940044 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Dec 13 06:47:57.940065 kernel: Booting paravirtualized kernel on KVM
Dec 13 06:47:57.940077 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 06:47:57.940089 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:16 nr_node_ids:1
Dec 13 06:47:57.940101 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u262144
Dec 13 06:47:57.940112 kernel: pcpu-alloc: s188696 r8192 d32488 u262144 alloc=1*2097152
Dec 13 06:47:57.940124 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Dec 13 06:47:57.940139 kernel: kvm-guest: stealtime: cpu 0, msr 7da1c0c0
Dec 13 06:47:57.940151 kernel: kvm-guest: PV spinlocks enabled
Dec 13 06:47:57.940163 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 13 06:47:57.940174 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804
Dec 13 06:47:57.940186 kernel: Policy zone: DMA32
Dec 13 06:47:57.940199 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c
Dec 13 06:47:57.940211 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 06:47:57.940223 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 06:47:57.940238 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 13 06:47:57.940250 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 06:47:57.940262 kernel: Memory: 1903832K/2096616K available (12294K kernel code, 2275K rwdata, 13716K rodata, 47476K init, 4108K bss, 192524K reserved, 0K cma-reserved)
Dec 13 06:47:57.940274 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Dec 13 06:47:57.940285 kernel: Kernel/User page tables isolation: enabled
Dec 13 06:47:57.940297 kernel: ftrace: allocating 34549 entries in 135 pages
Dec 13 06:47:57.940309 kernel: ftrace: allocated 135 pages with 4 groups
Dec 13 06:47:57.944356 kernel: rcu: Hierarchical RCU implementation.
Dec 13 06:47:57.944375 kernel: rcu: RCU event tracing is enabled.
Dec 13 06:47:57.944394 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Dec 13 06:47:57.944407 kernel: Rude variant of Tasks RCU enabled.
Dec 13 06:47:57.944419 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 06:47:57.944431 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 06:47:57.944443 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Dec 13 06:47:57.944454 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Dec 13 06:47:57.944467 kernel: random: crng init done
Dec 13 06:47:57.944491 kernel: Console: colour VGA+ 80x25
Dec 13 06:47:57.944504 kernel: printk: console [tty0] enabled
Dec 13 06:47:57.944516 kernel: printk: console [ttyS0] enabled
Dec 13 06:47:57.944528 kernel: ACPI: Core revision 20210730
Dec 13 06:47:57.944540 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 06:47:57.944556 kernel: x2apic enabled
Dec 13 06:47:57.944568 kernel: Switched APIC routing to physical x2apic.
Dec 13 06:47:57.944580 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Dec 13 06:47:57.944593 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Dec 13 06:47:57.944605 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 13 06:47:57.944621 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Dec 13 06:47:57.944633 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Dec 13 06:47:57.944645 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 06:47:57.944657 kernel: Spectre V2 : Mitigation: Retpolines
Dec 13 06:47:57.944669 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 06:47:57.944681 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 13 06:47:57.944694 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Dec 13 06:47:57.944706 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 13 06:47:57.944718 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Dec 13 06:47:57.944730 kernel: MDS: Mitigation: Clear CPU buffers
Dec 13 06:47:57.944742 kernel: MMIO Stale Data: Unknown: No mitigations
Dec 13 06:47:57.944757 kernel: SRBDS: Unknown: Dependent on hypervisor status
Dec 13 06:47:57.944770 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 06:47:57.944782 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 06:47:57.944794 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 06:47:57.944806 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 13 06:47:57.944818 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Dec 13 06:47:57.944831 kernel: Freeing SMP alternatives memory: 32K
Dec 13 06:47:57.944843 kernel: pid_max: default: 32768 minimum: 301
Dec 13 06:47:57.944854 kernel: LSM: Security Framework initializing
Dec 13 06:47:57.944866 kernel: SELinux: Initializing.
Dec 13 06:47:57.944879 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 06:47:57.944894 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 06:47:57.944907 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9)
Dec 13 06:47:57.944919 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only.
Dec 13 06:47:57.944931 kernel: signal: max sigframe size: 1776
Dec 13 06:47:57.944944 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 06:47:57.944956 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Dec 13 06:47:57.944968 kernel: smp: Bringing up secondary CPUs ...
Dec 13 06:47:57.944980 kernel: x86: Booting SMP configuration:
Dec 13 06:47:57.944992 kernel: .... node #0, CPUs: #1
Dec 13 06:47:57.945008 kernel: kvm-clock: cpu 1, msr 7919b041, secondary cpu clock
Dec 13 06:47:57.945020 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Dec 13 06:47:57.945032 kernel: kvm-guest: stealtime: cpu 1, msr 7da5c0c0
Dec 13 06:47:57.945044 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 06:47:57.945069 kernel: smpboot: Max logical packages: 16
Dec 13 06:47:57.945082 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Dec 13 06:47:57.945094 kernel: devtmpfs: initialized
Dec 13 06:47:57.945106 kernel: x86/mm: Memory block size: 128MB
Dec 13 06:47:57.945118 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 06:47:57.945130 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Dec 13 06:47:57.945147 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 06:47:57.945159 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 06:47:57.945172 kernel: audit: initializing netlink subsys (disabled)
Dec 13 06:47:57.945184 kernel: audit: type=2000 audit(1734072476.679:1): state=initialized audit_enabled=0 res=1
Dec 13 06:47:57.945196 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 06:47:57.945208 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 06:47:57.945220 kernel: cpuidle: using governor menu
Dec 13 06:47:57.945232 kernel: ACPI: bus type PCI registered
Dec 13 06:47:57.945245 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 06:47:57.945260 kernel: dca service started, version 1.12.1
Dec 13 06:47:57.945273 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Dec 13 06:47:57.945285 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820
Dec 13 06:47:57.945297 kernel: PCI: Using configuration type 1 for base access
Dec 13 06:47:57.945310 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 06:47:57.945338 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 06:47:57.945350 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 06:47:57.945363 kernel: ACPI: Added _OSI(Module Device)
Dec 13 06:47:57.945380 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 06:47:57.945392 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 06:47:57.945404 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 06:47:57.945416 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Dec 13 06:47:57.945428 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Dec 13 06:47:57.945441 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Dec 13 06:47:57.945453 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 06:47:57.945465 kernel: ACPI: Interpreter enabled
Dec 13 06:47:57.945477 kernel: ACPI: PM: (supports S0 S5)
Dec 13 06:47:57.945489 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 06:47:57.945505 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 06:47:57.945517 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Dec 13 06:47:57.945530 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 06:47:57.945818 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 06:47:57.945985 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Dec 13 06:47:57.946164 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Dec 13 06:47:57.946184 kernel: PCI host bridge to bus 0000:00
Dec 13 06:47:57.946374 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 13 06:47:57.946520 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 13 06:47:57.946661 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 06:47:57.946800 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Dec 13 06:47:57.946939 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec 13 06:47:57.947094 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Dec 13 06:47:57.947235 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 06:47:57.947433 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Dec 13 06:47:57.947603 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000
Dec 13 06:47:57.947761 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref]
Dec 13 06:47:57.947916 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff]
Dec 13 06:47:57.948083 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref]
Dec 13 06:47:57.948238 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 13 06:47:57.952518 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Dec 13 06:47:57.952689 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff]
Dec 13 06:47:57.952868 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Dec 13 06:47:57.953030 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff]
Dec 13 06:47:57.953212 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Dec 13 06:47:57.953394 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff]
Dec 13 06:47:57.953567 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Dec 13 06:47:57.953721 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff]
Dec 13 06:47:57.953909 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Dec 13 06:47:57.954076 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff]
Dec 13 06:47:57.954240 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Dec 13 06:47:57.954411 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff]
Dec 13 06:47:57.954582 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Dec 13 06:47:57.954736 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff]
Dec 13 06:47:57.954896 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Dec 13 06:47:57.955080 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff]
Dec 13 06:47:57.955243 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Dec 13 06:47:57.955422 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df]
Dec 13 06:47:57.955578 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff]
Dec 13 06:47:57.955738 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Dec 13 06:47:57.955894 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref]
Dec 13 06:47:57.956067 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Dec 13 06:47:57.956226 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Dec 13 06:47:57.963432 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff]
Dec 13 06:47:57.963605 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref]
Dec 13 06:47:57.963778 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Dec 13 06:47:57.963948 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Dec 13 06:47:57.964132 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Dec 13 06:47:57.964290 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff]
Dec 13 06:47:57.964462 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff]
Dec 13 06:47:57.964651 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Dec 13 06:47:57.964808 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Dec 13 06:47:57.964985 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400
Dec 13 06:47:57.965162 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit]
Dec 13 06:47:57.965329 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Dec 13 06:47:57.965487 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Dec 13 06:47:57.965639 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Dec 13 06:47:57.965809 kernel: pci_bus 0000:02: extended config space not accessible
Dec 13 06:47:57.966001 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000
Dec 13 06:47:57.966188 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f]
Dec 13 06:47:57.966372 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Dec 13 06:47:57.966536 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Dec 13 06:47:57.966705 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330
Dec 13 06:47:57.966865 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit]
Dec 13 06:47:57.967022 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Dec 13 06:47:57.967197 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Dec 13 06:47:57.967377 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Dec 13 06:47:57.967573 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00
Dec 13 06:47:57.967768 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Dec 13 06:47:57.967940 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Dec 13 06:47:57.968111 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Dec 13 06:47:57.968267 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Dec 13 06:47:57.968443 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Dec 13 06:47:57.968605 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Dec 13 06:47:57.968758 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Dec 13 06:47:57.968914 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Dec 13 06:47:57.969079 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Dec 13 06:47:57.969235 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Dec 13 06:47:57.969405 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Dec 13 06:47:57.969560 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Dec 13 06:47:57.969710 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Dec 13 06:47:57.969873 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Dec 13 06:47:57.970029 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Dec 13 06:47:57.970198 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Dec 13 06:47:57.978419 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Dec 13 06:47:57.978600 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Dec 13 06:47:57.978759 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Dec 13 06:47:57.978780 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 13 06:47:57.978794 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 13 06:47:57.978814 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 06:47:57.978827 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 13 06:47:57.978839 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Dec 13 06:47:57.978852 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Dec 13 06:47:57.978864 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Dec 13 06:47:57.978877 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Dec 13 06:47:57.978889 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Dec 13 06:47:57.978902 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Dec 13 06:47:57.978914 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Dec 13 06:47:57.978931 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Dec 13 06:47:57.978943 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Dec 13 06:47:57.978956 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Dec 13 06:47:57.978968 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Dec 13 06:47:57.978980 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Dec 13 06:47:57.978993 kernel: iommu: Default domain type: Translated
Dec 13 06:47:57.979005 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 06:47:57.979177 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Dec 13 06:47:57.979355 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 13 06:47:57.979518 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Dec 13 06:47:57.979537 kernel: vgaarb: loaded
Dec 13 06:47:57.979550 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 13 06:47:57.979562 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Dec 13 06:47:57.979575 kernel: PTP clock support registered
Dec 13 06:47:57.979587 kernel: PCI: Using ACPI for IRQ routing
Dec 13 06:47:57.979600 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 06:47:57.979612 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 13 06:47:57.979630 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Dec 13 06:47:57.979655 kernel: clocksource: Switched to clocksource kvm-clock
Dec 13 06:47:57.979667 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 06:47:57.979679 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 06:47:57.979691 kernel: pnp: PnP ACPI init
Dec 13 06:47:57.979903 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Dec 13 06:47:57.979924 kernel: pnp: PnP ACPI: found 5 devices
Dec 13 06:47:57.979937 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 06:47:57.979955 kernel: NET: Registered PF_INET protocol family
Dec 13 06:47:57.979968 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 06:47:57.979981 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Dec 13 06:47:57.979993 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 06:47:57.980006 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 06:47:57.980018 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
Dec 13 06:47:57.980031 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Dec 13 06:47:57.980043 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 06:47:57.980071 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 06:47:57.980089 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 06:47:57.980102 kernel: NET: Registered PF_XDP protocol family
Dec 13 06:47:57.980270 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000
Dec 13 06:47:57.980443 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Dec 13 06:47:57.980599 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Dec 13 06:47:57.980753 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Dec 13 06:47:57.980908 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Dec 13 06:47:57.981083 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Dec 13 06:47:57.981240 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Dec 13 06:47:57.981416 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Dec 13 06:47:57.981570 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Dec 13 06:47:57.981722 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Dec 13 06:47:57.981876 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Dec 13 06:47:57.982038 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Dec 13 06:47:57.982206 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Dec 13 06:47:57.982382 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Dec 13 06:47:57.982535 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Dec 13 06:47:57.982686 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Dec 13 06:47:57.982846 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Dec 13 06:47:57.983008 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Dec 13 06:47:57.983175 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Dec 13 06:47:57.983350 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Dec 13 06:47:57.983514 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Dec 13 06:47:57.983674 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Dec 13 06:47:57.983850 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Dec 13 06:47:57.984007 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Dec 13 06:47:57.984182 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Dec 13 06:47:57.992421 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Dec 13 06:47:57.992608 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Dec 13 06:47:57.992769 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Dec 13 06:47:57.992950 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Dec 13 06:47:57.993128 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Dec 13 06:47:57.993287 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Dec 13 06:47:57.993457 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Dec 13 06:47:57.993611 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Dec 13 06:47:57.993765 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Dec 13 06:47:57.993922 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Dec 13 06:47:57.994099 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Dec 13 06:47:57.994254 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Dec 13 06:47:57.994425 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Dec 13 06:47:57.994581 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Dec 13 06:47:57.994734 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Dec 13 06:47:57.994886 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Dec 13 06:47:57.995039 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Dec 13 06:47:57.995210 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Dec 13 06:47:57.995393 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Dec 13 06:47:57.995551 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Dec 13 06:47:57.995704 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Dec 13 06:47:57.995862 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Dec 13 06:47:57.996021 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Dec 13 06:47:57.996196 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Dec 13 06:47:57.996367 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Dec 13 06:47:57.996520 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 13 06:47:57.996665 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 13 06:47:57.996807 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 06:47:57.996950 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Dec 13 06:47:57.997108 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Dec 13 06:47:57.997253 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Dec 13 06:47:57.997442 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Dec 13 06:47:57.997607 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Dec 13 06:47:57.997756 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Dec 13 06:47:57.997928 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Dec 13 06:47:57.998116 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff]
Dec 13 06:47:57.998278 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Dec 13 06:47:57.998453 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Dec 13 06:47:57.998631 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff]
Dec 13 06:47:57.998793 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Dec 13 06:47:57.998952 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Dec 13 06:47:57.999135 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Dec 13 06:47:57.999286 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Dec 13 06:47:58.005873 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Dec 13 06:47:58.006042 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff]
Dec 13 06:47:58.006216 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Dec 13 06:47:58.006386 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Dec 13 06:47:58.006550 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff]
Dec 13 06:47:58.006701 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Dec 13 06:47:58.006850 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Dec 13 06:47:58.007031 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff]
Dec 13 06:47:58.007209 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Dec 13 06:47:58.007477 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Dec 13 06:47:58.007639 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff]
Dec 13 06:47:58.007787 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff]
Dec 13 06:47:58.007933 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Dec 13 06:47:58.007953 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Dec 13 06:47:58.007967 kernel: PCI: CLS 0 bytes, 
default 64 Dec 13 06:47:58.007980 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Dec 13 06:47:58.008000 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Dec 13 06:47:58.008014 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Dec 13 06:47:58.008028 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Dec 13 06:47:58.008041 kernel: Initialise system trusted keyrings Dec 13 06:47:58.008066 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Dec 13 06:47:58.008080 kernel: Key type asymmetric registered Dec 13 06:47:58.008093 kernel: Asymmetric key parser 'x509' registered Dec 13 06:47:58.008106 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Dec 13 06:47:58.008119 kernel: io scheduler mq-deadline registered Dec 13 06:47:58.008137 kernel: io scheduler kyber registered Dec 13 06:47:58.008150 kernel: io scheduler bfq registered Dec 13 06:47:58.008308 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Dec 13 06:47:58.008481 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Dec 13 06:47:58.008638 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 06:47:58.008795 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Dec 13 06:47:58.008949 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Dec 13 06:47:58.009124 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 06:47:58.009281 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Dec 13 06:47:58.009464 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Dec 13 06:47:58.009619 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- 
IbPresDis- LLActRep+ Dec 13 06:47:58.009772 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Dec 13 06:47:58.009924 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Dec 13 06:47:58.010095 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 06:47:58.010250 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Dec 13 06:47:58.010419 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Dec 13 06:47:58.010573 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 06:47:58.010734 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Dec 13 06:47:58.010887 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Dec 13 06:47:58.011045 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 06:47:58.011224 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Dec 13 06:47:58.011423 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Dec 13 06:47:58.011578 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 06:47:58.011731 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Dec 13 06:47:58.011883 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Dec 13 06:47:58.012044 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 06:47:58.012077 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 13 06:47:58.012092 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Dec 13 06:47:58.012105 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Dec 13 06:47:58.012118 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 06:47:58.012131 
kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 13 06:47:58.012144 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 13 06:47:58.012157 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 13 06:47:58.012176 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 13 06:47:58.012361 kernel: rtc_cmos 00:03: RTC can wake from S4 Dec 13 06:47:58.012383 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 13 06:47:58.012525 kernel: rtc_cmos 00:03: registered as rtc0 Dec 13 06:47:58.012668 kernel: rtc_cmos 00:03: setting system clock to 2024-12-13T06:47:57 UTC (1734072477) Dec 13 06:47:58.012809 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Dec 13 06:47:58.012828 kernel: intel_pstate: CPU model not supported Dec 13 06:47:58.012847 kernel: NET: Registered PF_INET6 protocol family Dec 13 06:47:58.012861 kernel: Segment Routing with IPv6 Dec 13 06:47:58.012886 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 06:47:58.012899 kernel: NET: Registered PF_PACKET protocol family Dec 13 06:47:58.012911 kernel: Key type dns_resolver registered Dec 13 06:47:58.012924 kernel: IPI shorthand broadcast: enabled Dec 13 06:47:58.012949 kernel: sched_clock: Marking stable (971022121, 221847404)->(1482773768, -289904243) Dec 13 06:47:58.012961 kernel: registered taskstats version 1 Dec 13 06:47:58.012973 kernel: Loading compiled-in X.509 certificates Dec 13 06:47:58.012986 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: d9defb0205602bee9bb670636cbe5c74194fdb5e' Dec 13 06:47:58.013014 kernel: Key type .fscrypt registered Dec 13 06:47:58.013027 kernel: Key type fscrypt-provisioning registered Dec 13 06:47:58.013040 kernel: ima: No TPM chip found, activating TPM-bypass! 
Dec 13 06:47:58.013064 kernel: ima: Allocated hash algorithm: sha1 Dec 13 06:47:58.013078 kernel: ima: No architecture policies found Dec 13 06:47:58.013091 kernel: clk: Disabling unused clocks Dec 13 06:47:58.013104 kernel: Freeing unused kernel image (initmem) memory: 47476K Dec 13 06:47:58.013117 kernel: Write protecting the kernel read-only data: 28672k Dec 13 06:47:58.013134 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Dec 13 06:47:58.013148 kernel: Freeing unused kernel image (rodata/data gap) memory: 620K Dec 13 06:47:58.013161 kernel: Run /init as init process Dec 13 06:47:58.013174 kernel: with arguments: Dec 13 06:47:58.013187 kernel: /init Dec 13 06:47:58.013200 kernel: with environment: Dec 13 06:47:58.013213 kernel: HOME=/ Dec 13 06:47:58.013225 kernel: TERM=linux Dec 13 06:47:58.013238 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 06:47:58.013260 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 06:47:58.013284 systemd[1]: Detected virtualization kvm. Dec 13 06:47:58.013298 systemd[1]: Detected architecture x86-64. Dec 13 06:47:58.013311 systemd[1]: Running in initrd. Dec 13 06:47:58.013342 systemd[1]: No hostname configured, using default hostname. Dec 13 06:47:58.013356 systemd[1]: Hostname set to . Dec 13 06:47:58.013370 systemd[1]: Initializing machine ID from VM UUID. Dec 13 06:47:58.013388 systemd[1]: Queued start job for default target initrd.target. Dec 13 06:47:58.013402 systemd[1]: Started systemd-ask-password-console.path. Dec 13 06:47:58.013416 systemd[1]: Reached target cryptsetup.target. Dec 13 06:47:58.013429 systemd[1]: Reached target paths.target. Dec 13 06:47:58.013442 systemd[1]: Reached target slices.target. 
Dec 13 06:47:58.013460 systemd[1]: Reached target swap.target. Dec 13 06:47:58.013473 systemd[1]: Reached target timers.target. Dec 13 06:47:58.013488 systemd[1]: Listening on iscsid.socket. Dec 13 06:47:58.013505 systemd[1]: Listening on iscsiuio.socket. Dec 13 06:47:58.013519 systemd[1]: Listening on systemd-journald-audit.socket. Dec 13 06:47:58.013537 systemd[1]: Listening on systemd-journald-dev-log.socket. Dec 13 06:47:58.013550 systemd[1]: Listening on systemd-journald.socket. Dec 13 06:47:58.013564 systemd[1]: Listening on systemd-networkd.socket. Dec 13 06:47:58.013578 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 06:47:58.013591 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 06:47:58.013605 systemd[1]: Reached target sockets.target. Dec 13 06:47:58.013619 systemd[1]: Starting kmod-static-nodes.service... Dec 13 06:47:58.013636 systemd[1]: Finished network-cleanup.service. Dec 13 06:47:58.013650 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 06:47:58.013663 systemd[1]: Starting systemd-journald.service... Dec 13 06:47:58.013677 systemd[1]: Starting systemd-modules-load.service... Dec 13 06:47:58.013691 systemd[1]: Starting systemd-resolved.service... Dec 13 06:47:58.013704 systemd[1]: Starting systemd-vconsole-setup.service... Dec 13 06:47:58.013718 systemd[1]: Finished kmod-static-nodes.service. Dec 13 06:47:58.013732 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 06:47:58.013756 systemd-journald[201]: Journal started Dec 13 06:47:58.013834 systemd-journald[201]: Runtime Journal (/run/log/journal/1d7af8c79b534a86b5fd685c2ee6af9f) is 4.7M, max 38.1M, 33.3M free. 
Dec 13 06:47:57.940740 systemd-modules-load[202]: Inserted module 'overlay' Dec 13 06:47:58.033949 kernel: Bridge firewalling registered Dec 13 06:47:57.994039 systemd-resolved[203]: Positive Trust Anchors: Dec 13 06:47:58.048029 systemd[1]: Started systemd-resolved.service. Dec 13 06:47:58.048069 kernel: audit: type=1130 audit(1734072478.034:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:47:58.048098 systemd[1]: Started systemd-journald.service. Dec 13 06:47:58.048118 kernel: audit: type=1130 audit(1734072478.041:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:47:58.034000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:47:58.041000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:47:57.994068 systemd-resolved[203]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 06:47:58.065453 kernel: audit: type=1130 audit(1734072478.048:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:47:58.065483 kernel: SCSI subsystem initialized Dec 13 06:47:58.065511 kernel: audit: type=1130 audit(1734072478.055:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 06:47:58.048000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:47:58.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:47:57.994115 systemd-resolved[203]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 06:47:58.075750 kernel: audit: type=1130 audit(1734072478.056:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:47:58.056000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:47:57.997952 systemd-resolved[203]: Defaulting to hostname 'linux'. Dec 13 06:47:58.019912 systemd-modules-load[202]: Inserted module 'br_netfilter' Dec 13 06:47:58.048997 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 06:47:58.055833 systemd[1]: Finished systemd-vconsole-setup.service. Dec 13 06:47:58.056596 systemd[1]: Reached target nss-lookup.target. Dec 13 06:47:58.058335 systemd[1]: Starting dracut-cmdline-ask.service... Dec 13 06:47:58.064838 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... 
Dec 13 06:47:58.075000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:47:58.074991 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Dec 13 06:47:58.093678 kernel: audit: type=1130 audit(1734072478.075:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:47:58.093708 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 06:47:58.093726 kernel: device-mapper: uevent: version 1.0.3 Dec 13 06:47:58.093743 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Dec 13 06:47:58.095000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:47:58.095437 systemd[1]: Finished dracut-cmdline-ask.service. Dec 13 06:47:58.101671 kernel: audit: type=1130 audit(1734072478.095:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:47:58.101881 systemd[1]: Starting dracut-cmdline.service... Dec 13 06:47:58.104854 systemd-modules-load[202]: Inserted module 'dm_multipath' Dec 13 06:47:58.106000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:47:58.106359 systemd[1]: Finished systemd-modules-load.service. 
Dec 13 06:47:58.113355 kernel: audit: type=1130 audit(1734072478.106:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:47:58.114097 systemd[1]: Starting systemd-sysctl.service... Dec 13 06:47:58.121859 systemd[1]: Finished systemd-sysctl.service. Dec 13 06:47:58.136813 dracut-cmdline[219]: dracut-dracut-053 Dec 13 06:47:58.136813 dracut-cmdline[219]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c Dec 13 06:47:58.139000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:47:58.145346 kernel: audit: type=1130 audit(1734072478.139:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:47:58.215352 kernel: Loading iSCSI transport class v2.0-870. Dec 13 06:47:58.237353 kernel: iscsi: registered transport (tcp) Dec 13 06:47:58.265767 kernel: iscsi: registered transport (qla4xxx) Dec 13 06:47:58.265845 kernel: QLogic iSCSI HBA Driver Dec 13 06:47:58.314000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:47:58.313765 systemd[1]: Finished dracut-cmdline.service. Dec 13 06:47:58.315669 systemd[1]: Starting dracut-pre-udev.service... 
Dec 13 06:47:58.375394 kernel: raid6: sse2x4 gen() 13848 MB/s Dec 13 06:47:58.392371 kernel: raid6: sse2x4 xor() 7907 MB/s Dec 13 06:47:58.410360 kernel: raid6: sse2x2 gen() 9424 MB/s Dec 13 06:47:58.428351 kernel: raid6: sse2x2 xor() 7847 MB/s Dec 13 06:47:58.446367 kernel: raid6: sse2x1 gen() 9572 MB/s Dec 13 06:47:58.464945 kernel: raid6: sse2x1 xor() 7207 MB/s Dec 13 06:47:58.465006 kernel: raid6: using algorithm sse2x4 gen() 13848 MB/s Dec 13 06:47:58.465026 kernel: raid6: .... xor() 7907 MB/s, rmw enabled Dec 13 06:47:58.466258 kernel: raid6: using ssse3x2 recovery algorithm Dec 13 06:47:58.483350 kernel: xor: automatically using best checksumming function avx Dec 13 06:47:58.601377 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Dec 13 06:47:58.614280 systemd[1]: Finished dracut-pre-udev.service. Dec 13 06:47:58.614000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:47:58.615000 audit: BPF prog-id=7 op=LOAD Dec 13 06:47:58.615000 audit: BPF prog-id=8 op=LOAD Dec 13 06:47:58.616368 systemd[1]: Starting systemd-udevd.service... Dec 13 06:47:58.634631 systemd-udevd[402]: Using default interface naming scheme 'v252'. Dec 13 06:47:58.643525 systemd[1]: Started systemd-udevd.service. Dec 13 06:47:58.643000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:47:58.645543 systemd[1]: Starting dracut-pre-trigger.service... Dec 13 06:47:58.663167 dracut-pre-trigger[403]: rd.md=0: removing MD RAID activation Dec 13 06:47:58.705493 systemd[1]: Finished dracut-pre-trigger.service. 
Dec 13 06:47:58.705000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:47:58.707337 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 06:47:58.798234 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 06:47:58.798000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:47:58.884540 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Dec 13 06:47:58.916702 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 06:47:58.916729 kernel: GPT:17805311 != 125829119 Dec 13 06:47:58.916747 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 06:47:58.916773 kernel: GPT:17805311 != 125829119 Dec 13 06:47:58.916791 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 06:47:58.916808 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 06:47:58.916825 kernel: ACPI: bus type USB registered Dec 13 06:47:58.916841 kernel: usbcore: registered new interface driver usbfs Dec 13 06:47:58.916858 kernel: usbcore: registered new interface driver hub Dec 13 06:47:58.916875 kernel: usbcore: registered new device driver usb Dec 13 06:47:58.925340 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 06:47:58.954342 kernel: AVX version of gcm_enc/dec engaged. Dec 13 06:47:58.973229 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Dec 13 06:47:58.981463 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Dec 13 06:47:59.111615 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (449) Dec 13 06:47:59.111651 kernel: AES CTR mode by8 optimization enabled Dec 13 06:47:59.111680 kernel: libata version 3.00 loaded. 
Dec 13 06:47:59.111698 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Dec 13 06:47:59.111952 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Dec 13 06:47:59.112151 kernel: ahci 0000:00:1f.2: version 3.0 Dec 13 06:47:59.112345 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Dec 13 06:47:59.112367 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Dec 13 06:47:59.112538 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Dec 13 06:47:59.112708 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Dec 13 06:47:59.112890 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Dec 13 06:47:59.113113 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Dec 13 06:47:59.113290 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Dec 13 06:47:59.113482 kernel: hub 1-0:1.0: USB hub found Dec 13 06:47:59.113694 kernel: hub 1-0:1.0: 4 ports detected Dec 13 06:47:59.113888 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. 
Dec 13 06:47:59.114186 kernel: hub 2-0:1.0: USB hub found Dec 13 06:47:59.114412 kernel: hub 2-0:1.0: 4 ports detected Dec 13 06:47:59.114604 kernel: scsi host0: ahci Dec 13 06:47:59.114799 kernel: scsi host1: ahci Dec 13 06:47:59.114991 kernel: scsi host2: ahci Dec 13 06:47:59.115188 kernel: scsi host3: ahci Dec 13 06:47:59.115395 kernel: scsi host4: ahci Dec 13 06:47:59.115580 kernel: scsi host5: ahci Dec 13 06:47:59.115762 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 38 Dec 13 06:47:59.115783 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 38 Dec 13 06:47:59.115800 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 38 Dec 13 06:47:59.115817 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 38 Dec 13 06:47:59.115834 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 38 Dec 13 06:47:59.115851 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 38 Dec 13 06:47:59.110812 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Dec 13 06:47:59.120433 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Dec 13 06:47:59.129225 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 06:47:59.132175 systemd[1]: Starting disk-uuid.service... Dec 13 06:47:59.139234 disk-uuid[529]: Primary Header is updated. Dec 13 06:47:59.139234 disk-uuid[529]: Secondary Entries is updated. Dec 13 06:47:59.139234 disk-uuid[529]: Secondary Header is updated. 
Dec 13 06:47:59.148337 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 06:47:59.151335 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 06:47:59.270382 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Dec 13 06:47:59.354348 kernel: ata3: SATA link down (SStatus 0 SControl 300) Dec 13 06:47:59.357617 kernel: ata1: SATA link down (SStatus 0 SControl 300) Dec 13 06:47:59.357656 kernel: ata4: SATA link down (SStatus 0 SControl 300) Dec 13 06:47:59.359228 kernel: ata2: SATA link down (SStatus 0 SControl 300) Dec 13 06:47:59.362566 kernel: ata6: SATA link down (SStatus 0 SControl 300) Dec 13 06:47:59.362621 kernel: ata5: SATA link down (SStatus 0 SControl 300) Dec 13 06:47:59.410346 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 13 06:47:59.417695 kernel: usbcore: registered new interface driver usbhid Dec 13 06:47:59.417749 kernel: usbhid: USB HID core driver Dec 13 06:47:59.426682 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input3 Dec 13 06:47:59.426733 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Dec 13 06:48:00.152721 disk-uuid[530]: The operation has completed successfully. Dec 13 06:48:00.153738 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 06:48:00.209444 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 06:48:00.210614 systemd[1]: Finished disk-uuid.service. Dec 13 06:48:00.211000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:48:00.211000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:48:00.217609 systemd[1]: Starting verity-setup.service... 
Dec 13 06:48:00.241865 kernel: device-mapper: verity: sha256 using implementation "sha256-avx" Dec 13 06:48:00.295509 systemd[1]: Found device dev-mapper-usr.device. Dec 13 06:48:00.297078 systemd[1]: Mounting sysusr-usr.mount... Dec 13 06:48:00.299092 systemd[1]: Finished verity-setup.service. Dec 13 06:48:00.301000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:48:00.397360 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Dec 13 06:48:00.397784 systemd[1]: Mounted sysusr-usr.mount. Dec 13 06:48:00.398658 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Dec 13 06:48:00.399723 systemd[1]: Starting ignition-setup.service... Dec 13 06:48:00.403026 systemd[1]: Starting parse-ip-for-networkd.service... Dec 13 06:48:00.421370 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 06:48:00.421469 kernel: BTRFS info (device vda6): using free space tree Dec 13 06:48:00.421491 kernel: BTRFS info (device vda6): has skinny extents Dec 13 06:48:00.437344 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 06:48:00.445885 systemd[1]: Finished ignition-setup.service. Dec 13 06:48:00.446000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:48:00.448182 systemd[1]: Starting ignition-fetch-offline.service... Dec 13 06:48:00.565617 systemd[1]: Finished parse-ip-for-networkd.service. Dec 13 06:48:00.566000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 06:48:00.567000 audit: BPF prog-id=9 op=LOAD Dec 13 06:48:00.568841 systemd[1]: Starting systemd-networkd.service... Dec 13 06:48:00.604969 systemd-networkd[710]: lo: Link UP Dec 13 06:48:00.604984 systemd-networkd[710]: lo: Gained carrier Dec 13 06:48:00.606463 systemd-networkd[710]: Enumeration completed Dec 13 06:48:00.607000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:48:00.606614 systemd[1]: Started systemd-networkd.service. Dec 13 06:48:00.608146 systemd[1]: Reached target network.target. Dec 13 06:48:00.608447 systemd-networkd[710]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 06:48:00.610484 systemd[1]: Starting iscsiuio.service... Dec 13 06:48:00.631000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:48:00.614289 systemd-networkd[710]: eth0: Link UP Dec 13 06:48:00.614297 systemd-networkd[710]: eth0: Gained carrier Dec 13 06:48:00.631643 systemd[1]: Started iscsiuio.service. Dec 13 06:48:00.633603 systemd[1]: Starting iscsid.service... Dec 13 06:48:00.639084 iscsid[715]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Dec 13 06:48:00.639084 iscsid[715]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Dec 13 06:48:00.639084 iscsid[715]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. 
Dec 13 06:48:00.639084 iscsid[715]: If using hardware iscsi like qla4xxx this message can be ignored.
Dec 13 06:48:00.639084 iscsid[715]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Dec 13 06:48:00.639084 iscsid[715]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Dec 13 06:48:00.641000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:48:00.641653 systemd[1]: Started iscsid.service.
Dec 13 06:48:00.643699 systemd[1]: Starting dracut-initqueue.service...
Dec 13 06:48:00.655469 systemd-networkd[710]: eth0: DHCPv4 address 10.230.20.2/30, gateway 10.230.20.1 acquired from 10.230.20.1
Dec 13 06:48:00.668000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:48:00.667763 systemd[1]: Finished dracut-initqueue.service.
Dec 13 06:48:00.665627 ignition[625]: Ignition 2.14.0
Dec 13 06:48:00.668636 systemd[1]: Reached target remote-fs-pre.target.
Dec 13 06:48:00.665658 ignition[625]: Stage: fetch-offline
Dec 13 06:48:00.669259 systemd[1]: Reached target remote-cryptsetup.target.
Dec 13 06:48:00.665767 ignition[625]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 06:48:00.670431 systemd[1]: Reached target remote-fs.target.
Dec 13 06:48:00.665807 ignition[625]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a
Dec 13 06:48:00.674436 systemd[1]: Starting dracut-pre-mount.service...
Dec 13 06:48:00.667284 ignition[625]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 13 06:48:00.670060 ignition[625]: parsed url from cmdline: ""
Dec 13 06:48:00.681798 systemd[1]: Finished ignition-fetch-offline.service.
Dec 13 06:48:00.670068 ignition[625]: no config URL provided
Dec 13 06:48:00.670080 ignition[625]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 06:48:00.684000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:48:00.670098 ignition[625]: no config at "/usr/lib/ignition/user.ign"
Dec 13 06:48:00.686021 systemd[1]: Starting ignition-fetch.service...
Dec 13 06:48:00.670108 ignition[625]: failed to fetch config: resource requires networking
Dec 13 06:48:00.692000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:48:00.692100 systemd[1]: Finished dracut-pre-mount.service.
Dec 13 06:48:00.670295 ignition[625]: Ignition finished successfully
Dec 13 06:48:00.697840 ignition[728]: Ignition 2.14.0
Dec 13 06:48:00.697857 ignition[728]: Stage: fetch
Dec 13 06:48:00.698056 ignition[728]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 06:48:00.698092 ignition[728]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a
Dec 13 06:48:00.699394 ignition[728]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 13 06:48:00.699546 ignition[728]: parsed url from cmdline: ""
Dec 13 06:48:00.699553 ignition[728]: no config URL provided
Dec 13 06:48:00.699563 ignition[728]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 06:48:00.699578 ignition[728]: no config at "/usr/lib/ignition/user.ign"
Dec 13 06:48:00.703922 ignition[728]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Dec 13 06:48:00.703969 ignition[728]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Dec 13 06:48:00.704602 ignition[728]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Dec 13 06:48:00.726084 ignition[728]: GET result: OK
Dec 13 06:48:00.726351 ignition[728]: parsing config with SHA512: 2f81a0dabecfccc7662515fe9fafac9242da74d5bf98fbc1d86bba8ee842d521ef4f1c25a7f0a706cadd3b3d4632a3c34f362d581ded61b359e800a3fd73d741
Dec 13 06:48:00.736282 unknown[728]: fetched base config from "system"
Dec 13 06:48:00.737252 unknown[728]: fetched base config from "system"
Dec 13 06:48:00.738024 unknown[728]: fetched user config from "openstack"
Dec 13 06:48:00.739385 ignition[728]: fetch: fetch complete
Dec 13 06:48:00.740082 ignition[728]: fetch: fetch passed
Dec 13 06:48:00.740844 ignition[728]: Ignition finished successfully
Dec 13 06:48:00.743283 systemd[1]: Finished ignition-fetch.service.
Dec 13 06:48:00.743000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:48:00.745290 systemd[1]: Starting ignition-kargs.service...
Dec 13 06:48:00.758130 ignition[735]: Ignition 2.14.0
Dec 13 06:48:00.758151 ignition[735]: Stage: kargs
Dec 13 06:48:00.758342 ignition[735]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 06:48:00.758392 ignition[735]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a
Dec 13 06:48:00.760118 ignition[735]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 13 06:48:00.761646 ignition[735]: kargs: kargs passed
Dec 13 06:48:00.762845 systemd[1]: Finished ignition-kargs.service.
Dec 13 06:48:00.763000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:48:00.761716 ignition[735]: Ignition finished successfully
Dec 13 06:48:00.765091 systemd[1]: Starting ignition-disks.service...
Dec 13 06:48:00.776043 ignition[740]: Ignition 2.14.0
Dec 13 06:48:00.776064 ignition[740]: Stage: disks
Dec 13 06:48:00.776233 ignition[740]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 06:48:00.776267 ignition[740]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a
Dec 13 06:48:00.777570 ignition[740]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 13 06:48:00.779167 ignition[740]: disks: disks passed
Dec 13 06:48:00.779236 ignition[740]: Ignition finished successfully
Dec 13 06:48:00.781000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:48:00.780448 systemd[1]: Finished ignition-disks.service.
Dec 13 06:48:00.781593 systemd[1]: Reached target initrd-root-device.target.
Dec 13 06:48:00.782925 systemd[1]: Reached target local-fs-pre.target.
Dec 13 06:48:00.784148 systemd[1]: Reached target local-fs.target.
Dec 13 06:48:00.785412 systemd[1]: Reached target sysinit.target.
Dec 13 06:48:00.786611 systemd[1]: Reached target basic.target.
Dec 13 06:48:00.789253 systemd[1]: Starting systemd-fsck-root.service...
Dec 13 06:48:00.811089 systemd-fsck[747]: ROOT: clean, 621/1628000 files, 124058/1617920 blocks
Dec 13 06:48:00.815075 systemd[1]: Finished systemd-fsck-root.service.
Dec 13 06:48:00.815000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:48:00.816889 systemd[1]: Mounting sysroot.mount...
Dec 13 06:48:00.829353 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Dec 13 06:48:00.829486 systemd[1]: Mounted sysroot.mount.
Dec 13 06:48:00.830236 systemd[1]: Reached target initrd-root-fs.target.
Dec 13 06:48:00.832801 systemd[1]: Mounting sysroot-usr.mount...
Dec 13 06:48:00.833964 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Dec 13 06:48:00.834861 systemd[1]: Starting flatcar-openstack-hostname.service...
Dec 13 06:48:00.835641 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 06:48:00.835703 systemd[1]: Reached target ignition-diskful.target.
Dec 13 06:48:00.838053 systemd[1]: Mounted sysroot-usr.mount.
Dec 13 06:48:00.839795 systemd[1]: Starting initrd-setup-root.service...
Dec 13 06:48:00.853534 initrd-setup-root[758]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 06:48:00.863890 initrd-setup-root[766]: cut: /sysroot/etc/group: No such file or directory
Dec 13 06:48:00.874203 initrd-setup-root[774]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 06:48:00.884069 initrd-setup-root[783]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 06:48:00.956000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:48:00.955802 systemd[1]: Finished initrd-setup-root.service.
Dec 13 06:48:00.957910 systemd[1]: Starting ignition-mount.service...
Dec 13 06:48:00.961216 systemd[1]: Starting sysroot-boot.service...
Dec 13 06:48:00.973663 bash[801]: umount: /sysroot/usr/share/oem: not mounted.
Dec 13 06:48:00.995014 coreos-metadata[753]: Dec 13 06:48:00.994 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Dec 13 06:48:00.996426 ignition[803]: INFO : Ignition 2.14.0
Dec 13 06:48:00.996426 ignition[803]: INFO : Stage: mount
Dec 13 06:48:00.996426 ignition[803]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 06:48:00.996426 ignition[803]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a
Dec 13 06:48:01.000353 ignition[803]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 13 06:48:01.000353 ignition[803]: INFO : mount: mount passed
Dec 13 06:48:01.000353 ignition[803]: INFO : Ignition finished successfully
Dec 13 06:48:01.002000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:48:01.001623 systemd[1]: Finished ignition-mount.service.
Dec 13 06:48:01.008000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:48:01.008519 systemd[1]: Finished sysroot-boot.service.
Dec 13 06:48:01.014445 coreos-metadata[753]: Dec 13 06:48:01.014 INFO Fetch successful
Dec 13 06:48:01.015773 coreos-metadata[753]: Dec 13 06:48:01.014 INFO wrote hostname srv-uleoy.gb1.brightbox.com to /sysroot/etc/hostname
Dec 13 06:48:01.017843 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Dec 13 06:48:01.017977 systemd[1]: Finished flatcar-openstack-hostname.service.
Dec 13 06:48:01.032000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:48:01.032000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:48:01.319712 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Dec 13 06:48:01.332362 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (810)
Dec 13 06:48:01.336796 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 06:48:01.336846 kernel: BTRFS info (device vda6): using free space tree
Dec 13 06:48:01.336865 kernel: BTRFS info (device vda6): has skinny extents
Dec 13 06:48:01.344092 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Dec 13 06:48:01.346872 systemd[1]: Starting ignition-files.service...
Dec 13 06:48:01.369064 ignition[830]: INFO : Ignition 2.14.0
Dec 13 06:48:01.370164 ignition[830]: INFO : Stage: files
Dec 13 06:48:01.371040 ignition[830]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 06:48:01.372048 ignition[830]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a
Dec 13 06:48:01.374696 ignition[830]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 13 06:48:01.378107 ignition[830]: DEBUG : files: compiled without relabeling support, skipping
Dec 13 06:48:01.379898 ignition[830]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 13 06:48:01.380929 ignition[830]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 06:48:01.385601 ignition[830]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 06:48:01.386965 ignition[830]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 13 06:48:01.393547 unknown[830]: wrote ssh authorized keys file for user: core
Dec 13 06:48:01.394633 ignition[830]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 06:48:01.396574 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 13 06:48:01.403439 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Dec 13 06:48:01.552998 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 13 06:48:01.746579 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 13 06:48:01.747950 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 13 06:48:01.747950 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Dec 13 06:48:02.210801 systemd-networkd[710]: eth0: Gained IPv6LL
Dec 13 06:48:02.325747 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Dec 13 06:48:02.622774 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 13 06:48:02.624548 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Dec 13 06:48:02.625613 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 06:48:02.625613 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 06:48:02.625613 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 06:48:02.625613 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 06:48:02.625613 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 06:48:02.625613 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 06:48:02.632207 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 06:48:02.632207 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 06:48:02.632207 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 06:48:02.632207 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 06:48:02.632207 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 06:48:02.632207 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 06:48:02.632207 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Dec 13 06:48:03.122465 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Dec 13 06:48:03.720396 systemd-networkd[710]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8500:24:19ff:fee6:1402/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8500:24:19ff:fee6:1402/64 assigned by NDisc.
Dec 13 06:48:03.720408 systemd-networkd[710]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Dec 13 06:48:05.255165 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 06:48:05.257805 ignition[830]: INFO : files: op(c): [started] processing unit "coreos-metadata-sshkeys@.service"
Dec 13 06:48:05.257805 ignition[830]: INFO : files: op(c): [finished] processing unit "coreos-metadata-sshkeys@.service"
Dec 13 06:48:05.257805 ignition[830]: INFO : files: op(d): [started] processing unit "prepare-helm.service"
Dec 13 06:48:05.257805 ignition[830]: INFO : files: op(d): op(e): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 06:48:05.257805 ignition[830]: INFO : files: op(d): op(e): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 06:48:05.257805 ignition[830]: INFO : files: op(d): [finished] processing unit "prepare-helm.service"
Dec 13 06:48:05.257805 ignition[830]: INFO : files: op(f): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Dec 13 06:48:05.257805 ignition[830]: INFO : files: op(f): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Dec 13 06:48:05.257805 ignition[830]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Dec 13 06:48:05.257805 ignition[830]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Dec 13 06:48:05.271365 ignition[830]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 06:48:05.271365 ignition[830]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 06:48:05.271365 ignition[830]: INFO : files: files passed
Dec 13 06:48:05.271365 ignition[830]: INFO : Ignition finished successfully
Dec 13 06:48:05.287645 kernel: kauditd_printk_skb: 28 callbacks suppressed
Dec 13 06:48:05.287687 kernel: audit: type=1130 audit(1734072485.273:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:48:05.273000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:48:05.270802 systemd[1]: Finished ignition-files.service.
Dec 13 06:48:05.276707 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Dec 13 06:48:05.283467 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Dec 13 06:48:05.284733 systemd[1]: Starting ignition-quench.service...
Dec 13 06:48:05.290583 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 06:48:05.304523 kernel: audit: type=1130 audit(1734072485.293:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:48:05.304557 kernel: audit: type=1131 audit(1734072485.293:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:48:05.293000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:48:05.293000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:48:05.290741 systemd[1]: Finished ignition-quench.service.
Dec 13 06:48:05.305000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:48:05.306305 initrd-setup-root-after-ignition[855]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 06:48:05.314435 kernel: audit: type=1130 audit(1734072485.305:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:48:05.294393 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Dec 13 06:48:05.306214 systemd[1]: Reached target ignition-complete.target.
Dec 13 06:48:05.314063 systemd[1]: Starting initrd-parse-etc.service...
Dec 13 06:48:05.337397 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 06:48:05.337537 systemd[1]: Finished initrd-parse-etc.service.
Dec 13 06:48:05.349779 kernel: audit: type=1130 audit(1734072485.339:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:48:05.349834 kernel: audit: type=1131 audit(1734072485.339:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:48:05.339000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:48:05.339000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:48:05.339620 systemd[1]: Reached target initrd-fs.target.
Dec 13 06:48:05.350398 systemd[1]: Reached target initrd.target.
Dec 13 06:48:05.351675 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Dec 13 06:48:05.352947 systemd[1]: Starting dracut-pre-pivot.service...
Dec 13 06:48:05.370484 systemd[1]: Finished dracut-pre-pivot.service.
Dec 13 06:48:05.370000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:48:05.372379 systemd[1]: Starting initrd-cleanup.service...
Dec 13 06:48:05.391705 kernel: audit: type=1130 audit(1734072485.370:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:48:05.399232 systemd[1]: Stopped target nss-lookup.target.
Dec 13 06:48:05.400786 systemd[1]: Stopped target remote-cryptsetup.target.
Dec 13 06:48:05.402381 systemd[1]: Stopped target timers.target.
Dec 13 06:48:05.403123 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 06:48:05.409659 kernel: audit: type=1131 audit(1734072485.404:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:48:05.404000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:48:05.403286 systemd[1]: Stopped dracut-pre-pivot.service.
Dec 13 06:48:05.404539 systemd[1]: Stopped target initrd.target.
Dec 13 06:48:05.410395 systemd[1]: Stopped target basic.target.
Dec 13 06:48:05.411655 systemd[1]: Stopped target ignition-complete.target.
Dec 13 06:48:05.412995 systemd[1]: Stopped target ignition-diskful.target.
Dec 13 06:48:05.414204 systemd[1]: Stopped target initrd-root-device.target.
Dec 13 06:48:05.415504 systemd[1]: Stopped target remote-fs.target.
Dec 13 06:48:05.416764 systemd[1]: Stopped target remote-fs-pre.target.
Dec 13 06:48:05.418070 systemd[1]: Stopped target sysinit.target.
Dec 13 06:48:05.419210 systemd[1]: Stopped target local-fs.target.
Dec 13 06:48:05.420510 systemd[1]: Stopped target local-fs-pre.target.
Dec 13 06:48:05.421699 systemd[1]: Stopped target swap.target.
Dec 13 06:48:05.429330 kernel: audit: type=1131 audit(1734072485.423:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:48:05.423000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:48:05.422788 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 06:48:05.423028 systemd[1]: Stopped dracut-pre-mount.service.
Dec 13 06:48:05.436637 kernel: audit: type=1131 audit(1734072485.431:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:48:05.431000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:48:05.424260 systemd[1]: Stopped target cryptsetup.target.
Dec 13 06:48:05.437000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:48:05.430103 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 06:48:05.438000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:48:05.430340 systemd[1]: Stopped dracut-initqueue.service.
Dec 13 06:48:05.431580 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 06:48:05.431798 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Dec 13 06:48:05.437659 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 06:48:05.437873 systemd[1]: Stopped ignition-files.service.
Dec 13 06:48:05.440190 systemd[1]: Stopping ignition-mount.service...
Dec 13 06:48:05.446601 systemd[1]: Stopping sysroot-boot.service...
Dec 13 06:48:05.447995 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 06:48:05.449120 systemd[1]: Stopped systemd-udev-trigger.service.
Dec 13 06:48:05.450000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:48:05.450842 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 06:48:05.451872 systemd[1]: Stopped dracut-pre-trigger.service.
Dec 13 06:48:05.453000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:48:05.457428 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 06:48:05.458396 systemd[1]: Finished initrd-cleanup.service.
Dec 13 06:48:05.459000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:48:05.459000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:48:05.466702 ignition[868]: INFO : Ignition 2.14.0
Dec 13 06:48:05.467704 ignition[868]: INFO : Stage: umount
Dec 13 06:48:05.468660 ignition[868]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 06:48:05.469704 ignition[868]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a
Dec 13 06:48:05.472661 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 06:48:05.475821 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 06:48:05.478000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:48:05.475970 systemd[1]: Stopped sysroot-boot.service.
Dec 13 06:48:05.480002 ignition[868]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 13 06:48:05.480002 ignition[868]: INFO : umount: umount passed
Dec 13 06:48:05.480002 ignition[868]: INFO : Ignition finished successfully
Dec 13 06:48:05.480000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:48:05.481000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:48:05.482000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:48:05.479576 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 06:48:05.484000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:48:05.479696 systemd[1]: Stopped ignition-mount.service.
Dec 13 06:48:05.480698 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 06:48:05.486000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:48:05.480761 systemd[1]: Stopped ignition-disks.service.
Dec 13 06:48:05.482077 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 06:48:05.482148 systemd[1]: Stopped ignition-kargs.service.
Dec 13 06:48:05.483273 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 13 06:48:05.483394 systemd[1]: Stopped ignition-fetch.service.
Dec 13 06:48:05.484515 systemd[1]: Stopped target network.target.
Dec 13 06:48:05.485687 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 06:48:05.485750 systemd[1]: Stopped ignition-fetch-offline.service.
Dec 13 06:48:05.487020 systemd[1]: Stopped target paths.target.
Dec 13 06:48:05.488195 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 06:48:05.491405 systemd[1]: Stopped systemd-ask-password-console.path.
Dec 13 06:48:05.498000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:48:05.492247 systemd[1]: Stopped target slices.target.
Dec 13 06:48:05.499000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:48:05.493458 systemd[1]: Stopped target sockets.target.
Dec 13 06:48:05.494840 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 06:48:05.494889 systemd[1]: Closed iscsid.socket.
Dec 13 06:48:05.496079 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 06:48:05.496125 systemd[1]: Closed iscsiuio.socket.
Dec 13 06:48:05.497233 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 06:48:05.497298 systemd[1]: Stopped ignition-setup.service.
Dec 13 06:48:05.498744 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 06:48:05.507000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:48:05.498804 systemd[1]: Stopped initrd-setup-root.service.
Dec 13 06:48:05.500860 systemd[1]: Stopping systemd-networkd.service...
Dec 13 06:48:05.509000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:48:05.501770 systemd[1]: Stopping systemd-resolved.service...
Dec 13 06:48:05.503393 systemd-networkd[710]: eth0: DHCPv6 lease lost
Dec 13 06:48:05.511000 audit: BPF prog-id=9 op=UNLOAD
Dec 13 06:48:05.505454 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 06:48:05.513000 audit: BPF prog-id=6 op=UNLOAD
Dec 13 06:48:05.505603 systemd[1]: Stopped systemd-networkd.service.
Dec 13 06:48:05.509002 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 06:48:05.509145 systemd[1]: Stopped systemd-resolved.service. Dec 13 06:48:05.511253 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 06:48:05.511460 systemd[1]: Closed systemd-networkd.socket. Dec 13 06:48:05.517000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:48:05.513509 systemd[1]: Stopping network-cleanup.service... Dec 13 06:48:05.519000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:48:05.516750 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 06:48:05.520000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:48:05.516837 systemd[1]: Stopped parse-ip-for-networkd.service. Dec 13 06:48:05.518207 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 06:48:05.518274 systemd[1]: Stopped systemd-sysctl.service. Dec 13 06:48:05.519871 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 06:48:05.526000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:48:05.519977 systemd[1]: Stopped systemd-modules-load.service. Dec 13 06:48:05.520924 systemd[1]: Stopping systemd-udevd.service... Dec 13 06:48:05.524651 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 13 06:48:05.525569 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 06:48:05.525790 systemd[1]: Stopped systemd-udevd.service. 
Dec 13 06:48:05.528591 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 06:48:05.528702 systemd[1]: Closed systemd-udevd-control.socket. Dec 13 06:48:05.532103 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 06:48:05.532168 systemd[1]: Closed systemd-udevd-kernel.socket. Dec 13 06:48:05.532813 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 06:48:05.532887 systemd[1]: Stopped dracut-pre-udev.service. Dec 13 06:48:05.540000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:48:05.540478 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 06:48:05.541000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:48:05.540579 systemd[1]: Stopped dracut-cmdline.service. Dec 13 06:48:05.542000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:48:05.541673 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 06:48:05.541743 systemd[1]: Stopped dracut-cmdline-ask.service. Dec 13 06:48:05.561000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:48:05.544480 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Dec 13 06:48:05.562000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 06:48:05.559751 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 06:48:05.563000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:48:05.559868 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Dec 13 06:48:05.561762 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 06:48:05.561832 systemd[1]: Stopped kmod-static-nodes.service. Dec 13 06:48:05.566000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:48:05.562649 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 06:48:05.568000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:48:05.568000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:48:05.562714 systemd[1]: Stopped systemd-vconsole-setup.service. Dec 13 06:48:05.565584 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Dec 13 06:48:05.566451 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 06:48:05.566591 systemd[1]: Stopped network-cleanup.service. Dec 13 06:48:05.567659 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 06:48:05.567877 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Dec 13 06:48:05.568837 systemd[1]: Reached target initrd-switch-root.target. 
Dec 13 06:48:05.571249 systemd[1]: Starting initrd-switch-root.service... Dec 13 06:48:05.588974 systemd[1]: Switching root. Dec 13 06:48:05.611049 iscsid[715]: iscsid shutting down. Dec 13 06:48:05.611848 systemd-journald[201]: Received SIGTERM from PID 1 (n/a). Dec 13 06:48:05.611949 systemd-journald[201]: Journal stopped Dec 13 06:48:09.654188 kernel: SELinux: Class mctp_socket not defined in policy. Dec 13 06:48:09.658166 kernel: SELinux: Class anon_inode not defined in policy. Dec 13 06:48:09.658607 kernel: SELinux: the above unknown classes and permissions will be allowed Dec 13 06:48:09.658653 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 06:48:09.658695 kernel: SELinux: policy capability open_perms=1 Dec 13 06:48:09.658724 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 06:48:09.658752 kernel: SELinux: policy capability always_check_network=0 Dec 13 06:48:09.658788 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 06:48:09.658818 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 06:48:09.658913 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 06:48:09.658943 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 06:48:09.658984 systemd[1]: Successfully loaded SELinux policy in 59.816ms. Dec 13 06:48:09.659049 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 16.660ms. Dec 13 06:48:09.659090 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 06:48:09.659121 systemd[1]: Detected virtualization kvm. Dec 13 06:48:09.659148 systemd[1]: Detected architecture x86-64. Dec 13 06:48:09.659184 systemd[1]: Detected first boot. 
Dec 13 06:48:09.659222 systemd[1]: Hostname set to . Dec 13 06:48:09.659251 systemd[1]: Initializing machine ID from VM UUID. Dec 13 06:48:09.659273 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Dec 13 06:48:09.659293 systemd[1]: Populated /etc with preset unit settings. Dec 13 06:48:09.663487 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 06:48:09.663541 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 06:48:09.663578 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 06:48:09.663625 systemd[1]: iscsiuio.service: Deactivated successfully. Dec 13 06:48:09.663654 systemd[1]: Stopped iscsiuio.service. Dec 13 06:48:09.663682 systemd[1]: iscsid.service: Deactivated successfully. Dec 13 06:48:09.663704 systemd[1]: Stopped iscsid.service. Dec 13 06:48:09.663731 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 13 06:48:09.663768 systemd[1]: Stopped initrd-switch-root.service. Dec 13 06:48:09.663790 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 13 06:48:09.663822 systemd[1]: Created slice system-addon\x2dconfig.slice. Dec 13 06:48:09.663878 systemd[1]: Created slice system-addon\x2drun.slice. Dec 13 06:48:09.663902 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Dec 13 06:48:09.663922 systemd[1]: Created slice system-getty.slice. Dec 13 06:48:09.663942 systemd[1]: Created slice system-modprobe.slice. Dec 13 06:48:09.663964 systemd[1]: Created slice system-serial\x2dgetty.slice. 
Dec 13 06:48:09.663998 systemd[1]: Created slice system-system\x2dcloudinit.slice. Dec 13 06:48:09.664021 systemd[1]: Created slice system-systemd\x2dfsck.slice. Dec 13 06:48:09.664049 systemd[1]: Created slice user.slice. Dec 13 06:48:09.664085 systemd[1]: Started systemd-ask-password-console.path. Dec 13 06:48:09.664108 systemd[1]: Started systemd-ask-password-wall.path. Dec 13 06:48:09.664137 systemd[1]: Set up automount boot.automount. Dec 13 06:48:09.664169 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Dec 13 06:48:09.664191 systemd[1]: Stopped target initrd-switch-root.target. Dec 13 06:48:09.664220 systemd[1]: Stopped target initrd-fs.target. Dec 13 06:48:09.664253 systemd[1]: Stopped target initrd-root-fs.target. Dec 13 06:48:09.664282 systemd[1]: Reached target integritysetup.target. Dec 13 06:48:09.664304 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 06:48:09.666877 systemd[1]: Reached target remote-fs.target. Dec 13 06:48:09.666913 systemd[1]: Reached target slices.target. Dec 13 06:48:09.666941 systemd[1]: Reached target swap.target. Dec 13 06:48:09.666963 systemd[1]: Reached target torcx.target. Dec 13 06:48:09.666993 systemd[1]: Reached target veritysetup.target. Dec 13 06:48:09.667015 systemd[1]: Listening on systemd-coredump.socket. Dec 13 06:48:09.667036 systemd[1]: Listening on systemd-initctl.socket. Dec 13 06:48:09.667070 systemd[1]: Listening on systemd-networkd.socket. Dec 13 06:48:09.667093 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 06:48:09.667123 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 06:48:09.667150 systemd[1]: Listening on systemd-userdbd.socket. Dec 13 06:48:09.667172 systemd[1]: Mounting dev-hugepages.mount... Dec 13 06:48:09.667193 systemd[1]: Mounting dev-mqueue.mount... Dec 13 06:48:09.667222 systemd[1]: Mounting media.mount... Dec 13 06:48:09.667252 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Dec 13 06:48:09.667275 systemd[1]: Mounting sys-kernel-debug.mount... Dec 13 06:48:09.667309 systemd[1]: Mounting sys-kernel-tracing.mount... Dec 13 06:48:09.667350 systemd[1]: Mounting tmp.mount... Dec 13 06:48:09.667379 systemd[1]: Starting flatcar-tmpfiles.service... Dec 13 06:48:09.667408 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 06:48:09.667437 systemd[1]: Starting kmod-static-nodes.service... Dec 13 06:48:09.667459 systemd[1]: Starting modprobe@configfs.service... Dec 13 06:48:09.667480 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 06:48:09.667513 systemd[1]: Starting modprobe@drm.service... Dec 13 06:48:09.667541 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 06:48:09.667579 systemd[1]: Starting modprobe@fuse.service... Dec 13 06:48:09.667608 systemd[1]: Starting modprobe@loop.service... Dec 13 06:48:09.667637 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 06:48:09.667665 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 06:48:09.667686 systemd[1]: Stopped systemd-fsck-root.service. Dec 13 06:48:09.667713 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 06:48:09.667740 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 06:48:09.667768 systemd[1]: Stopped systemd-journald.service. Dec 13 06:48:09.667790 systemd[1]: Starting systemd-journald.service... Dec 13 06:48:09.667826 kernel: fuse: init (API version 7.34) Dec 13 06:48:09.667861 kernel: loop: module loaded Dec 13 06:48:09.667882 systemd[1]: Starting systemd-modules-load.service... Dec 13 06:48:09.667910 systemd[1]: Starting systemd-network-generator.service... Dec 13 06:48:09.667938 systemd[1]: Starting systemd-remount-fs.service... Dec 13 06:48:09.667960 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 06:48:09.667987 systemd[1]: verity-setup.service: Deactivated successfully. 
Dec 13 06:48:09.668010 systemd[1]: Stopped verity-setup.service. Dec 13 06:48:09.668031 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 06:48:09.668064 systemd[1]: Mounted dev-hugepages.mount. Dec 13 06:48:09.668086 systemd[1]: Mounted dev-mqueue.mount. Dec 13 06:48:09.668107 systemd[1]: Mounted media.mount. Dec 13 06:48:09.668127 systemd[1]: Mounted sys-kernel-debug.mount. Dec 13 06:48:09.668147 systemd[1]: Mounted sys-kernel-tracing.mount. Dec 13 06:48:09.668168 systemd[1]: Mounted tmp.mount. Dec 13 06:48:09.668188 systemd[1]: Finished kmod-static-nodes.service. Dec 13 06:48:09.668220 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 06:48:09.668243 systemd[1]: Finished modprobe@configfs.service. Dec 13 06:48:09.668264 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 06:48:09.668284 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 06:48:09.668331 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 06:48:09.668357 systemd[1]: Finished modprobe@drm.service. Dec 13 06:48:09.668379 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 06:48:09.668412 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 06:48:09.668436 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 06:48:09.668457 systemd[1]: Finished modprobe@fuse.service. Dec 13 06:48:09.668486 systemd[1]: Finished flatcar-tmpfiles.service. Dec 13 06:48:09.668509 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 06:48:09.668543 systemd-journald[979]: Journal started Dec 13 06:48:09.668659 systemd-journald[979]: Runtime Journal (/run/log/journal/1d7af8c79b534a86b5fd685c2ee6af9f) is 4.7M, max 38.1M, 33.3M free. 
Dec 13 06:48:05.786000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 06:48:05.861000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 06:48:05.861000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 06:48:05.861000 audit: BPF prog-id=10 op=LOAD Dec 13 06:48:05.861000 audit: BPF prog-id=10 op=UNLOAD Dec 13 06:48:05.862000 audit: BPF prog-id=11 op=LOAD Dec 13 06:48:05.862000 audit: BPF prog-id=11 op=UNLOAD Dec 13 06:48:05.976000 audit[901]: AVC avc: denied { associate } for pid=901 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Dec 13 06:48:05.976000 audit[901]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0000a02d2 a1=c0000a8378 a2=c0000aa800 a3=32 items=0 ppid=884 pid=901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 06:48:09.674138 systemd[1]: Finished modprobe@loop.service. Dec 13 06:48:09.674179 systemd[1]: Started systemd-journald.service. 
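The interleaved `audit: BPF prog-id=N op=LOAD`/`op=UNLOAD` records above show systemd replacing its BPF programs across the initrd-to-root transition. A minimal sketch (function name and regex are ours, not from any tool in this log) of replaying such records to see which program IDs remain loaded:

```python
import re

# Matches the audit fields of interest, e.g. "audit: BPF prog-id=12 op=LOAD"
BPF_RE = re.compile(r"audit: BPF prog-id=(\d+) op=(LOAD|UNLOAD)")

def loaded_prog_ids(lines):
    """Replay LOAD/UNLOAD events in order; return the set of still-loaded IDs."""
    loaded = set()
    for line in lines:
        for prog_id, op in BPF_RE.findall(line):
            if op == "LOAD":
                loaded.add(int(prog_id))
            else:
                loaded.discard(int(prog_id))
    return loaded

log = [
    "Dec 13 06:48:09.350000 audit: BPF prog-id=12 op=LOAD",
    "Dec 13 06:48:09.354000 audit: BPF prog-id=15 op=LOAD",
    "Dec 13 06:48:09.354000 audit: BPF prog-id=12 op=UNLOAD",
]
print(loaded_prog_ids(log))  # {15}
```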
Dec 13 06:48:05.976000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 06:48:05.979000 audit[901]: AVC avc: denied { associate } for pid=901 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Dec 13 06:48:05.979000 audit[901]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0000a03a9 a2=1ed a3=0 items=2 ppid=884 pid=901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 06:48:05.979000 audit: CWD cwd="/" Dec 13 06:48:05.979000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:48:05.979000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:48:05.979000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 06:48:09.350000 audit: BPF prog-id=12 op=LOAD Dec 13 06:48:09.350000 audit: BPF prog-id=3 op=UNLOAD Dec 13 06:48:09.351000 audit: BPF prog-id=13 op=LOAD Dec 13 06:48:09.351000 audit: BPF prog-id=14 op=LOAD Dec 13 06:48:09.351000 audit: BPF prog-id=4 op=UNLOAD Dec 13 06:48:09.351000 audit: BPF prog-id=5 
op=UNLOAD Dec 13 06:48:09.354000 audit: BPF prog-id=15 op=LOAD Dec 13 06:48:09.354000 audit: BPF prog-id=12 op=UNLOAD Dec 13 06:48:09.355000 audit: BPF prog-id=16 op=LOAD Dec 13 06:48:09.355000 audit: BPF prog-id=17 op=LOAD Dec 13 06:48:09.355000 audit: BPF prog-id=13 op=UNLOAD Dec 13 06:48:09.355000 audit: BPF prog-id=14 op=UNLOAD Dec 13 06:48:09.356000 audit: BPF prog-id=18 op=LOAD Dec 13 06:48:09.356000 audit: BPF prog-id=15 op=UNLOAD Dec 13 06:48:09.356000 audit: BPF prog-id=19 op=LOAD Dec 13 06:48:09.356000 audit: BPF prog-id=20 op=LOAD Dec 13 06:48:09.356000 audit: BPF prog-id=16 op=UNLOAD Dec 13 06:48:09.356000 audit: BPF prog-id=17 op=UNLOAD Dec 13 06:48:09.357000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:48:09.362000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:48:09.366000 audit: BPF prog-id=18 op=UNLOAD Dec 13 06:48:09.366000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:48:09.371000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:48:09.371000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 06:48:09.531000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:48:09.549000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:48:09.552000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:48:09.552000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:48:09.554000 audit: BPF prog-id=21 op=LOAD Dec 13 06:48:09.555000 audit: BPF prog-id=22 op=LOAD Dec 13 06:48:09.555000 audit: BPF prog-id=23 op=LOAD Dec 13 06:48:09.555000 audit: BPF prog-id=19 op=UNLOAD Dec 13 06:48:09.555000 audit: BPF prog-id=20 op=UNLOAD Dec 13 06:48:09.592000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:48:09.623000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:48:09.629000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 06:48:09.629000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:48:09.635000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:48:09.635000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:48:09.640000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:48:09.640000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:48:09.646000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:48:09.646000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 06:48:09.649000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 13 06:48:09.649000 audit[979]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7fff22c6f770 a2=4000 a3=7fff22c6f80c items=0 ppid=1 pid=979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 06:48:09.649000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 13 06:48:09.659000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:48:09.659000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:48:09.664000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:48:09.671000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:48:09.671000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 06:48:09.675000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:48:09.676000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:48:09.677000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:48:09.678000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:48:09.348033 systemd[1]: Queued start job for default target multi-user.target. Dec 13 06:48:05.972084 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-12-13T06:48:05Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 06:48:09.348061 systemd[1]: Unnecessary job was removed for dev-vda6.device. Dec 13 06:48:05.972834 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-12-13T06:48:05Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Dec 13 06:48:09.357642 systemd[1]: systemd-journald.service: Deactivated successfully. 
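The long `PROCTITLE proctitle=2F75...` blobs earlier in the log are the torcx-generator's argv, NUL-separated and hex-encoded by the audit subsystem (and truncated at the kernel's proctitle limit, which is why they end mid-path). A sketch of decoding such a field, shown with a short illustrative value rather than the truncated blob from this log:

```python
def decode_proctitle(hex_field: str) -> list[str]:
    """Audit PROCTITLE values are the process argv, NUL-joined and hex-encoded."""
    raw = bytes.fromhex(hex_field)
    # Arguments are separated by NUL bytes; drop any empty trailing piece.
    return [arg.decode("utf-8", errors="replace") for arg in raw.split(b"\x00") if arg]

# Illustrative short example (not the full field from this log):
print(decode_proctitle("2F7573722F62696E2F6C73002D6C"))  # ['/usr/bin/ls', '-l']
```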
Dec 13 06:48:05.972912 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-12-13T06:48:05Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Dec 13 06:48:09.675961 systemd[1]: Finished systemd-modules-load.service. Dec 13 06:48:05.972975 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-12-13T06:48:05Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Dec 13 06:48:09.677023 systemd[1]: Finished systemd-network-generator.service. Dec 13 06:48:05.972993 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-12-13T06:48:05Z" level=debug msg="skipped missing lower profile" missing profile=oem Dec 13 06:48:09.678083 systemd[1]: Finished systemd-remount-fs.service. Dec 13 06:48:05.973067 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-12-13T06:48:05Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Dec 13 06:48:09.679339 systemd[1]: Reached target network-pre.target. Dec 13 06:48:05.973095 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-12-13T06:48:05Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Dec 13 06:48:05.973532 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-12-13T06:48:05Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Dec 13 06:48:05.973607 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-12-13T06:48:05Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Dec 13 06:48:09.682020 systemd[1]: Mounting sys-fs-fuse-connections.mount... 
Dec 13 06:48:05.973633 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-12-13T06:48:05Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Dec 13 06:48:05.975640 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-12-13T06:48:05Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Dec 13 06:48:05.975734 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-12-13T06:48:05Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Dec 13 06:48:05.975783 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-12-13T06:48:05Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.6: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.6
Dec 13 06:48:05.975819 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-12-13T06:48:05Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Dec 13 06:48:05.975852 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-12-13T06:48:05Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.6: no such file or directory" path=/var/lib/torcx/store/3510.3.6
Dec 13 06:48:05.975878 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-12-13T06:48:05Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Dec 13 06:48:08.767616 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-12-13T06:48:08Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 06:48:08.768132 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-12-13T06:48:08Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 06:48:08.768380 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-12-13T06:48:08Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 06:48:08.768770 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-12-13T06:48:08Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 06:48:08.768893 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-12-13T06:48:08Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Dec 13 06:48:08.769033 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-12-13T06:48:08Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Dec 13 06:48:09.689379 systemd[1]: Mounting sys-kernel-config.mount...
Dec 13 06:48:09.692559 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 13 06:48:09.695280 systemd[1]: Starting systemd-hwdb-update.service...
Dec 13 06:48:09.700625 systemd[1]: Starting systemd-journal-flush.service...
Dec 13 06:48:09.701394 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 06:48:09.705547 systemd[1]: Starting systemd-random-seed.service...
Dec 13 06:48:09.706423 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 06:48:09.708329 systemd[1]: Starting systemd-sysctl.service...
Dec 13 06:48:09.710821 systemd[1]: Starting systemd-sysusers.service...
Dec 13 06:48:09.717067 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Dec 13 06:48:09.720909 systemd[1]: Mounted sys-kernel-config.mount.
Dec 13 06:48:09.724537 systemd-journald[979]: Time spent on flushing to /var/log/journal/1d7af8c79b534a86b5fd685c2ee6af9f is 66.051ms for 1302 entries.
Dec 13 06:48:09.724537 systemd-journald[979]: System Journal (/var/log/journal/1d7af8c79b534a86b5fd685c2ee6af9f) is 8.0M, max 584.8M, 576.8M free.
Dec 13 06:48:09.802211 systemd-journald[979]: Received client request to flush runtime journal.
Dec 13 06:48:09.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:48:09.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:48:09.774000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:48:09.739555 systemd[1]: Finished systemd-random-seed.service.
Dec 13 06:48:09.805000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:48:09.740476 systemd[1]: Reached target first-boot-complete.target.
Dec 13 06:48:09.748733 systemd[1]: Finished systemd-sysctl.service.
Dec 13 06:48:09.774488 systemd[1]: Finished systemd-sysusers.service.
Dec 13 06:48:09.779807 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Dec 13 06:48:09.804642 systemd[1]: Finished systemd-journal-flush.service.
Dec 13 06:48:09.827121 systemd[1]: Finished systemd-udev-trigger.service.
Dec 13 06:48:09.827000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:48:09.829643 systemd[1]: Starting systemd-udev-settle.service...
Dec 13 06:48:09.844863 udevadm[1013]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Dec 13 06:48:09.852360 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Dec 13 06:48:09.852000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:48:10.358697 systemd[1]: Finished systemd-hwdb-update.service.
Dec 13 06:48:10.366466 kernel: kauditd_printk_skb: 108 callbacks suppressed
Dec 13 06:48:10.366551 kernel: audit: type=1130 audit(1734072490.359:148): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:48:10.359000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:48:10.368520 kernel: audit: type=1334 audit(1734072490.366:149): prog-id=24 op=LOAD
Dec 13 06:48:10.366000 audit: BPF prog-id=24 op=LOAD
Dec 13 06:48:10.367489 systemd[1]: Starting systemd-udevd.service...
Dec 13 06:48:10.366000 audit: BPF prog-id=25 op=LOAD
Dec 13 06:48:10.366000 audit: BPF prog-id=7 op=UNLOAD
Dec 13 06:48:10.366000 audit: BPF prog-id=8 op=UNLOAD
Dec 13 06:48:10.370889 kernel: audit: type=1334 audit(1734072490.366:150): prog-id=25 op=LOAD
Dec 13 06:48:10.370941 kernel: audit: type=1334 audit(1734072490.366:151): prog-id=7 op=UNLOAD
Dec 13 06:48:10.370975 kernel: audit: type=1334 audit(1734072490.366:152): prog-id=8 op=UNLOAD
Dec 13 06:48:10.398281 systemd-udevd[1014]: Using default interface naming scheme 'v252'.
Dec 13 06:48:10.434718 systemd[1]: Started systemd-udevd.service.
Dec 13 06:48:10.447967 kernel: audit: type=1130 audit(1734072490.435:153): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:48:10.435000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:48:10.447326 systemd[1]: Starting systemd-networkd.service...
Dec 13 06:48:10.445000 audit: BPF prog-id=26 op=LOAD
Dec 13 06:48:10.453337 kernel: audit: type=1334 audit(1734072490.445:154): prog-id=26 op=LOAD
Dec 13 06:48:10.472857 kernel: audit: type=1334 audit(1734072490.462:155): prog-id=27 op=LOAD
Dec 13 06:48:10.472966 kernel: audit: type=1334 audit(1734072490.462:156): prog-id=28 op=LOAD
Dec 13 06:48:10.473004 kernel: audit: type=1334 audit(1734072490.462:157): prog-id=29 op=LOAD
Dec 13 06:48:10.462000 audit: BPF prog-id=27 op=LOAD
Dec 13 06:48:10.462000 audit: BPF prog-id=28 op=LOAD
Dec 13 06:48:10.462000 audit: BPF prog-id=29 op=LOAD
Dec 13 06:48:10.464311 systemd[1]: Starting systemd-userdbd.service...
Dec 13 06:48:10.515255 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped.
Dec 13 06:48:10.533000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:48:10.533337 systemd[1]: Started systemd-userdbd.service.
Dec 13 06:48:10.646448 systemd-networkd[1028]: lo: Link UP
Dec 13 06:48:10.646462 systemd-networkd[1028]: lo: Gained carrier
Dec 13 06:48:10.647408 systemd-networkd[1028]: Enumeration completed
Dec 13 06:48:10.647000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:48:10.647550 systemd[1]: Started systemd-networkd.service.
Dec 13 06:48:10.647581 systemd-networkd[1028]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 06:48:10.650789 systemd-networkd[1028]: eth0: Link UP
Dec 13 06:48:10.650803 systemd-networkd[1028]: eth0: Gained carrier
Dec 13 06:48:10.654339 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Dec 13 06:48:10.666514 systemd-networkd[1028]: eth0: DHCPv4 address 10.230.20.2/30, gateway 10.230.20.1 acquired from 10.230.20.1
Dec 13 06:48:10.674378 kernel: ACPI: button: Power Button [PWRF]
Dec 13 06:48:10.694346 kernel: mousedev: PS/2 mouse device common for all mice
Dec 13 06:48:10.704472 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Dec 13 06:48:10.731000 audit[1017]: AVC avc: denied { confidentiality } for pid=1017 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Dec 13 06:48:10.731000 audit[1017]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=5557f3b852f0 a1=337fc a2=7ffa90825bc5 a3=5 items=110 ppid=1014 pid=1017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 06:48:10.731000 audit: CWD cwd="/"
Dec 13 06:48:10.731000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=1 name=(null) inode=15944 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=2 name=(null) inode=15944 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.776370 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Dec 13 06:48:10.801376 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input5
Dec 13 06:48:10.801423 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Dec 13 06:48:10.801695 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Dec 13 06:48:10.731000 audit: PATH item=3 name=(null) inode=15945 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=4 name=(null) inode=15944 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=5 name=(null) inode=15946 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=6 name=(null) inode=15944 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=7 name=(null) inode=15947 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=8 name=(null) inode=15947 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=9 name=(null) inode=15948 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=10 name=(null) inode=15947 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=11 name=(null) inode=15949 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=12 name=(null) inode=15947 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=13 name=(null) inode=15950 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=14 name=(null) inode=15947 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=15 name=(null) inode=15951 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=16 name=(null) inode=15947 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=17 name=(null) inode=15952 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=18 name=(null) inode=15944 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=19 name=(null) inode=15953 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=20 name=(null) inode=15953 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=21 name=(null) inode=15954 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=22 name=(null) inode=15953 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=23 name=(null) inode=15955 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=24 name=(null) inode=15953 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=25 name=(null) inode=15956 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=26 name=(null) inode=15953 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=27 name=(null) inode=15957 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=28 name=(null) inode=15953 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=29 name=(null) inode=15958 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=30 name=(null) inode=15944 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=31 name=(null) inode=15959 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=32 name=(null) inode=15959 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=33 name=(null) inode=15960 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=34 name=(null) inode=15959 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=35 name=(null) inode=15961 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=36 name=(null) inode=15959 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=37 name=(null) inode=15962 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=38 name=(null) inode=15959 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=39 name=(null) inode=15963 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=40 name=(null) inode=15959 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=41 name=(null) inode=15964 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=42 name=(null) inode=15944 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=43 name=(null) inode=15965 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=44 name=(null) inode=15965 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=45 name=(null) inode=15966 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=46 name=(null) inode=15965 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=47 name=(null) inode=15967 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=48 name=(null) inode=15965 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=49 name=(null) inode=15968 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=50 name=(null) inode=15965 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=51 name=(null) inode=15969 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=52 name=(null) inode=15965 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=53 name=(null) inode=15970 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=55 name=(null) inode=15971 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=56 name=(null) inode=15971 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=57 name=(null) inode=15972 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=58 name=(null) inode=15971 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=59 name=(null) inode=15973 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=60 name=(null) inode=15971 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=61 name=(null) inode=15974 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=62 name=(null) inode=15974 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=63 name=(null) inode=15975 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=64 name=(null) inode=15974 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=65 name=(null) inode=15976 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=66 name=(null) inode=15974 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=67 name=(null) inode=15977 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=68 name=(null) inode=15974 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=69 name=(null) inode=15978 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=70 name=(null) inode=15974 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=71 name=(null) inode=15979 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=72 name=(null) inode=15971 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=73 name=(null) inode=15980 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=74 name=(null) inode=15980 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=75 name=(null) inode=15981 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=76 name=(null) inode=15980 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=77 name=(null) inode=15982 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=78 name=(null) inode=15980 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=79 name=(null) inode=15983 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=80 name=(null) inode=15980 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=81 name=(null) inode=15984 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=82 name=(null) inode=15980 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=83 name=(null) inode=15985 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=84 name=(null) inode=15971 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=85 name=(null) inode=15986 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=86 name=(null) inode=15986 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=87 name=(null) inode=15987 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=88 name=(null) inode=15986 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=89 name=(null) inode=15988 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=90 name=(null) inode=15986 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=91 name=(null) inode=15989 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=92 name=(null) inode=15986 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=93 name=(null) inode=15990 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=94 name=(null) inode=15986 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=95 name=(null) inode=15991 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=96 name=(null) inode=15971 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=97 name=(null) inode=15992 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=98 name=(null) inode=15992 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=99 name=(null) inode=15993 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=100 name=(null) inode=15992 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=101 name=(null) inode=15994 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=102 name=(null) inode=15992 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=103 name=(null) inode=15995 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=104 name=(null) inode=15992 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=105 name=(null) inode=15996 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=106 name=(null) inode=15992 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=107 name=(null) inode=15997 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PATH item=109 name=(null) inode=15998 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:48:10.731000 audit: PROCTITLE proctitle="(udev-worker)"
Dec 13 06:48:10.929962 systemd[1]: Finished systemd-udev-settle.service.
Dec 13 06:48:10.932473 systemd[1]: Starting lvm2-activation-early.service...
Dec 13 06:48:10.930000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:48:10.956797 lvm[1043]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 06:48:10.991043 systemd[1]: Finished lvm2-activation-early.service. Dec 13 06:48:10.991985 systemd[1]: Reached target cryptsetup.target. Dec 13 06:48:10.991000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:48:10.994423 systemd[1]: Starting lvm2-activation.service... Dec 13 06:48:11.000987 lvm[1044]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 06:48:11.026734 systemd[1]: Finished lvm2-activation.service. Dec 13 06:48:11.027000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:48:11.027633 systemd[1]: Reached target local-fs-pre.target. Dec 13 06:48:11.028287 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 06:48:11.028353 systemd[1]: Reached target local-fs.target. Dec 13 06:48:11.028969 systemd[1]: Reached target machines.target. Dec 13 06:48:11.031582 systemd[1]: Starting ldconfig.service... Dec 13 06:48:11.032908 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 06:48:11.032965 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Dec 13 06:48:11.036253 systemd[1]: Starting systemd-boot-update.service... Dec 13 06:48:11.038761 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Dec 13 06:48:11.042213 systemd[1]: Starting systemd-machine-id-commit.service... Dec 13 06:48:11.048718 systemd[1]: Starting systemd-sysext.service... Dec 13 06:48:11.050876 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1046 (bootctl) Dec 13 06:48:11.052572 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Dec 13 06:48:11.081668 systemd[1]: Unmounting usr-share-oem.mount... Dec 13 06:48:11.113862 systemd[1]: usr-share-oem.mount: Deactivated successfully. Dec 13 06:48:11.114140 systemd[1]: Unmounted usr-share-oem.mount. Dec 13 06:48:11.136722 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Dec 13 06:48:11.137000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:48:11.151882 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 06:48:11.153000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:48:11.153571 systemd[1]: Finished systemd-machine-id-commit.service. Dec 13 06:48:11.157357 kernel: loop0: detected capacity change from 0 to 211296 Dec 13 06:48:11.186416 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 06:48:11.208374 kernel: loop1: detected capacity change from 0 to 211296 Dec 13 06:48:11.224264 (sd-sysext)[1059]: Using extensions 'kubernetes'. Dec 13 06:48:11.226403 (sd-sysext)[1059]: Merged extensions into '/usr'. 
Dec 13 06:48:11.257750 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 06:48:11.259613 systemd-fsck[1056]: fsck.fat 4.2 (2021-01-31) Dec 13 06:48:11.259613 systemd-fsck[1056]: /dev/vda1: 789 files, 119291/258078 clusters Dec 13 06:48:11.262932 systemd[1]: Mounting usr-share-oem.mount... Dec 13 06:48:11.263936 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 06:48:11.266749 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 06:48:11.269483 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 06:48:11.274650 systemd[1]: Starting modprobe@loop.service... Dec 13 06:48:11.275613 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 06:48:11.276153 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 06:48:11.276416 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 06:48:11.279597 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Dec 13 06:48:11.280000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:48:11.282667 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 06:48:11.282884 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 06:48:11.284000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 06:48:11.284000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:48:11.285000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:48:11.285000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:48:11.285099 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 06:48:11.285291 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 06:48:11.286613 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 06:48:11.286772 systemd[1]: Finished modprobe@loop.service. Dec 13 06:48:11.287000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:48:11.287000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:48:11.291074 systemd[1]: Mounted usr-share-oem.mount. Dec 13 06:48:11.295099 systemd[1]: Finished systemd-sysext.service. Dec 13 06:48:11.295000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:48:11.299919 systemd[1]: Mounting boot.mount... 
Dec 13 06:48:11.301990 systemd[1]: Starting ensure-sysext.service... Dec 13 06:48:11.304688 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 06:48:11.304774 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 06:48:11.306240 systemd[1]: Starting systemd-tmpfiles-setup.service... Dec 13 06:48:11.320736 systemd[1]: Reloading. Dec 13 06:48:11.339101 systemd-tmpfiles[1067]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Dec 13 06:48:11.347077 systemd-tmpfiles[1067]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 06:48:11.353564 systemd-tmpfiles[1067]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 06:48:11.456450 /usr/lib/systemd/system-generators/torcx-generator[1087]: time="2024-12-13T06:48:11Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 06:48:11.456990 /usr/lib/systemd/system-generators/torcx-generator[1087]: time="2024-12-13T06:48:11Z" level=info msg="torcx already run" Dec 13 06:48:11.542016 ldconfig[1045]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 06:48:11.603149 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 06:48:11.603188 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Dec 13 06:48:11.630628 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 06:48:11.710000 audit: BPF prog-id=30 op=LOAD Dec 13 06:48:11.710000 audit: BPF prog-id=27 op=UNLOAD Dec 13 06:48:11.710000 audit: BPF prog-id=31 op=LOAD Dec 13 06:48:11.710000 audit: BPF prog-id=32 op=LOAD Dec 13 06:48:11.710000 audit: BPF prog-id=28 op=UNLOAD Dec 13 06:48:11.710000 audit: BPF prog-id=29 op=UNLOAD Dec 13 06:48:11.711000 audit: BPF prog-id=33 op=LOAD Dec 13 06:48:11.711000 audit: BPF prog-id=34 op=LOAD Dec 13 06:48:11.711000 audit: BPF prog-id=24 op=UNLOAD Dec 13 06:48:11.711000 audit: BPF prog-id=25 op=UNLOAD Dec 13 06:48:11.713000 audit: BPF prog-id=35 op=LOAD Dec 13 06:48:11.713000 audit: BPF prog-id=26 op=UNLOAD Dec 13 06:48:11.719000 audit: BPF prog-id=36 op=LOAD Dec 13 06:48:11.719000 audit: BPF prog-id=21 op=UNLOAD Dec 13 06:48:11.719000 audit: BPF prog-id=37 op=LOAD Dec 13 06:48:11.719000 audit: BPF prog-id=38 op=LOAD Dec 13 06:48:11.719000 audit: BPF prog-id=22 op=UNLOAD Dec 13 06:48:11.719000 audit: BPF prog-id=23 op=UNLOAD Dec 13 06:48:11.729291 systemd[1]: Finished ldconfig.service. Dec 13 06:48:11.730000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:48:11.732469 systemd[1]: Mounted boot.mount. Dec 13 06:48:11.755339 systemd[1]: Finished ensure-sysext.service. Dec 13 06:48:11.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:48:11.757310 systemd[1]: Finished systemd-boot-update.service. 
Dec 13 06:48:11.757000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:48:11.758512 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 06:48:11.760278 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 06:48:11.762408 systemd[1]: Starting modprobe@drm.service... Dec 13 06:48:11.764567 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 06:48:11.766692 systemd[1]: Starting modprobe@loop.service... Dec 13 06:48:11.770099 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 06:48:11.770263 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 06:48:11.772000 systemd[1]: Starting systemd-networkd-wait-online.service... Dec 13 06:48:11.774000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:48:11.774000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:48:11.773934 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 06:48:11.774149 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 06:48:11.775208 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 06:48:11.775416 systemd[1]: Finished modprobe@drm.service. 
Dec 13 06:48:11.775000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:48:11.775000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:48:11.776401 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 06:48:11.776564 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 06:48:11.776000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:48:11.776000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:48:11.777561 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 06:48:11.778352 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 06:48:11.778523 systemd[1]: Finished modprobe@loop.service. Dec 13 06:48:11.778000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:48:11.778000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 06:48:11.779434 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 06:48:11.890276 systemd[1]: Finished systemd-tmpfiles-setup.service. Dec 13 06:48:11.890000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:48:11.892945 systemd[1]: Starting audit-rules.service... Dec 13 06:48:11.895143 systemd[1]: Starting clean-ca-certificates.service... Dec 13 06:48:11.901465 systemd[1]: Starting systemd-journal-catalog-update.service... Dec 13 06:48:11.903000 audit: BPF prog-id=39 op=LOAD Dec 13 06:48:11.906531 systemd[1]: Starting systemd-resolved.service... Dec 13 06:48:11.908000 audit: BPF prog-id=40 op=LOAD Dec 13 06:48:11.910998 systemd[1]: Starting systemd-timesyncd.service... Dec 13 06:48:11.916614 systemd[1]: Starting systemd-update-utmp.service... Dec 13 06:48:11.928076 systemd[1]: Finished clean-ca-certificates.service. Dec 13 06:48:11.928000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:48:11.928969 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 06:48:11.932000 audit[1150]: SYSTEM_BOOT pid=1150 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 13 06:48:11.934400 systemd[1]: Finished systemd-update-utmp.service. 
Dec 13 06:48:11.934000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:48:11.963460 systemd[1]: Finished systemd-journal-catalog-update.service. Dec 13 06:48:11.963000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:48:11.966050 systemd[1]: Starting systemd-update-done.service... Dec 13 06:48:11.978853 systemd[1]: Finished systemd-update-done.service. Dec 13 06:48:11.979000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:48:12.006000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 13 06:48:12.006000 audit[1160]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe48b74200 a2=420 a3=0 items=0 ppid=1139 pid=1160 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 06:48:12.006000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 13 06:48:12.007636 augenrules[1160]: No rules Dec 13 06:48:12.008025 systemd[1]: Finished audit-rules.service. Dec 13 06:48:12.024063 systemd[1]: Started systemd-timesyncd.service. Dec 13 06:48:12.024998 systemd[1]: Reached target time-set.target. Dec 13 06:48:12.027594 systemd-resolved[1145]: Positive Trust Anchors: Dec 13 06:48:12.027614 systemd-resolved[1145]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 06:48:12.027654 systemd-resolved[1145]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 06:48:12.036146 systemd-resolved[1145]: Using system hostname 'srv-uleoy.gb1.brightbox.com'. Dec 13 06:48:12.038900 systemd[1]: Started systemd-resolved.service. Dec 13 06:48:12.053426 systemd[1]: Reached target network.target. Dec 13 06:48:12.054107 systemd[1]: Reached target nss-lookup.target. Dec 13 06:48:12.054800 systemd[1]: Reached target sysinit.target. Dec 13 06:48:12.055542 systemd[1]: Started motdgen.path. Dec 13 06:48:12.056182 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Dec 13 06:48:12.057222 systemd[1]: Started logrotate.timer. Dec 13 06:48:12.057958 systemd[1]: Started mdadm.timer. Dec 13 06:48:12.058552 systemd[1]: Started systemd-tmpfiles-clean.timer. Dec 13 06:48:12.059208 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 06:48:12.059257 systemd[1]: Reached target paths.target. Dec 13 06:48:12.059872 systemd[1]: Reached target timers.target. Dec 13 06:48:12.060952 systemd[1]: Listening on dbus.socket. Dec 13 06:48:12.063299 systemd[1]: Starting docker.socket... Dec 13 06:48:12.068018 systemd[1]: Listening on sshd.socket. Dec 13 06:48:12.068873 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Dec 13 06:48:12.069602 systemd[1]: Listening on docker.socket. Dec 13 06:48:12.070579 systemd[1]: Reached target sockets.target. Dec 13 06:48:12.071217 systemd[1]: Reached target basic.target. Dec 13 06:48:12.071920 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 06:48:12.071981 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 06:48:12.073772 systemd[1]: Starting containerd.service... Dec 13 06:48:12.077825 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Dec 13 06:48:12.080639 systemd[1]: Starting dbus.service... Dec 13 06:48:12.088150 systemd[1]: Starting enable-oem-cloudinit.service... Dec 13 06:48:12.091936 systemd[1]: Starting extend-filesystems.service... Dec 13 06:48:12.137127 jq[1173]: false Dec 13 06:48:12.096401 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Dec 13 06:48:12.098389 systemd[1]: Starting motdgen.service... Dec 13 06:48:12.101766 systemd[1]: Starting prepare-helm.service... Dec 13 06:48:12.106306 systemd[1]: Starting ssh-key-proc-cmdline.service... Dec 13 06:48:12.109697 systemd[1]: Starting sshd-keygen.service... Dec 13 06:48:12.118661 systemd[1]: Starting systemd-logind.service... Dec 13 06:48:12.132436 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 06:48:12.132590 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 06:48:12.133455 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 06:48:12.134697 systemd[1]: Starting update-engine.service... 
Dec 13 06:48:12.139070 systemd[1]: Starting update-ssh-keys-after-ignition.service... Dec 13 06:48:12.143751 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 06:48:12.144075 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Dec 13 06:48:12.148693 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 06:48:12.148855 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 06:48:12.721790 systemd-resolved[1145]: Clock change detected. Flushing caches. Dec 13 06:48:12.721975 systemd-timesyncd[1147]: Contacted time server 176.58.115.34:123 (0.flatcar.pool.ntp.org). Dec 13 06:48:12.722068 systemd-timesyncd[1147]: Initial clock synchronization to Fri 2024-12-13 06:48:12.721714 UTC. Dec 13 06:48:12.747289 jq[1184]: true Dec 13 06:48:12.732258 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 06:48:12.732529 systemd[1]: Finished ssh-key-proc-cmdline.service. Dec 13 06:48:12.755623 tar[1186]: linux-amd64/helm Dec 13 06:48:12.760468 dbus-daemon[1170]: [system] SELinux support is enabled Dec 13 06:48:12.760740 systemd[1]: Started dbus.service. Dec 13 06:48:12.764034 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 06:48:12.764074 systemd[1]: Reached target system-config.target. Dec 13 06:48:12.764788 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 06:48:12.764843 systemd[1]: Reached target user-config.target. 
Dec 13 06:48:12.770158 jq[1193]: true Dec 13 06:48:12.791116 dbus-daemon[1170]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1028 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Dec 13 06:48:12.793264 dbus-daemon[1170]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 13 06:48:12.795753 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 06:48:12.796027 systemd[1]: Finished motdgen.service. Dec 13 06:48:12.802333 systemd[1]: Starting systemd-hostnamed.service... Dec 13 06:48:12.818388 extend-filesystems[1174]: Found loop1 Dec 13 06:48:12.818388 extend-filesystems[1174]: Found vda Dec 13 06:48:12.818388 extend-filesystems[1174]: Found vda1 Dec 13 06:48:12.818388 extend-filesystems[1174]: Found vda2 Dec 13 06:48:12.818388 extend-filesystems[1174]: Found vda3 Dec 13 06:48:12.818388 extend-filesystems[1174]: Found usr Dec 13 06:48:12.818388 extend-filesystems[1174]: Found vda4 Dec 13 06:48:12.818388 extend-filesystems[1174]: Found vda6 Dec 13 06:48:12.818388 extend-filesystems[1174]: Found vda7 Dec 13 06:48:12.818388 extend-filesystems[1174]: Found vda9 Dec 13 06:48:12.859267 extend-filesystems[1174]: Checking size of /dev/vda9 Dec 13 06:48:12.852226 systemd[1]: Started update-engine.service. Dec 13 06:48:12.861865 update_engine[1183]: I1213 06:48:12.843000 1183 main.cc:92] Flatcar Update Engine starting Dec 13 06:48:12.861865 update_engine[1183]: I1213 06:48:12.852249 1183 update_check_scheduler.cc:74] Next update check in 10m47s Dec 13 06:48:12.862250 bash[1220]: Updated "/home/core/.ssh/authorized_keys" Dec 13 06:48:12.859478 systemd[1]: Started locksmithd.service. Dec 13 06:48:12.863580 systemd[1]: Finished update-ssh-keys-after-ignition.service. 
Dec 13 06:48:12.880992 systemd-networkd[1028]: eth0: Gained IPv6LL Dec 13 06:48:12.885673 systemd[1]: Finished systemd-networkd-wait-online.service. Dec 13 06:48:12.886574 systemd[1]: Reached target network-online.target. Dec 13 06:48:12.890348 extend-filesystems[1174]: Resized partition /dev/vda9 Dec 13 06:48:12.892756 systemd[1]: Starting kubelet.service... Dec 13 06:48:12.900735 extend-filesystems[1225]: resize2fs 1.46.5 (30-Dec-2021) Dec 13 06:48:12.910384 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks Dec 13 06:48:12.957907 systemd-logind[1180]: Watching system buttons on /dev/input/event2 (Power Button) Dec 13 06:48:12.958540 systemd-logind[1180]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 06:48:12.959518 systemd-logind[1180]: New seat seat0. Dec 13 06:48:12.964173 systemd[1]: Started systemd-logind.service. Dec 13 06:48:13.027806 env[1189]: time="2024-12-13T06:48:13.026930977Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Dec 13 06:48:13.046999 dbus-daemon[1170]: [system] Successfully activated service 'org.freedesktop.hostname1' Dec 13 06:48:13.047198 systemd[1]: Started systemd-hostnamed.service. Dec 13 06:48:13.048838 dbus-daemon[1170]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1212 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Dec 13 06:48:13.053044 systemd[1]: Starting polkit.service... 
Dec 13 06:48:13.090197 polkitd[1231]: Started polkitd version 121 Dec 13 06:48:13.099384 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Dec 13 06:48:13.118353 extend-filesystems[1225]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 13 06:48:13.118353 extend-filesystems[1225]: old_desc_blocks = 1, new_desc_blocks = 8 Dec 13 06:48:13.118353 extend-filesystems[1225]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Dec 13 06:48:13.124586 extend-filesystems[1174]: Resized filesystem in /dev/vda9 Dec 13 06:48:13.120469 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 06:48:13.120717 systemd[1]: Finished extend-filesystems.service. Dec 13 06:48:13.127713 polkitd[1231]: Loading rules from directory /etc/polkit-1/rules.d Dec 13 06:48:13.127854 polkitd[1231]: Loading rules from directory /usr/share/polkit-1/rules.d Dec 13 06:48:13.132793 polkitd[1231]: Finished loading, compiling and executing 2 rules Dec 13 06:48:13.133420 dbus-daemon[1170]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Dec 13 06:48:13.133665 systemd[1]: Started polkit.service. Dec 13 06:48:13.135007 polkitd[1231]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Dec 13 06:48:13.158982 env[1189]: time="2024-12-13T06:48:13.158924639Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 06:48:13.159464 systemd-hostnamed[1212]: Hostname set to (static) Dec 13 06:48:13.160576 env[1189]: time="2024-12-13T06:48:13.160544251Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 06:48:13.166474 env[1189]: time="2024-12-13T06:48:13.166412647Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 06:48:13.166687 env[1189]: time="2024-12-13T06:48:13.166655705Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 06:48:13.167174 env[1189]: time="2024-12-13T06:48:13.167138019Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 06:48:13.167306 env[1189]: time="2024-12-13T06:48:13.167276592Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 06:48:13.167452 env[1189]: time="2024-12-13T06:48:13.167421153Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Dec 13 06:48:13.167576 env[1189]: time="2024-12-13T06:48:13.167547031Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 06:48:13.167802 env[1189]: time="2024-12-13T06:48:13.167772604Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 06:48:13.168462 env[1189]: time="2024-12-13T06:48:13.168433142Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 06:48:13.168740 env[1189]: time="2024-12-13T06:48:13.168705188Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 06:48:13.168989 env[1189]: time="2024-12-13T06:48:13.168958168Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 06:48:13.169248 env[1189]: time="2024-12-13T06:48:13.169217090Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Dec 13 06:48:13.169406 env[1189]: time="2024-12-13T06:48:13.169356229Z" level=info msg="metadata content store policy set" policy=shared Dec 13 06:48:13.179465 env[1189]: time="2024-12-13T06:48:13.179233500Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 06:48:13.179465 env[1189]: time="2024-12-13T06:48:13.179327104Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 06:48:13.179465 env[1189]: time="2024-12-13T06:48:13.179388120Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 06:48:13.180478 env[1189]: time="2024-12-13T06:48:13.179980407Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 06:48:13.180478 env[1189]: time="2024-12-13T06:48:13.180058808Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 06:48:13.180478 env[1189]: time="2024-12-13T06:48:13.180091987Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 06:48:13.180478 env[1189]: time="2024-12-13T06:48:13.180132767Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 Dec 13 06:48:13.180478 env[1189]: time="2024-12-13T06:48:13.180157813Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 06:48:13.180478 env[1189]: time="2024-12-13T06:48:13.180178542Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Dec 13 06:48:13.180478 env[1189]: time="2024-12-13T06:48:13.180218014Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 06:48:13.180478 env[1189]: time="2024-12-13T06:48:13.180242918Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 06:48:13.180478 env[1189]: time="2024-12-13T06:48:13.180290690Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 06:48:13.181341 env[1189]: time="2024-12-13T06:48:13.181047321Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 06:48:13.181341 env[1189]: time="2024-12-13T06:48:13.181267684Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 06:48:13.181837 env[1189]: time="2024-12-13T06:48:13.181792808Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 06:48:13.182045 env[1189]: time="2024-12-13T06:48:13.182013731Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 06:48:13.182180 env[1189]: time="2024-12-13T06:48:13.182149249Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 06:48:13.182405 env[1189]: time="2024-12-13T06:48:13.182375369Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Dec 13 06:48:13.182610 env[1189]: time="2024-12-13T06:48:13.182580728Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 06:48:13.182776 env[1189]: time="2024-12-13T06:48:13.182746591Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 06:48:13.183534 env[1189]: time="2024-12-13T06:48:13.183502287Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 06:48:13.183659 env[1189]: time="2024-12-13T06:48:13.183629814Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 06:48:13.183777 env[1189]: time="2024-12-13T06:48:13.183748351Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 06:48:13.183925 env[1189]: time="2024-12-13T06:48:13.183895799Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 06:48:13.184095 env[1189]: time="2024-12-13T06:48:13.184067169Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 06:48:13.184251 env[1189]: time="2024-12-13T06:48:13.184221570Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 06:48:13.184603 env[1189]: time="2024-12-13T06:48:13.184573409Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 06:48:13.184726 env[1189]: time="2024-12-13T06:48:13.184696719Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 06:48:13.184881 env[1189]: time="2024-12-13T06:48:13.184850744Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Dec 13 06:48:13.185044 env[1189]: time="2024-12-13T06:48:13.185014539Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 06:48:13.185171 env[1189]: time="2024-12-13T06:48:13.185137946Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Dec 13 06:48:13.185309 env[1189]: time="2024-12-13T06:48:13.185280092Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 06:48:13.185473 env[1189]: time="2024-12-13T06:48:13.185440657Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Dec 13 06:48:13.185640 env[1189]: time="2024-12-13T06:48:13.185610064Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Dec 13 06:48:13.186106 env[1189]: time="2024-12-13T06:48:13.186021327Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin 
NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 06:48:13.188418 env[1189]: time="2024-12-13T06:48:13.186332632Z" level=info msg="Connect containerd service" Dec 13 06:48:13.189521 env[1189]: time="2024-12-13T06:48:13.189487585Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 06:48:13.190642 env[1189]: time="2024-12-13T06:48:13.190604743Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 06:48:13.191943 env[1189]: time="2024-12-13T06:48:13.191912603Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 06:48:13.194542 env[1189]: time="2024-12-13T06:48:13.194511663Z" level=info msg=serving... 
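The `failed to load cni during init` error above is expected on first boot: per the dumped CRI config (`NetworkPluginConfDir:/etc/cni/net.d`, `NetworkPluginMaxConfNum:1`), the plugin looks for a network conf file and finds none until a CNI provider installs one. Purely as an illustration, a minimal bridge conflist of the kind that would satisfy this check might look like the following (the file name, network name, bridge name, and subnet are all made up):

```json
{
  "cniVersion": "0.4.0",
  "name": "example-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.88.0.0/16"
      }
    }
  ]
}
```

In practice this file is dropped into `/etc/cni/net.d/` by whatever network add-on the cluster uses, not written by hand.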
address=/run/containerd/containerd.sock Dec 13 06:48:13.194799 env[1189]: time="2024-12-13T06:48:13.194770395Z" level=info msg="containerd successfully booted in 0.177505s" Dec 13 06:48:13.194883 systemd[1]: Started containerd.service. Dec 13 06:48:13.203134 env[1189]: time="2024-12-13T06:48:13.203058576Z" level=info msg="Start subscribing containerd event" Dec 13 06:48:13.203270 env[1189]: time="2024-12-13T06:48:13.203208792Z" level=info msg="Start recovering state" Dec 13 06:48:13.203555 env[1189]: time="2024-12-13T06:48:13.203522988Z" level=info msg="Start event monitor" Dec 13 06:48:13.203816 env[1189]: time="2024-12-13T06:48:13.203784534Z" level=info msg="Start snapshots syncer" Dec 13 06:48:13.203927 env[1189]: time="2024-12-13T06:48:13.203829054Z" level=info msg="Start cni network conf syncer for default" Dec 13 06:48:13.203927 env[1189]: time="2024-12-13T06:48:13.203852272Z" level=info msg="Start streaming server" Dec 13 06:48:13.623115 tar[1186]: linux-amd64/LICENSE Dec 13 06:48:13.624374 tar[1186]: linux-amd64/README.md Dec 13 06:48:13.631883 systemd[1]: Finished prepare-helm.service. Dec 13 06:48:13.826401 locksmithd[1222]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 06:48:14.174114 systemd[1]: Started kubelet.service. Dec 13 06:48:14.314169 systemd-networkd[1028]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8500:24:19ff:fee6:1402/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8500:24:19ff:fee6:1402/64 assigned by NDisc. Dec 13 06:48:14.314187 systemd-networkd[1028]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. 
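The systemd-networkd hint at 06:48:14 names its own remedies: either `IPv6Token=` or `UseAutonomousPrefix=no` in the interface's `.network` file. A sketch of the second option, assuming the interface is matched by name as `eth0` (the interface the log reports):

```ini
[Match]
Name=eth0

[Network]
DHCP=yes

[IPv6AcceptRA]
# Stop deriving SLAAC addresses from the advertised prefix, so the
# DHCPv6-assigned /128 no longer conflicts with the NDisc-derived /64.
UseAutonomousPrefix=no
```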
Dec 13 06:48:15.026964 kubelet[1250]: E1213 06:48:15.026818 1250 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 06:48:15.029615 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 06:48:15.029899 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 06:48:15.030448 systemd[1]: kubelet.service: Consumed 1.141s CPU time. Dec 13 06:48:15.243271 sshd_keygen[1190]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 06:48:15.270711 systemd[1]: Finished sshd-keygen.service. Dec 13 06:48:15.274341 systemd[1]: Starting issuegen.service... Dec 13 06:48:15.281772 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 06:48:15.281999 systemd[1]: Finished issuegen.service. Dec 13 06:48:15.284879 systemd[1]: Starting systemd-user-sessions.service... Dec 13 06:48:15.294467 systemd[1]: Finished systemd-user-sessions.service. Dec 13 06:48:15.297742 systemd[1]: Started getty@tty1.service. Dec 13 06:48:15.300650 systemd[1]: Started serial-getty@ttyS0.service. Dec 13 06:48:15.301708 systemd[1]: Reached target getty.target. 
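The kubelet exit at 06:48:15 (and each retry that follows) is the standard pre-join failure: `/var/lib/kubelet/config.yaml` is written by `kubeadm init` or `kubeadm join`, so until one of those runs the unit can only crash-loop on the missing file. A minimal probe mirroring the check, with the path taken verbatim from the error message:

```shell
# Probe the same file kubelet exits on; kubeadm init/join creates it.
CFG=/var/lib/kubelet/config.yaml
if [ -f "$CFG" ]; then
  echo "kubelet config present"
else
  echo "kubelet config missing - node not yet initialized/joined"
fi
```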
Dec 13 06:48:19.811049 coreos-metadata[1169]: Dec 13 06:48:19.810 WARN failed to locate config-drive, using the metadata service API instead Dec 13 06:48:19.863397 coreos-metadata[1169]: Dec 13 06:48:19.863 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Dec 13 06:48:19.908144 coreos-metadata[1169]: Dec 13 06:48:19.907 INFO Fetch successful Dec 13 06:48:19.908762 coreos-metadata[1169]: Dec 13 06:48:19.908 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Dec 13 06:48:19.946740 coreos-metadata[1169]: Dec 13 06:48:19.946 INFO Fetch successful Dec 13 06:48:19.951746 unknown[1169]: wrote ssh authorized keys file for user: core Dec 13 06:48:19.967417 update-ssh-keys[1273]: Updated "/home/core/.ssh/authorized_keys" Dec 13 06:48:19.969089 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Dec 13 06:48:19.970427 systemd[1]: Reached target multi-user.target. Dec 13 06:48:19.973783 systemd[1]: Starting systemd-update-utmp-runlevel.service... Dec 13 06:48:19.986245 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Dec 13 06:48:19.986544 systemd[1]: Finished systemd-update-utmp-runlevel.service. Dec 13 06:48:19.987998 systemd[1]: Startup finished in 1.150s (kernel) + 8.021s (initrd) + 13.715s (userspace) = 22.886s. Dec 13 06:48:22.899162 systemd[1]: Created slice system-sshd.slice. Dec 13 06:48:22.902520 systemd[1]: Started sshd@0-10.230.20.2:22-139.178.89.65:54586.service. Dec 13 06:48:23.816043 sshd[1276]: Accepted publickey for core from 139.178.89.65 port 54586 ssh2: RSA SHA256:dQnQ6z9Pj/RNX8sNR4TqdGn8nHqynNIoEP6sXMH78jY Dec 13 06:48:23.820014 sshd[1276]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 06:48:23.836353 systemd[1]: Created slice user-500.slice. Dec 13 06:48:23.838389 systemd[1]: Starting user-runtime-dir@500.service... Dec 13 06:48:23.845469 systemd-logind[1180]: New session 1 of user core. 
Dec 13 06:48:23.855250 systemd[1]: Finished user-runtime-dir@500.service. Dec 13 06:48:23.859124 systemd[1]: Starting user@500.service... Dec 13 06:48:23.864920 (systemd)[1279]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 06:48:23.969909 systemd[1279]: Queued start job for default target default.target. Dec 13 06:48:23.971440 systemd[1279]: Reached target paths.target. Dec 13 06:48:23.971644 systemd[1279]: Reached target sockets.target. Dec 13 06:48:23.972055 systemd[1279]: Reached target timers.target. Dec 13 06:48:23.972237 systemd[1279]: Reached target basic.target. Dec 13 06:48:23.972472 systemd[1279]: Reached target default.target. Dec 13 06:48:23.972603 systemd[1]: Started user@500.service. Dec 13 06:48:23.973000 systemd[1279]: Startup finished in 97ms. Dec 13 06:48:23.974330 systemd[1]: Started session-1.scope. Dec 13 06:48:24.602072 systemd[1]: Started sshd@1-10.230.20.2:22-139.178.89.65:54588.service. Dec 13 06:48:25.053028 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 06:48:25.053332 systemd[1]: Stopped kubelet.service. Dec 13 06:48:25.053429 systemd[1]: kubelet.service: Consumed 1.141s CPU time. Dec 13 06:48:25.055833 systemd[1]: Starting kubelet.service... Dec 13 06:48:25.200415 systemd[1]: Started kubelet.service. Dec 13 06:48:25.322855 kubelet[1294]: E1213 06:48:25.322693 1294 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 06:48:25.327167 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 06:48:25.327406 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Dec 13 06:48:25.488984 sshd[1288]: Accepted publickey for core from 139.178.89.65 port 54588 ssh2: RSA SHA256:dQnQ6z9Pj/RNX8sNR4TqdGn8nHqynNIoEP6sXMH78jY Dec 13 06:48:25.491377 sshd[1288]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 06:48:25.498575 systemd-logind[1180]: New session 2 of user core. Dec 13 06:48:25.499355 systemd[1]: Started session-2.scope. Dec 13 06:48:26.110854 sshd[1288]: pam_unix(sshd:session): session closed for user core Dec 13 06:48:26.115009 systemd[1]: sshd@1-10.230.20.2:22-139.178.89.65:54588.service: Deactivated successfully. Dec 13 06:48:26.115972 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 06:48:26.116868 systemd-logind[1180]: Session 2 logged out. Waiting for processes to exit. Dec 13 06:48:26.118260 systemd-logind[1180]: Removed session 2. Dec 13 06:48:26.257642 systemd[1]: Started sshd@2-10.230.20.2:22-139.178.89.65:54592.service. Dec 13 06:48:27.145449 sshd[1305]: Accepted publickey for core from 139.178.89.65 port 54592 ssh2: RSA SHA256:dQnQ6z9Pj/RNX8sNR4TqdGn8nHqynNIoEP6sXMH78jY Dec 13 06:48:27.147838 sshd[1305]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 06:48:27.156396 systemd-logind[1180]: New session 3 of user core. Dec 13 06:48:27.157206 systemd[1]: Started session-3.scope. Dec 13 06:48:27.758862 sshd[1305]: pam_unix(sshd:session): session closed for user core Dec 13 06:48:27.762312 systemd[1]: sshd@2-10.230.20.2:22-139.178.89.65:54592.service: Deactivated successfully. Dec 13 06:48:27.763423 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 06:48:27.764303 systemd-logind[1180]: Session 3 logged out. Waiting for processes to exit. Dec 13 06:48:27.765900 systemd-logind[1180]: Removed session 3. Dec 13 06:48:27.905253 systemd[1]: Started sshd@3-10.230.20.2:22-139.178.89.65:54608.service. 
Dec 13 06:48:28.788322 sshd[1311]: Accepted publickey for core from 139.178.89.65 port 54608 ssh2: RSA SHA256:dQnQ6z9Pj/RNX8sNR4TqdGn8nHqynNIoEP6sXMH78jY Dec 13 06:48:28.790167 sshd[1311]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 06:48:28.797068 systemd-logind[1180]: New session 4 of user core. Dec 13 06:48:28.797882 systemd[1]: Started session-4.scope. Dec 13 06:48:29.404746 sshd[1311]: pam_unix(sshd:session): session closed for user core Dec 13 06:48:29.408914 systemd-logind[1180]: Session 4 logged out. Waiting for processes to exit. Dec 13 06:48:29.410959 systemd[1]: sshd@3-10.230.20.2:22-139.178.89.65:54608.service: Deactivated successfully. Dec 13 06:48:29.411881 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 06:48:29.412751 systemd-logind[1180]: Removed session 4. Dec 13 06:48:29.552738 systemd[1]: Started sshd@4-10.230.20.2:22-139.178.89.65:45538.service. Dec 13 06:48:30.442565 sshd[1317]: Accepted publickey for core from 139.178.89.65 port 45538 ssh2: RSA SHA256:dQnQ6z9Pj/RNX8sNR4TqdGn8nHqynNIoEP6sXMH78jY Dec 13 06:48:30.445646 sshd[1317]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 06:48:30.453939 systemd[1]: Started session-5.scope. Dec 13 06:48:30.454995 systemd-logind[1180]: New session 5 of user core. Dec 13 06:48:30.935680 sudo[1320]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 06:48:30.936082 sudo[1320]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 06:48:30.976781 systemd[1]: Starting docker.service... 
Dec 13 06:48:31.038695 env[1330]: time="2024-12-13T06:48:31.038573854Z" level=info msg="Starting up" Dec 13 06:48:31.041115 env[1330]: time="2024-12-13T06:48:31.041075052Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 13 06:48:31.041115 env[1330]: time="2024-12-13T06:48:31.041107239Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 13 06:48:31.041268 env[1330]: time="2024-12-13T06:48:31.041135074Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Dec 13 06:48:31.041268 env[1330]: time="2024-12-13T06:48:31.041158753Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 13 06:48:31.044951 env[1330]: time="2024-12-13T06:48:31.044917879Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 13 06:48:31.045086 env[1330]: time="2024-12-13T06:48:31.045058025Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 13 06:48:31.045207 env[1330]: time="2024-12-13T06:48:31.045176174Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Dec 13 06:48:31.045340 env[1330]: time="2024-12-13T06:48:31.045313035Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 13 06:48:31.077208 env[1330]: time="2024-12-13T06:48:31.077154325Z" level=info msg="Loading containers: start." Dec 13 06:48:31.246447 kernel: Initializing XFRM netlink socket Dec 13 06:48:31.292644 env[1330]: time="2024-12-13T06:48:31.292579839Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Dec 13 06:48:31.388789 systemd-networkd[1028]: docker0: Link UP Dec 13 06:48:31.417296 env[1330]: time="2024-12-13T06:48:31.417247082Z" level=info msg="Loading containers: done." 
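The daemon's note about the default `docker0` bridge taking 172.17.0.0/16 points at the `--bip` option; the same setting is more commonly carried in `/etc/docker/daemon.json`. A sketch, with an arbitrary example subnet chosen to avoid the default range:

```json
{
  "bip": "172.26.0.1/16"
}
```

Moving the bridge off 172.17.0.0/16 matters mainly when that range collides with an existing LAN or VPN route on the host.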
Dec 13 06:48:31.437765 env[1330]: time="2024-12-13T06:48:31.437694300Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 06:48:31.438042 env[1330]: time="2024-12-13T06:48:31.438002605Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Dec 13 06:48:31.438199 env[1330]: time="2024-12-13T06:48:31.438165478Z" level=info msg="Daemon has completed initialization" Dec 13 06:48:31.459557 systemd[1]: Started docker.service. Dec 13 06:48:31.469522 env[1330]: time="2024-12-13T06:48:31.469442177Z" level=info msg="API listen on /run/docker.sock" Dec 13 06:48:32.870749 env[1189]: time="2024-12-13T06:48:32.870302987Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Dec 13 06:48:33.655751 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount975129407.mount: Deactivated successfully. Dec 13 06:48:35.553484 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 06:48:35.553895 systemd[1]: Stopped kubelet.service. Dec 13 06:48:35.557079 systemd[1]: Starting kubelet.service... Dec 13 06:48:35.711267 systemd[1]: Started kubelet.service. Dec 13 06:48:35.830341 kubelet[1468]: E1213 06:48:35.829602 1468 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 06:48:35.831589 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 06:48:35.831822 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Dec 13 06:48:36.461859 env[1189]: time="2024-12-13T06:48:36.461729179Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:48:36.464103 env[1189]: time="2024-12-13T06:48:36.464063151Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:48:36.466521 env[1189]: time="2024-12-13T06:48:36.466483324Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:48:36.468937 env[1189]: time="2024-12-13T06:48:36.468896781Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:48:36.472696 env[1189]: time="2024-12-13T06:48:36.470265048Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\"" Dec 13 06:48:36.486534 env[1189]: time="2024-12-13T06:48:36.486477440Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\"" Dec 13 06:48:40.588063 env[1189]: time="2024-12-13T06:48:40.587871357Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:48:40.591350 env[1189]: time="2024-12-13T06:48:40.591287991Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Dec 13 06:48:40.594017 env[1189]: time="2024-12-13T06:48:40.593971974Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:48:40.596386 env[1189]: time="2024-12-13T06:48:40.596338264Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:48:40.597621 env[1189]: time="2024-12-13T06:48:40.597583570Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\"" Dec 13 06:48:40.616138 env[1189]: time="2024-12-13T06:48:40.616081990Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\"" Dec 13 06:48:43.437518 env[1189]: time="2024-12-13T06:48:43.437282777Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:48:43.440204 env[1189]: time="2024-12-13T06:48:43.440154300Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:48:43.443145 env[1189]: time="2024-12-13T06:48:43.443100285Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:48:43.446046 env[1189]: time="2024-12-13T06:48:43.446005965Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:48:43.447546 env[1189]: time="2024-12-13T06:48:43.447488200Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\"" Dec 13 06:48:43.465211 env[1189]: time="2024-12-13T06:48:43.465146412Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Dec 13 06:48:44.333018 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Dec 13 06:48:45.537606 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount922140819.mount: Deactivated successfully. Dec 13 06:48:46.058637 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 13 06:48:46.059151 systemd[1]: Stopped kubelet.service. Dec 13 06:48:46.064733 systemd[1]: Starting kubelet.service... Dec 13 06:48:46.243839 systemd[1]: Started kubelet.service. Dec 13 06:48:46.359034 kubelet[1496]: E1213 06:48:46.358135 1496 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 06:48:46.360418 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 06:48:46.360673 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Dec 13 06:48:46.596121 env[1189]: time="2024-12-13T06:48:46.595942954Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:48:46.599128 env[1189]: time="2024-12-13T06:48:46.599085305Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:48:46.602293 env[1189]: time="2024-12-13T06:48:46.602259490Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:48:46.603712 env[1189]: time="2024-12-13T06:48:46.603670310Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:48:46.604642 env[1189]: time="2024-12-13T06:48:46.604601767Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\"" Dec 13 06:48:46.623032 env[1189]: time="2024-12-13T06:48:46.622757662Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 06:48:47.332059 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4238176968.mount: Deactivated successfully. 
Dec 13 06:48:48.895954 env[1189]: time="2024-12-13T06:48:48.895827886Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:48:48.898762 env[1189]: time="2024-12-13T06:48:48.898719950Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:48:48.901298 env[1189]: time="2024-12-13T06:48:48.901248434Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:48:48.903794 env[1189]: time="2024-12-13T06:48:48.903753236Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:48:48.904992 env[1189]: time="2024-12-13T06:48:48.904933380Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Dec 13 06:48:48.922526 env[1189]: time="2024-12-13T06:48:48.922444696Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Dec 13 06:48:49.523329 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2291142554.mount: Deactivated successfully. 
Dec 13 06:48:49.542830 env[1189]: time="2024-12-13T06:48:49.542717704Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:48:49.545157 env[1189]: time="2024-12-13T06:48:49.545110480Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:48:49.547231 env[1189]: time="2024-12-13T06:48:49.547192612Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:48:49.549259 env[1189]: time="2024-12-13T06:48:49.549216432Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:48:49.550289 env[1189]: time="2024-12-13T06:48:49.550228620Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Dec 13 06:48:49.568072 env[1189]: time="2024-12-13T06:48:49.568002602Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Dec 13 06:48:50.236839 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1025525899.mount: Deactivated successfully. Dec 13 06:48:56.555325 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Dec 13 06:48:56.555957 systemd[1]: Stopped kubelet.service. Dec 13 06:48:56.563139 systemd[1]: Starting kubelet.service... 
Dec 13 06:48:57.346536 env[1189]: time="2024-12-13T06:48:57.346213911Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:48:57.372249 env[1189]: time="2024-12-13T06:48:57.372164587Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:48:57.386533 env[1189]: time="2024-12-13T06:48:57.386088611Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:48:57.392135 env[1189]: time="2024-12-13T06:48:57.391676694Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:48:57.393485 env[1189]: time="2024-12-13T06:48:57.393402424Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Dec 13 06:48:57.636582 systemd[1]: Started kubelet.service. Dec 13 06:48:57.760075 kubelet[1528]: E1213 06:48:57.759978 1528 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 06:48:57.762710 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 06:48:57.762950 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Dec 13 06:48:58.116475 update_engine[1183]: I1213 06:48:58.116294 1183 update_attempter.cc:509] Updating boot flags... Dec 13 06:49:01.482568 systemd[1]: Stopped kubelet.service. Dec 13 06:49:01.486613 systemd[1]: Starting kubelet.service... Dec 13 06:49:01.523057 systemd[1]: Reloading. Dec 13 06:49:01.688632 /usr/lib/systemd/system-generators/torcx-generator[1628]: time="2024-12-13T06:49:01Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 06:49:01.690242 /usr/lib/systemd/system-generators/torcx-generator[1628]: time="2024-12-13T06:49:01Z" level=info msg="torcx already run" Dec 13 06:49:01.820715 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 06:49:01.820758 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 06:49:01.849082 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 06:49:01.999109 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 13 06:49:01.999229 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 13 06:49:01.999580 systemd[1]: Stopped kubelet.service. Dec 13 06:49:02.002131 systemd[1]: Starting kubelet.service... Dec 13 06:49:02.172597 systemd[1]: Started kubelet.service. Dec 13 06:49:02.274163 kubelet[1682]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 06:49:02.274163 kubelet[1682]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 06:49:02.274163 kubelet[1682]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 06:49:02.274920 kubelet[1682]: I1213 06:49:02.274248 1682 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 06:49:03.444650 kubelet[1682]: I1213 06:49:03.444602 1682 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 06:49:03.445333 kubelet[1682]: I1213 06:49:03.445306 1682 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 06:49:03.445779 kubelet[1682]: I1213 06:49:03.445752 1682 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 06:49:03.476421 kubelet[1682]: E1213 06:49:03.476357 1682 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.230.20.2:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.230.20.2:6443: connect: connection refused Dec 13 06:49:03.476844 kubelet[1682]: I1213 06:49:03.476814 1682 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 06:49:03.491920 kubelet[1682]: I1213 06:49:03.491879 1682 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 06:49:03.492630 kubelet[1682]: I1213 06:49:03.492604 1682 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 06:49:03.493010 kubelet[1682]: I1213 06:49:03.492961 1682 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 06:49:03.494088 kubelet[1682]: I1213 06:49:03.494056 1682 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 06:49:03.494234 kubelet[1682]: I1213 06:49:03.494208 1682 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 06:49:03.494618 kubelet[1682]: I1213 
06:49:03.494582 1682 state_mem.go:36] "Initialized new in-memory state store" Dec 13 06:49:03.494958 kubelet[1682]: I1213 06:49:03.494933 1682 kubelet.go:396] "Attempting to sync node with API server" Dec 13 06:49:03.495118 kubelet[1682]: I1213 06:49:03.495093 1682 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 06:49:03.495312 kubelet[1682]: I1213 06:49:03.495286 1682 kubelet.go:312] "Adding apiserver pod source" Dec 13 06:49:03.495517 kubelet[1682]: I1213 06:49:03.495492 1682 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 06:49:03.497841 kubelet[1682]: W1213 06:49:03.497785 1682 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.230.20.2:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-uleoy.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.20.2:6443: connect: connection refused Dec 13 06:49:03.498027 kubelet[1682]: E1213 06:49:03.498000 1682 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.230.20.2:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-uleoy.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.20.2:6443: connect: connection refused Dec 13 06:49:03.498286 kubelet[1682]: I1213 06:49:03.498259 1682 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 06:49:03.507720 kubelet[1682]: W1213 06:49:03.507650 1682 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.230.20.2:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.230.20.2:6443: connect: connection refused Dec 13 06:49:03.507953 kubelet[1682]: E1213 06:49:03.507928 1682 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get 
"https://10.230.20.2:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.230.20.2:6443: connect: connection refused Dec 13 06:49:03.508505 kubelet[1682]: I1213 06:49:03.508468 1682 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 06:49:03.508755 kubelet[1682]: W1213 06:49:03.508731 1682 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 06:49:03.510543 kubelet[1682]: I1213 06:49:03.510517 1682 server.go:1256] "Started kubelet" Dec 13 06:49:03.512482 kubelet[1682]: I1213 06:49:03.512439 1682 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 06:49:03.514628 kubelet[1682]: I1213 06:49:03.514582 1682 server.go:461] "Adding debug handlers to kubelet server" Dec 13 06:49:03.524602 kubelet[1682]: I1213 06:49:03.524561 1682 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 06:49:03.525202 kubelet[1682]: I1213 06:49:03.525176 1682 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 06:49:03.529507 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Dec 13 06:49:03.530445 kubelet[1682]: I1213 06:49:03.530239 1682 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 06:49:03.534498 kubelet[1682]: E1213 06:49:03.534463 1682 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.20.2:6443/api/v1/namespaces/default/events\": dial tcp 10.230.20.2:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-uleoy.gb1.brightbox.com.1810a9ca72b338c3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-uleoy.gb1.brightbox.com,UID:srv-uleoy.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-uleoy.gb1.brightbox.com,},FirstTimestamp:2024-12-13 06:49:03.510476995 +0000 UTC m=+1.328048552,LastTimestamp:2024-12-13 06:49:03.510476995 +0000 UTC m=+1.328048552,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-uleoy.gb1.brightbox.com,}" Dec 13 06:49:03.540879 kubelet[1682]: I1213 06:49:03.540845 1682 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 06:49:03.542617 kubelet[1682]: E1213 06:49:03.542590 1682 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.20.2:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-uleoy.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.20.2:6443: connect: connection refused" interval="200ms" Dec 13 06:49:03.543199 kubelet[1682]: I1213 06:49:03.543171 1682 factory.go:221] Registration of the systemd container factory successfully Dec 13 06:49:03.543472 kubelet[1682]: I1213 06:49:03.543443 1682 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 06:49:03.544881 kubelet[1682]: I1213 
06:49:03.544857 1682 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 06:49:03.545499 kubelet[1682]: W1213 06:49:03.545455 1682 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.230.20.2:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.20.2:6443: connect: connection refused Dec 13 06:49:03.545662 kubelet[1682]: E1213 06:49:03.545637 1682 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.230.20.2:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.20.2:6443: connect: connection refused Dec 13 06:49:03.545890 kubelet[1682]: E1213 06:49:03.545864 1682 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 06:49:03.546197 kubelet[1682]: I1213 06:49:03.546171 1682 factory.go:221] Registration of the containerd container factory successfully Dec 13 06:49:03.546701 kubelet[1682]: I1213 06:49:03.546671 1682 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 06:49:03.568133 kubelet[1682]: I1213 06:49:03.568092 1682 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 06:49:03.570790 kubelet[1682]: I1213 06:49:03.570753 1682 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 06:49:03.570888 kubelet[1682]: I1213 06:49:03.570810 1682 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 06:49:03.570888 kubelet[1682]: I1213 06:49:03.570845 1682 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 06:49:03.571031 kubelet[1682]: E1213 06:49:03.570928 1682 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 06:49:03.573859 kubelet[1682]: W1213 06:49:03.573814 1682 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.230.20.2:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.20.2:6443: connect: connection refused Dec 13 06:49:03.574016 kubelet[1682]: E1213 06:49:03.573991 1682 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.230.20.2:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.20.2:6443: connect: connection refused Dec 13 06:49:03.585155 kubelet[1682]: I1213 06:49:03.585110 1682 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 06:49:03.585155 kubelet[1682]: I1213 06:49:03.585154 1682 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 06:49:03.585448 kubelet[1682]: I1213 06:49:03.585190 1682 state_mem.go:36] "Initialized new in-memory state store" Dec 13 06:49:03.589908 kubelet[1682]: I1213 06:49:03.589808 1682 policy_none.go:49] "None policy: Start" Dec 13 06:49:03.590941 kubelet[1682]: I1213 06:49:03.590903 1682 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 06:49:03.591036 kubelet[1682]: I1213 06:49:03.590953 1682 state_mem.go:35] "Initializing new in-memory state store" Dec 13 06:49:03.603743 systemd[1]: Created slice kubepods.slice. 
Dec 13 06:49:03.611224 systemd[1]: Created slice kubepods-burstable.slice. Dec 13 06:49:03.615903 systemd[1]: Created slice kubepods-besteffort.slice. Dec 13 06:49:03.622761 kubelet[1682]: I1213 06:49:03.622728 1682 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 06:49:03.623592 kubelet[1682]: I1213 06:49:03.623568 1682 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 06:49:03.626512 kubelet[1682]: E1213 06:49:03.626469 1682 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-uleoy.gb1.brightbox.com\" not found" Dec 13 06:49:03.644663 kubelet[1682]: I1213 06:49:03.644625 1682 kubelet_node_status.go:73] "Attempting to register node" node="srv-uleoy.gb1.brightbox.com" Dec 13 06:49:03.645382 kubelet[1682]: E1213 06:49:03.645338 1682 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.230.20.2:6443/api/v1/nodes\": dial tcp 10.230.20.2:6443: connect: connection refused" node="srv-uleoy.gb1.brightbox.com" Dec 13 06:49:03.671967 kubelet[1682]: I1213 06:49:03.671847 1682 topology_manager.go:215] "Topology Admit Handler" podUID="0e92f47bf886795197d13a73ac4e2e44" podNamespace="kube-system" podName="kube-apiserver-srv-uleoy.gb1.brightbox.com" Dec 13 06:49:03.677196 kubelet[1682]: I1213 06:49:03.677164 1682 topology_manager.go:215] "Topology Admit Handler" podUID="66ec531e0194d5eb3d0756b41da48ff3" podNamespace="kube-system" podName="kube-controller-manager-srv-uleoy.gb1.brightbox.com" Dec 13 06:49:03.681157 kubelet[1682]: I1213 06:49:03.681131 1682 topology_manager.go:215] "Topology Admit Handler" podUID="7e8c3aa22c026420233fefe404f253fe" podNamespace="kube-system" podName="kube-scheduler-srv-uleoy.gb1.brightbox.com" Dec 13 06:49:03.686830 systemd[1]: Created slice kubepods-burstable-pod0e92f47bf886795197d13a73ac4e2e44.slice. 
Dec 13 06:49:03.702946 systemd[1]: Created slice kubepods-burstable-pod66ec531e0194d5eb3d0756b41da48ff3.slice. Dec 13 06:49:03.716013 systemd[1]: Created slice kubepods-burstable-pod7e8c3aa22c026420233fefe404f253fe.slice. Dec 13 06:49:03.743771 kubelet[1682]: E1213 06:49:03.743692 1682 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.20.2:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-uleoy.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.20.2:6443: connect: connection refused" interval="400ms" Dec 13 06:49:03.748576 kubelet[1682]: I1213 06:49:03.748499 1682 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0e92f47bf886795197d13a73ac4e2e44-usr-share-ca-certificates\") pod \"kube-apiserver-srv-uleoy.gb1.brightbox.com\" (UID: \"0e92f47bf886795197d13a73ac4e2e44\") " pod="kube-system/kube-apiserver-srv-uleoy.gb1.brightbox.com" Dec 13 06:49:03.748576 kubelet[1682]: I1213 06:49:03.748586 1682 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66ec531e0194d5eb3d0756b41da48ff3-ca-certs\") pod \"kube-controller-manager-srv-uleoy.gb1.brightbox.com\" (UID: \"66ec531e0194d5eb3d0756b41da48ff3\") " pod="kube-system/kube-controller-manager-srv-uleoy.gb1.brightbox.com" Dec 13 06:49:03.748866 kubelet[1682]: I1213 06:49:03.748647 1682 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7e8c3aa22c026420233fefe404f253fe-kubeconfig\") pod \"kube-scheduler-srv-uleoy.gb1.brightbox.com\" (UID: \"7e8c3aa22c026420233fefe404f253fe\") " pod="kube-system/kube-scheduler-srv-uleoy.gb1.brightbox.com" Dec 13 06:49:03.748866 kubelet[1682]: I1213 06:49:03.748685 1682 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0e92f47bf886795197d13a73ac4e2e44-ca-certs\") pod \"kube-apiserver-srv-uleoy.gb1.brightbox.com\" (UID: \"0e92f47bf886795197d13a73ac4e2e44\") " pod="kube-system/kube-apiserver-srv-uleoy.gb1.brightbox.com" Dec 13 06:49:03.748866 kubelet[1682]: I1213 06:49:03.748740 1682 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/66ec531e0194d5eb3d0756b41da48ff3-flexvolume-dir\") pod \"kube-controller-manager-srv-uleoy.gb1.brightbox.com\" (UID: \"66ec531e0194d5eb3d0756b41da48ff3\") " pod="kube-system/kube-controller-manager-srv-uleoy.gb1.brightbox.com" Dec 13 06:49:03.748866 kubelet[1682]: I1213 06:49:03.748774 1682 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66ec531e0194d5eb3d0756b41da48ff3-k8s-certs\") pod \"kube-controller-manager-srv-uleoy.gb1.brightbox.com\" (UID: \"66ec531e0194d5eb3d0756b41da48ff3\") " pod="kube-system/kube-controller-manager-srv-uleoy.gb1.brightbox.com" Dec 13 06:49:03.748866 kubelet[1682]: I1213 06:49:03.748824 1682 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66ec531e0194d5eb3d0756b41da48ff3-kubeconfig\") pod \"kube-controller-manager-srv-uleoy.gb1.brightbox.com\" (UID: \"66ec531e0194d5eb3d0756b41da48ff3\") " pod="kube-system/kube-controller-manager-srv-uleoy.gb1.brightbox.com" Dec 13 06:49:03.749316 kubelet[1682]: I1213 06:49:03.748905 1682 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66ec531e0194d5eb3d0756b41da48ff3-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-uleoy.gb1.brightbox.com\" (UID: 
\"66ec531e0194d5eb3d0756b41da48ff3\") " pod="kube-system/kube-controller-manager-srv-uleoy.gb1.brightbox.com" Dec 13 06:49:03.749316 kubelet[1682]: I1213 06:49:03.748940 1682 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0e92f47bf886795197d13a73ac4e2e44-k8s-certs\") pod \"kube-apiserver-srv-uleoy.gb1.brightbox.com\" (UID: \"0e92f47bf886795197d13a73ac4e2e44\") " pod="kube-system/kube-apiserver-srv-uleoy.gb1.brightbox.com" Dec 13 06:49:03.848640 kubelet[1682]: I1213 06:49:03.848600 1682 kubelet_node_status.go:73] "Attempting to register node" node="srv-uleoy.gb1.brightbox.com" Dec 13 06:49:03.849315 kubelet[1682]: E1213 06:49:03.849286 1682 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.230.20.2:6443/api/v1/nodes\": dial tcp 10.230.20.2:6443: connect: connection refused" node="srv-uleoy.gb1.brightbox.com" Dec 13 06:49:04.001666 env[1189]: time="2024-12-13T06:49:04.001430087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-uleoy.gb1.brightbox.com,Uid:0e92f47bf886795197d13a73ac4e2e44,Namespace:kube-system,Attempt:0,}" Dec 13 06:49:04.010631 env[1189]: time="2024-12-13T06:49:04.010191460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-uleoy.gb1.brightbox.com,Uid:66ec531e0194d5eb3d0756b41da48ff3,Namespace:kube-system,Attempt:0,}" Dec 13 06:49:04.021075 env[1189]: time="2024-12-13T06:49:04.021012186Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-uleoy.gb1.brightbox.com,Uid:7e8c3aa22c026420233fefe404f253fe,Namespace:kube-system,Attempt:0,}" Dec 13 06:49:04.145040 kubelet[1682]: E1213 06:49:04.144990 1682 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.20.2:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-uleoy.gb1.brightbox.com?timeout=10s\": dial tcp 
10.230.20.2:6443: connect: connection refused" interval="800ms" Dec 13 06:49:04.253908 kubelet[1682]: I1213 06:49:04.253770 1682 kubelet_node_status.go:73] "Attempting to register node" node="srv-uleoy.gb1.brightbox.com" Dec 13 06:49:04.254830 kubelet[1682]: E1213 06:49:04.254796 1682 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.230.20.2:6443/api/v1/nodes\": dial tcp 10.230.20.2:6443: connect: connection refused" node="srv-uleoy.gb1.brightbox.com" Dec 13 06:49:04.641320 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4156972569.mount: Deactivated successfully. Dec 13 06:49:04.649474 env[1189]: time="2024-12-13T06:49:04.649421235Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:49:04.650767 env[1189]: time="2024-12-13T06:49:04.650723075Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:49:04.652839 env[1189]: time="2024-12-13T06:49:04.652800310Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:49:04.654347 env[1189]: time="2024-12-13T06:49:04.654311813Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:49:04.658507 env[1189]: time="2024-12-13T06:49:04.658470112Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:49:04.660585 env[1189]: time="2024-12-13T06:49:04.660548250Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:49:04.663858 env[1189]: time="2024-12-13T06:49:04.663822147Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:49:04.665336 env[1189]: time="2024-12-13T06:49:04.665299544Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:49:04.667922 env[1189]: time="2024-12-13T06:49:04.667875692Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:49:04.671198 env[1189]: time="2024-12-13T06:49:04.671161485Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:49:04.677632 env[1189]: time="2024-12-13T06:49:04.677594510Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:49:04.681372 env[1189]: time="2024-12-13T06:49:04.681319995Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:49:04.723451 env[1189]: time="2024-12-13T06:49:04.723291413Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 06:49:04.723839 env[1189]: time="2024-12-13T06:49:04.723387795Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 06:49:04.723839 env[1189]: time="2024-12-13T06:49:04.723539259Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 06:49:04.724409 env[1189]: time="2024-12-13T06:49:04.723839923Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ea285c267c38aed86b9a5b28c3982b2a54076366b5d7aa351562eb753bce8e2e pid=1730 runtime=io.containerd.runc.v2 Dec 13 06:49:04.732910 env[1189]: time="2024-12-13T06:49:04.732829664Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 06:49:04.733112 env[1189]: time="2024-12-13T06:49:04.732886456Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 06:49:04.733112 env[1189]: time="2024-12-13T06:49:04.733085168Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 06:49:04.733765 env[1189]: time="2024-12-13T06:49:04.733607916Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/676a7942806e8419326d280d61d6c7817209917c0b91756f7ac41c25e99cd7cf pid=1731 runtime=io.containerd.runc.v2 Dec 13 06:49:04.762774 systemd[1]: Started cri-containerd-ea285c267c38aed86b9a5b28c3982b2a54076366b5d7aa351562eb753bce8e2e.scope. Dec 13 06:49:04.780820 systemd[1]: Started cri-containerd-676a7942806e8419326d280d61d6c7817209917c0b91756f7ac41c25e99cd7cf.scope. 
Dec 13 06:49:04.786851 env[1189]: time="2024-12-13T06:49:04.786725907Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 06:49:04.786851 env[1189]: time="2024-12-13T06:49:04.786803605Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 06:49:04.787186 env[1189]: time="2024-12-13T06:49:04.786823075Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 06:49:04.788626 env[1189]: time="2024-12-13T06:49:04.788563232Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3df3a2bd1e3ac6b7d169db4b7ddf80c19297a823b883f30097848552df64e57c pid=1764 runtime=io.containerd.runc.v2 Dec 13 06:49:04.810000 systemd[1]: Started cri-containerd-3df3a2bd1e3ac6b7d169db4b7ddf80c19297a823b883f30097848552df64e57c.scope. 
Dec 13 06:49:04.895573 kubelet[1682]: W1213 06:49:04.895287 1682 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.230.20.2:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-uleoy.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.20.2:6443: connect: connection refused Dec 13 06:49:04.895573 kubelet[1682]: E1213 06:49:04.895397 1682 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.230.20.2:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-uleoy.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.20.2:6443: connect: connection refused Dec 13 06:49:04.906385 env[1189]: time="2024-12-13T06:49:04.906297909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-uleoy.gb1.brightbox.com,Uid:0e92f47bf886795197d13a73ac4e2e44,Namespace:kube-system,Attempt:0,} returns sandbox id \"ea285c267c38aed86b9a5b28c3982b2a54076366b5d7aa351562eb753bce8e2e\"" Dec 13 06:49:04.915106 env[1189]: time="2024-12-13T06:49:04.915041098Z" level=info msg="CreateContainer within sandbox \"ea285c267c38aed86b9a5b28c3982b2a54076366b5d7aa351562eb753bce8e2e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 06:49:04.916342 kubelet[1682]: W1213 06:49:04.916231 1682 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.230.20.2:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.20.2:6443: connect: connection refused Dec 13 06:49:04.916342 kubelet[1682]: E1213 06:49:04.916305 1682 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.230.20.2:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.20.2:6443: connect: connection refused Dec 13 06:49:04.931986 
env[1189]: time="2024-12-13T06:49:04.931905394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-uleoy.gb1.brightbox.com,Uid:7e8c3aa22c026420233fefe404f253fe,Namespace:kube-system,Attempt:0,} returns sandbox id \"676a7942806e8419326d280d61d6c7817209917c0b91756f7ac41c25e99cd7cf\"" Dec 13 06:49:04.935678 env[1189]: time="2024-12-13T06:49:04.935459464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-uleoy.gb1.brightbox.com,Uid:66ec531e0194d5eb3d0756b41da48ff3,Namespace:kube-system,Attempt:0,} returns sandbox id \"3df3a2bd1e3ac6b7d169db4b7ddf80c19297a823b883f30097848552df64e57c\"" Dec 13 06:49:04.937391 env[1189]: time="2024-12-13T06:49:04.936642239Z" level=info msg="CreateContainer within sandbox \"676a7942806e8419326d280d61d6c7817209917c0b91756f7ac41c25e99cd7cf\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 06:49:04.941932 env[1189]: time="2024-12-13T06:49:04.941848056Z" level=info msg="CreateContainer within sandbox \"3df3a2bd1e3ac6b7d169db4b7ddf80c19297a823b883f30097848552df64e57c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 06:49:04.946708 kubelet[1682]: W1213 06:49:04.946589 1682 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.230.20.2:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.230.20.2:6443: connect: connection refused Dec 13 06:49:04.946708 kubelet[1682]: E1213 06:49:04.946663 1682 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.230.20.2:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.230.20.2:6443: connect: connection refused Dec 13 06:49:04.946708 kubelet[1682]: E1213 06:49:04.946599 1682 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.230.20.2:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-uleoy.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.20.2:6443: connect: connection refused" interval="1.6s" Dec 13 06:49:04.958044 env[1189]: time="2024-12-13T06:49:04.957975440Z" level=info msg="CreateContainer within sandbox \"ea285c267c38aed86b9a5b28c3982b2a54076366b5d7aa351562eb753bce8e2e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"734162547ad5621619d62fc717be511ecde4fd7f49bf149c60b57e69b02d4e6b\"" Dec 13 06:49:04.959173 env[1189]: time="2024-12-13T06:49:04.959131869Z" level=info msg="StartContainer for \"734162547ad5621619d62fc717be511ecde4fd7f49bf149c60b57e69b02d4e6b\"" Dec 13 06:49:04.963801 env[1189]: time="2024-12-13T06:49:04.963711532Z" level=info msg="CreateContainer within sandbox \"3df3a2bd1e3ac6b7d169db4b7ddf80c19297a823b883f30097848552df64e57c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"cacb5a6a7cba4b6a3b28de1a117ccec651369db7066c45714e673447f345af43\"" Dec 13 06:49:04.964490 env[1189]: time="2024-12-13T06:49:04.964455473Z" level=info msg="StartContainer for \"cacb5a6a7cba4b6a3b28de1a117ccec651369db7066c45714e673447f345af43\"" Dec 13 06:49:04.971183 env[1189]: time="2024-12-13T06:49:04.971134487Z" level=info msg="CreateContainer within sandbox \"676a7942806e8419326d280d61d6c7817209917c0b91756f7ac41c25e99cd7cf\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d76d436e545374576ec223b7b90915598233e861d64458b000426f1c2860d404\"" Dec 13 06:49:04.971793 env[1189]: time="2024-12-13T06:49:04.971753409Z" level=info msg="StartContainer for \"d76d436e545374576ec223b7b90915598233e861d64458b000426f1c2860d404\"" Dec 13 06:49:04.995566 systemd[1]: Started cri-containerd-734162547ad5621619d62fc717be511ecde4fd7f49bf149c60b57e69b02d4e6b.scope. 
Dec 13 06:49:05.027586 systemd[1]: Started cri-containerd-d76d436e545374576ec223b7b90915598233e861d64458b000426f1c2860d404.scope. Dec 13 06:49:05.036949 systemd[1]: Started cri-containerd-cacb5a6a7cba4b6a3b28de1a117ccec651369db7066c45714e673447f345af43.scope. Dec 13 06:49:05.060257 kubelet[1682]: I1213 06:49:05.059784 1682 kubelet_node_status.go:73] "Attempting to register node" node="srv-uleoy.gb1.brightbox.com" Dec 13 06:49:05.060257 kubelet[1682]: E1213 06:49:05.060209 1682 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.230.20.2:6443/api/v1/nodes\": dial tcp 10.230.20.2:6443: connect: connection refused" node="srv-uleoy.gb1.brightbox.com" Dec 13 06:49:05.063292 kubelet[1682]: W1213 06:49:05.063146 1682 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.230.20.2:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.20.2:6443: connect: connection refused Dec 13 06:49:05.063292 kubelet[1682]: E1213 06:49:05.063258 1682 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.230.20.2:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.20.2:6443: connect: connection refused Dec 13 06:49:05.107305 env[1189]: time="2024-12-13T06:49:05.107186781Z" level=info msg="StartContainer for \"734162547ad5621619d62fc717be511ecde4fd7f49bf149c60b57e69b02d4e6b\" returns successfully" Dec 13 06:49:05.176538 env[1189]: time="2024-12-13T06:49:05.175567058Z" level=info msg="StartContainer for \"d76d436e545374576ec223b7b90915598233e861d64458b000426f1c2860d404\" returns successfully" Dec 13 06:49:05.183322 env[1189]: time="2024-12-13T06:49:05.183278182Z" level=info msg="StartContainer for \"cacb5a6a7cba4b6a3b28de1a117ccec651369db7066c45714e673447f345af43\" returns successfully" Dec 13 06:49:05.605726 kubelet[1682]: E1213 
06:49:05.605679 1682 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.230.20.2:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.230.20.2:6443: connect: connection refused Dec 13 06:49:06.667187 kubelet[1682]: I1213 06:49:06.667117 1682 kubelet_node_status.go:73] "Attempting to register node" node="srv-uleoy.gb1.brightbox.com" Dec 13 06:49:08.873316 kubelet[1682]: E1213 06:49:08.873259 1682 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-uleoy.gb1.brightbox.com\" not found" node="srv-uleoy.gb1.brightbox.com" Dec 13 06:49:09.038905 kubelet[1682]: I1213 06:49:09.038840 1682 kubelet_node_status.go:76] "Successfully registered node" node="srv-uleoy.gb1.brightbox.com" Dec 13 06:49:09.126082 kubelet[1682]: E1213 06:49:09.125919 1682 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-srv-uleoy.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-srv-uleoy.gb1.brightbox.com" Dec 13 06:49:09.500303 kubelet[1682]: I1213 06:49:09.500081 1682 apiserver.go:52] "Watching apiserver" Dec 13 06:49:09.545234 kubelet[1682]: I1213 06:49:09.545162 1682 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 06:49:10.977130 kubelet[1682]: W1213 06:49:10.975556 1682 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 06:49:11.879826 systemd[1]: Reloading. 
Dec 13 06:49:12.031312 /usr/lib/systemd/system-generators/torcx-generator[1975]: time="2024-12-13T06:49:12Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 06:49:12.035653 /usr/lib/systemd/system-generators/torcx-generator[1975]: time="2024-12-13T06:49:12Z" level=info msg="torcx already run" Dec 13 06:49:12.146894 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 06:49:12.147648 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 06:49:12.179197 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 06:49:12.360955 systemd[1]: Stopping kubelet.service... Dec 13 06:49:12.361418 kubelet[1682]: I1213 06:49:12.360930 1682 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 06:49:12.383450 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 06:49:12.384212 systemd[1]: Stopped kubelet.service. Dec 13 06:49:12.384566 systemd[1]: kubelet.service: Consumed 1.912s CPU time. Dec 13 06:49:12.390758 systemd[1]: Starting kubelet.service... Dec 13 06:49:13.687163 systemd[1]: Started kubelet.service. 
Dec 13 06:49:13.884199 sudo[2038]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Dec 13 06:49:13.884734 sudo[2038]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Dec 13 06:49:13.887821 kubelet[2027]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 06:49:13.888289 kubelet[2027]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 06:49:13.888432 kubelet[2027]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 06:49:13.888672 kubelet[2027]: I1213 06:49:13.888610 2027 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 06:49:13.906022 kubelet[2027]: I1213 06:49:13.905972 2027 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 06:49:13.906297 kubelet[2027]: I1213 06:49:13.906272 2027 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 06:49:13.907517 kubelet[2027]: I1213 06:49:13.907491 2027 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 06:49:13.913645 kubelet[2027]: I1213 06:49:13.913599 2027 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Dec 13 06:49:13.922254 kubelet[2027]: I1213 06:49:13.922146 2027 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 06:49:13.950331 kubelet[2027]: I1213 06:49:13.948996 2027 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 06:49:13.951659 kubelet[2027]: I1213 06:49:13.951340 2027 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 06:49:13.953494 kubelet[2027]: I1213 06:49:13.953460 2027 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":
null} Dec 13 06:49:13.954175 kubelet[2027]: I1213 06:49:13.954149 2027 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 06:49:13.954315 kubelet[2027]: I1213 06:49:13.954282 2027 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 06:49:13.954557 kubelet[2027]: I1213 06:49:13.954533 2027 state_mem.go:36] "Initialized new in-memory state store" Dec 13 06:49:13.956303 kubelet[2027]: I1213 06:49:13.956062 2027 kubelet.go:396] "Attempting to sync node with API server" Dec 13 06:49:13.956485 kubelet[2027]: I1213 06:49:13.956459 2027 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 06:49:13.956758 kubelet[2027]: I1213 06:49:13.956734 2027 kubelet.go:312] "Adding apiserver pod source" Dec 13 06:49:13.956917 kubelet[2027]: I1213 06:49:13.956893 2027 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 06:49:13.962971 kubelet[2027]: I1213 06:49:13.962944 2027 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 06:49:13.963888 kubelet[2027]: I1213 06:49:13.963862 2027 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 06:49:13.965265 kubelet[2027]: I1213 06:49:13.965241 2027 server.go:1256] "Started kubelet" Dec 13 06:49:13.975964 kubelet[2027]: I1213 06:49:13.975928 2027 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 06:49:13.979634 kubelet[2027]: I1213 06:49:13.979605 2027 server.go:461] "Adding debug handlers to kubelet server" Dec 13 06:49:13.989432 kubelet[2027]: I1213 06:49:13.980550 2027 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 06:49:13.990198 kubelet[2027]: I1213 06:49:13.990174 2027 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 06:49:13.990509 kubelet[2027]: I1213 
06:49:13.986014 2027 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 06:49:14.003806 kubelet[2027]: I1213 06:49:14.003751 2027 volume_manager.go:291] "Starting Kubelet Volume Manager"
Dec 13 06:49:14.005779 kubelet[2027]: I1213 06:49:14.005751 2027 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Dec 13 06:49:14.006322 kubelet[2027]: I1213 06:49:14.006298 2027 reconciler_new.go:29] "Reconciler: start to sync state"
Dec 13 06:49:14.064779 kubelet[2027]: E1213 06:49:14.064731 2027 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 13 06:49:14.074501 kubelet[2027]: I1213 06:49:14.074465 2027 factory.go:221] Registration of the containerd container factory successfully
Dec 13 06:49:14.075702 kubelet[2027]: I1213 06:49:14.075283 2027 factory.go:221] Registration of the systemd container factory successfully
Dec 13 06:49:14.076820 kubelet[2027]: I1213 06:49:14.076783 2027 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 13 06:49:14.144956 kubelet[2027]: I1213 06:49:14.144900 2027 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 13 06:49:14.148317 kubelet[2027]: I1213 06:49:14.148285 2027 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 13 06:49:14.148577 kubelet[2027]: I1213 06:49:14.148550 2027 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 13 06:49:14.148726 kubelet[2027]: I1213 06:49:14.148701 2027 kubelet.go:2329] "Starting kubelet main sync loop"
Dec 13 06:49:14.148934 kubelet[2027]: E1213 06:49:14.148910 2027 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 13 06:49:14.162109 kubelet[2027]: I1213 06:49:14.161847 2027 kubelet_node_status.go:73] "Attempting to register node" node="srv-uleoy.gb1.brightbox.com"
Dec 13 06:49:14.224227 kubelet[2027]: I1213 06:49:14.222035 2027 kubelet_node_status.go:112] "Node was previously registered" node="srv-uleoy.gb1.brightbox.com"
Dec 13 06:49:14.224646 kubelet[2027]: I1213 06:49:14.224620 2027 kubelet_node_status.go:76] "Successfully registered node" node="srv-uleoy.gb1.brightbox.com"
Dec 13 06:49:14.252298 kubelet[2027]: E1213 06:49:14.251228 2027 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Dec 13 06:49:14.264493 kubelet[2027]: I1213 06:49:14.264451 2027 cpu_manager.go:214] "Starting CPU manager" policy="none"
Dec 13 06:49:14.264493 kubelet[2027]: I1213 06:49:14.264494 2027 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Dec 13 06:49:14.264734 kubelet[2027]: I1213 06:49:14.264549 2027 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 06:49:14.265014 kubelet[2027]: I1213 06:49:14.264981 2027 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Dec 13 06:49:14.265097 kubelet[2027]: I1213 06:49:14.265059 2027 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Dec 13 06:49:14.265097 kubelet[2027]: I1213 06:49:14.265086 2027 policy_none.go:49] "None policy: Start"
Dec 13 06:49:14.267007 kubelet[2027]: I1213 06:49:14.266811 2027 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 13
06:49:14.267007 kubelet[2027]: I1213 06:49:14.266863 2027 state_mem.go:35] "Initializing new in-memory state store" Dec 13 06:49:14.267246 kubelet[2027]: I1213 06:49:14.267101 2027 state_mem.go:75] "Updated machine memory state" Dec 13 06:49:14.276292 kubelet[2027]: I1213 06:49:14.276258 2027 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 06:49:14.276897 kubelet[2027]: I1213 06:49:14.276856 2027 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 06:49:14.452569 kubelet[2027]: I1213 06:49:14.452493 2027 topology_manager.go:215] "Topology Admit Handler" podUID="0e92f47bf886795197d13a73ac4e2e44" podNamespace="kube-system" podName="kube-apiserver-srv-uleoy.gb1.brightbox.com" Dec 13 06:49:14.452821 kubelet[2027]: I1213 06:49:14.452779 2027 topology_manager.go:215] "Topology Admit Handler" podUID="66ec531e0194d5eb3d0756b41da48ff3" podNamespace="kube-system" podName="kube-controller-manager-srv-uleoy.gb1.brightbox.com" Dec 13 06:49:14.453056 kubelet[2027]: I1213 06:49:14.453025 2027 topology_manager.go:215] "Topology Admit Handler" podUID="7e8c3aa22c026420233fefe404f253fe" podNamespace="kube-system" podName="kube-scheduler-srv-uleoy.gb1.brightbox.com" Dec 13 06:49:14.470620 kubelet[2027]: W1213 06:49:14.470577 2027 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 06:49:14.470829 kubelet[2027]: W1213 06:49:14.470695 2027 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 06:49:14.472243 kubelet[2027]: W1213 06:49:14.472213 2027 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 06:49:14.472354 kubelet[2027]: E1213 06:49:14.472323 
2027 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-srv-uleoy.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-uleoy.gb1.brightbox.com" Dec 13 06:49:14.519770 kubelet[2027]: I1213 06:49:14.519619 2027 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7e8c3aa22c026420233fefe404f253fe-kubeconfig\") pod \"kube-scheduler-srv-uleoy.gb1.brightbox.com\" (UID: \"7e8c3aa22c026420233fefe404f253fe\") " pod="kube-system/kube-scheduler-srv-uleoy.gb1.brightbox.com" Dec 13 06:49:14.519770 kubelet[2027]: I1213 06:49:14.519709 2027 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66ec531e0194d5eb3d0756b41da48ff3-ca-certs\") pod \"kube-controller-manager-srv-uleoy.gb1.brightbox.com\" (UID: \"66ec531e0194d5eb3d0756b41da48ff3\") " pod="kube-system/kube-controller-manager-srv-uleoy.gb1.brightbox.com" Dec 13 06:49:14.519770 kubelet[2027]: I1213 06:49:14.519758 2027 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66ec531e0194d5eb3d0756b41da48ff3-kubeconfig\") pod \"kube-controller-manager-srv-uleoy.gb1.brightbox.com\" (UID: \"66ec531e0194d5eb3d0756b41da48ff3\") " pod="kube-system/kube-controller-manager-srv-uleoy.gb1.brightbox.com" Dec 13 06:49:14.520099 kubelet[2027]: I1213 06:49:14.519843 2027 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0e92f47bf886795197d13a73ac4e2e44-usr-share-ca-certificates\") pod \"kube-apiserver-srv-uleoy.gb1.brightbox.com\" (UID: \"0e92f47bf886795197d13a73ac4e2e44\") " pod="kube-system/kube-apiserver-srv-uleoy.gb1.brightbox.com" Dec 13 06:49:14.520099 kubelet[2027]: I1213 06:49:14.519924 2027 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/66ec531e0194d5eb3d0756b41da48ff3-flexvolume-dir\") pod \"kube-controller-manager-srv-uleoy.gb1.brightbox.com\" (UID: \"66ec531e0194d5eb3d0756b41da48ff3\") " pod="kube-system/kube-controller-manager-srv-uleoy.gb1.brightbox.com" Dec 13 06:49:14.520099 kubelet[2027]: I1213 06:49:14.519982 2027 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66ec531e0194d5eb3d0756b41da48ff3-k8s-certs\") pod \"kube-controller-manager-srv-uleoy.gb1.brightbox.com\" (UID: \"66ec531e0194d5eb3d0756b41da48ff3\") " pod="kube-system/kube-controller-manager-srv-uleoy.gb1.brightbox.com" Dec 13 06:49:14.520099 kubelet[2027]: I1213 06:49:14.520029 2027 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66ec531e0194d5eb3d0756b41da48ff3-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-uleoy.gb1.brightbox.com\" (UID: \"66ec531e0194d5eb3d0756b41da48ff3\") " pod="kube-system/kube-controller-manager-srv-uleoy.gb1.brightbox.com" Dec 13 06:49:14.520099 kubelet[2027]: I1213 06:49:14.520089 2027 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0e92f47bf886795197d13a73ac4e2e44-ca-certs\") pod \"kube-apiserver-srv-uleoy.gb1.brightbox.com\" (UID: \"0e92f47bf886795197d13a73ac4e2e44\") " pod="kube-system/kube-apiserver-srv-uleoy.gb1.brightbox.com" Dec 13 06:49:14.520428 kubelet[2027]: I1213 06:49:14.520216 2027 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0e92f47bf886795197d13a73ac4e2e44-k8s-certs\") pod 
\"kube-apiserver-srv-uleoy.gb1.brightbox.com\" (UID: \"0e92f47bf886795197d13a73ac4e2e44\") " pod="kube-system/kube-apiserver-srv-uleoy.gb1.brightbox.com" Dec 13 06:49:14.882229 sudo[2038]: pam_unix(sudo:session): session closed for user root Dec 13 06:49:14.958863 kubelet[2027]: I1213 06:49:14.958814 2027 apiserver.go:52] "Watching apiserver" Dec 13 06:49:15.006680 kubelet[2027]: I1213 06:49:15.006614 2027 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 06:49:15.232308 kubelet[2027]: W1213 06:49:15.232180 2027 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 06:49:15.232308 kubelet[2027]: E1213 06:49:15.232274 2027 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-srv-uleoy.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-uleoy.gb1.brightbox.com" Dec 13 06:49:15.265665 kubelet[2027]: I1213 06:49:15.265603 2027 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-uleoy.gb1.brightbox.com" podStartSLOduration=1.265524552 podStartE2EDuration="1.265524552s" podCreationTimestamp="2024-12-13 06:49:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 06:49:15.264303646 +0000 UTC m=+1.534332514" watchObservedRunningTime="2024-12-13 06:49:15.265524552 +0000 UTC m=+1.535553413" Dec 13 06:49:15.312141 kubelet[2027]: I1213 06:49:15.312074 2027 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-uleoy.gb1.brightbox.com" podStartSLOduration=5.311987831 podStartE2EDuration="5.311987831s" podCreationTimestamp="2024-12-13 06:49:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2024-12-13 06:49:15.289205987 +0000 UTC m=+1.559234856" watchObservedRunningTime="2024-12-13 06:49:15.311987831 +0000 UTC m=+1.582016699" Dec 13 06:49:15.326137 kubelet[2027]: I1213 06:49:15.326090 2027 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-uleoy.gb1.brightbox.com" podStartSLOduration=1.326022437 podStartE2EDuration="1.326022437s" podCreationTimestamp="2024-12-13 06:49:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 06:49:15.31321681 +0000 UTC m=+1.583245679" watchObservedRunningTime="2024-12-13 06:49:15.326022437 +0000 UTC m=+1.596051299" Dec 13 06:49:17.312741 sudo[1320]: pam_unix(sudo:session): session closed for user root Dec 13 06:49:17.460635 sshd[1317]: pam_unix(sshd:session): session closed for user core Dec 13 06:49:17.465850 systemd[1]: sshd@4-10.230.20.2:22-139.178.89.65:45538.service: Deactivated successfully. Dec 13 06:49:17.467643 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 06:49:17.467936 systemd[1]: session-5.scope: Consumed 6.417s CPU time. Dec 13 06:49:17.468678 systemd-logind[1180]: Session 5 logged out. Waiting for processes to exit. Dec 13 06:49:17.470695 systemd-logind[1180]: Removed session 5. Dec 13 06:49:26.622731 kubelet[2027]: I1213 06:49:26.622596 2027 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 06:49:26.623644 env[1189]: time="2024-12-13T06:49:26.623275705Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Dec 13 06:49:26.624471 kubelet[2027]: I1213 06:49:26.624443 2027 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 06:49:26.842285 kubelet[2027]: I1213 06:49:26.842242 2027 topology_manager.go:215] "Topology Admit Handler" podUID="9c38a062-7b3e-4f2c-97be-2b09021359c6" podNamespace="kube-system" podName="cilium-kfczg" Dec 13 06:49:26.849329 kubelet[2027]: I1213 06:49:26.849290 2027 topology_manager.go:215] "Topology Admit Handler" podUID="b0164175-4ede-4543-ab4c-902b310b396e" podNamespace="kube-system" podName="kube-proxy-c7zvw" Dec 13 06:49:26.863549 systemd[1]: Created slice kubepods-burstable-pod9c38a062_7b3e_4f2c_97be_2b09021359c6.slice. Dec 13 06:49:26.873089 systemd[1]: Created slice kubepods-besteffort-podb0164175_4ede_4543_ab4c_902b310b396e.slice. Dec 13 06:49:26.900492 kubelet[2027]: I1213 06:49:26.900433 2027 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9c38a062-7b3e-4f2c-97be-2b09021359c6-hostproc\") pod \"cilium-kfczg\" (UID: \"9c38a062-7b3e-4f2c-97be-2b09021359c6\") " pod="kube-system/cilium-kfczg" Dec 13 06:49:26.900972 kubelet[2027]: I1213 06:49:26.900914 2027 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9c38a062-7b3e-4f2c-97be-2b09021359c6-lib-modules\") pod \"cilium-kfczg\" (UID: \"9c38a062-7b3e-4f2c-97be-2b09021359c6\") " pod="kube-system/cilium-kfczg" Dec 13 06:49:26.901244 kubelet[2027]: I1213 06:49:26.901203 2027 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9c38a062-7b3e-4f2c-97be-2b09021359c6-xtables-lock\") pod \"cilium-kfczg\" (UID: \"9c38a062-7b3e-4f2c-97be-2b09021359c6\") " pod="kube-system/cilium-kfczg" Dec 13 06:49:26.901471 kubelet[2027]: I1213 06:49:26.901438 2027 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9c38a062-7b3e-4f2c-97be-2b09021359c6-host-proc-sys-net\") pod \"cilium-kfczg\" (UID: \"9c38a062-7b3e-4f2c-97be-2b09021359c6\") " pod="kube-system/cilium-kfczg" Dec 13 06:49:26.901661 kubelet[2027]: I1213 06:49:26.901626 2027 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9c38a062-7b3e-4f2c-97be-2b09021359c6-hubble-tls\") pod \"cilium-kfczg\" (UID: \"9c38a062-7b3e-4f2c-97be-2b09021359c6\") " pod="kube-system/cilium-kfczg" Dec 13 06:49:26.902234 kubelet[2027]: I1213 06:49:26.902199 2027 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpdmg\" (UniqueName: \"kubernetes.io/projected/9c38a062-7b3e-4f2c-97be-2b09021359c6-kube-api-access-lpdmg\") pod \"cilium-kfczg\" (UID: \"9c38a062-7b3e-4f2c-97be-2b09021359c6\") " pod="kube-system/cilium-kfczg" Dec 13 06:49:26.902417 kubelet[2027]: I1213 06:49:26.902394 2027 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxs52\" (UniqueName: \"kubernetes.io/projected/b0164175-4ede-4543-ab4c-902b310b396e-kube-api-access-wxs52\") pod \"kube-proxy-c7zvw\" (UID: \"b0164175-4ede-4543-ab4c-902b310b396e\") " pod="kube-system/kube-proxy-c7zvw" Dec 13 06:49:26.902688 kubelet[2027]: I1213 06:49:26.902657 2027 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9c38a062-7b3e-4f2c-97be-2b09021359c6-cilium-run\") pod \"cilium-kfczg\" (UID: \"9c38a062-7b3e-4f2c-97be-2b09021359c6\") " pod="kube-system/cilium-kfczg" Dec 13 06:49:26.902902 kubelet[2027]: I1213 06:49:26.902867 2027 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9c38a062-7b3e-4f2c-97be-2b09021359c6-bpf-maps\") pod \"cilium-kfczg\" (UID: \"9c38a062-7b3e-4f2c-97be-2b09021359c6\") " pod="kube-system/cilium-kfczg" Dec 13 06:49:26.903085 kubelet[2027]: I1213 06:49:26.903052 2027 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9c38a062-7b3e-4f2c-97be-2b09021359c6-host-proc-sys-kernel\") pod \"cilium-kfczg\" (UID: \"9c38a062-7b3e-4f2c-97be-2b09021359c6\") " pod="kube-system/cilium-kfczg" Dec 13 06:49:26.903256 kubelet[2027]: I1213 06:49:26.903223 2027 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b0164175-4ede-4543-ab4c-902b310b396e-kube-proxy\") pod \"kube-proxy-c7zvw\" (UID: \"b0164175-4ede-4543-ab4c-902b310b396e\") " pod="kube-system/kube-proxy-c7zvw" Dec 13 06:49:26.903431 kubelet[2027]: I1213 06:49:26.903399 2027 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b0164175-4ede-4543-ab4c-902b310b396e-lib-modules\") pod \"kube-proxy-c7zvw\" (UID: \"b0164175-4ede-4543-ab4c-902b310b396e\") " pod="kube-system/kube-proxy-c7zvw" Dec 13 06:49:26.903618 kubelet[2027]: I1213 06:49:26.903584 2027 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9c38a062-7b3e-4f2c-97be-2b09021359c6-cni-path\") pod \"cilium-kfczg\" (UID: \"9c38a062-7b3e-4f2c-97be-2b09021359c6\") " pod="kube-system/cilium-kfczg" Dec 13 06:49:26.903823 kubelet[2027]: I1213 06:49:26.903753 2027 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9c38a062-7b3e-4f2c-97be-2b09021359c6-clustermesh-secrets\") pod 
\"cilium-kfczg\" (UID: \"9c38a062-7b3e-4f2c-97be-2b09021359c6\") " pod="kube-system/cilium-kfczg" Dec 13 06:49:26.904015 kubelet[2027]: I1213 06:49:26.903982 2027 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b0164175-4ede-4543-ab4c-902b310b396e-xtables-lock\") pod \"kube-proxy-c7zvw\" (UID: \"b0164175-4ede-4543-ab4c-902b310b396e\") " pod="kube-system/kube-proxy-c7zvw" Dec 13 06:49:26.904196 kubelet[2027]: I1213 06:49:26.904163 2027 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9c38a062-7b3e-4f2c-97be-2b09021359c6-etc-cni-netd\") pod \"cilium-kfczg\" (UID: \"9c38a062-7b3e-4f2c-97be-2b09021359c6\") " pod="kube-system/cilium-kfczg" Dec 13 06:49:26.904378 kubelet[2027]: I1213 06:49:26.904342 2027 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9c38a062-7b3e-4f2c-97be-2b09021359c6-cilium-config-path\") pod \"cilium-kfczg\" (UID: \"9c38a062-7b3e-4f2c-97be-2b09021359c6\") " pod="kube-system/cilium-kfczg" Dec 13 06:49:26.904568 kubelet[2027]: I1213 06:49:26.904535 2027 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9c38a062-7b3e-4f2c-97be-2b09021359c6-cilium-cgroup\") pod \"cilium-kfczg\" (UID: \"9c38a062-7b3e-4f2c-97be-2b09021359c6\") " pod="kube-system/cilium-kfczg" Dec 13 06:49:26.923037 kubelet[2027]: I1213 06:49:26.922987 2027 topology_manager.go:215] "Topology Admit Handler" podUID="444136c7-4d61-4e0e-ac32-e73836b1806f" podNamespace="kube-system" podName="cilium-operator-5cc964979-8z8jt" Dec 13 06:49:26.942996 systemd[1]: Created slice kubepods-besteffort-pod444136c7_4d61_4e0e_ac32_e73836b1806f.slice. 
Dec 13 06:49:27.005383 kubelet[2027]: I1213 06:49:27.005319 2027 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/444136c7-4d61-4e0e-ac32-e73836b1806f-cilium-config-path\") pod \"cilium-operator-5cc964979-8z8jt\" (UID: \"444136c7-4d61-4e0e-ac32-e73836b1806f\") " pod="kube-system/cilium-operator-5cc964979-8z8jt" Dec 13 06:49:27.005886 kubelet[2027]: I1213 06:49:27.005791 2027 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qbmz\" (UniqueName: \"kubernetes.io/projected/444136c7-4d61-4e0e-ac32-e73836b1806f-kube-api-access-5qbmz\") pod \"cilium-operator-5cc964979-8z8jt\" (UID: \"444136c7-4d61-4e0e-ac32-e73836b1806f\") " pod="kube-system/cilium-operator-5cc964979-8z8jt" Dec 13 06:49:27.171873 env[1189]: time="2024-12-13T06:49:27.171715344Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kfczg,Uid:9c38a062-7b3e-4f2c-97be-2b09021359c6,Namespace:kube-system,Attempt:0,}" Dec 13 06:49:27.186999 env[1189]: time="2024-12-13T06:49:27.186913364Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-c7zvw,Uid:b0164175-4ede-4543-ab4c-902b310b396e,Namespace:kube-system,Attempt:0,}" Dec 13 06:49:27.218808 env[1189]: time="2024-12-13T06:49:27.217073919Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 06:49:27.218808 env[1189]: time="2024-12-13T06:49:27.218759855Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 06:49:27.219162 env[1189]: time="2024-12-13T06:49:27.218779392Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 06:49:27.220077 env[1189]: time="2024-12-13T06:49:27.219694963Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9edd33d684e407593a7e002934287210fff710bb812a671c9da3e763606c66a1 pid=2111 runtime=io.containerd.runc.v2 Dec 13 06:49:27.225915 env[1189]: time="2024-12-13T06:49:27.225812759Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 06:49:27.226110 env[1189]: time="2024-12-13T06:49:27.225868959Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 06:49:27.226110 env[1189]: time="2024-12-13T06:49:27.225905132Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 06:49:27.226263 env[1189]: time="2024-12-13T06:49:27.226126428Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2202b89919478c51eac4e3c364e2c6fb9ee75c805f6119e8f925fc7d5f56cc2f pid=2128 runtime=io.containerd.runc.v2 Dec 13 06:49:27.244475 systemd[1]: Started cri-containerd-9edd33d684e407593a7e002934287210fff710bb812a671c9da3e763606c66a1.scope. Dec 13 06:49:27.247973 env[1189]: time="2024-12-13T06:49:27.247905860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-8z8jt,Uid:444136c7-4d61-4e0e-ac32-e73836b1806f,Namespace:kube-system,Attempt:0,}" Dec 13 06:49:27.257488 systemd[1]: Started cri-containerd-2202b89919478c51eac4e3c364e2c6fb9ee75c805f6119e8f925fc7d5f56cc2f.scope. Dec 13 06:49:27.294210 env[1189]: time="2024-12-13T06:49:27.294084280Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 06:49:27.294424 env[1189]: time="2024-12-13T06:49:27.294252904Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 06:49:27.294424 env[1189]: time="2024-12-13T06:49:27.294354717Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 06:49:27.294845 env[1189]: time="2024-12-13T06:49:27.294666502Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/976dd6c8be99eb45113d8c0ecbaeb2b94922d38cdfde3f1698c947a1b54fc762 pid=2168 runtime=io.containerd.runc.v2 Dec 13 06:49:27.316432 env[1189]: time="2024-12-13T06:49:27.316304851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kfczg,Uid:9c38a062-7b3e-4f2c-97be-2b09021359c6,Namespace:kube-system,Attempt:0,} returns sandbox id \"9edd33d684e407593a7e002934287210fff710bb812a671c9da3e763606c66a1\"" Dec 13 06:49:27.322747 env[1189]: time="2024-12-13T06:49:27.322682910Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 13 06:49:27.341701 systemd[1]: Started cri-containerd-976dd6c8be99eb45113d8c0ecbaeb2b94922d38cdfde3f1698c947a1b54fc762.scope. 
Dec 13 06:49:27.354057 env[1189]: time="2024-12-13T06:49:27.353986679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-c7zvw,Uid:b0164175-4ede-4543-ab4c-902b310b396e,Namespace:kube-system,Attempt:0,} returns sandbox id \"2202b89919478c51eac4e3c364e2c6fb9ee75c805f6119e8f925fc7d5f56cc2f\"" Dec 13 06:49:27.361423 env[1189]: time="2024-12-13T06:49:27.361354358Z" level=info msg="CreateContainer within sandbox \"2202b89919478c51eac4e3c364e2c6fb9ee75c805f6119e8f925fc7d5f56cc2f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 06:49:27.401570 env[1189]: time="2024-12-13T06:49:27.401400599Z" level=info msg="CreateContainer within sandbox \"2202b89919478c51eac4e3c364e2c6fb9ee75c805f6119e8f925fc7d5f56cc2f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d47a1ae706ab03d6b203ff42ff4e91282f0a847167ff7f3ef22d0c8978bd1376\"" Dec 13 06:49:27.404416 env[1189]: time="2024-12-13T06:49:27.404350954Z" level=info msg="StartContainer for \"d47a1ae706ab03d6b203ff42ff4e91282f0a847167ff7f3ef22d0c8978bd1376\"" Dec 13 06:49:27.431453 env[1189]: time="2024-12-13T06:49:27.431211377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-8z8jt,Uid:444136c7-4d61-4e0e-ac32-e73836b1806f,Namespace:kube-system,Attempt:0,} returns sandbox id \"976dd6c8be99eb45113d8c0ecbaeb2b94922d38cdfde3f1698c947a1b54fc762\"" Dec 13 06:49:27.439972 systemd[1]: Started cri-containerd-d47a1ae706ab03d6b203ff42ff4e91282f0a847167ff7f3ef22d0c8978bd1376.scope. 
Dec 13 06:49:27.491886 env[1189]: time="2024-12-13T06:49:27.491827603Z" level=info msg="StartContainer for \"d47a1ae706ab03d6b203ff42ff4e91282f0a847167ff7f3ef22d0c8978bd1376\" returns successfully" Dec 13 06:49:28.285779 kubelet[2027]: I1213 06:49:28.285462 2027 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-c7zvw" podStartSLOduration=2.285302577 podStartE2EDuration="2.285302577s" podCreationTimestamp="2024-12-13 06:49:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 06:49:28.285166574 +0000 UTC m=+14.555195442" watchObservedRunningTime="2024-12-13 06:49:28.285302577 +0000 UTC m=+14.555331452" Dec 13 06:49:38.317198 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3665857627.mount: Deactivated successfully. Dec 13 06:49:43.103254 env[1189]: time="2024-12-13T06:49:43.102887167Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:49:43.107107 env[1189]: time="2024-12-13T06:49:43.107059497Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:49:43.109549 env[1189]: time="2024-12-13T06:49:43.109514850Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:49:43.110673 env[1189]: time="2024-12-13T06:49:43.110620459Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference 
\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Dec 13 06:49:43.113000 env[1189]: time="2024-12-13T06:49:43.112962137Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 06:49:43.116664 env[1189]: time="2024-12-13T06:49:43.116079243Z" level=info msg="CreateContainer within sandbox \"9edd33d684e407593a7e002934287210fff710bb812a671c9da3e763606c66a1\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 06:49:43.159562 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount627301028.mount: Deactivated successfully. Dec 13 06:49:43.165322 env[1189]: time="2024-12-13T06:49:43.165240826Z" level=info msg="CreateContainer within sandbox \"9edd33d684e407593a7e002934287210fff710bb812a671c9da3e763606c66a1\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"eb26b6a4682b531140187ec7dd8142de65e10367024d637314e38948f75f7682\"" Dec 13 06:49:43.168176 env[1189]: time="2024-12-13T06:49:43.166774529Z" level=info msg="StartContainer for \"eb26b6a4682b531140187ec7dd8142de65e10367024d637314e38948f75f7682\"" Dec 13 06:49:43.216282 systemd[1]: run-containerd-runc-k8s.io-eb26b6a4682b531140187ec7dd8142de65e10367024d637314e38948f75f7682-runc.B40fYU.mount: Deactivated successfully. Dec 13 06:49:43.222285 systemd[1]: Started cri-containerd-eb26b6a4682b531140187ec7dd8142de65e10367024d637314e38948f75f7682.scope. Dec 13 06:49:43.272886 env[1189]: time="2024-12-13T06:49:43.272825064Z" level=info msg="StartContainer for \"eb26b6a4682b531140187ec7dd8142de65e10367024d637314e38948f75f7682\" returns successfully" Dec 13 06:49:43.298991 systemd[1]: cri-containerd-eb26b6a4682b531140187ec7dd8142de65e10367024d637314e38948f75f7682.scope: Deactivated successfully. 
Dec 13 06:49:43.483709 env[1189]: time="2024-12-13T06:49:43.483519467Z" level=info msg="shim disconnected" id=eb26b6a4682b531140187ec7dd8142de65e10367024d637314e38948f75f7682 Dec 13 06:49:43.483709 env[1189]: time="2024-12-13T06:49:43.483592074Z" level=warning msg="cleaning up after shim disconnected" id=eb26b6a4682b531140187ec7dd8142de65e10367024d637314e38948f75f7682 namespace=k8s.io Dec 13 06:49:43.483709 env[1189]: time="2024-12-13T06:49:43.483614264Z" level=info msg="cleaning up dead shim" Dec 13 06:49:43.497174 env[1189]: time="2024-12-13T06:49:43.497079236Z" level=warning msg="cleanup warnings time=\"2024-12-13T06:49:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2434 runtime=io.containerd.runc.v2\n" Dec 13 06:49:44.155060 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eb26b6a4682b531140187ec7dd8142de65e10367024d637314e38948f75f7682-rootfs.mount: Deactivated successfully. Dec 13 06:49:44.339617 env[1189]: time="2024-12-13T06:49:44.339518969Z" level=info msg="CreateContainer within sandbox \"9edd33d684e407593a7e002934287210fff710bb812a671c9da3e763606c66a1\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 06:49:44.363465 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1798779886.mount: Deactivated successfully. Dec 13 06:49:44.371862 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2510379428.mount: Deactivated successfully. 
Dec 13 06:49:44.374773 env[1189]: time="2024-12-13T06:49:44.374699589Z" level=info msg="CreateContainer within sandbox \"9edd33d684e407593a7e002934287210fff710bb812a671c9da3e763606c66a1\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"924416b91c8e62aae005e38f6f90282de282c1de0ea3fa3d79755cb2391e3309\"" Dec 13 06:49:44.375821 env[1189]: time="2024-12-13T06:49:44.375774037Z" level=info msg="StartContainer for \"924416b91c8e62aae005e38f6f90282de282c1de0ea3fa3d79755cb2391e3309\"" Dec 13 06:49:44.419918 systemd[1]: Started cri-containerd-924416b91c8e62aae005e38f6f90282de282c1de0ea3fa3d79755cb2391e3309.scope. Dec 13 06:49:44.468441 env[1189]: time="2024-12-13T06:49:44.468071740Z" level=info msg="StartContainer for \"924416b91c8e62aae005e38f6f90282de282c1de0ea3fa3d79755cb2391e3309\" returns successfully" Dec 13 06:49:44.499330 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 06:49:44.500556 systemd[1]: Stopped systemd-sysctl.service. Dec 13 06:49:44.500999 systemd[1]: Stopping systemd-sysctl.service... Dec 13 06:49:44.504073 systemd[1]: Starting systemd-sysctl.service... Dec 13 06:49:44.506706 systemd[1]: cri-containerd-924416b91c8e62aae005e38f6f90282de282c1de0ea3fa3d79755cb2391e3309.scope: Deactivated successfully. Dec 13 06:49:44.529895 systemd[1]: Finished systemd-sysctl.service. 
Dec 13 06:49:44.550645 env[1189]: time="2024-12-13T06:49:44.550572923Z" level=info msg="shim disconnected" id=924416b91c8e62aae005e38f6f90282de282c1de0ea3fa3d79755cb2391e3309 Dec 13 06:49:44.551109 env[1189]: time="2024-12-13T06:49:44.551068479Z" level=warning msg="cleaning up after shim disconnected" id=924416b91c8e62aae005e38f6f90282de282c1de0ea3fa3d79755cb2391e3309 namespace=k8s.io Dec 13 06:49:44.551257 env[1189]: time="2024-12-13T06:49:44.551228224Z" level=info msg="cleaning up dead shim" Dec 13 06:49:44.563299 env[1189]: time="2024-12-13T06:49:44.563229608Z" level=warning msg="cleanup warnings time=\"2024-12-13T06:49:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2499 runtime=io.containerd.runc.v2\n" Dec 13 06:49:45.321326 env[1189]: time="2024-12-13T06:49:45.321267232Z" level=info msg="CreateContainer within sandbox \"9edd33d684e407593a7e002934287210fff710bb812a671c9da3e763606c66a1\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 06:49:45.340068 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1448102485.mount: Deactivated successfully. Dec 13 06:49:45.349467 env[1189]: time="2024-12-13T06:49:45.349401002Z" level=info msg="CreateContainer within sandbox \"9edd33d684e407593a7e002934287210fff710bb812a671c9da3e763606c66a1\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7255fc1aee4903a81554709be3cbf6c3601f95f422b4e4d085e19c9c328240d0\"" Dec 13 06:49:45.351602 env[1189]: time="2024-12-13T06:49:45.351548575Z" level=info msg="StartContainer for \"7255fc1aee4903a81554709be3cbf6c3601f95f422b4e4d085e19c9c328240d0\"" Dec 13 06:49:45.396524 systemd[1]: Started cri-containerd-7255fc1aee4903a81554709be3cbf6c3601f95f422b4e4d085e19c9c328240d0.scope. 
Dec 13 06:49:45.457003 env[1189]: time="2024-12-13T06:49:45.456916706Z" level=info msg="StartContainer for \"7255fc1aee4903a81554709be3cbf6c3601f95f422b4e4d085e19c9c328240d0\" returns successfully" Dec 13 06:49:45.465170 systemd[1]: cri-containerd-7255fc1aee4903a81554709be3cbf6c3601f95f422b4e4d085e19c9c328240d0.scope: Deactivated successfully. Dec 13 06:49:45.499564 env[1189]: time="2024-12-13T06:49:45.499498565Z" level=info msg="shim disconnected" id=7255fc1aee4903a81554709be3cbf6c3601f95f422b4e4d085e19c9c328240d0 Dec 13 06:49:45.499564 env[1189]: time="2024-12-13T06:49:45.499563061Z" level=warning msg="cleaning up after shim disconnected" id=7255fc1aee4903a81554709be3cbf6c3601f95f422b4e4d085e19c9c328240d0 namespace=k8s.io Dec 13 06:49:45.499932 env[1189]: time="2024-12-13T06:49:45.499580618Z" level=info msg="cleaning up dead shim" Dec 13 06:49:45.511058 env[1189]: time="2024-12-13T06:49:45.510957276Z" level=warning msg="cleanup warnings time=\"2024-12-13T06:49:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2555 runtime=io.containerd.runc.v2\n" Dec 13 06:49:46.154714 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7255fc1aee4903a81554709be3cbf6c3601f95f422b4e4d085e19c9c328240d0-rootfs.mount: Deactivated successfully. Dec 13 06:49:46.326551 env[1189]: time="2024-12-13T06:49:46.326468406Z" level=info msg="CreateContainer within sandbox \"9edd33d684e407593a7e002934287210fff710bb812a671c9da3e763606c66a1\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 06:49:46.364085 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount101276817.mount: Deactivated successfully. 
Dec 13 06:49:46.388034 env[1189]: time="2024-12-13T06:49:46.387936205Z" level=info msg="CreateContainer within sandbox \"9edd33d684e407593a7e002934287210fff710bb812a671c9da3e763606c66a1\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"626c3aa5b8260b6528dccea2ef50d27d1bb268d51983d8a0313d096c4265aeb7\"" Dec 13 06:49:46.392043 env[1189]: time="2024-12-13T06:49:46.392003681Z" level=info msg="StartContainer for \"626c3aa5b8260b6528dccea2ef50d27d1bb268d51983d8a0313d096c4265aeb7\"" Dec 13 06:49:46.443664 systemd[1]: Started cri-containerd-626c3aa5b8260b6528dccea2ef50d27d1bb268d51983d8a0313d096c4265aeb7.scope. Dec 13 06:49:46.496460 systemd[1]: cri-containerd-626c3aa5b8260b6528dccea2ef50d27d1bb268d51983d8a0313d096c4265aeb7.scope: Deactivated successfully. Dec 13 06:49:46.501524 env[1189]: time="2024-12-13T06:49:46.501463794Z" level=info msg="StartContainer for \"626c3aa5b8260b6528dccea2ef50d27d1bb268d51983d8a0313d096c4265aeb7\" returns successfully" Dec 13 06:49:46.502233 env[1189]: time="2024-12-13T06:49:46.500516352Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9c38a062_7b3e_4f2c_97be_2b09021359c6.slice/cri-containerd-626c3aa5b8260b6528dccea2ef50d27d1bb268d51983d8a0313d096c4265aeb7.scope/memory.events\": no such file or directory" Dec 13 06:49:46.561846 env[1189]: time="2024-12-13T06:49:46.561767020Z" level=info msg="shim disconnected" id=626c3aa5b8260b6528dccea2ef50d27d1bb268d51983d8a0313d096c4265aeb7 Dec 13 06:49:46.562301 env[1189]: time="2024-12-13T06:49:46.562267789Z" level=warning msg="cleaning up after shim disconnected" id=626c3aa5b8260b6528dccea2ef50d27d1bb268d51983d8a0313d096c4265aeb7 namespace=k8s.io Dec 13 06:49:46.562484 env[1189]: time="2024-12-13T06:49:46.562453469Z" level=info msg="cleaning up dead shim" Dec 13 06:49:46.584288 env[1189]: time="2024-12-13T06:49:46.584188128Z" level=warning 
msg="cleanup warnings time=\"2024-12-13T06:49:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2612 runtime=io.containerd.runc.v2\n" Dec 13 06:49:47.340120 env[1189]: time="2024-12-13T06:49:47.340049469Z" level=info msg="CreateContainer within sandbox \"9edd33d684e407593a7e002934287210fff710bb812a671c9da3e763606c66a1\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 06:49:47.365165 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount350824953.mount: Deactivated successfully. Dec 13 06:49:47.374909 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1138221349.mount: Deactivated successfully. Dec 13 06:49:47.378538 env[1189]: time="2024-12-13T06:49:47.378470728Z" level=info msg="CreateContainer within sandbox \"9edd33d684e407593a7e002934287210fff710bb812a671c9da3e763606c66a1\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e140698a59c844365bf0eed1621267b1363fbc5cd3335e6135832ddd8e0c88e4\"" Dec 13 06:49:47.381065 env[1189]: time="2024-12-13T06:49:47.379651733Z" level=info msg="StartContainer for \"e140698a59c844365bf0eed1621267b1363fbc5cd3335e6135832ddd8e0c88e4\"" Dec 13 06:49:47.419049 systemd[1]: Started cri-containerd-e140698a59c844365bf0eed1621267b1363fbc5cd3335e6135832ddd8e0c88e4.scope. 
Dec 13 06:49:47.557645 env[1189]: time="2024-12-13T06:49:47.557545810Z" level=info msg="StartContainer for \"e140698a59c844365bf0eed1621267b1363fbc5cd3335e6135832ddd8e0c88e4\" returns successfully" Dec 13 06:49:47.623804 env[1189]: time="2024-12-13T06:49:47.623609634Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:49:47.629487 env[1189]: time="2024-12-13T06:49:47.629445275Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:49:47.633788 env[1189]: time="2024-12-13T06:49:47.633716582Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Dec 13 06:49:47.638024 env[1189]: time="2024-12-13T06:49:47.637504621Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:49:47.642686 env[1189]: time="2024-12-13T06:49:47.642639343Z" level=info msg="CreateContainer within sandbox \"976dd6c8be99eb45113d8c0ecbaeb2b94922d38cdfde3f1698c947a1b54fc762\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 13 06:49:47.682087 env[1189]: time="2024-12-13T06:49:47.682022499Z" level=info msg="CreateContainer within sandbox \"976dd6c8be99eb45113d8c0ecbaeb2b94922d38cdfde3f1698c947a1b54fc762\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id 
\"c0caea758685535e3f802359728c6fa7236d91bff7644f4f6ebc337ea2f99a86\"" Dec 13 06:49:47.683390 env[1189]: time="2024-12-13T06:49:47.683335931Z" level=info msg="StartContainer for \"c0caea758685535e3f802359728c6fa7236d91bff7644f4f6ebc337ea2f99a86\"" Dec 13 06:49:47.709627 systemd[1]: Started cri-containerd-c0caea758685535e3f802359728c6fa7236d91bff7644f4f6ebc337ea2f99a86.scope. Dec 13 06:49:47.784638 env[1189]: time="2024-12-13T06:49:47.784565426Z" level=info msg="StartContainer for \"c0caea758685535e3f802359728c6fa7236d91bff7644f4f6ebc337ea2f99a86\" returns successfully" Dec 13 06:49:47.858942 kubelet[2027]: I1213 06:49:47.858892 2027 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 06:49:47.934223 kubelet[2027]: I1213 06:49:47.934034 2027 topology_manager.go:215] "Topology Admit Handler" podUID="10adc0ac-390b-4057-977d-6862e68fdad8" podNamespace="kube-system" podName="coredns-76f75df574-lwtcf" Dec 13 06:49:47.935783 kubelet[2027]: I1213 06:49:47.935644 2027 topology_manager.go:215] "Topology Admit Handler" podUID="172b6fbb-b08f-4e26-a64c-e9b32258e448" podNamespace="kube-system" podName="coredns-76f75df574-f2rz4" Dec 13 06:49:47.951990 systemd[1]: Created slice kubepods-burstable-pod10adc0ac_390b_4057_977d_6862e68fdad8.slice. Dec 13 06:49:47.959590 systemd[1]: Created slice kubepods-burstable-pod172b6fbb_b08f_4e26_a64c_e9b32258e448.slice. 
Dec 13 06:49:48.014805 kubelet[2027]: I1213 06:49:48.014755 2027 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r265p\" (UniqueName: \"kubernetes.io/projected/172b6fbb-b08f-4e26-a64c-e9b32258e448-kube-api-access-r265p\") pod \"coredns-76f75df574-f2rz4\" (UID: \"172b6fbb-b08f-4e26-a64c-e9b32258e448\") " pod="kube-system/coredns-76f75df574-f2rz4"
Dec 13 06:49:48.015069 kubelet[2027]: I1213 06:49:48.014839 2027 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/10adc0ac-390b-4057-977d-6862e68fdad8-config-volume\") pod \"coredns-76f75df574-lwtcf\" (UID: \"10adc0ac-390b-4057-977d-6862e68fdad8\") " pod="kube-system/coredns-76f75df574-lwtcf"
Dec 13 06:49:48.015069 kubelet[2027]: I1213 06:49:48.014897 2027 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prg4h\" (UniqueName: \"kubernetes.io/projected/10adc0ac-390b-4057-977d-6862e68fdad8-kube-api-access-prg4h\") pod \"coredns-76f75df574-lwtcf\" (UID: \"10adc0ac-390b-4057-977d-6862e68fdad8\") " pod="kube-system/coredns-76f75df574-lwtcf"
Dec 13 06:49:48.015069 kubelet[2027]: I1213 06:49:48.014942 2027 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/172b6fbb-b08f-4e26-a64c-e9b32258e448-config-volume\") pod \"coredns-76f75df574-f2rz4\" (UID: \"172b6fbb-b08f-4e26-a64c-e9b32258e448\") " pod="kube-system/coredns-76f75df574-f2rz4"
Dec 13 06:49:48.259410 env[1189]: time="2024-12-13T06:49:48.258591330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-lwtcf,Uid:10adc0ac-390b-4057-977d-6862e68fdad8,Namespace:kube-system,Attempt:0,}"
Dec 13 06:49:48.265386 env[1189]: time="2024-12-13T06:49:48.265329122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-f2rz4,Uid:172b6fbb-b08f-4e26-a64c-e9b32258e448,Namespace:kube-system,Attempt:0,}"
Dec 13 06:49:48.508779 kubelet[2027]: I1213 06:49:48.508721 2027 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-8z8jt" podStartSLOduration=2.30873365 podStartE2EDuration="22.508593515s" podCreationTimestamp="2024-12-13 06:49:26 +0000 UTC" firstStartedPulling="2024-12-13 06:49:27.434545784 +0000 UTC m=+13.704574645" lastFinishedPulling="2024-12-13 06:49:47.634405643 +0000 UTC m=+33.904434510" observedRunningTime="2024-12-13 06:49:48.395460115 +0000 UTC m=+34.665488989" watchObservedRunningTime="2024-12-13 06:49:48.508593515 +0000 UTC m=+34.778622388"
Dec 13 06:49:51.738899 systemd-networkd[1028]: cilium_host: Link UP
Dec 13 06:49:51.744232 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready
Dec 13 06:49:51.744482 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Dec 13 06:49:51.744958 systemd-networkd[1028]: cilium_net: Link UP
Dec 13 06:49:51.745336 systemd-networkd[1028]: cilium_net: Gained carrier
Dec 13 06:49:51.747071 systemd-networkd[1028]: cilium_host: Gained carrier
Dec 13 06:49:51.935989 systemd-networkd[1028]: cilium_vxlan: Link UP
Dec 13 06:49:51.936011 systemd-networkd[1028]: cilium_vxlan: Gained carrier
Dec 13 06:49:52.096682 systemd-networkd[1028]: cilium_net: Gained IPv6LL
Dec 13 06:49:52.486403 kernel: NET: Registered PF_ALG protocol family
Dec 13 06:49:52.657220 systemd-networkd[1028]: cilium_host: Gained IPv6LL
Dec 13 06:49:53.425768 systemd-networkd[1028]: cilium_vxlan: Gained IPv6LL
Dec 13 06:49:53.591671 systemd-networkd[1028]: lxc_health: Link UP
Dec 13 06:49:53.600418 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Dec 13 06:49:53.601478 systemd-networkd[1028]: lxc_health: Gained carrier
Dec 13 06:49:53.883942 systemd-networkd[1028]: lxceab72992bcc0: Link UP
Dec 13 06:49:53.890395 kernel: eth0: renamed from tmp4bf66
Dec 13 06:49:53.906403 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxceab72992bcc0: link becomes ready
Dec 13 06:49:53.906607 systemd-networkd[1028]: lxceab72992bcc0: Gained carrier
Dec 13 06:49:53.927976 systemd-networkd[1028]: lxcb22377020d96: Link UP
Dec 13 06:49:53.938406 kernel: eth0: renamed from tmpebf33
Dec 13 06:49:53.953696 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcb22377020d96: link becomes ready
Dec 13 06:49:53.953217 systemd-networkd[1028]: lxcb22377020d96: Gained carrier
Dec 13 06:49:55.219734 kubelet[2027]: I1213 06:49:55.219660 2027 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-kfczg" podStartSLOduration=13.428038653 podStartE2EDuration="29.219514962s" podCreationTimestamp="2024-12-13 06:49:26 +0000 UTC" firstStartedPulling="2024-12-13 06:49:27.320308483 +0000 UTC m=+13.590337343" lastFinishedPulling="2024-12-13 06:49:43.11178478 +0000 UTC m=+29.381813652" observedRunningTime="2024-12-13 06:49:48.510323359 +0000 UTC m=+34.780352245" watchObservedRunningTime="2024-12-13 06:49:55.219514962 +0000 UTC m=+41.489543825"
Dec 13 06:49:55.349555 systemd-networkd[1028]: lxcb22377020d96: Gained IPv6LL
Dec 13 06:49:55.600716 systemd-networkd[1028]: lxc_health: Gained IPv6LL
Dec 13 06:49:55.856561 systemd-networkd[1028]: lxceab72992bcc0: Gained IPv6LL
Dec 13 06:49:59.824285 env[1189]: time="2024-12-13T06:49:59.824100250Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 06:49:59.824285 env[1189]: time="2024-12-13T06:49:59.824275267Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 06:49:59.826829 env[1189]: time="2024-12-13T06:49:59.824329097Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 06:49:59.845498 env[1189]: time="2024-12-13T06:49:59.834664764Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 06:49:59.845498 env[1189]: time="2024-12-13T06:49:59.834715345Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 06:49:59.845498 env[1189]: time="2024-12-13T06:49:59.834733106Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 06:49:59.845498 env[1189]: time="2024-12-13T06:49:59.834891522Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4bf66c55d388b00787c2cea92e5115938bc46e7431ddaad7ad2e314a5f2712bb pid=3210 runtime=io.containerd.runc.v2
Dec 13 06:49:59.845498 env[1189]: time="2024-12-13T06:49:59.832743703Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ebf33208d92079ce5fd998ccf3cdbd3ab0ed703fc5c6221ecfad72384a936173 pid=3198 runtime=io.containerd.runc.v2
Dec 13 06:49:59.894517 systemd[1]: Started cri-containerd-ebf33208d92079ce5fd998ccf3cdbd3ab0ed703fc5c6221ecfad72384a936173.scope.
Dec 13 06:49:59.925574 systemd[1]: Started cri-containerd-4bf66c55d388b00787c2cea92e5115938bc46e7431ddaad7ad2e314a5f2712bb.scope.
Dec 13 06:50:00.031108 env[1189]: time="2024-12-13T06:50:00.031020238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-lwtcf,Uid:10adc0ac-390b-4057-977d-6862e68fdad8,Namespace:kube-system,Attempt:0,} returns sandbox id \"ebf33208d92079ce5fd998ccf3cdbd3ab0ed703fc5c6221ecfad72384a936173\""
Dec 13 06:50:00.045738 env[1189]: time="2024-12-13T06:50:00.045672426Z" level=info msg="CreateContainer within sandbox \"ebf33208d92079ce5fd998ccf3cdbd3ab0ed703fc5c6221ecfad72384a936173\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 13 06:50:00.065213 env[1189]: time="2024-12-13T06:50:00.065142247Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-f2rz4,Uid:172b6fbb-b08f-4e26-a64c-e9b32258e448,Namespace:kube-system,Attempt:0,} returns sandbox id \"4bf66c55d388b00787c2cea92e5115938bc46e7431ddaad7ad2e314a5f2712bb\""
Dec 13 06:50:00.069953 env[1189]: time="2024-12-13T06:50:00.069696144Z" level=info msg="CreateContainer within sandbox \"4bf66c55d388b00787c2cea92e5115938bc46e7431ddaad7ad2e314a5f2712bb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 13 06:50:00.093685 env[1189]: time="2024-12-13T06:50:00.092873117Z" level=info msg="CreateContainer within sandbox \"ebf33208d92079ce5fd998ccf3cdbd3ab0ed703fc5c6221ecfad72384a936173\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4681b5979548a5fc55496e2ed40e034dac099b2118b9194961da040edf8a5684\""
Dec 13 06:50:00.095086 env[1189]: time="2024-12-13T06:50:00.095048649Z" level=info msg="StartContainer for \"4681b5979548a5fc55496e2ed40e034dac099b2118b9194961da040edf8a5684\""
Dec 13 06:50:00.108317 env[1189]: time="2024-12-13T06:50:00.108256773Z" level=info msg="CreateContainer within sandbox \"4bf66c55d388b00787c2cea92e5115938bc46e7431ddaad7ad2e314a5f2712bb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ed14440724d407b618ec9597e161e4811aa7ee3b3018ba8e1614b6d6ce12be57\""
Dec 13 06:50:00.109217 env[1189]: time="2024-12-13T06:50:00.108991843Z" level=info msg="StartContainer for \"ed14440724d407b618ec9597e161e4811aa7ee3b3018ba8e1614b6d6ce12be57\""
Dec 13 06:50:00.132490 systemd[1]: Started cri-containerd-4681b5979548a5fc55496e2ed40e034dac099b2118b9194961da040edf8a5684.scope.
Dec 13 06:50:00.153800 systemd[1]: Started cri-containerd-ed14440724d407b618ec9597e161e4811aa7ee3b3018ba8e1614b6d6ce12be57.scope.
Dec 13 06:50:00.244497 env[1189]: time="2024-12-13T06:50:00.244434082Z" level=info msg="StartContainer for \"ed14440724d407b618ec9597e161e4811aa7ee3b3018ba8e1614b6d6ce12be57\" returns successfully"
Dec 13 06:50:00.246114 env[1189]: time="2024-12-13T06:50:00.246012292Z" level=info msg="StartContainer for \"4681b5979548a5fc55496e2ed40e034dac099b2118b9194961da040edf8a5684\" returns successfully"
Dec 13 06:50:00.434339 kubelet[2027]: I1213 06:50:00.434198 2027 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-f2rz4" podStartSLOduration=34.434118341 podStartE2EDuration="34.434118341s" podCreationTimestamp="2024-12-13 06:49:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 06:50:00.43042476 +0000 UTC m=+46.700453636" watchObservedRunningTime="2024-12-13 06:50:00.434118341 +0000 UTC m=+46.704147231"
Dec 13 06:50:00.451338 kubelet[2027]: I1213 06:50:00.451282 2027 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-lwtcf" podStartSLOduration=34.451227368 podStartE2EDuration="34.451227368s" podCreationTimestamp="2024-12-13 06:49:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 06:50:00.450323728 +0000 UTC m=+46.720352598" watchObservedRunningTime="2024-12-13 06:50:00.451227368 +0000 UTC m=+46.721256231"
Dec 13 06:50:39.107401 systemd[1]: Started sshd@5-10.230.20.2:22-139.178.89.65:53148.service.
Dec 13 06:50:40.025197 sshd[3363]: Accepted publickey for core from 139.178.89.65 port 53148 ssh2: RSA SHA256:dQnQ6z9Pj/RNX8sNR4TqdGn8nHqynNIoEP6sXMH78jY
Dec 13 06:50:40.029622 sshd[3363]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 06:50:40.040401 systemd-logind[1180]: New session 6 of user core.
Dec 13 06:50:40.040633 systemd[1]: Started session-6.scope.
Dec 13 06:50:40.871153 sshd[3363]: pam_unix(sshd:session): session closed for user core
Dec 13 06:50:40.876265 systemd[1]: sshd@5-10.230.20.2:22-139.178.89.65:53148.service: Deactivated successfully.
Dec 13 06:50:40.877544 systemd[1]: session-6.scope: Deactivated successfully.
Dec 13 06:50:40.878648 systemd-logind[1180]: Session 6 logged out. Waiting for processes to exit.
Dec 13 06:50:40.879994 systemd-logind[1180]: Removed session 6.
Dec 13 06:50:46.019374 systemd[1]: Started sshd@6-10.230.20.2:22-139.178.89.65:53158.service.
Dec 13 06:50:46.910079 sshd[3375]: Accepted publickey for core from 139.178.89.65 port 53158 ssh2: RSA SHA256:dQnQ6z9Pj/RNX8sNR4TqdGn8nHqynNIoEP6sXMH78jY
Dec 13 06:50:46.912464 sshd[3375]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 06:50:46.921189 systemd-logind[1180]: New session 7 of user core.
Dec 13 06:50:46.922103 systemd[1]: Started session-7.scope.
Dec 13 06:50:47.664891 sshd[3375]: pam_unix(sshd:session): session closed for user core
Dec 13 06:50:47.670665 systemd[1]: sshd@6-10.230.20.2:22-139.178.89.65:53158.service: Deactivated successfully.
Dec 13 06:50:47.671963 systemd[1]: session-7.scope: Deactivated successfully.
Dec 13 06:50:47.672926 systemd-logind[1180]: Session 7 logged out. Waiting for processes to exit.
Dec 13 06:50:47.674637 systemd-logind[1180]: Removed session 7.
Dec 13 06:50:52.815485 systemd[1]: Started sshd@7-10.230.20.2:22-139.178.89.65:49838.service.
Dec 13 06:50:53.707581 sshd[3387]: Accepted publickey for core from 139.178.89.65 port 49838 ssh2: RSA SHA256:dQnQ6z9Pj/RNX8sNR4TqdGn8nHqynNIoEP6sXMH78jY
Dec 13 06:50:53.709724 sshd[3387]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 06:50:53.717160 systemd-logind[1180]: New session 8 of user core.
Dec 13 06:50:53.718150 systemd[1]: Started session-8.scope.
Dec 13 06:50:54.430699 sshd[3387]: pam_unix(sshd:session): session closed for user core
Dec 13 06:50:54.435534 systemd[1]: sshd@7-10.230.20.2:22-139.178.89.65:49838.service: Deactivated successfully.
Dec 13 06:50:54.436928 systemd[1]: session-8.scope: Deactivated successfully.
Dec 13 06:50:54.437970 systemd-logind[1180]: Session 8 logged out. Waiting for processes to exit.
Dec 13 06:50:54.439489 systemd-logind[1180]: Removed session 8.
Dec 13 06:50:59.581892 systemd[1]: Started sshd@8-10.230.20.2:22-139.178.89.65:36872.service.
Dec 13 06:51:00.486452 sshd[3402]: Accepted publickey for core from 139.178.89.65 port 36872 ssh2: RSA SHA256:dQnQ6z9Pj/RNX8sNR4TqdGn8nHqynNIoEP6sXMH78jY
Dec 13 06:51:00.488338 sshd[3402]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 06:51:00.496289 systemd[1]: Started session-9.scope.
Dec 13 06:51:00.498215 systemd-logind[1180]: New session 9 of user core.
Dec 13 06:51:01.232955 sshd[3402]: pam_unix(sshd:session): session closed for user core
Dec 13 06:51:01.237346 systemd[1]: sshd@8-10.230.20.2:22-139.178.89.65:36872.service: Deactivated successfully.
Dec 13 06:51:01.238724 systemd[1]: session-9.scope: Deactivated successfully.
Dec 13 06:51:01.239945 systemd-logind[1180]: Session 9 logged out. Waiting for processes to exit.
Dec 13 06:51:01.241598 systemd-logind[1180]: Removed session 9.
Dec 13 06:51:01.380765 systemd[1]: Started sshd@9-10.230.20.2:22-139.178.89.65:36882.service.
Dec 13 06:51:02.278471 sshd[3415]: Accepted publickey for core from 139.178.89.65 port 36882 ssh2: RSA SHA256:dQnQ6z9Pj/RNX8sNR4TqdGn8nHqynNIoEP6sXMH78jY
Dec 13 06:51:02.281594 sshd[3415]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 06:51:02.293430 systemd-logind[1180]: New session 10 of user core.
Dec 13 06:51:02.294704 systemd[1]: Started session-10.scope.
Dec 13 06:51:03.093965 sshd[3415]: pam_unix(sshd:session): session closed for user core
Dec 13 06:51:03.099654 systemd[1]: sshd@9-10.230.20.2:22-139.178.89.65:36882.service: Deactivated successfully.
Dec 13 06:51:03.101122 systemd[1]: session-10.scope: Deactivated successfully.
Dec 13 06:51:03.102743 systemd-logind[1180]: Session 10 logged out. Waiting for processes to exit.
Dec 13 06:51:03.103984 systemd-logind[1180]: Removed session 10.
Dec 13 06:51:03.243899 systemd[1]: Started sshd@10-10.230.20.2:22-139.178.89.65:36884.service.
Dec 13 06:51:04.151003 sshd[3425]: Accepted publickey for core from 139.178.89.65 port 36884 ssh2: RSA SHA256:dQnQ6z9Pj/RNX8sNR4TqdGn8nHqynNIoEP6sXMH78jY
Dec 13 06:51:04.153043 sshd[3425]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 06:51:04.160958 systemd[1]: Started session-11.scope.
Dec 13 06:51:04.161529 systemd-logind[1180]: New session 11 of user core.
Dec 13 06:51:04.945202 sshd[3425]: pam_unix(sshd:session): session closed for user core
Dec 13 06:51:04.949943 systemd-logind[1180]: Session 11 logged out. Waiting for processes to exit.
Dec 13 06:51:04.952013 systemd[1]: sshd@10-10.230.20.2:22-139.178.89.65:36884.service: Deactivated successfully.
Dec 13 06:51:04.953083 systemd[1]: session-11.scope: Deactivated successfully.
Dec 13 06:51:04.954531 systemd-logind[1180]: Removed session 11.
Dec 13 06:51:10.095250 systemd[1]: Started sshd@11-10.230.20.2:22-139.178.89.65:57552.service.
Dec 13 06:51:10.996767 sshd[3437]: Accepted publickey for core from 139.178.89.65 port 57552 ssh2: RSA SHA256:dQnQ6z9Pj/RNX8sNR4TqdGn8nHqynNIoEP6sXMH78jY
Dec 13 06:51:10.999589 sshd[3437]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 06:51:11.008041 systemd-logind[1180]: New session 12 of user core.
Dec 13 06:51:11.008976 systemd[1]: Started session-12.scope.
Dec 13 06:51:11.703725 sshd[3437]: pam_unix(sshd:session): session closed for user core
Dec 13 06:51:11.707500 systemd[1]: sshd@11-10.230.20.2:22-139.178.89.65:57552.service: Deactivated successfully.
Dec 13 06:51:11.708556 systemd[1]: session-12.scope: Deactivated successfully.
Dec 13 06:51:11.709673 systemd-logind[1180]: Session 12 logged out. Waiting for processes to exit.
Dec 13 06:51:11.711190 systemd-logind[1180]: Removed session 12.
Dec 13 06:51:16.852233 systemd[1]: Started sshd@12-10.230.20.2:22-139.178.89.65:57566.service.
Dec 13 06:51:17.742959 sshd[3451]: Accepted publickey for core from 139.178.89.65 port 57566 ssh2: RSA SHA256:dQnQ6z9Pj/RNX8sNR4TqdGn8nHqynNIoEP6sXMH78jY
Dec 13 06:51:17.745312 sshd[3451]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 06:51:17.753286 systemd-logind[1180]: New session 13 of user core.
Dec 13 06:51:17.753747 systemd[1]: Started session-13.scope.
Dec 13 06:51:18.467169 sshd[3451]: pam_unix(sshd:session): session closed for user core
Dec 13 06:51:18.471989 systemd-logind[1180]: Session 13 logged out. Waiting for processes to exit.
Dec 13 06:51:18.472533 systemd[1]: sshd@12-10.230.20.2:22-139.178.89.65:57566.service: Deactivated successfully.
Dec 13 06:51:18.473611 systemd[1]: session-13.scope: Deactivated successfully.
Dec 13 06:51:18.474710 systemd-logind[1180]: Removed session 13.
Dec 13 06:51:18.614941 systemd[1]: Started sshd@13-10.230.20.2:22-139.178.89.65:42854.service.
Dec 13 06:51:19.506274 sshd[3462]: Accepted publickey for core from 139.178.89.65 port 42854 ssh2: RSA SHA256:dQnQ6z9Pj/RNX8sNR4TqdGn8nHqynNIoEP6sXMH78jY
Dec 13 06:51:19.508827 sshd[3462]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 06:51:19.516245 systemd-logind[1180]: New session 14 of user core.
Dec 13 06:51:19.516343 systemd[1]: Started session-14.scope.
Dec 13 06:51:20.566582 sshd[3462]: pam_unix(sshd:session): session closed for user core
Dec 13 06:51:20.575453 systemd[1]: sshd@13-10.230.20.2:22-139.178.89.65:42854.service: Deactivated successfully.
Dec 13 06:51:20.576731 systemd[1]: session-14.scope: Deactivated successfully.
Dec 13 06:51:20.579025 systemd-logind[1180]: Session 14 logged out. Waiting for processes to exit.
Dec 13 06:51:20.580862 systemd-logind[1180]: Removed session 14.
Dec 13 06:51:20.721126 systemd[1]: Started sshd@14-10.230.20.2:22-139.178.89.65:42864.service.
Dec 13 06:51:21.626821 sshd[3472]: Accepted publickey for core from 139.178.89.65 port 42864 ssh2: RSA SHA256:dQnQ6z9Pj/RNX8sNR4TqdGn8nHqynNIoEP6sXMH78jY
Dec 13 06:51:21.629404 sshd[3472]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 06:51:21.639096 systemd-logind[1180]: New session 15 of user core.
Dec 13 06:51:21.640082 systemd[1]: Started session-15.scope.
Dec 13 06:51:24.721447 sshd[3472]: pam_unix(sshd:session): session closed for user core
Dec 13 06:51:24.728585 systemd[1]: sshd@14-10.230.20.2:22-139.178.89.65:42864.service: Deactivated successfully.
Dec 13 06:51:24.731028 systemd[1]: session-15.scope: Deactivated successfully.
Dec 13 06:51:24.732677 systemd-logind[1180]: Session 15 logged out. Waiting for processes to exit.
Dec 13 06:51:24.734436 systemd-logind[1180]: Removed session 15.
Dec 13 06:51:24.867641 systemd[1]: Started sshd@15-10.230.20.2:22-139.178.89.65:42870.service.
Dec 13 06:51:25.751192 sshd[3489]: Accepted publickey for core from 139.178.89.65 port 42870 ssh2: RSA SHA256:dQnQ6z9Pj/RNX8sNR4TqdGn8nHqynNIoEP6sXMH78jY
Dec 13 06:51:25.754038 sshd[3489]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 06:51:25.762630 systemd[1]: Started session-16.scope.
Dec 13 06:51:25.764096 systemd-logind[1180]: New session 16 of user core.
Dec 13 06:51:26.706946 sshd[3489]: pam_unix(sshd:session): session closed for user core
Dec 13 06:51:26.711082 systemd[1]: sshd@15-10.230.20.2:22-139.178.89.65:42870.service: Deactivated successfully.
Dec 13 06:51:26.712258 systemd[1]: session-16.scope: Deactivated successfully.
Dec 13 06:51:26.713174 systemd-logind[1180]: Session 16 logged out. Waiting for processes to exit.
Dec 13 06:51:26.714870 systemd-logind[1180]: Removed session 16.
Dec 13 06:51:26.857414 systemd[1]: Started sshd@16-10.230.20.2:22-139.178.89.65:42872.service.
Dec 13 06:51:27.742860 sshd[3499]: Accepted publickey for core from 139.178.89.65 port 42872 ssh2: RSA SHA256:dQnQ6z9Pj/RNX8sNR4TqdGn8nHqynNIoEP6sXMH78jY
Dec 13 06:51:27.745148 sshd[3499]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 06:51:27.753242 systemd-logind[1180]: New session 17 of user core.
Dec 13 06:51:27.753951 systemd[1]: Started session-17.scope.
Dec 13 06:51:28.462099 sshd[3499]: pam_unix(sshd:session): session closed for user core
Dec 13 06:51:28.466185 systemd-logind[1180]: Session 17 logged out. Waiting for processes to exit.
Dec 13 06:51:28.466830 systemd[1]: sshd@16-10.230.20.2:22-139.178.89.65:42872.service: Deactivated successfully.
Dec 13 06:51:28.468006 systemd[1]: session-17.scope: Deactivated successfully.
Dec 13 06:51:28.470490 systemd-logind[1180]: Removed session 17.
Dec 13 06:51:33.620788 systemd[1]: Started sshd@17-10.230.20.2:22-139.178.89.65:53522.service.
Dec 13 06:51:34.505500 sshd[3516]: Accepted publickey for core from 139.178.89.65 port 53522 ssh2: RSA SHA256:dQnQ6z9Pj/RNX8sNR4TqdGn8nHqynNIoEP6sXMH78jY
Dec 13 06:51:34.507756 sshd[3516]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 06:51:34.516125 systemd[1]: Started session-18.scope.
Dec 13 06:51:34.517460 systemd-logind[1180]: New session 18 of user core.
Dec 13 06:51:35.216291 sshd[3516]: pam_unix(sshd:session): session closed for user core
Dec 13 06:51:35.221756 systemd[1]: sshd@17-10.230.20.2:22-139.178.89.65:53522.service: Deactivated successfully.
Dec 13 06:51:35.223094 systemd[1]: session-18.scope: Deactivated successfully.
Dec 13 06:51:35.224463 systemd-logind[1180]: Session 18 logged out. Waiting for processes to exit.
Dec 13 06:51:35.226285 systemd-logind[1180]: Removed session 18.
Dec 13 06:51:40.368298 systemd[1]: Started sshd@18-10.230.20.2:22-139.178.89.65:36642.service.
Dec 13 06:51:41.267573 sshd[3528]: Accepted publickey for core from 139.178.89.65 port 36642 ssh2: RSA SHA256:dQnQ6z9Pj/RNX8sNR4TqdGn8nHqynNIoEP6sXMH78jY
Dec 13 06:51:41.269667 sshd[3528]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 06:51:41.277859 systemd[1]: Started session-19.scope.
Dec 13 06:51:41.278743 systemd-logind[1180]: New session 19 of user core.
Dec 13 06:51:41.982432 sshd[3528]: pam_unix(sshd:session): session closed for user core
Dec 13 06:51:41.987549 systemd[1]: sshd@18-10.230.20.2:22-139.178.89.65:36642.service: Deactivated successfully.
Dec 13 06:51:41.987604 systemd-logind[1180]: Session 19 logged out. Waiting for processes to exit.
Dec 13 06:51:41.988874 systemd[1]: session-19.scope: Deactivated successfully.
Dec 13 06:51:41.990338 systemd-logind[1180]: Removed session 19.
Dec 13 06:51:47.130549 systemd[1]: Started sshd@19-10.230.20.2:22-139.178.89.65:36658.service.
Dec 13 06:51:48.018839 sshd[3539]: Accepted publickey for core from 139.178.89.65 port 36658 ssh2: RSA SHA256:dQnQ6z9Pj/RNX8sNR4TqdGn8nHqynNIoEP6sXMH78jY
Dec 13 06:51:48.021490 sshd[3539]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 06:51:48.028460 systemd-logind[1180]: New session 20 of user core.
Dec 13 06:51:48.029895 systemd[1]: Started session-20.scope.
Dec 13 06:51:48.749846 sshd[3539]: pam_unix(sshd:session): session closed for user core
Dec 13 06:51:48.754449 systemd-logind[1180]: Session 20 logged out. Waiting for processes to exit.
Dec 13 06:51:48.757201 systemd[1]: sshd@19-10.230.20.2:22-139.178.89.65:36658.service: Deactivated successfully.
Dec 13 06:51:48.758313 systemd[1]: session-20.scope: Deactivated successfully.
Dec 13 06:51:48.760077 systemd-logind[1180]: Removed session 20.
Dec 13 06:51:48.899446 systemd[1]: Started sshd@20-10.230.20.2:22-139.178.89.65:49882.service.
Dec 13 06:51:49.795982 sshd[3550]: Accepted publickey for core from 139.178.89.65 port 49882 ssh2: RSA SHA256:dQnQ6z9Pj/RNX8sNR4TqdGn8nHqynNIoEP6sXMH78jY
Dec 13 06:51:49.798008 sshd[3550]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 06:51:49.806167 systemd[1]: Started session-21.scope.
Dec 13 06:51:49.807427 systemd-logind[1180]: New session 21 of user core.
Dec 13 06:51:52.545827 env[1189]: time="2024-12-13T06:51:52.545682429Z" level=info msg="StopContainer for \"c0caea758685535e3f802359728c6fa7236d91bff7644f4f6ebc337ea2f99a86\" with timeout 30 (s)"
Dec 13 06:51:52.554392 env[1189]: time="2024-12-13T06:51:52.554313827Z" level=info msg="Stop container \"c0caea758685535e3f802359728c6fa7236d91bff7644f4f6ebc337ea2f99a86\" with signal terminated"
Dec 13 06:51:52.572812 systemd[1]: run-containerd-runc-k8s.io-e140698a59c844365bf0eed1621267b1363fbc5cd3335e6135832ddd8e0c88e4-runc.r8Slg5.mount: Deactivated successfully.
Dec 13 06:51:52.609450 systemd[1]: cri-containerd-c0caea758685535e3f802359728c6fa7236d91bff7644f4f6ebc337ea2f99a86.scope: Deactivated successfully.
Dec 13 06:51:52.637661 env[1189]: time="2024-12-13T06:51:52.637533883Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 06:51:52.655417 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c0caea758685535e3f802359728c6fa7236d91bff7644f4f6ebc337ea2f99a86-rootfs.mount: Deactivated successfully.
Dec 13 06:51:52.660656 env[1189]: time="2024-12-13T06:51:52.660357461Z" level=info msg="StopContainer for \"e140698a59c844365bf0eed1621267b1363fbc5cd3335e6135832ddd8e0c88e4\" with timeout 2 (s)"
Dec 13 06:51:52.661375 env[1189]: time="2024-12-13T06:51:52.661322584Z" level=info msg="Stop container \"e140698a59c844365bf0eed1621267b1363fbc5cd3335e6135832ddd8e0c88e4\" with signal terminated"
Dec 13 06:51:52.662581 env[1189]: time="2024-12-13T06:51:52.662519960Z" level=info msg="shim disconnected" id=c0caea758685535e3f802359728c6fa7236d91bff7644f4f6ebc337ea2f99a86
Dec 13 06:51:52.662680 env[1189]: time="2024-12-13T06:51:52.662596566Z" level=warning msg="cleaning up after shim disconnected" id=c0caea758685535e3f802359728c6fa7236d91bff7644f4f6ebc337ea2f99a86 namespace=k8s.io
Dec 13 06:51:52.662680 env[1189]: time="2024-12-13T06:51:52.662623673Z" level=info msg="cleaning up dead shim"
Dec 13 06:51:52.682581 systemd-networkd[1028]: lxc_health: Link DOWN
Dec 13 06:51:52.682600 systemd-networkd[1028]: lxc_health: Lost carrier
Dec 13 06:51:52.711452 env[1189]: time="2024-12-13T06:51:52.702910536Z" level=warning msg="cleanup warnings time=\"2024-12-13T06:51:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3597 runtime=io.containerd.runc.v2\n"
Dec 13 06:51:52.712468 env[1189]: time="2024-12-13T06:51:52.711873029Z" level=info msg="StopContainer for \"c0caea758685535e3f802359728c6fa7236d91bff7644f4f6ebc337ea2f99a86\" returns successfully"
Dec 13 06:51:52.716838 env[1189]: time="2024-12-13T06:51:52.716555068Z" level=info msg="StopPodSandbox for \"976dd6c8be99eb45113d8c0ecbaeb2b94922d38cdfde3f1698c947a1b54fc762\""
Dec 13 06:51:52.716838 env[1189]: time="2024-12-13T06:51:52.716691435Z" level=info msg="Container to stop \"c0caea758685535e3f802359728c6fa7236d91bff7644f4f6ebc337ea2f99a86\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 06:51:52.719690 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-976dd6c8be99eb45113d8c0ecbaeb2b94922d38cdfde3f1698c947a1b54fc762-shm.mount: Deactivated successfully.
Dec 13 06:51:52.730357 systemd[1]: cri-containerd-e140698a59c844365bf0eed1621267b1363fbc5cd3335e6135832ddd8e0c88e4.scope: Deactivated successfully.
Dec 13 06:51:52.730786 systemd[1]: cri-containerd-e140698a59c844365bf0eed1621267b1363fbc5cd3335e6135832ddd8e0c88e4.scope: Consumed 10.333s CPU time.
Dec 13 06:51:52.746261 systemd[1]: cri-containerd-976dd6c8be99eb45113d8c0ecbaeb2b94922d38cdfde3f1698c947a1b54fc762.scope: Deactivated successfully.
Dec 13 06:51:52.785869 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e140698a59c844365bf0eed1621267b1363fbc5cd3335e6135832ddd8e0c88e4-rootfs.mount: Deactivated successfully.
Dec 13 06:51:52.800029 env[1189]: time="2024-12-13T06:51:52.799799361Z" level=info msg="shim disconnected" id=e140698a59c844365bf0eed1621267b1363fbc5cd3335e6135832ddd8e0c88e4
Dec 13 06:51:52.800029 env[1189]: time="2024-12-13T06:51:52.799865802Z" level=warning msg="cleaning up after shim disconnected" id=e140698a59c844365bf0eed1621267b1363fbc5cd3335e6135832ddd8e0c88e4 namespace=k8s.io
Dec 13 06:51:52.800029 env[1189]: time="2024-12-13T06:51:52.799883341Z" level=info msg="cleaning up dead shim"
Dec 13 06:51:52.815237 env[1189]: time="2024-12-13T06:51:52.815170563Z" level=info msg="shim disconnected" id=976dd6c8be99eb45113d8c0ecbaeb2b94922d38cdfde3f1698c947a1b54fc762
Dec 13 06:51:52.816646 env[1189]: time="2024-12-13T06:51:52.816613259Z" level=warning msg="cleaning up after shim disconnected" id=976dd6c8be99eb45113d8c0ecbaeb2b94922d38cdfde3f1698c947a1b54fc762 namespace=k8s.io
Dec 13 06:51:52.816844 env[1189]: time="2024-12-13T06:51:52.816814545Z" level=info msg="cleaning up dead shim"
Dec 13 06:51:52.820385 env[1189]: time="2024-12-13T06:51:52.820327947Z" level=warning msg="cleanup warnings time=\"2024-12-13T06:51:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3650 runtime=io.containerd.runc.v2\n"
Dec 13 06:51:52.822265 env[1189]: time="2024-12-13T06:51:52.822220460Z" level=info msg="StopContainer for \"e140698a59c844365bf0eed1621267b1363fbc5cd3335e6135832ddd8e0c88e4\" returns successfully"
Dec 13 06:51:52.823013 env[1189]: time="2024-12-13T06:51:52.822962971Z" level=info msg="StopPodSandbox for \"9edd33d684e407593a7e002934287210fff710bb812a671c9da3e763606c66a1\""
Dec 13 06:51:52.823152 env[1189]: time="2024-12-13T06:51:52.823053034Z" level=info msg="Container to stop \"eb26b6a4682b531140187ec7dd8142de65e10367024d637314e38948f75f7682\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 06:51:52.823152 env[1189]: time="2024-12-13T06:51:52.823080658Z" level=info msg="Container to stop \"924416b91c8e62aae005e38f6f90282de282c1de0ea3fa3d79755cb2391e3309\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 06:51:52.823152 env[1189]: time="2024-12-13T06:51:52.823098553Z" level=info msg="Container to stop \"7255fc1aee4903a81554709be3cbf6c3601f95f422b4e4d085e19c9c328240d0\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 06:51:52.823152 env[1189]: time="2024-12-13T06:51:52.823117156Z" level=info msg="Container to stop \"626c3aa5b8260b6528dccea2ef50d27d1bb268d51983d8a0313d096c4265aeb7\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 06:51:52.823152 env[1189]: time="2024-12-13T06:51:52.823143641Z" level=info msg="Container to stop \"e140698a59c844365bf0eed1621267b1363fbc5cd3335e6135832ddd8e0c88e4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 06:51:52.835877 systemd[1]: cri-containerd-9edd33d684e407593a7e002934287210fff710bb812a671c9da3e763606c66a1.scope: Deactivated successfully.
Dec 13 06:51:52.840655 env[1189]: time="2024-12-13T06:51:52.840605326Z" level=warning msg="cleanup warnings time=\"2024-12-13T06:51:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3660 runtime=io.containerd.runc.v2\n"
Dec 13 06:51:52.841813 env[1189]: time="2024-12-13T06:51:52.841773494Z" level=info msg="TearDown network for sandbox \"976dd6c8be99eb45113d8c0ecbaeb2b94922d38cdfde3f1698c947a1b54fc762\" successfully"
Dec 13 06:51:52.841920 env[1189]: time="2024-12-13T06:51:52.841812529Z" level=info msg="StopPodSandbox for \"976dd6c8be99eb45113d8c0ecbaeb2b94922d38cdfde3f1698c947a1b54fc762\" returns successfully"
Dec 13 06:51:52.900895 env[1189]: time="2024-12-13T06:51:52.900827029Z" level=info msg="shim disconnected" id=9edd33d684e407593a7e002934287210fff710bb812a671c9da3e763606c66a1
Dec 13 06:51:52.900895 env[1189]: time="2024-12-13T06:51:52.900905952Z" level=warning msg="cleaning up after shim disconnected" id=9edd33d684e407593a7e002934287210fff710bb812a671c9da3e763606c66a1 namespace=k8s.io
Dec 13 06:51:52.900895 env[1189]: time="2024-12-13T06:51:52.900924883Z" level=info msg="cleaning up dead shim"
Dec 13 06:51:52.914217 env[1189]: time="2024-12-13T06:51:52.914097108Z" level=warning msg="cleanup warnings time=\"2024-12-13T06:51:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3694 runtime=io.containerd.runc.v2\n"
Dec 13 06:51:52.915089 env[1189]: time="2024-12-13T06:51:52.915020870Z" level=info msg="TearDown network for sandbox \"9edd33d684e407593a7e002934287210fff710bb812a671c9da3e763606c66a1\" successfully"
Dec 13 06:51:52.915202 env[1189]: time="2024-12-13T06:51:52.915073019Z" level=info msg="StopPodSandbox for \"9edd33d684e407593a7e002934287210fff710bb812a671c9da3e763606c66a1\" returns successfully"
Dec 13 06:51:52.991251 kubelet[2027]: I1213 06:51:52.991186 2027 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/444136c7-4d61-4e0e-ac32-e73836b1806f-cilium-config-path\") pod \"444136c7-4d61-4e0e-ac32-e73836b1806f\" (UID: \"444136c7-4d61-4e0e-ac32-e73836b1806f\") "
Dec 13 06:51:52.991972 kubelet[2027]: I1213 06:51:52.991274 2027 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5qbmz\" (UniqueName: \"kubernetes.io/projected/444136c7-4d61-4e0e-ac32-e73836b1806f-kube-api-access-5qbmz\") pod \"444136c7-4d61-4e0e-ac32-e73836b1806f\" (UID: \"444136c7-4d61-4e0e-ac32-e73836b1806f\") "
Dec 13 06:51:53.004302 kubelet[2027]: I1213 06:51:53.001731 2027 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/444136c7-4d61-4e0e-ac32-e73836b1806f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "444136c7-4d61-4e0e-ac32-e73836b1806f" (UID: "444136c7-4d61-4e0e-ac32-e73836b1806f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 06:51:53.007735 kubelet[2027]: I1213 06:51:53.007688 2027 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/444136c7-4d61-4e0e-ac32-e73836b1806f-kube-api-access-5qbmz" (OuterVolumeSpecName: "kube-api-access-5qbmz") pod "444136c7-4d61-4e0e-ac32-e73836b1806f" (UID: "444136c7-4d61-4e0e-ac32-e73836b1806f"). InnerVolumeSpecName "kube-api-access-5qbmz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 06:51:53.091811 kubelet[2027]: I1213 06:51:53.091766 2027 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9c38a062-7b3e-4f2c-97be-2b09021359c6-bpf-maps\") pod \"9c38a062-7b3e-4f2c-97be-2b09021359c6\" (UID: \"9c38a062-7b3e-4f2c-97be-2b09021359c6\") "
Dec 13 06:51:53.092149 kubelet[2027]: I1213 06:51:53.092122 2027 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9c38a062-7b3e-4f2c-97be-2b09021359c6-host-proc-sys-kernel\") pod \"9c38a062-7b3e-4f2c-97be-2b09021359c6\" (UID: \"9c38a062-7b3e-4f2c-97be-2b09021359c6\") "
Dec 13 06:51:53.092458 kubelet[2027]: I1213 06:51:53.092434 2027 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9c38a062-7b3e-4f2c-97be-2b09021359c6-cilium-config-path\") pod \"9c38a062-7b3e-4f2c-97be-2b09021359c6\" (UID: \"9c38a062-7b3e-4f2c-97be-2b09021359c6\") "
Dec 13 06:51:53.092765 kubelet[2027]: I1213 06:51:53.092741 2027 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9c38a062-7b3e-4f2c-97be-2b09021359c6-lib-modules\") pod \"9c38a062-7b3e-4f2c-97be-2b09021359c6\" (UID: \"9c38a062-7b3e-4f2c-97be-2b09021359c6\") "
Dec 13 06:51:53.092935 kubelet[2027]: I1213 06:51:53.092910 2027 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9c38a062-7b3e-4f2c-97be-2b09021359c6-xtables-lock\") pod \"9c38a062-7b3e-4f2c-97be-2b09021359c6\" (UID: \"9c38a062-7b3e-4f2c-97be-2b09021359c6\") "
Dec 13 06:51:53.093122 kubelet[2027]: I1213 06:51:53.093083 2027 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c38a062-7b3e-4f2c-97be-2b09021359c6-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "9c38a062-7b3e-4f2c-97be-2b09021359c6" (UID: "9c38a062-7b3e-4f2c-97be-2b09021359c6"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 06:51:53.093226 kubelet[2027]: I1213 06:51:53.092605 2027 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c38a062-7b3e-4f2c-97be-2b09021359c6-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "9c38a062-7b3e-4f2c-97be-2b09021359c6" (UID: "9c38a062-7b3e-4f2c-97be-2b09021359c6"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 06:51:53.093226 kubelet[2027]: I1213 06:51:53.093195 2027 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c38a062-7b3e-4f2c-97be-2b09021359c6-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "9c38a062-7b3e-4f2c-97be-2b09021359c6" (UID: "9c38a062-7b3e-4f2c-97be-2b09021359c6"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 06:51:53.093388 kubelet[2027]: I1213 06:51:53.093083 2027 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c38a062-7b3e-4f2c-97be-2b09021359c6-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "9c38a062-7b3e-4f2c-97be-2b09021359c6" (UID: "9c38a062-7b3e-4f2c-97be-2b09021359c6"). InnerVolumeSpecName "bpf-maps".
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 06:51:53.093770 kubelet[2027]: I1213 06:51:53.093745 2027 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9c38a062-7b3e-4f2c-97be-2b09021359c6-cilium-run\") pod \"9c38a062-7b3e-4f2c-97be-2b09021359c6\" (UID: \"9c38a062-7b3e-4f2c-97be-2b09021359c6\") " Dec 13 06:51:53.094754 kubelet[2027]: I1213 06:51:53.094728 2027 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9c38a062-7b3e-4f2c-97be-2b09021359c6-hostproc\") pod \"9c38a062-7b3e-4f2c-97be-2b09021359c6\" (UID: \"9c38a062-7b3e-4f2c-97be-2b09021359c6\") " Dec 13 06:51:53.094962 kubelet[2027]: I1213 06:51:53.094940 2027 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9c38a062-7b3e-4f2c-97be-2b09021359c6-etc-cni-netd\") pod \"9c38a062-7b3e-4f2c-97be-2b09021359c6\" (UID: \"9c38a062-7b3e-4f2c-97be-2b09021359c6\") " Dec 13 06:51:53.098481 kubelet[2027]: I1213 06:51:53.098448 2027 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9c38a062-7b3e-4f2c-97be-2b09021359c6-hubble-tls\") pod \"9c38a062-7b3e-4f2c-97be-2b09021359c6\" (UID: \"9c38a062-7b3e-4f2c-97be-2b09021359c6\") " Dec 13 06:51:53.098719 kubelet[2027]: I1213 06:51:53.094892 2027 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c38a062-7b3e-4f2c-97be-2b09021359c6-hostproc" (OuterVolumeSpecName: "hostproc") pod "9c38a062-7b3e-4f2c-97be-2b09021359c6" (UID: "9c38a062-7b3e-4f2c-97be-2b09021359c6"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 06:51:53.098833 kubelet[2027]: I1213 06:51:53.095685 2027 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c38a062-7b3e-4f2c-97be-2b09021359c6-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "9c38a062-7b3e-4f2c-97be-2b09021359c6" (UID: "9c38a062-7b3e-4f2c-97be-2b09021359c6"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 06:51:53.099076 kubelet[2027]: I1213 06:51:53.093901 2027 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c38a062-7b3e-4f2c-97be-2b09021359c6-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "9c38a062-7b3e-4f2c-97be-2b09021359c6" (UID: "9c38a062-7b3e-4f2c-97be-2b09021359c6"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 06:51:53.099188 kubelet[2027]: I1213 06:51:53.098348 2027 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c38a062-7b3e-4f2c-97be-2b09021359c6-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9c38a062-7b3e-4f2c-97be-2b09021359c6" (UID: "9c38a062-7b3e-4f2c-97be-2b09021359c6"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 06:51:53.099396 kubelet[2027]: I1213 06:51:53.098990 2027 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9c38a062-7b3e-4f2c-97be-2b09021359c6-host-proc-sys-net\") pod \"9c38a062-7b3e-4f2c-97be-2b09021359c6\" (UID: \"9c38a062-7b3e-4f2c-97be-2b09021359c6\") " Dec 13 06:51:53.099610 kubelet[2027]: I1213 06:51:53.099534 2027 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9c38a062-7b3e-4f2c-97be-2b09021359c6-cilium-cgroup\") pod \"9c38a062-7b3e-4f2c-97be-2b09021359c6\" (UID: \"9c38a062-7b3e-4f2c-97be-2b09021359c6\") " Dec 13 06:51:53.099813 kubelet[2027]: I1213 06:51:53.099791 2027 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9c38a062-7b3e-4f2c-97be-2b09021359c6-cni-path\") pod \"9c38a062-7b3e-4f2c-97be-2b09021359c6\" (UID: \"9c38a062-7b3e-4f2c-97be-2b09021359c6\") " Dec 13 06:51:53.100074 kubelet[2027]: I1213 06:51:53.099753 2027 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c38a062-7b3e-4f2c-97be-2b09021359c6-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "9c38a062-7b3e-4f2c-97be-2b09021359c6" (UID: "9c38a062-7b3e-4f2c-97be-2b09021359c6"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 06:51:53.100354 kubelet[2027]: I1213 06:51:53.099027 2027 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c38a062-7b3e-4f2c-97be-2b09021359c6-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "9c38a062-7b3e-4f2c-97be-2b09021359c6" (UID: "9c38a062-7b3e-4f2c-97be-2b09021359c6"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 06:51:53.100550 kubelet[2027]: I1213 06:51:53.099963 2027 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c38a062-7b3e-4f2c-97be-2b09021359c6-cni-path" (OuterVolumeSpecName: "cni-path") pod "9c38a062-7b3e-4f2c-97be-2b09021359c6" (UID: "9c38a062-7b3e-4f2c-97be-2b09021359c6"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 06:51:53.100749 kubelet[2027]: I1213 06:51:53.100279 2027 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9c38a062-7b3e-4f2c-97be-2b09021359c6-clustermesh-secrets\") pod \"9c38a062-7b3e-4f2c-97be-2b09021359c6\" (UID: \"9c38a062-7b3e-4f2c-97be-2b09021359c6\") " Dec 13 06:51:53.100896 kubelet[2027]: I1213 06:51:53.100871 2027 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lpdmg\" (UniqueName: \"kubernetes.io/projected/9c38a062-7b3e-4f2c-97be-2b09021359c6-kube-api-access-lpdmg\") pod \"9c38a062-7b3e-4f2c-97be-2b09021359c6\" (UID: \"9c38a062-7b3e-4f2c-97be-2b09021359c6\") " Dec 13 06:51:53.101161 kubelet[2027]: I1213 06:51:53.101137 2027 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9c38a062-7b3e-4f2c-97be-2b09021359c6-bpf-maps\") on node \"srv-uleoy.gb1.brightbox.com\" DevicePath \"\"" Dec 13 06:51:53.101572 kubelet[2027]: I1213 06:51:53.101536 2027 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9c38a062-7b3e-4f2c-97be-2b09021359c6-cilium-config-path\") on node \"srv-uleoy.gb1.brightbox.com\" DevicePath \"\"" Dec 13 06:51:53.101769 kubelet[2027]: I1213 06:51:53.101742 2027 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/9c38a062-7b3e-4f2c-97be-2b09021359c6-host-proc-sys-kernel\") on node \"srv-uleoy.gb1.brightbox.com\" DevicePath \"\"" Dec 13 06:51:53.101909 kubelet[2027]: I1213 06:51:53.101886 2027 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-5qbmz\" (UniqueName: \"kubernetes.io/projected/444136c7-4d61-4e0e-ac32-e73836b1806f-kube-api-access-5qbmz\") on node \"srv-uleoy.gb1.brightbox.com\" DevicePath \"\"" Dec 13 06:51:53.102038 kubelet[2027]: I1213 06:51:53.102015 2027 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9c38a062-7b3e-4f2c-97be-2b09021359c6-lib-modules\") on node \"srv-uleoy.gb1.brightbox.com\" DevicePath \"\"" Dec 13 06:51:53.102164 kubelet[2027]: I1213 06:51:53.102141 2027 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9c38a062-7b3e-4f2c-97be-2b09021359c6-xtables-lock\") on node \"srv-uleoy.gb1.brightbox.com\" DevicePath \"\"" Dec 13 06:51:53.102289 kubelet[2027]: I1213 06:51:53.102267 2027 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9c38a062-7b3e-4f2c-97be-2b09021359c6-hostproc\") on node \"srv-uleoy.gb1.brightbox.com\" DevicePath \"\"" Dec 13 06:51:53.102533 kubelet[2027]: I1213 06:51:53.102509 2027 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9c38a062-7b3e-4f2c-97be-2b09021359c6-cilium-run\") on node \"srv-uleoy.gb1.brightbox.com\" DevicePath \"\"" Dec 13 06:51:53.102701 kubelet[2027]: I1213 06:51:53.102679 2027 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9c38a062-7b3e-4f2c-97be-2b09021359c6-etc-cni-netd\") on node \"srv-uleoy.gb1.brightbox.com\" DevicePath \"\"" Dec 13 06:51:53.102851 kubelet[2027]: I1213 06:51:53.102830 2027 reconciler_common.go:300] "Volume detached for volume \"cni-path\" 
(UniqueName: \"kubernetes.io/host-path/9c38a062-7b3e-4f2c-97be-2b09021359c6-cni-path\") on node \"srv-uleoy.gb1.brightbox.com\" DevicePath \"\"" Dec 13 06:51:53.102977 kubelet[2027]: I1213 06:51:53.102955 2027 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/444136c7-4d61-4e0e-ac32-e73836b1806f-cilium-config-path\") on node \"srv-uleoy.gb1.brightbox.com\" DevicePath \"\"" Dec 13 06:51:53.103104 kubelet[2027]: I1213 06:51:53.103081 2027 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9c38a062-7b3e-4f2c-97be-2b09021359c6-host-proc-sys-net\") on node \"srv-uleoy.gb1.brightbox.com\" DevicePath \"\"" Dec 13 06:51:53.103254 kubelet[2027]: I1213 06:51:53.103231 2027 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9c38a062-7b3e-4f2c-97be-2b09021359c6-cilium-cgroup\") on node \"srv-uleoy.gb1.brightbox.com\" DevicePath \"\"" Dec 13 06:51:53.105213 kubelet[2027]: I1213 06:51:53.105176 2027 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c38a062-7b3e-4f2c-97be-2b09021359c6-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "9c38a062-7b3e-4f2c-97be-2b09021359c6" (UID: "9c38a062-7b3e-4f2c-97be-2b09021359c6"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 06:51:53.107677 kubelet[2027]: I1213 06:51:53.107590 2027 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c38a062-7b3e-4f2c-97be-2b09021359c6-kube-api-access-lpdmg" (OuterVolumeSpecName: "kube-api-access-lpdmg") pod "9c38a062-7b3e-4f2c-97be-2b09021359c6" (UID: "9c38a062-7b3e-4f2c-97be-2b09021359c6"). InnerVolumeSpecName "kube-api-access-lpdmg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 06:51:53.110317 kubelet[2027]: I1213 06:51:53.110283 2027 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c38a062-7b3e-4f2c-97be-2b09021359c6-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "9c38a062-7b3e-4f2c-97be-2b09021359c6" (UID: "9c38a062-7b3e-4f2c-97be-2b09021359c6"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 06:51:53.204527 kubelet[2027]: I1213 06:51:53.204424 2027 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9c38a062-7b3e-4f2c-97be-2b09021359c6-hubble-tls\") on node \"srv-uleoy.gb1.brightbox.com\" DevicePath \"\"" Dec 13 06:51:53.204859 kubelet[2027]: I1213 06:51:53.204833 2027 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9c38a062-7b3e-4f2c-97be-2b09021359c6-clustermesh-secrets\") on node \"srv-uleoy.gb1.brightbox.com\" DevicePath \"\"" Dec 13 06:51:53.205012 kubelet[2027]: I1213 06:51:53.204987 2027 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-lpdmg\" (UniqueName: \"kubernetes.io/projected/9c38a062-7b3e-4f2c-97be-2b09021359c6-kube-api-access-lpdmg\") on node \"srv-uleoy.gb1.brightbox.com\" DevicePath \"\"" Dec 13 06:51:53.565621 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-976dd6c8be99eb45113d8c0ecbaeb2b94922d38cdfde3f1698c947a1b54fc762-rootfs.mount: Deactivated successfully. Dec 13 06:51:53.566226 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9edd33d684e407593a7e002934287210fff710bb812a671c9da3e763606c66a1-rootfs.mount: Deactivated successfully. Dec 13 06:51:53.566644 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9edd33d684e407593a7e002934287210fff710bb812a671c9da3e763606c66a1-shm.mount: Deactivated successfully. 
Dec 13 06:51:53.567054 systemd[1]: var-lib-kubelet-pods-444136c7\x2d4d61\x2d4e0e\x2dac32\x2de73836b1806f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5qbmz.mount: Deactivated successfully.
Dec 13 06:51:53.567455 systemd[1]: var-lib-kubelet-pods-9c38a062\x2d7b3e\x2d4f2c\x2d97be\x2d2b09021359c6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlpdmg.mount: Deactivated successfully.
Dec 13 06:51:53.567817 systemd[1]: var-lib-kubelet-pods-9c38a062\x2d7b3e\x2d4f2c\x2d97be\x2d2b09021359c6-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Dec 13 06:51:53.568141 systemd[1]: var-lib-kubelet-pods-9c38a062\x2d7b3e\x2d4f2c\x2d97be\x2d2b09021359c6-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Dec 13 06:51:53.747705 kubelet[2027]: I1213 06:51:53.747594 2027 scope.go:117] "RemoveContainer" containerID="c0caea758685535e3f802359728c6fa7236d91bff7644f4f6ebc337ea2f99a86"
Dec 13 06:51:53.750466 env[1189]: time="2024-12-13T06:51:53.750387899Z" level=info msg="RemoveContainer for \"c0caea758685535e3f802359728c6fa7236d91bff7644f4f6ebc337ea2f99a86\""
Dec 13 06:51:53.758620 systemd[1]: Removed slice kubepods-besteffort-pod444136c7_4d61_4e0e_ac32_e73836b1806f.slice.
Dec 13 06:51:53.764420 env[1189]: time="2024-12-13T06:51:53.764343437Z" level=info msg="RemoveContainer for \"c0caea758685535e3f802359728c6fa7236d91bff7644f4f6ebc337ea2f99a86\" returns successfully"
Dec 13 06:51:53.779400 systemd[1]: Removed slice kubepods-burstable-pod9c38a062_7b3e_4f2c_97be_2b09021359c6.slice.
Dec 13 06:51:53.779570 systemd[1]: kubepods-burstable-pod9c38a062_7b3e_4f2c_97be_2b09021359c6.slice: Consumed 10.512s CPU time.
Dec 13 06:51:53.780414 kubelet[2027]: I1213 06:51:53.780381 2027 scope.go:117] "RemoveContainer" containerID="e140698a59c844365bf0eed1621267b1363fbc5cd3335e6135832ddd8e0c88e4" Dec 13 06:51:53.785104 env[1189]: time="2024-12-13T06:51:53.784247273Z" level=info msg="RemoveContainer for \"e140698a59c844365bf0eed1621267b1363fbc5cd3335e6135832ddd8e0c88e4\"" Dec 13 06:51:53.789527 env[1189]: time="2024-12-13T06:51:53.789467269Z" level=info msg="RemoveContainer for \"e140698a59c844365bf0eed1621267b1363fbc5cd3335e6135832ddd8e0c88e4\" returns successfully" Dec 13 06:51:53.789833 kubelet[2027]: I1213 06:51:53.789797 2027 scope.go:117] "RemoveContainer" containerID="626c3aa5b8260b6528dccea2ef50d27d1bb268d51983d8a0313d096c4265aeb7" Dec 13 06:51:53.791916 env[1189]: time="2024-12-13T06:51:53.791506238Z" level=info msg="RemoveContainer for \"626c3aa5b8260b6528dccea2ef50d27d1bb268d51983d8a0313d096c4265aeb7\"" Dec 13 06:51:53.797467 env[1189]: time="2024-12-13T06:51:53.795990329Z" level=info msg="RemoveContainer for \"626c3aa5b8260b6528dccea2ef50d27d1bb268d51983d8a0313d096c4265aeb7\" returns successfully" Dec 13 06:51:53.797706 kubelet[2027]: I1213 06:51:53.796508 2027 scope.go:117] "RemoveContainer" containerID="7255fc1aee4903a81554709be3cbf6c3601f95f422b4e4d085e19c9c328240d0" Dec 13 06:51:53.808806 env[1189]: time="2024-12-13T06:51:53.808715764Z" level=info msg="RemoveContainer for \"7255fc1aee4903a81554709be3cbf6c3601f95f422b4e4d085e19c9c328240d0\"" Dec 13 06:51:53.812395 env[1189]: time="2024-12-13T06:51:53.812331826Z" level=info msg="RemoveContainer for \"7255fc1aee4903a81554709be3cbf6c3601f95f422b4e4d085e19c9c328240d0\" returns successfully" Dec 13 06:51:53.812809 kubelet[2027]: I1213 06:51:53.812743 2027 scope.go:117] "RemoveContainer" containerID="924416b91c8e62aae005e38f6f90282de282c1de0ea3fa3d79755cb2391e3309" Dec 13 06:51:53.817950 env[1189]: time="2024-12-13T06:51:53.815799017Z" level=info msg="RemoveContainer for 
\"924416b91c8e62aae005e38f6f90282de282c1de0ea3fa3d79755cb2391e3309\"" Dec 13 06:51:53.821911 env[1189]: time="2024-12-13T06:51:53.820161219Z" level=info msg="RemoveContainer for \"924416b91c8e62aae005e38f6f90282de282c1de0ea3fa3d79755cb2391e3309\" returns successfully" Dec 13 06:51:53.824736 kubelet[2027]: I1213 06:51:53.824680 2027 scope.go:117] "RemoveContainer" containerID="eb26b6a4682b531140187ec7dd8142de65e10367024d637314e38948f75f7682" Dec 13 06:51:53.827798 env[1189]: time="2024-12-13T06:51:53.827747787Z" level=info msg="RemoveContainer for \"eb26b6a4682b531140187ec7dd8142de65e10367024d637314e38948f75f7682\"" Dec 13 06:51:53.831618 env[1189]: time="2024-12-13T06:51:53.831578703Z" level=info msg="RemoveContainer for \"eb26b6a4682b531140187ec7dd8142de65e10367024d637314e38948f75f7682\" returns successfully" Dec 13 06:51:53.831983 kubelet[2027]: I1213 06:51:53.831953 2027 scope.go:117] "RemoveContainer" containerID="e140698a59c844365bf0eed1621267b1363fbc5cd3335e6135832ddd8e0c88e4" Dec 13 06:51:53.832767 env[1189]: time="2024-12-13T06:51:53.832483177Z" level=error msg="ContainerStatus for \"e140698a59c844365bf0eed1621267b1363fbc5cd3335e6135832ddd8e0c88e4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e140698a59c844365bf0eed1621267b1363fbc5cd3335e6135832ddd8e0c88e4\": not found" Dec 13 06:51:53.833089 kubelet[2027]: E1213 06:51:53.833048 2027 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e140698a59c844365bf0eed1621267b1363fbc5cd3335e6135832ddd8e0c88e4\": not found" containerID="e140698a59c844365bf0eed1621267b1363fbc5cd3335e6135832ddd8e0c88e4" Dec 13 06:51:53.836791 kubelet[2027]: I1213 06:51:53.836756 2027 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e140698a59c844365bf0eed1621267b1363fbc5cd3335e6135832ddd8e0c88e4"} err="failed to get container status 
\"e140698a59c844365bf0eed1621267b1363fbc5cd3335e6135832ddd8e0c88e4\": rpc error: code = NotFound desc = an error occurred when try to find container \"e140698a59c844365bf0eed1621267b1363fbc5cd3335e6135832ddd8e0c88e4\": not found" Dec 13 06:51:53.837873 kubelet[2027]: I1213 06:51:53.837107 2027 scope.go:117] "RemoveContainer" containerID="626c3aa5b8260b6528dccea2ef50d27d1bb268d51983d8a0313d096c4265aeb7" Dec 13 06:51:53.838344 env[1189]: time="2024-12-13T06:51:53.838273717Z" level=error msg="ContainerStatus for \"626c3aa5b8260b6528dccea2ef50d27d1bb268d51983d8a0313d096c4265aeb7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"626c3aa5b8260b6528dccea2ef50d27d1bb268d51983d8a0313d096c4265aeb7\": not found" Dec 13 06:51:53.838773 kubelet[2027]: E1213 06:51:53.838727 2027 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"626c3aa5b8260b6528dccea2ef50d27d1bb268d51983d8a0313d096c4265aeb7\": not found" containerID="626c3aa5b8260b6528dccea2ef50d27d1bb268d51983d8a0313d096c4265aeb7" Dec 13 06:51:53.838973 kubelet[2027]: I1213 06:51:53.838936 2027 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"626c3aa5b8260b6528dccea2ef50d27d1bb268d51983d8a0313d096c4265aeb7"} err="failed to get container status \"626c3aa5b8260b6528dccea2ef50d27d1bb268d51983d8a0313d096c4265aeb7\": rpc error: code = NotFound desc = an error occurred when try to find container \"626c3aa5b8260b6528dccea2ef50d27d1bb268d51983d8a0313d096c4265aeb7\": not found" Dec 13 06:51:53.839106 kubelet[2027]: I1213 06:51:53.839083 2027 scope.go:117] "RemoveContainer" containerID="7255fc1aee4903a81554709be3cbf6c3601f95f422b4e4d085e19c9c328240d0" Dec 13 06:51:53.839739 env[1189]: time="2024-12-13T06:51:53.839662879Z" level=error msg="ContainerStatus for \"7255fc1aee4903a81554709be3cbf6c3601f95f422b4e4d085e19c9c328240d0\" failed" 
error="rpc error: code = NotFound desc = an error occurred when try to find container \"7255fc1aee4903a81554709be3cbf6c3601f95f422b4e4d085e19c9c328240d0\": not found" Dec 13 06:51:53.839978 kubelet[2027]: E1213 06:51:53.839953 2027 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7255fc1aee4903a81554709be3cbf6c3601f95f422b4e4d085e19c9c328240d0\": not found" containerID="7255fc1aee4903a81554709be3cbf6c3601f95f422b4e4d085e19c9c328240d0" Dec 13 06:51:53.840179 kubelet[2027]: I1213 06:51:53.840155 2027 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7255fc1aee4903a81554709be3cbf6c3601f95f422b4e4d085e19c9c328240d0"} err="failed to get container status \"7255fc1aee4903a81554709be3cbf6c3601f95f422b4e4d085e19c9c328240d0\": rpc error: code = NotFound desc = an error occurred when try to find container \"7255fc1aee4903a81554709be3cbf6c3601f95f422b4e4d085e19c9c328240d0\": not found" Dec 13 06:51:53.840321 kubelet[2027]: I1213 06:51:53.840298 2027 scope.go:117] "RemoveContainer" containerID="924416b91c8e62aae005e38f6f90282de282c1de0ea3fa3d79755cb2391e3309" Dec 13 06:51:53.840797 env[1189]: time="2024-12-13T06:51:53.840735133Z" level=error msg="ContainerStatus for \"924416b91c8e62aae005e38f6f90282de282c1de0ea3fa3d79755cb2391e3309\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"924416b91c8e62aae005e38f6f90282de282c1de0ea3fa3d79755cb2391e3309\": not found" Dec 13 06:51:53.841196 kubelet[2027]: E1213 06:51:53.841079 2027 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"924416b91c8e62aae005e38f6f90282de282c1de0ea3fa3d79755cb2391e3309\": not found" containerID="924416b91c8e62aae005e38f6f90282de282c1de0ea3fa3d79755cb2391e3309" Dec 13 06:51:53.841329 kubelet[2027]: I1213 
06:51:53.841304 2027 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"924416b91c8e62aae005e38f6f90282de282c1de0ea3fa3d79755cb2391e3309"} err="failed to get container status \"924416b91c8e62aae005e38f6f90282de282c1de0ea3fa3d79755cb2391e3309\": rpc error: code = NotFound desc = an error occurred when try to find container \"924416b91c8e62aae005e38f6f90282de282c1de0ea3fa3d79755cb2391e3309\": not found" Dec 13 06:51:53.847463 kubelet[2027]: I1213 06:51:53.847423 2027 scope.go:117] "RemoveContainer" containerID="eb26b6a4682b531140187ec7dd8142de65e10367024d637314e38948f75f7682" Dec 13 06:51:53.848033 env[1189]: time="2024-12-13T06:51:53.847944838Z" level=error msg="ContainerStatus for \"eb26b6a4682b531140187ec7dd8142de65e10367024d637314e38948f75f7682\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"eb26b6a4682b531140187ec7dd8142de65e10367024d637314e38948f75f7682\": not found" Dec 13 06:51:53.848300 kubelet[2027]: E1213 06:51:53.848275 2027 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"eb26b6a4682b531140187ec7dd8142de65e10367024d637314e38948f75f7682\": not found" containerID="eb26b6a4682b531140187ec7dd8142de65e10367024d637314e38948f75f7682" Dec 13 06:51:53.848494 kubelet[2027]: I1213 06:51:53.848469 2027 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"eb26b6a4682b531140187ec7dd8142de65e10367024d637314e38948f75f7682"} err="failed to get container status \"eb26b6a4682b531140187ec7dd8142de65e10367024d637314e38948f75f7682\": rpc error: code = NotFound desc = an error occurred when try to find container \"eb26b6a4682b531140187ec7dd8142de65e10367024d637314e38948f75f7682\": not found" Dec 13 06:51:54.153265 kubelet[2027]: I1213 06:51:54.153132 2027 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" 
podUID="444136c7-4d61-4e0e-ac32-e73836b1806f" path="/var/lib/kubelet/pods/444136c7-4d61-4e0e-ac32-e73836b1806f/volumes"
Dec 13 06:51:54.155866 kubelet[2027]: I1213 06:51:54.155840 2027 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="9c38a062-7b3e-4f2c-97be-2b09021359c6" path="/var/lib/kubelet/pods/9c38a062-7b3e-4f2c-97be-2b09021359c6/volumes"
Dec 13 06:51:54.331754 kubelet[2027]: E1213 06:51:54.331713 2027 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 13 06:51:54.586342 sshd[3550]: pam_unix(sshd:session): session closed for user core
Dec 13 06:51:54.590936 systemd-logind[1180]: Session 21 logged out. Waiting for processes to exit.
Dec 13 06:51:54.591296 systemd[1]: sshd@20-10.230.20.2:22-139.178.89.65:49882.service: Deactivated successfully.
Dec 13 06:51:54.592444 systemd[1]: session-21.scope: Deactivated successfully.
Dec 13 06:51:54.592693 systemd[1]: session-21.scope: Consumed 1.455s CPU time.
Dec 13 06:51:54.593778 systemd-logind[1180]: Removed session 21.
Dec 13 06:51:54.733576 systemd[1]: Started sshd@21-10.230.20.2:22-139.178.89.65:49886.service.
Dec 13 06:51:55.621569 sshd[3713]: Accepted publickey for core from 139.178.89.65 port 49886 ssh2: RSA SHA256:dQnQ6z9Pj/RNX8sNR4TqdGn8nHqynNIoEP6sXMH78jY
Dec 13 06:51:55.624114 sshd[3713]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 06:51:55.632253 systemd[1]: Started session-22.scope.
Dec 13 06:51:55.632873 systemd-logind[1180]: New session 22 of user core.
Dec 13 06:51:57.005789 kubelet[2027]: I1213 06:51:57.005720 2027 topology_manager.go:215] "Topology Admit Handler" podUID="65456da0-d167-4afb-899d-f468042df70d" podNamespace="kube-system" podName="cilium-xqks5"
Dec 13 06:51:57.006753 kubelet[2027]: E1213 06:51:57.006519 2027 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9c38a062-7b3e-4f2c-97be-2b09021359c6" containerName="apply-sysctl-overwrites"
Dec 13 06:51:57.006753 kubelet[2027]: E1213 06:51:57.006554 2027 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9c38a062-7b3e-4f2c-97be-2b09021359c6" containerName="mount-bpf-fs"
Dec 13 06:51:57.006753 kubelet[2027]: E1213 06:51:57.006570 2027 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9c38a062-7b3e-4f2c-97be-2b09021359c6" containerName="clean-cilium-state"
Dec 13 06:51:57.006753 kubelet[2027]: E1213 06:51:57.006583 2027 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9c38a062-7b3e-4f2c-97be-2b09021359c6" containerName="mount-cgroup"
Dec 13 06:51:57.006753 kubelet[2027]: E1213 06:51:57.006595 2027 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9c38a062-7b3e-4f2c-97be-2b09021359c6" containerName="cilium-agent"
Dec 13 06:51:57.006753 kubelet[2027]: E1213 06:51:57.006607 2027 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="444136c7-4d61-4e0e-ac32-e73836b1806f" containerName="cilium-operator"
Dec 13 06:51:57.007342 kubelet[2027]: I1213 06:51:57.007309 2027 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c38a062-7b3e-4f2c-97be-2b09021359c6" containerName="cilium-agent"
Dec 13 06:51:57.007342 kubelet[2027]: I1213 06:51:57.007345 2027 memory_manager.go:354] "RemoveStaleState removing state" podUID="444136c7-4d61-4e0e-ac32-e73836b1806f" containerName="cilium-operator"
Dec 13 06:51:57.028184 systemd[1]: Created slice kubepods-burstable-pod65456da0_d167_4afb_899d_f468042df70d.slice.
Dec 13 06:51:57.110356 sshd[3713]: pam_unix(sshd:session): session closed for user core
Dec 13 06:51:57.116427 systemd[1]: sshd@21-10.230.20.2:22-139.178.89.65:49886.service: Deactivated successfully.
Dec 13 06:51:57.117660 systemd[1]: session-22.scope: Deactivated successfully.
Dec 13 06:51:57.118526 systemd-logind[1180]: Session 22 logged out. Waiting for processes to exit.
Dec 13 06:51:57.119813 systemd-logind[1180]: Removed session 22.
Dec 13 06:51:57.132906 kubelet[2027]: I1213 06:51:57.132839 2027 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/65456da0-d167-4afb-899d-f468042df70d-cilium-config-path\") pod \"cilium-xqks5\" (UID: \"65456da0-d167-4afb-899d-f468042df70d\") " pod="kube-system/cilium-xqks5"
Dec 13 06:51:57.133321 kubelet[2027]: I1213 06:51:57.133285 2027 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/65456da0-d167-4afb-899d-f468042df70d-bpf-maps\") pod \"cilium-xqks5\" (UID: \"65456da0-d167-4afb-899d-f468042df70d\") " pod="kube-system/cilium-xqks5"
Dec 13 06:51:57.133594 kubelet[2027]: I1213 06:51:57.133555 2027 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/65456da0-d167-4afb-899d-f468042df70d-cilium-cgroup\") pod \"cilium-xqks5\" (UID: \"65456da0-d167-4afb-899d-f468042df70d\") " pod="kube-system/cilium-xqks5"
Dec 13 06:51:57.133835 kubelet[2027]: I1213 06:51:57.133811 2027 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/65456da0-d167-4afb-899d-f468042df70d-hostproc\") pod \"cilium-xqks5\" (UID: \"65456da0-d167-4afb-899d-f468042df70d\") " pod="kube-system/cilium-xqks5"
Dec 13 06:51:57.134178 kubelet[2027]: I1213 06:51:57.134153 2027
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/65456da0-d167-4afb-899d-f468042df70d-host-proc-sys-net\") pod \"cilium-xqks5\" (UID: \"65456da0-d167-4afb-899d-f468042df70d\") " pod="kube-system/cilium-xqks5" Dec 13 06:51:57.134427 kubelet[2027]: I1213 06:51:57.134346 2027 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/65456da0-d167-4afb-899d-f468042df70d-xtables-lock\") pod \"cilium-xqks5\" (UID: \"65456da0-d167-4afb-899d-f468042df70d\") " pod="kube-system/cilium-xqks5" Dec 13 06:51:57.134729 kubelet[2027]: I1213 06:51:57.134686 2027 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/65456da0-d167-4afb-899d-f468042df70d-clustermesh-secrets\") pod \"cilium-xqks5\" (UID: \"65456da0-d167-4afb-899d-f468042df70d\") " pod="kube-system/cilium-xqks5" Dec 13 06:51:57.134924 kubelet[2027]: I1213 06:51:57.134893 2027 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/65456da0-d167-4afb-899d-f468042df70d-cni-path\") pod \"cilium-xqks5\" (UID: \"65456da0-d167-4afb-899d-f468042df70d\") " pod="kube-system/cilium-xqks5" Dec 13 06:51:57.135151 kubelet[2027]: I1213 06:51:57.135129 2027 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/65456da0-d167-4afb-899d-f468042df70d-lib-modules\") pod \"cilium-xqks5\" (UID: \"65456da0-d167-4afb-899d-f468042df70d\") " pod="kube-system/cilium-xqks5" Dec 13 06:51:57.135433 kubelet[2027]: I1213 06:51:57.135410 2027 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: 
\"kubernetes.io/secret/65456da0-d167-4afb-899d-f468042df70d-cilium-ipsec-secrets\") pod \"cilium-xqks5\" (UID: \"65456da0-d167-4afb-899d-f468042df70d\") " pod="kube-system/cilium-xqks5" Dec 13 06:51:57.135978 kubelet[2027]: I1213 06:51:57.135671 2027 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/65456da0-d167-4afb-899d-f468042df70d-cilium-run\") pod \"cilium-xqks5\" (UID: \"65456da0-d167-4afb-899d-f468042df70d\") " pod="kube-system/cilium-xqks5" Dec 13 06:51:57.135978 kubelet[2027]: I1213 06:51:57.135795 2027 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/65456da0-d167-4afb-899d-f468042df70d-host-proc-sys-kernel\") pod \"cilium-xqks5\" (UID: \"65456da0-d167-4afb-899d-f468042df70d\") " pod="kube-system/cilium-xqks5" Dec 13 06:51:57.136194 kubelet[2027]: I1213 06:51:57.136170 2027 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/65456da0-d167-4afb-899d-f468042df70d-hubble-tls\") pod \"cilium-xqks5\" (UID: \"65456da0-d167-4afb-899d-f468042df70d\") " pod="kube-system/cilium-xqks5" Dec 13 06:51:57.136437 kubelet[2027]: I1213 06:51:57.136414 2027 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/65456da0-d167-4afb-899d-f468042df70d-etc-cni-netd\") pod \"cilium-xqks5\" (UID: \"65456da0-d167-4afb-899d-f468042df70d\") " pod="kube-system/cilium-xqks5" Dec 13 06:51:57.136647 kubelet[2027]: I1213 06:51:57.136615 2027 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h96zd\" (UniqueName: \"kubernetes.io/projected/65456da0-d167-4afb-899d-f468042df70d-kube-api-access-h96zd\") pod \"cilium-xqks5\" (UID: 
\"65456da0-d167-4afb-899d-f468042df70d\") " pod="kube-system/cilium-xqks5" Dec 13 06:51:57.276974 systemd[1]: Started sshd@22-10.230.20.2:22-139.178.89.65:49900.service. Dec 13 06:51:57.335269 env[1189]: time="2024-12-13T06:51:57.335162116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xqks5,Uid:65456da0-d167-4afb-899d-f468042df70d,Namespace:kube-system,Attempt:0,}" Dec 13 06:51:57.364554 env[1189]: time="2024-12-13T06:51:57.364066943Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 06:51:57.364554 env[1189]: time="2024-12-13T06:51:57.364147548Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 06:51:57.364554 env[1189]: time="2024-12-13T06:51:57.364166299Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 06:51:57.366344 env[1189]: time="2024-12-13T06:51:57.365013892Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/af419f687c8fb80d25b0f3c58ccd84de09c8169121823cda362f43c2fb8473e8 pid=3738 runtime=io.containerd.runc.v2 Dec 13 06:51:57.384626 systemd[1]: Started cri-containerd-af419f687c8fb80d25b0f3c58ccd84de09c8169121823cda362f43c2fb8473e8.scope. 
Dec 13 06:51:57.441096 env[1189]: time="2024-12-13T06:51:57.441027923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xqks5,Uid:65456da0-d167-4afb-899d-f468042df70d,Namespace:kube-system,Attempt:0,} returns sandbox id \"af419f687c8fb80d25b0f3c58ccd84de09c8169121823cda362f43c2fb8473e8\""
Dec 13 06:51:57.446290 env[1189]: time="2024-12-13T06:51:57.446119896Z" level=info msg="CreateContainer within sandbox \"af419f687c8fb80d25b0f3c58ccd84de09c8169121823cda362f43c2fb8473e8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 06:51:57.461688 env[1189]: time="2024-12-13T06:51:57.461601555Z" level=info msg="CreateContainer within sandbox \"af419f687c8fb80d25b0f3c58ccd84de09c8169121823cda362f43c2fb8473e8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"511adacad42fed4981b24eb9fb2158ab04aafe21fc9ab41bb3f3d4017fa68791\""
Dec 13 06:51:57.463663 env[1189]: time="2024-12-13T06:51:57.463584450Z" level=info msg="StartContainer for \"511adacad42fed4981b24eb9fb2158ab04aafe21fc9ab41bb3f3d4017fa68791\""
Dec 13 06:51:57.490401 systemd[1]: Started cri-containerd-511adacad42fed4981b24eb9fb2158ab04aafe21fc9ab41bb3f3d4017fa68791.scope.
Dec 13 06:51:57.523179 systemd[1]: cri-containerd-511adacad42fed4981b24eb9fb2158ab04aafe21fc9ab41bb3f3d4017fa68791.scope: Deactivated successfully.
Dec 13 06:51:57.545596 env[1189]: time="2024-12-13T06:51:57.544603317Z" level=info msg="shim disconnected" id=511adacad42fed4981b24eb9fb2158ab04aafe21fc9ab41bb3f3d4017fa68791 Dec 13 06:51:57.545596 env[1189]: time="2024-12-13T06:51:57.545304553Z" level=warning msg="cleaning up after shim disconnected" id=511adacad42fed4981b24eb9fb2158ab04aafe21fc9ab41bb3f3d4017fa68791 namespace=k8s.io Dec 13 06:51:57.545596 env[1189]: time="2024-12-13T06:51:57.545327506Z" level=info msg="cleaning up dead shim" Dec 13 06:51:57.563259 env[1189]: time="2024-12-13T06:51:57.563140519Z" level=warning msg="cleanup warnings time=\"2024-12-13T06:51:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3795 runtime=io.containerd.runc.v2\ntime=\"2024-12-13T06:51:57Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/511adacad42fed4981b24eb9fb2158ab04aafe21fc9ab41bb3f3d4017fa68791/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Dec 13 06:51:57.563807 env[1189]: time="2024-12-13T06:51:57.563562212Z" level=error msg="copy shim log" error="read /proc/self/fd/32: file already closed" Dec 13 06:51:57.564076 env[1189]: time="2024-12-13T06:51:57.564011419Z" level=error msg="Failed to pipe stdout of container \"511adacad42fed4981b24eb9fb2158ab04aafe21fc9ab41bb3f3d4017fa68791\"" error="reading from a closed fifo" Dec 13 06:51:57.564286 env[1189]: time="2024-12-13T06:51:57.564236274Z" level=error msg="Failed to pipe stderr of container \"511adacad42fed4981b24eb9fb2158ab04aafe21fc9ab41bb3f3d4017fa68791\"" error="reading from a closed fifo" Dec 13 06:51:57.565958 env[1189]: time="2024-12-13T06:51:57.565900966Z" level=error msg="StartContainer for \"511adacad42fed4981b24eb9fb2158ab04aafe21fc9ab41bb3f3d4017fa68791\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: 
write /proc/self/attr/keycreate: invalid argument: unknown" Dec 13 06:51:57.567079 kubelet[2027]: E1213 06:51:57.566636 2027 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="511adacad42fed4981b24eb9fb2158ab04aafe21fc9ab41bb3f3d4017fa68791" Dec 13 06:51:57.571777 kubelet[2027]: E1213 06:51:57.571701 2027 kuberuntime_manager.go:1262] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Dec 13 06:51:57.571777 kubelet[2027]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Dec 13 06:51:57.571777 kubelet[2027]: rm /hostbin/cilium-mount Dec 13 06:51:57.572023 kubelet[2027]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-h96zd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT 
SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-xqks5_kube-system(65456da0-d167-4afb-899d-f468042df70d): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Dec 13 06:51:57.572023 kubelet[2027]: E1213 06:51:57.571917 2027 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-xqks5" podUID="65456da0-d167-4afb-899d-f468042df70d" Dec 13 06:51:57.805612 env[1189]: time="2024-12-13T06:51:57.804934429Z" level=info msg="CreateContainer within sandbox \"af419f687c8fb80d25b0f3c58ccd84de09c8169121823cda362f43c2fb8473e8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Dec 13 06:51:57.821992 env[1189]: time="2024-12-13T06:51:57.821925664Z" level=info msg="CreateContainer within sandbox \"af419f687c8fb80d25b0f3c58ccd84de09c8169121823cda362f43c2fb8473e8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"5d358c436d9e6d6d950191b14b33e11376ab27a5fa721d98f6a434929d588786\"" Dec 13 06:51:57.824111 env[1189]: time="2024-12-13T06:51:57.823440536Z" level=info 
msg="StartContainer for \"5d358c436d9e6d6d950191b14b33e11376ab27a5fa721d98f6a434929d588786\"" Dec 13 06:51:57.850727 systemd[1]: Started cri-containerd-5d358c436d9e6d6d950191b14b33e11376ab27a5fa721d98f6a434929d588786.scope. Dec 13 06:51:57.869709 systemd[1]: cri-containerd-5d358c436d9e6d6d950191b14b33e11376ab27a5fa721d98f6a434929d588786.scope: Deactivated successfully. Dec 13 06:51:57.883861 env[1189]: time="2024-12-13T06:51:57.883764078Z" level=info msg="shim disconnected" id=5d358c436d9e6d6d950191b14b33e11376ab27a5fa721d98f6a434929d588786 Dec 13 06:51:57.883861 env[1189]: time="2024-12-13T06:51:57.883859056Z" level=warning msg="cleaning up after shim disconnected" id=5d358c436d9e6d6d950191b14b33e11376ab27a5fa721d98f6a434929d588786 namespace=k8s.io Dec 13 06:51:57.884154 env[1189]: time="2024-12-13T06:51:57.883875726Z" level=info msg="cleaning up dead shim" Dec 13 06:51:57.902179 env[1189]: time="2024-12-13T06:51:57.902114663Z" level=warning msg="cleanup warnings time=\"2024-12-13T06:51:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3833 runtime=io.containerd.runc.v2\ntime=\"2024-12-13T06:51:57Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/5d358c436d9e6d6d950191b14b33e11376ab27a5fa721d98f6a434929d588786/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Dec 13 06:51:57.902674 env[1189]: time="2024-12-13T06:51:57.902586379Z" level=error msg="copy shim log" error="read /proc/self/fd/32: file already closed" Dec 13 06:51:57.903201 env[1189]: time="2024-12-13T06:51:57.903103376Z" level=error msg="Failed to pipe stdout of container \"5d358c436d9e6d6d950191b14b33e11376ab27a5fa721d98f6a434929d588786\"" error="reading from a closed fifo" Dec 13 06:51:57.903411 env[1189]: time="2024-12-13T06:51:57.903282768Z" level=error msg="Failed to pipe stderr of container \"5d358c436d9e6d6d950191b14b33e11376ab27a5fa721d98f6a434929d588786\"" error="reading from a closed 
fifo" Dec 13 06:51:57.905243 env[1189]: time="2024-12-13T06:51:57.905140114Z" level=error msg="StartContainer for \"5d358c436d9e6d6d950191b14b33e11376ab27a5fa721d98f6a434929d588786\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Dec 13 06:51:57.906243 kubelet[2027]: E1213 06:51:57.905604 2027 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="5d358c436d9e6d6d950191b14b33e11376ab27a5fa721d98f6a434929d588786" Dec 13 06:51:57.906243 kubelet[2027]: E1213 06:51:57.905765 2027 kuberuntime_manager.go:1262] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Dec 13 06:51:57.906243 kubelet[2027]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Dec 13 06:51:57.906243 kubelet[2027]: rm /hostbin/cilium-mount Dec 13 06:51:57.906243 kubelet[2027]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-h96zd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-xqks5_kube-system(65456da0-d167-4afb-899d-f468042df70d): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Dec 13 06:51:57.906243 kubelet[2027]: E1213 06:51:57.905833 2027 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc 
create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-xqks5" podUID="65456da0-d167-4afb-899d-f468042df70d" Dec 13 06:51:58.169910 sshd[3728]: Accepted publickey for core from 139.178.89.65 port 49900 ssh2: RSA SHA256:dQnQ6z9Pj/RNX8sNR4TqdGn8nHqynNIoEP6sXMH78jY Dec 13 06:51:58.172915 sshd[3728]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 06:51:58.180984 systemd-logind[1180]: New session 23 of user core. Dec 13 06:51:58.183631 systemd[1]: Started session-23.scope. Dec 13 06:51:58.218971 kubelet[2027]: I1213 06:51:58.218923 2027 setters.go:568] "Node became not ready" node="srv-uleoy.gb1.brightbox.com" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T06:51:58Z","lastTransitionTime":"2024-12-13T06:51:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Dec 13 06:51:58.780718 kubelet[2027]: I1213 06:51:58.780670 2027 scope.go:117] "RemoveContainer" containerID="511adacad42fed4981b24eb9fb2158ab04aafe21fc9ab41bb3f3d4017fa68791" Dec 13 06:51:58.781216 kubelet[2027]: I1213 06:51:58.781176 2027 scope.go:117] "RemoveContainer" containerID="511adacad42fed4981b24eb9fb2158ab04aafe21fc9ab41bb3f3d4017fa68791" Dec 13 06:51:58.785239 env[1189]: time="2024-12-13T06:51:58.784929653Z" level=info msg="RemoveContainer for \"511adacad42fed4981b24eb9fb2158ab04aafe21fc9ab41bb3f3d4017fa68791\"" Dec 13 06:51:58.786151 env[1189]: time="2024-12-13T06:51:58.786111469Z" level=info msg="RemoveContainer for \"511adacad42fed4981b24eb9fb2158ab04aafe21fc9ab41bb3f3d4017fa68791\"" Dec 13 06:51:58.786311 env[1189]: time="2024-12-13T06:51:58.786260800Z" level=error msg="RemoveContainer for \"511adacad42fed4981b24eb9fb2158ab04aafe21fc9ab41bb3f3d4017fa68791\" failed" error="failed to set removing 
state for container \"511adacad42fed4981b24eb9fb2158ab04aafe21fc9ab41bb3f3d4017fa68791\": container is already in removing state" Dec 13 06:51:58.786922 kubelet[2027]: E1213 06:51:58.786887 2027 remote_runtime.go:385] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"511adacad42fed4981b24eb9fb2158ab04aafe21fc9ab41bb3f3d4017fa68791\": container is already in removing state" containerID="511adacad42fed4981b24eb9fb2158ab04aafe21fc9ab41bb3f3d4017fa68791" Dec 13 06:51:58.787059 kubelet[2027]: E1213 06:51:58.786993 2027 kuberuntime_container.go:858] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "511adacad42fed4981b24eb9fb2158ab04aafe21fc9ab41bb3f3d4017fa68791": container is already in removing state; Skipping pod "cilium-xqks5_kube-system(65456da0-d167-4afb-899d-f468042df70d)" Dec 13 06:51:58.787613 kubelet[2027]: E1213 06:51:58.787580 2027 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-xqks5_kube-system(65456da0-d167-4afb-899d-f468042df70d)\"" pod="kube-system/cilium-xqks5" podUID="65456da0-d167-4afb-899d-f468042df70d" Dec 13 06:51:58.802537 env[1189]: time="2024-12-13T06:51:58.801709909Z" level=info msg="RemoveContainer for \"511adacad42fed4981b24eb9fb2158ab04aafe21fc9ab41bb3f3d4017fa68791\" returns successfully" Dec 13 06:51:58.929700 sshd[3728]: pam_unix(sshd:session): session closed for user core Dec 13 06:51:58.933890 systemd[1]: sshd@22-10.230.20.2:22-139.178.89.65:49900.service: Deactivated successfully. Dec 13 06:51:58.935379 systemd[1]: session-23.scope: Deactivated successfully. Dec 13 06:51:58.936434 systemd-logind[1180]: Session 23 logged out. Waiting for processes to exit. Dec 13 06:51:58.938446 systemd-logind[1180]: Removed session 23. 
Dec 13 06:51:59.077018 systemd[1]: Started sshd@23-10.230.20.2:22-139.178.89.65:39490.service.
Dec 13 06:51:59.333758 kubelet[2027]: E1213 06:51:59.333574 2027 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 13 06:51:59.786762 env[1189]: time="2024-12-13T06:51:59.786512104Z" level=info msg="StopPodSandbox for \"af419f687c8fb80d25b0f3c58ccd84de09c8169121823cda362f43c2fb8473e8\""
Dec 13 06:51:59.786762 env[1189]: time="2024-12-13T06:51:59.786658407Z" level=info msg="Container to stop \"5d358c436d9e6d6d950191b14b33e11376ab27a5fa721d98f6a434929d588786\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 06:51:59.789675 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-af419f687c8fb80d25b0f3c58ccd84de09c8169121823cda362f43c2fb8473e8-shm.mount: Deactivated successfully.
Dec 13 06:51:59.802138 systemd[1]: cri-containerd-af419f687c8fb80d25b0f3c58ccd84de09c8169121823cda362f43c2fb8473e8.scope: Deactivated successfully.
Dec 13 06:51:59.842491 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-af419f687c8fb80d25b0f3c58ccd84de09c8169121823cda362f43c2fb8473e8-rootfs.mount: Deactivated successfully.
Dec 13 06:51:59.852078 env[1189]: time="2024-12-13T06:51:59.851998769Z" level=info msg="shim disconnected" id=af419f687c8fb80d25b0f3c58ccd84de09c8169121823cda362f43c2fb8473e8
Dec 13 06:51:59.852078 env[1189]: time="2024-12-13T06:51:59.852071358Z" level=warning msg="cleaning up after shim disconnected" id=af419f687c8fb80d25b0f3c58ccd84de09c8169121823cda362f43c2fb8473e8 namespace=k8s.io
Dec 13 06:51:59.852370 env[1189]: time="2024-12-13T06:51:59.852089361Z" level=info msg="cleaning up dead shim"
Dec 13 06:51:59.874151 env[1189]: time="2024-12-13T06:51:59.874087503Z" level=warning msg="cleanup warnings time=\"2024-12-13T06:51:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3877 runtime=io.containerd.runc.v2\n"
Dec 13 06:51:59.874862 env[1189]: time="2024-12-13T06:51:59.874822242Z" level=info msg="TearDown network for sandbox \"af419f687c8fb80d25b0f3c58ccd84de09c8169121823cda362f43c2fb8473e8\" successfully"
Dec 13 06:51:59.875003 env[1189]: time="2024-12-13T06:51:59.874968991Z" level=info msg="StopPodSandbox for \"af419f687c8fb80d25b0f3c58ccd84de09c8169121823cda362f43c2fb8473e8\" returns successfully"
Dec 13 06:51:59.967931 sshd[3856]: Accepted publickey for core from 139.178.89.65 port 39490 ssh2: RSA SHA256:dQnQ6z9Pj/RNX8sNR4TqdGn8nHqynNIoEP6sXMH78jY
Dec 13 06:51:59.969883 sshd[3856]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 06:51:59.977589 systemd[1]: Started session-24.scope.
Dec 13 06:51:59.979664 systemd-logind[1180]: New session 24 of user core.
Dec 13 06:52:00.061003 kubelet[2027]: I1213 06:52:00.060895 2027 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/65456da0-d167-4afb-899d-f468042df70d-hostproc\") pod \"65456da0-d167-4afb-899d-f468042df70d\" (UID: \"65456da0-d167-4afb-899d-f468042df70d\") " Dec 13 06:52:00.061341 kubelet[2027]: I1213 06:52:00.061303 2027 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/65456da0-d167-4afb-899d-f468042df70d-xtables-lock\") pod \"65456da0-d167-4afb-899d-f468042df70d\" (UID: \"65456da0-d167-4afb-899d-f468042df70d\") " Dec 13 06:52:00.061606 kubelet[2027]: I1213 06:52:00.061564 2027 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/65456da0-d167-4afb-899d-f468042df70d-cilium-config-path\") pod \"65456da0-d167-4afb-899d-f468042df70d\" (UID: \"65456da0-d167-4afb-899d-f468042df70d\") " Dec 13 06:52:00.061799 kubelet[2027]: I1213 06:52:00.061765 2027 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/65456da0-d167-4afb-899d-f468042df70d-bpf-maps\") pod \"65456da0-d167-4afb-899d-f468042df70d\" (UID: \"65456da0-d167-4afb-899d-f468042df70d\") " Dec 13 06:52:00.061997 kubelet[2027]: I1213 06:52:00.061956 2027 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/65456da0-d167-4afb-899d-f468042df70d-clustermesh-secrets\") pod \"65456da0-d167-4afb-899d-f468042df70d\" (UID: \"65456da0-d167-4afb-899d-f468042df70d\") " Dec 13 06:52:00.062420 kubelet[2027]: I1213 06:52:00.062392 2027 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: 
\"kubernetes.io/secret/65456da0-d167-4afb-899d-f468042df70d-cilium-ipsec-secrets\") pod \"65456da0-d167-4afb-899d-f468042df70d\" (UID: \"65456da0-d167-4afb-899d-f468042df70d\") " Dec 13 06:52:00.062656 kubelet[2027]: I1213 06:52:00.062621 2027 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/65456da0-d167-4afb-899d-f468042df70d-cilium-cgroup\") pod \"65456da0-d167-4afb-899d-f468042df70d\" (UID: \"65456da0-d167-4afb-899d-f468042df70d\") " Dec 13 06:52:00.062846 kubelet[2027]: I1213 06:52:00.062822 2027 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/65456da0-d167-4afb-899d-f468042df70d-cni-path\") pod \"65456da0-d167-4afb-899d-f468042df70d\" (UID: \"65456da0-d167-4afb-899d-f468042df70d\") " Dec 13 06:52:00.063025 kubelet[2027]: I1213 06:52:00.063000 2027 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/65456da0-d167-4afb-899d-f468042df70d-etc-cni-netd\") pod \"65456da0-d167-4afb-899d-f468042df70d\" (UID: \"65456da0-d167-4afb-899d-f468042df70d\") " Dec 13 06:52:00.063229 kubelet[2027]: I1213 06:52:00.063195 2027 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/65456da0-d167-4afb-899d-f468042df70d-host-proc-sys-net\") pod \"65456da0-d167-4afb-899d-f468042df70d\" (UID: \"65456da0-d167-4afb-899d-f468042df70d\") " Dec 13 06:52:00.068435 systemd[1]: var-lib-kubelet-pods-65456da0\x2dd167\x2d4afb\x2d899d\x2df468042df70d-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Dec 13 06:52:00.070507 kubelet[2027]: I1213 06:52:00.069689 2027 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/65456da0-d167-4afb-899d-f468042df70d-host-proc-sys-kernel\") pod \"65456da0-d167-4afb-899d-f468042df70d\" (UID: \"65456da0-d167-4afb-899d-f468042df70d\") " Dec 13 06:52:00.070507 kubelet[2027]: I1213 06:52:00.069751 2027 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/65456da0-d167-4afb-899d-f468042df70d-hubble-tls\") pod \"65456da0-d167-4afb-899d-f468042df70d\" (UID: \"65456da0-d167-4afb-899d-f468042df70d\") " Dec 13 06:52:00.070507 kubelet[2027]: I1213 06:52:00.069791 2027 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/65456da0-d167-4afb-899d-f468042df70d-cilium-run\") pod \"65456da0-d167-4afb-899d-f468042df70d\" (UID: \"65456da0-d167-4afb-899d-f468042df70d\") " Dec 13 06:52:00.070507 kubelet[2027]: I1213 06:52:00.069821 2027 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/65456da0-d167-4afb-899d-f468042df70d-lib-modules\") pod \"65456da0-d167-4afb-899d-f468042df70d\" (UID: \"65456da0-d167-4afb-899d-f468042df70d\") " Dec 13 06:52:00.070507 kubelet[2027]: I1213 06:52:00.069853 2027 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h96zd\" (UniqueName: \"kubernetes.io/projected/65456da0-d167-4afb-899d-f468042df70d-kube-api-access-h96zd\") pod \"65456da0-d167-4afb-899d-f468042df70d\" (UID: \"65456da0-d167-4afb-899d-f468042df70d\") " Dec 13 06:52:00.072340 kubelet[2027]: I1213 06:52:00.063422 2027 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/65456da0-d167-4afb-899d-f468042df70d-host-proc-sys-net" (OuterVolumeSpecName: 
"host-proc-sys-net") pod "65456da0-d167-4afb-899d-f468042df70d" (UID: "65456da0-d167-4afb-899d-f468042df70d"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 06:52:00.072535 kubelet[2027]: I1213 06:52:00.063855 2027 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/65456da0-d167-4afb-899d-f468042df70d-hostproc" (OuterVolumeSpecName: "hostproc") pod "65456da0-d167-4afb-899d-f468042df70d" (UID: "65456da0-d167-4afb-899d-f468042df70d"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 06:52:00.072688 kubelet[2027]: I1213 06:52:00.069618 2027 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/65456da0-d167-4afb-899d-f468042df70d-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "65456da0-d167-4afb-899d-f468042df70d" (UID: "65456da0-d167-4afb-899d-f468042df70d"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 06:52:00.072851 kubelet[2027]: I1213 06:52:00.069641 2027 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/65456da0-d167-4afb-899d-f468042df70d-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "65456da0-d167-4afb-899d-f468042df70d" (UID: "65456da0-d167-4afb-899d-f468042df70d"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 06:52:00.075148 systemd[1]: var-lib-kubelet-pods-65456da0\x2dd167\x2d4afb\x2d899d\x2df468042df70d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dh96zd.mount: Deactivated successfully. 
Dec 13 06:52:00.077154 kubelet[2027]: I1213 06:52:00.075437 2027 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/65456da0-d167-4afb-899d-f468042df70d-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "65456da0-d167-4afb-899d-f468042df70d" (UID: "65456da0-d167-4afb-899d-f468042df70d"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 06:52:00.077354 kubelet[2027]: I1213 06:52:00.077315 2027 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/65456da0-d167-4afb-899d-f468042df70d-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "65456da0-d167-4afb-899d-f468042df70d" (UID: "65456da0-d167-4afb-899d-f468042df70d"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 06:52:00.077576 kubelet[2027]: I1213 06:52:00.077539 2027 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/65456da0-d167-4afb-899d-f468042df70d-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "65456da0-d167-4afb-899d-f468042df70d" (UID: "65456da0-d167-4afb-899d-f468042df70d"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 06:52:00.077836 kubelet[2027]: I1213 06:52:00.077796 2027 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65456da0-d167-4afb-899d-f468042df70d-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "65456da0-d167-4afb-899d-f468042df70d" (UID: "65456da0-d167-4afb-899d-f468042df70d"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 06:52:00.078112 kubelet[2027]: I1213 06:52:00.078072 2027 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65456da0-d167-4afb-899d-f468042df70d-kube-api-access-h96zd" (OuterVolumeSpecName: "kube-api-access-h96zd") pod "65456da0-d167-4afb-899d-f468042df70d" (UID: "65456da0-d167-4afb-899d-f468042df70d"). InnerVolumeSpecName "kube-api-access-h96zd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 06:52:00.078305 kubelet[2027]: I1213 06:52:00.078268 2027 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/65456da0-d167-4afb-899d-f468042df70d-cni-path" (OuterVolumeSpecName: "cni-path") pod "65456da0-d167-4afb-899d-f468042df70d" (UID: "65456da0-d167-4afb-899d-f468042df70d"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 06:52:00.080525 systemd[1]: var-lib-kubelet-pods-65456da0\x2dd167\x2d4afb\x2d899d\x2df468042df70d-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 06:52:00.081226 kubelet[2027]: I1213 06:52:00.080853 2027 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/65456da0-d167-4afb-899d-f468042df70d-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "65456da0-d167-4afb-899d-f468042df70d" (UID: "65456da0-d167-4afb-899d-f468042df70d"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 06:52:00.081421 kubelet[2027]: I1213 06:52:00.080934 2027 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65456da0-d167-4afb-899d-f468042df70d-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "65456da0-d167-4afb-899d-f468042df70d" (UID: "65456da0-d167-4afb-899d-f468042df70d"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 06:52:00.081658 kubelet[2027]: I1213 06:52:00.080962 2027 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/65456da0-d167-4afb-899d-f468042df70d-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "65456da0-d167-4afb-899d-f468042df70d" (UID: "65456da0-d167-4afb-899d-f468042df70d"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 06:52:00.082991 kubelet[2027]: I1213 06:52:00.082959 2027 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65456da0-d167-4afb-899d-f468042df70d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "65456da0-d167-4afb-899d-f468042df70d" (UID: "65456da0-d167-4afb-899d-f468042df70d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 06:52:00.084874 kubelet[2027]: I1213 06:52:00.084811 2027 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65456da0-d167-4afb-899d-f468042df70d-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "65456da0-d167-4afb-899d-f468042df70d" (UID: "65456da0-d167-4afb-899d-f468042df70d"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 06:52:00.163068 systemd[1]: Removed slice kubepods-burstable-pod65456da0_d167_4afb_899d_f468042df70d.slice. 
Dec 13 06:52:00.170437 kubelet[2027]: I1213 06:52:00.170400 2027 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/65456da0-d167-4afb-899d-f468042df70d-cni-path\") on node \"srv-uleoy.gb1.brightbox.com\" DevicePath \"\"" Dec 13 06:52:00.170666 kubelet[2027]: I1213 06:52:00.170642 2027 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/65456da0-d167-4afb-899d-f468042df70d-etc-cni-netd\") on node \"srv-uleoy.gb1.brightbox.com\" DevicePath \"\"" Dec 13 06:52:00.170853 kubelet[2027]: I1213 06:52:00.170829 2027 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/65456da0-d167-4afb-899d-f468042df70d-cilium-cgroup\") on node \"srv-uleoy.gb1.brightbox.com\" DevicePath \"\"" Dec 13 06:52:00.171025 kubelet[2027]: I1213 06:52:00.171003 2027 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/65456da0-d167-4afb-899d-f468042df70d-host-proc-sys-net\") on node \"srv-uleoy.gb1.brightbox.com\" DevicePath \"\"" Dec 13 06:52:00.171200 kubelet[2027]: I1213 06:52:00.171176 2027 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/65456da0-d167-4afb-899d-f468042df70d-host-proc-sys-kernel\") on node \"srv-uleoy.gb1.brightbox.com\" DevicePath \"\"" Dec 13 06:52:00.171358 kubelet[2027]: I1213 06:52:00.171335 2027 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/65456da0-d167-4afb-899d-f468042df70d-hubble-tls\") on node \"srv-uleoy.gb1.brightbox.com\" DevicePath \"\"" Dec 13 06:52:00.171532 kubelet[2027]: I1213 06:52:00.171510 2027 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/65456da0-d167-4afb-899d-f468042df70d-cilium-run\") on node \"srv-uleoy.gb1.brightbox.com\" 
DevicePath \"\"" Dec 13 06:52:00.171677 kubelet[2027]: I1213 06:52:00.171655 2027 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-h96zd\" (UniqueName: \"kubernetes.io/projected/65456da0-d167-4afb-899d-f468042df70d-kube-api-access-h96zd\") on node \"srv-uleoy.gb1.brightbox.com\" DevicePath \"\"" Dec 13 06:52:00.171847 kubelet[2027]: I1213 06:52:00.171823 2027 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/65456da0-d167-4afb-899d-f468042df70d-lib-modules\") on node \"srv-uleoy.gb1.brightbox.com\" DevicePath \"\"" Dec 13 06:52:00.172010 kubelet[2027]: I1213 06:52:00.171988 2027 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/65456da0-d167-4afb-899d-f468042df70d-hostproc\") on node \"srv-uleoy.gb1.brightbox.com\" DevicePath \"\"" Dec 13 06:52:00.172160 kubelet[2027]: I1213 06:52:00.172138 2027 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/65456da0-d167-4afb-899d-f468042df70d-xtables-lock\") on node \"srv-uleoy.gb1.brightbox.com\" DevicePath \"\"" Dec 13 06:52:00.172345 kubelet[2027]: I1213 06:52:00.172314 2027 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/65456da0-d167-4afb-899d-f468042df70d-cilium-config-path\") on node \"srv-uleoy.gb1.brightbox.com\" DevicePath \"\"" Dec 13 06:52:00.172524 kubelet[2027]: I1213 06:52:00.172501 2027 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/65456da0-d167-4afb-899d-f468042df70d-bpf-maps\") on node \"srv-uleoy.gb1.brightbox.com\" DevicePath \"\"" Dec 13 06:52:00.172694 kubelet[2027]: I1213 06:52:00.172670 2027 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/65456da0-d167-4afb-899d-f468042df70d-clustermesh-secrets\") on node 
\"srv-uleoy.gb1.brightbox.com\" DevicePath \"\"" Dec 13 06:52:00.172862 kubelet[2027]: I1213 06:52:00.172838 2027 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/65456da0-d167-4afb-899d-f468042df70d-cilium-ipsec-secrets\") on node \"srv-uleoy.gb1.brightbox.com\" DevicePath \"\"" Dec 13 06:52:00.663013 kubelet[2027]: W1213 06:52:00.662897 2027 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod65456da0_d167_4afb_899d_f468042df70d.slice/cri-containerd-511adacad42fed4981b24eb9fb2158ab04aafe21fc9ab41bb3f3d4017fa68791.scope WatchSource:0}: container "511adacad42fed4981b24eb9fb2158ab04aafe21fc9ab41bb3f3d4017fa68791" in namespace "k8s.io": not found Dec 13 06:52:00.789734 systemd[1]: var-lib-kubelet-pods-65456da0\x2dd167\x2d4afb\x2d899d\x2df468042df70d-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Dec 13 06:52:00.792031 kubelet[2027]: I1213 06:52:00.791999 2027 scope.go:117] "RemoveContainer" containerID="5d358c436d9e6d6d950191b14b33e11376ab27a5fa721d98f6a434929d588786" Dec 13 06:52:00.794110 env[1189]: time="2024-12-13T06:52:00.793664751Z" level=info msg="RemoveContainer for \"5d358c436d9e6d6d950191b14b33e11376ab27a5fa721d98f6a434929d588786\"" Dec 13 06:52:00.799200 env[1189]: time="2024-12-13T06:52:00.799143392Z" level=info msg="RemoveContainer for \"5d358c436d9e6d6d950191b14b33e11376ab27a5fa721d98f6a434929d588786\" returns successfully" Dec 13 06:52:00.839383 kubelet[2027]: I1213 06:52:00.839316 2027 topology_manager.go:215] "Topology Admit Handler" podUID="c9248faa-8110-4313-ae9c-0d48dde687b5" podNamespace="kube-system" podName="cilium-tldhx" Dec 13 06:52:00.839628 kubelet[2027]: E1213 06:52:00.839448 2027 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="65456da0-d167-4afb-899d-f468042df70d" containerName="mount-cgroup" Dec 13 06:52:00.839628 kubelet[2027]: E1213 
06:52:00.839483 2027 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="65456da0-d167-4afb-899d-f468042df70d" containerName="mount-cgroup" Dec 13 06:52:00.839628 kubelet[2027]: I1213 06:52:00.839553 2027 memory_manager.go:354] "RemoveStaleState removing state" podUID="65456da0-d167-4afb-899d-f468042df70d" containerName="mount-cgroup" Dec 13 06:52:00.839628 kubelet[2027]: I1213 06:52:00.839570 2027 memory_manager.go:354] "RemoveStaleState removing state" podUID="65456da0-d167-4afb-899d-f468042df70d" containerName="mount-cgroup" Dec 13 06:52:00.846711 systemd[1]: Created slice kubepods-burstable-podc9248faa_8110_4313_ae9c_0d48dde687b5.slice. Dec 13 06:52:00.876371 kubelet[2027]: I1213 06:52:00.876301 2027 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c9248faa-8110-4313-ae9c-0d48dde687b5-cilium-ipsec-secrets\") pod \"cilium-tldhx\" (UID: \"c9248faa-8110-4313-ae9c-0d48dde687b5\") " pod="kube-system/cilium-tldhx" Dec 13 06:52:00.876371 kubelet[2027]: I1213 06:52:00.876375 2027 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85j7l\" (UniqueName: \"kubernetes.io/projected/c9248faa-8110-4313-ae9c-0d48dde687b5-kube-api-access-85j7l\") pod \"cilium-tldhx\" (UID: \"c9248faa-8110-4313-ae9c-0d48dde687b5\") " pod="kube-system/cilium-tldhx" Dec 13 06:52:00.876824 kubelet[2027]: I1213 06:52:00.876425 2027 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c9248faa-8110-4313-ae9c-0d48dde687b5-cilium-cgroup\") pod \"cilium-tldhx\" (UID: \"c9248faa-8110-4313-ae9c-0d48dde687b5\") " pod="kube-system/cilium-tldhx" Dec 13 06:52:00.876824 kubelet[2027]: I1213 06:52:00.876482 2027 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/c9248faa-8110-4313-ae9c-0d48dde687b5-xtables-lock\") pod \"cilium-tldhx\" (UID: \"c9248faa-8110-4313-ae9c-0d48dde687b5\") " pod="kube-system/cilium-tldhx" Dec 13 06:52:00.876824 kubelet[2027]: I1213 06:52:00.876523 2027 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c9248faa-8110-4313-ae9c-0d48dde687b5-cilium-run\") pod \"cilium-tldhx\" (UID: \"c9248faa-8110-4313-ae9c-0d48dde687b5\") " pod="kube-system/cilium-tldhx" Dec 13 06:52:00.876824 kubelet[2027]: I1213 06:52:00.876555 2027 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c9248faa-8110-4313-ae9c-0d48dde687b5-lib-modules\") pod \"cilium-tldhx\" (UID: \"c9248faa-8110-4313-ae9c-0d48dde687b5\") " pod="kube-system/cilium-tldhx" Dec 13 06:52:00.876824 kubelet[2027]: I1213 06:52:00.876598 2027 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c9248faa-8110-4313-ae9c-0d48dde687b5-cni-path\") pod \"cilium-tldhx\" (UID: \"c9248faa-8110-4313-ae9c-0d48dde687b5\") " pod="kube-system/cilium-tldhx" Dec 13 06:52:00.876824 kubelet[2027]: I1213 06:52:00.876654 2027 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c9248faa-8110-4313-ae9c-0d48dde687b5-etc-cni-netd\") pod \"cilium-tldhx\" (UID: \"c9248faa-8110-4313-ae9c-0d48dde687b5\") " pod="kube-system/cilium-tldhx" Dec 13 06:52:00.876824 kubelet[2027]: I1213 06:52:00.876689 2027 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c9248faa-8110-4313-ae9c-0d48dde687b5-hostproc\") pod \"cilium-tldhx\" (UID: \"c9248faa-8110-4313-ae9c-0d48dde687b5\") " 
pod="kube-system/cilium-tldhx" Dec 13 06:52:00.876824 kubelet[2027]: I1213 06:52:00.876736 2027 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c9248faa-8110-4313-ae9c-0d48dde687b5-clustermesh-secrets\") pod \"cilium-tldhx\" (UID: \"c9248faa-8110-4313-ae9c-0d48dde687b5\") " pod="kube-system/cilium-tldhx" Dec 13 06:52:00.876824 kubelet[2027]: I1213 06:52:00.876770 2027 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c9248faa-8110-4313-ae9c-0d48dde687b5-host-proc-sys-net\") pod \"cilium-tldhx\" (UID: \"c9248faa-8110-4313-ae9c-0d48dde687b5\") " pod="kube-system/cilium-tldhx" Dec 13 06:52:00.876824 kubelet[2027]: I1213 06:52:00.876808 2027 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c9248faa-8110-4313-ae9c-0d48dde687b5-host-proc-sys-kernel\") pod \"cilium-tldhx\" (UID: \"c9248faa-8110-4313-ae9c-0d48dde687b5\") " pod="kube-system/cilium-tldhx" Dec 13 06:52:00.877620 kubelet[2027]: I1213 06:52:00.876866 2027 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c9248faa-8110-4313-ae9c-0d48dde687b5-hubble-tls\") pod \"cilium-tldhx\" (UID: \"c9248faa-8110-4313-ae9c-0d48dde687b5\") " pod="kube-system/cilium-tldhx" Dec 13 06:52:00.877620 kubelet[2027]: I1213 06:52:00.876916 2027 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c9248faa-8110-4313-ae9c-0d48dde687b5-bpf-maps\") pod \"cilium-tldhx\" (UID: \"c9248faa-8110-4313-ae9c-0d48dde687b5\") " pod="kube-system/cilium-tldhx" Dec 13 06:52:00.877620 kubelet[2027]: I1213 06:52:00.876953 2027 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c9248faa-8110-4313-ae9c-0d48dde687b5-cilium-config-path\") pod \"cilium-tldhx\" (UID: \"c9248faa-8110-4313-ae9c-0d48dde687b5\") " pod="kube-system/cilium-tldhx" Dec 13 06:52:01.151831 env[1189]: time="2024-12-13T06:52:01.151283947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tldhx,Uid:c9248faa-8110-4313-ae9c-0d48dde687b5,Namespace:kube-system,Attempt:0,}" Dec 13 06:52:01.170397 env[1189]: time="2024-12-13T06:52:01.170073843Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 06:52:01.170397 env[1189]: time="2024-12-13T06:52:01.170135964Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 06:52:01.170397 env[1189]: time="2024-12-13T06:52:01.170154401Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 06:52:01.170744 env[1189]: time="2024-12-13T06:52:01.170453417Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b13291f84756acd3b05d8a14fda4d539a0c79b2981c0b141665efbd8501cc53d pid=3912 runtime=io.containerd.runc.v2 Dec 13 06:52:01.188926 systemd[1]: Started cri-containerd-b13291f84756acd3b05d8a14fda4d539a0c79b2981c0b141665efbd8501cc53d.scope. 
Dec 13 06:52:01.231216 env[1189]: time="2024-12-13T06:52:01.231154736Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tldhx,Uid:c9248faa-8110-4313-ae9c-0d48dde687b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"b13291f84756acd3b05d8a14fda4d539a0c79b2981c0b141665efbd8501cc53d\"" Dec 13 06:52:01.238525 env[1189]: time="2024-12-13T06:52:01.238003068Z" level=info msg="CreateContainer within sandbox \"b13291f84756acd3b05d8a14fda4d539a0c79b2981c0b141665efbd8501cc53d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 06:52:01.253743 env[1189]: time="2024-12-13T06:52:01.253535967Z" level=info msg="CreateContainer within sandbox \"b13291f84756acd3b05d8a14fda4d539a0c79b2981c0b141665efbd8501cc53d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"66e5f2affe8577c31f96067bb5bddf4aac7300a8abcaec9bd3eeb35098d544d5\"" Dec 13 06:52:01.256995 env[1189]: time="2024-12-13T06:52:01.256946442Z" level=info msg="StartContainer for \"66e5f2affe8577c31f96067bb5bddf4aac7300a8abcaec9bd3eeb35098d544d5\"" Dec 13 06:52:01.284952 systemd[1]: Started cri-containerd-66e5f2affe8577c31f96067bb5bddf4aac7300a8abcaec9bd3eeb35098d544d5.scope. Dec 13 06:52:01.333845 env[1189]: time="2024-12-13T06:52:01.333772209Z" level=info msg="StartContainer for \"66e5f2affe8577c31f96067bb5bddf4aac7300a8abcaec9bd3eeb35098d544d5\" returns successfully" Dec 13 06:52:01.350035 systemd[1]: cri-containerd-66e5f2affe8577c31f96067bb5bddf4aac7300a8abcaec9bd3eeb35098d544d5.scope: Deactivated successfully. 
Dec 13 06:52:01.394911 env[1189]: time="2024-12-13T06:52:01.394832017Z" level=info msg="shim disconnected" id=66e5f2affe8577c31f96067bb5bddf4aac7300a8abcaec9bd3eeb35098d544d5 Dec 13 06:52:01.394911 env[1189]: time="2024-12-13T06:52:01.394903095Z" level=warning msg="cleaning up after shim disconnected" id=66e5f2affe8577c31f96067bb5bddf4aac7300a8abcaec9bd3eeb35098d544d5 namespace=k8s.io Dec 13 06:52:01.394911 env[1189]: time="2024-12-13T06:52:01.394928175Z" level=info msg="cleaning up dead shim" Dec 13 06:52:01.409424 env[1189]: time="2024-12-13T06:52:01.408850452Z" level=warning msg="cleanup warnings time=\"2024-12-13T06:52:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3996 runtime=io.containerd.runc.v2\n" Dec 13 06:52:01.801577 env[1189]: time="2024-12-13T06:52:01.801410720Z" level=info msg="CreateContainer within sandbox \"b13291f84756acd3b05d8a14fda4d539a0c79b2981c0b141665efbd8501cc53d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 06:52:01.820558 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1846777166.mount: Deactivated successfully. Dec 13 06:52:01.825383 env[1189]: time="2024-12-13T06:52:01.824949550Z" level=info msg="CreateContainer within sandbox \"b13291f84756acd3b05d8a14fda4d539a0c79b2981c0b141665efbd8501cc53d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8877ddf410e85b8f87dc2873cb021961bbbca5a270d1299374af934c5bec1fac\"" Dec 13 06:52:01.827029 env[1189]: time="2024-12-13T06:52:01.826991181Z" level=info msg="StartContainer for \"8877ddf410e85b8f87dc2873cb021961bbbca5a270d1299374af934c5bec1fac\"" Dec 13 06:52:01.856007 systemd[1]: Started cri-containerd-8877ddf410e85b8f87dc2873cb021961bbbca5a270d1299374af934c5bec1fac.scope. 
Dec 13 06:52:01.908967 env[1189]: time="2024-12-13T06:52:01.908899151Z" level=info msg="StartContainer for \"8877ddf410e85b8f87dc2873cb021961bbbca5a270d1299374af934c5bec1fac\" returns successfully" Dec 13 06:52:01.927846 systemd[1]: cri-containerd-8877ddf410e85b8f87dc2873cb021961bbbca5a270d1299374af934c5bec1fac.scope: Deactivated successfully. Dec 13 06:52:01.962263 env[1189]: time="2024-12-13T06:52:01.962188051Z" level=info msg="shim disconnected" id=8877ddf410e85b8f87dc2873cb021961bbbca5a270d1299374af934c5bec1fac Dec 13 06:52:01.962263 env[1189]: time="2024-12-13T06:52:01.962262106Z" level=warning msg="cleaning up after shim disconnected" id=8877ddf410e85b8f87dc2873cb021961bbbca5a270d1299374af934c5bec1fac namespace=k8s.io Dec 13 06:52:01.962690 env[1189]: time="2024-12-13T06:52:01.962280038Z" level=info msg="cleaning up dead shim" Dec 13 06:52:01.974582 env[1189]: time="2024-12-13T06:52:01.974522794Z" level=warning msg="cleanup warnings time=\"2024-12-13T06:52:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4053 runtime=io.containerd.runc.v2\n" Dec 13 06:52:02.155205 kubelet[2027]: I1213 06:52:02.154479 2027 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="65456da0-d167-4afb-899d-f468042df70d" path="/var/lib/kubelet/pods/65456da0-d167-4afb-899d-f468042df70d/volumes" Dec 13 06:52:02.790673 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8877ddf410e85b8f87dc2873cb021961bbbca5a270d1299374af934c5bec1fac-rootfs.mount: Deactivated successfully. 
Dec 13 06:52:02.809163 env[1189]: time="2024-12-13T06:52:02.809101558Z" level=info msg="CreateContainer within sandbox \"b13291f84756acd3b05d8a14fda4d539a0c79b2981c0b141665efbd8501cc53d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 06:52:02.838972 env[1189]: time="2024-12-13T06:52:02.838913217Z" level=info msg="CreateContainer within sandbox \"b13291f84756acd3b05d8a14fda4d539a0c79b2981c0b141665efbd8501cc53d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"cb8a8733e326c36917266111392e1c69cb969194fdb4cd0eaf20fcd8500d058e\"" Dec 13 06:52:02.838971 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount790161094.mount: Deactivated successfully. Dec 13 06:52:02.841238 env[1189]: time="2024-12-13T06:52:02.841201559Z" level=info msg="StartContainer for \"cb8a8733e326c36917266111392e1c69cb969194fdb4cd0eaf20fcd8500d058e\"" Dec 13 06:52:02.874934 systemd[1]: Started cri-containerd-cb8a8733e326c36917266111392e1c69cb969194fdb4cd0eaf20fcd8500d058e.scope. Dec 13 06:52:02.933459 env[1189]: time="2024-12-13T06:52:02.933351973Z" level=info msg="StartContainer for \"cb8a8733e326c36917266111392e1c69cb969194fdb4cd0eaf20fcd8500d058e\" returns successfully" Dec 13 06:52:02.940992 systemd[1]: cri-containerd-cb8a8733e326c36917266111392e1c69cb969194fdb4cd0eaf20fcd8500d058e.scope: Deactivated successfully. 
Dec 13 06:52:02.973280 env[1189]: time="2024-12-13T06:52:02.973218738Z" level=info msg="shim disconnected" id=cb8a8733e326c36917266111392e1c69cb969194fdb4cd0eaf20fcd8500d058e Dec 13 06:52:02.973720 env[1189]: time="2024-12-13T06:52:02.973688508Z" level=warning msg="cleaning up after shim disconnected" id=cb8a8733e326c36917266111392e1c69cb969194fdb4cd0eaf20fcd8500d058e namespace=k8s.io Dec 13 06:52:02.973899 env[1189]: time="2024-12-13T06:52:02.973868602Z" level=info msg="cleaning up dead shim" Dec 13 06:52:02.984868 env[1189]: time="2024-12-13T06:52:02.984798706Z" level=warning msg="cleanup warnings time=\"2024-12-13T06:52:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4109 runtime=io.containerd.runc.v2\n" Dec 13 06:52:03.774849 kubelet[2027]: W1213 06:52:03.773538 2027 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod65456da0_d167_4afb_899d_f468042df70d.slice/cri-containerd-5d358c436d9e6d6d950191b14b33e11376ab27a5fa721d98f6a434929d588786.scope WatchSource:0}: container "5d358c436d9e6d6d950191b14b33e11376ab27a5fa721d98f6a434929d588786" in namespace "k8s.io": not found Dec 13 06:52:03.820169 env[1189]: time="2024-12-13T06:52:03.820095033Z" level=info msg="CreateContainer within sandbox \"b13291f84756acd3b05d8a14fda4d539a0c79b2981c0b141665efbd8501cc53d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 06:52:03.859903 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount290694895.mount: Deactivated successfully. 
Dec 13 06:52:03.868778 env[1189]: time="2024-12-13T06:52:03.868662207Z" level=info msg="CreateContainer within sandbox \"b13291f84756acd3b05d8a14fda4d539a0c79b2981c0b141665efbd8501cc53d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"3dee8430491b3557d5bab6dc3bca76061ece4f40a1bee793daf8bdec233f4b80\"" Dec 13 06:52:03.869931 env[1189]: time="2024-12-13T06:52:03.869885597Z" level=info msg="StartContainer for \"3dee8430491b3557d5bab6dc3bca76061ece4f40a1bee793daf8bdec233f4b80\"" Dec 13 06:52:03.901371 systemd[1]: Started cri-containerd-3dee8430491b3557d5bab6dc3bca76061ece4f40a1bee793daf8bdec233f4b80.scope. Dec 13 06:52:03.966781 env[1189]: time="2024-12-13T06:52:03.966688014Z" level=info msg="StartContainer for \"3dee8430491b3557d5bab6dc3bca76061ece4f40a1bee793daf8bdec233f4b80\" returns successfully" Dec 13 06:52:03.970012 systemd[1]: cri-containerd-3dee8430491b3557d5bab6dc3bca76061ece4f40a1bee793daf8bdec233f4b80.scope: Deactivated successfully. Dec 13 06:52:04.005888 env[1189]: time="2024-12-13T06:52:04.005819970Z" level=info msg="shim disconnected" id=3dee8430491b3557d5bab6dc3bca76061ece4f40a1bee793daf8bdec233f4b80 Dec 13 06:52:04.005888 env[1189]: time="2024-12-13T06:52:04.005885784Z" level=warning msg="cleaning up after shim disconnected" id=3dee8430491b3557d5bab6dc3bca76061ece4f40a1bee793daf8bdec233f4b80 namespace=k8s.io Dec 13 06:52:04.006246 env[1189]: time="2024-12-13T06:52:04.005903269Z" level=info msg="cleaning up dead shim" Dec 13 06:52:04.020841 env[1189]: time="2024-12-13T06:52:04.020771688Z" level=warning msg="cleanup warnings time=\"2024-12-13T06:52:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4163 runtime=io.containerd.runc.v2\n" Dec 13 06:52:04.335287 kubelet[2027]: E1213 06:52:04.335227 2027 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 06:52:04.790619 
systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3dee8430491b3557d5bab6dc3bca76061ece4f40a1bee793daf8bdec233f4b80-rootfs.mount: Deactivated successfully. Dec 13 06:52:04.826755 env[1189]: time="2024-12-13T06:52:04.826696874Z" level=info msg="CreateContainer within sandbox \"b13291f84756acd3b05d8a14fda4d539a0c79b2981c0b141665efbd8501cc53d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 06:52:04.852387 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1457561969.mount: Deactivated successfully. Dec 13 06:52:04.855563 env[1189]: time="2024-12-13T06:52:04.855511817Z" level=info msg="CreateContainer within sandbox \"b13291f84756acd3b05d8a14fda4d539a0c79b2981c0b141665efbd8501cc53d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"677bf5a9d934d3822e7ac71eb46ad0b4aae818b13a655a91eabb532a1c91d82f\"" Dec 13 06:52:04.856797 env[1189]: time="2024-12-13T06:52:04.856761920Z" level=info msg="StartContainer for \"677bf5a9d934d3822e7ac71eb46ad0b4aae818b13a655a91eabb532a1c91d82f\"" Dec 13 06:52:04.883904 systemd[1]: Started cri-containerd-677bf5a9d934d3822e7ac71eb46ad0b4aae818b13a655a91eabb532a1c91d82f.scope. 
Dec 13 06:52:04.941639 env[1189]: time="2024-12-13T06:52:04.941577257Z" level=info msg="StartContainer for \"677bf5a9d934d3822e7ac71eb46ad0b4aae818b13a655a91eabb532a1c91d82f\" returns successfully" Dec 13 06:52:05.650585 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Dec 13 06:52:05.847481 kubelet[2027]: I1213 06:52:05.847415 2027 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-tldhx" podStartSLOduration=5.847302632 podStartE2EDuration="5.847302632s" podCreationTimestamp="2024-12-13 06:52:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 06:52:05.845819765 +0000 UTC m=+172.115848637" watchObservedRunningTime="2024-12-13 06:52:05.847302632 +0000 UTC m=+172.117331499" Dec 13 06:52:06.853249 systemd[1]: run-containerd-runc-k8s.io-677bf5a9d934d3822e7ac71eb46ad0b4aae818b13a655a91eabb532a1c91d82f-runc.vmDiST.mount: Deactivated successfully. Dec 13 06:52:06.889932 kubelet[2027]: W1213 06:52:06.889852 2027 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc9248faa_8110_4313_ae9c_0d48dde687b5.slice/cri-containerd-66e5f2affe8577c31f96067bb5bddf4aac7300a8abcaec9bd3eeb35098d544d5.scope WatchSource:0}: task 66e5f2affe8577c31f96067bb5bddf4aac7300a8abcaec9bd3eeb35098d544d5 not found: not found Dec 13 06:52:09.099012 systemd[1]: run-containerd-runc-k8s.io-677bf5a9d934d3822e7ac71eb46ad0b4aae818b13a655a91eabb532a1c91d82f-runc.boos3w.mount: Deactivated successfully. 
Dec 13 06:52:09.272687 systemd-networkd[1028]: lxc_health: Link UP Dec 13 06:52:09.306479 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Dec 13 06:52:09.306907 systemd-networkd[1028]: lxc_health: Gained carrier Dec 13 06:52:10.004182 kubelet[2027]: W1213 06:52:10.004024 2027 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc9248faa_8110_4313_ae9c_0d48dde687b5.slice/cri-containerd-8877ddf410e85b8f87dc2873cb021961bbbca5a270d1299374af934c5bec1fac.scope WatchSource:0}: task 8877ddf410e85b8f87dc2873cb021961bbbca5a270d1299374af934c5bec1fac not found: not found Dec 13 06:52:11.301887 systemd-networkd[1028]: lxc_health: Gained IPv6LL Dec 13 06:52:11.401102 systemd[1]: run-containerd-runc-k8s.io-677bf5a9d934d3822e7ac71eb46ad0b4aae818b13a655a91eabb532a1c91d82f-runc.aLtt8b.mount: Deactivated successfully. Dec 13 06:52:13.125603 kubelet[2027]: W1213 06:52:13.125520 2027 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc9248faa_8110_4313_ae9c_0d48dde687b5.slice/cri-containerd-cb8a8733e326c36917266111392e1c69cb969194fdb4cd0eaf20fcd8500d058e.scope WatchSource:0}: task cb8a8733e326c36917266111392e1c69cb969194fdb4cd0eaf20fcd8500d058e not found: not found Dec 13 06:52:13.584706 systemd[1]: run-containerd-runc-k8s.io-677bf5a9d934d3822e7ac71eb46ad0b4aae818b13a655a91eabb532a1c91d82f-runc.aoRe9M.mount: Deactivated successfully. 
Dec 13 06:52:14.081526 env[1189]: time="2024-12-13T06:52:14.081428219Z" level=info msg="StopPodSandbox for \"976dd6c8be99eb45113d8c0ecbaeb2b94922d38cdfde3f1698c947a1b54fc762\""
Dec 13 06:52:14.082584 env[1189]: time="2024-12-13T06:52:14.082488485Z" level=info msg="TearDown network for sandbox \"976dd6c8be99eb45113d8c0ecbaeb2b94922d38cdfde3f1698c947a1b54fc762\" successfully"
Dec 13 06:52:14.082846 env[1189]: time="2024-12-13T06:52:14.082810938Z" level=info msg="StopPodSandbox for \"976dd6c8be99eb45113d8c0ecbaeb2b94922d38cdfde3f1698c947a1b54fc762\" returns successfully"
Dec 13 06:52:14.083741 env[1189]: time="2024-12-13T06:52:14.083697648Z" level=info msg="RemovePodSandbox for \"976dd6c8be99eb45113d8c0ecbaeb2b94922d38cdfde3f1698c947a1b54fc762\""
Dec 13 06:52:14.083864 env[1189]: time="2024-12-13T06:52:14.083765636Z" level=info msg="Forcibly stopping sandbox \"976dd6c8be99eb45113d8c0ecbaeb2b94922d38cdfde3f1698c947a1b54fc762\""
Dec 13 06:52:14.084042 env[1189]: time="2024-12-13T06:52:14.083927140Z" level=info msg="TearDown network for sandbox \"976dd6c8be99eb45113d8c0ecbaeb2b94922d38cdfde3f1698c947a1b54fc762\" successfully"
Dec 13 06:52:14.090523 env[1189]: time="2024-12-13T06:52:14.090482227Z" level=info msg="RemovePodSandbox \"976dd6c8be99eb45113d8c0ecbaeb2b94922d38cdfde3f1698c947a1b54fc762\" returns successfully"
Dec 13 06:52:14.091014 env[1189]: time="2024-12-13T06:52:14.090965693Z" level=info msg="StopPodSandbox for \"af419f687c8fb80d25b0f3c58ccd84de09c8169121823cda362f43c2fb8473e8\""
Dec 13 06:52:14.091126 env[1189]: time="2024-12-13T06:52:14.091073748Z" level=info msg="TearDown network for sandbox \"af419f687c8fb80d25b0f3c58ccd84de09c8169121823cda362f43c2fb8473e8\" successfully"
Dec 13 06:52:14.091210 env[1189]: time="2024-12-13T06:52:14.091123372Z" level=info msg="StopPodSandbox for \"af419f687c8fb80d25b0f3c58ccd84de09c8169121823cda362f43c2fb8473e8\" returns successfully"
Dec 13 06:52:14.091495 env[1189]: time="2024-12-13T06:52:14.091458832Z" level=info msg="RemovePodSandbox for \"af419f687c8fb80d25b0f3c58ccd84de09c8169121823cda362f43c2fb8473e8\""
Dec 13 06:52:14.091665 env[1189]: time="2024-12-13T06:52:14.091615868Z" level=info msg="Forcibly stopping sandbox \"af419f687c8fb80d25b0f3c58ccd84de09c8169121823cda362f43c2fb8473e8\""
Dec 13 06:52:14.091867 env[1189]: time="2024-12-13T06:52:14.091829743Z" level=info msg="TearDown network for sandbox \"af419f687c8fb80d25b0f3c58ccd84de09c8169121823cda362f43c2fb8473e8\" successfully"
Dec 13 06:52:14.095289 env[1189]: time="2024-12-13T06:52:14.095244112Z" level=info msg="RemovePodSandbox \"af419f687c8fb80d25b0f3c58ccd84de09c8169121823cda362f43c2fb8473e8\" returns successfully"
Dec 13 06:52:14.095811 env[1189]: time="2024-12-13T06:52:14.095768405Z" level=info msg="StopPodSandbox for \"9edd33d684e407593a7e002934287210fff710bb812a671c9da3e763606c66a1\""
Dec 13 06:52:14.096146 env[1189]: time="2024-12-13T06:52:14.095881016Z" level=info msg="TearDown network for sandbox \"9edd33d684e407593a7e002934287210fff710bb812a671c9da3e763606c66a1\" successfully"
Dec 13 06:52:14.096146 env[1189]: time="2024-12-13T06:52:14.095924960Z" level=info msg="StopPodSandbox for \"9edd33d684e407593a7e002934287210fff710bb812a671c9da3e763606c66a1\" returns successfully"
Dec 13 06:52:14.096317 env[1189]: time="2024-12-13T06:52:14.096278038Z" level=info msg="RemovePodSandbox for \"9edd33d684e407593a7e002934287210fff710bb812a671c9da3e763606c66a1\""
Dec 13 06:52:14.096520 env[1189]: time="2024-12-13T06:52:14.096319387Z" level=info msg="Forcibly stopping sandbox \"9edd33d684e407593a7e002934287210fff710bb812a671c9da3e763606c66a1\""
Dec 13 06:52:14.096520 env[1189]: time="2024-12-13T06:52:14.096479567Z" level=info msg="TearDown network for sandbox \"9edd33d684e407593a7e002934287210fff710bb812a671c9da3e763606c66a1\" successfully"
Dec 13 06:52:14.100053 env[1189]: time="2024-12-13T06:52:14.100008342Z" level=info msg="RemovePodSandbox \"9edd33d684e407593a7e002934287210fff710bb812a671c9da3e763606c66a1\" returns successfully"
Dec 13 06:52:15.806041 systemd[1]: run-containerd-runc-k8s.io-677bf5a9d934d3822e7ac71eb46ad0b4aae818b13a655a91eabb532a1c91d82f-runc.INZfzh.mount: Deactivated successfully.
Dec 13 06:52:16.040931 sshd[3856]: pam_unix(sshd:session): session closed for user core
Dec 13 06:52:16.046912 systemd[1]: sshd@23-10.230.20.2:22-139.178.89.65:39490.service: Deactivated successfully.
Dec 13 06:52:16.048454 systemd[1]: session-24.scope: Deactivated successfully.
Dec 13 06:52:16.049599 systemd-logind[1180]: Session 24 logged out. Waiting for processes to exit.
Dec 13 06:52:16.052262 systemd-logind[1180]: Removed session 24.
Dec 13 06:52:16.241505 kubelet[2027]: W1213 06:52:16.240515 2027 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc9248faa_8110_4313_ae9c_0d48dde687b5.slice/cri-containerd-3dee8430491b3557d5bab6dc3bca76061ece4f40a1bee793daf8bdec233f4b80.scope WatchSource:0}: task 3dee8430491b3557d5bab6dc3bca76061ece4f40a1bee793daf8bdec233f4b80 not found: not found