Dec 13 06:42:09.946615 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Thu Dec 12 23:50:37 -00 2024
Dec 13 06:42:09.946668 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c
Dec 13 06:42:09.946688 kernel: BIOS-provided physical RAM map:
Dec 13 06:42:09.946698 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 13 06:42:09.946708 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 13 06:42:09.946718 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 13 06:42:09.946729 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Dec 13 06:42:09.946740 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Dec 13 06:42:09.946750 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Dec 13 06:42:09.946760 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Dec 13 06:42:09.946774 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 13 06:42:09.946784 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 13 06:42:09.946794 kernel: NX (Execute Disable) protection: active
Dec 13 06:42:09.946804 kernel: SMBIOS 2.8 present.
Dec 13 06:42:09.946817 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
Dec 13 06:42:09.946828 kernel: Hypervisor detected: KVM
Dec 13 06:42:09.946842 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 13 06:42:09.946853 kernel: kvm-clock: cpu 0, msr 2119b001, primary cpu clock
Dec 13 06:42:09.946864 kernel: kvm-clock: using sched offset of 5265611101 cycles
Dec 13 06:42:09.946876 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 13 06:42:09.946887 kernel: tsc: Detected 2499.998 MHz processor
Dec 13 06:42:09.946898 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 06:42:09.946909 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 06:42:09.946920 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Dec 13 06:42:09.946931 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 06:42:09.946946 kernel: Using GB pages for direct mapping
Dec 13 06:42:09.946957 kernel: ACPI: Early table checksum verification disabled
Dec 13 06:42:09.946967 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Dec 13 06:42:09.946978 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 06:42:09.946989 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 06:42:09.947000 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 06:42:09.947011 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Dec 13 06:42:09.947022 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 06:42:09.947033 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 06:42:09.947048 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 06:42:09.949488 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 06:42:09.949505 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Dec 13 06:42:09.949516 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Dec 13 06:42:09.949527 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Dec 13 06:42:09.949539 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Dec 13 06:42:09.949559 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Dec 13 06:42:09.949574 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Dec 13 06:42:09.949586 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Dec 13 06:42:09.949598 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Dec 13 06:42:09.949610 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Dec 13 06:42:09.949621 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Dec 13 06:42:09.949644 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0
Dec 13 06:42:09.949656 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Dec 13 06:42:09.949672 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0
Dec 13 06:42:09.949684 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Dec 13 06:42:09.949695 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0
Dec 13 06:42:09.949707 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Dec 13 06:42:09.949718 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0
Dec 13 06:42:09.949729 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Dec 13 06:42:09.949741 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0
Dec 13 06:42:09.949752 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Dec 13 06:42:09.949764 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0
Dec 13 06:42:09.949775 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Dec 13 06:42:09.949791 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0
Dec 13 06:42:09.949802 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Dec 13 06:42:09.949814 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Dec 13 06:42:09.949826 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Dec 13 06:42:09.949837 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff]
Dec 13 06:42:09.949849 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff]
Dec 13 06:42:09.949861 kernel: Zone ranges:
Dec 13 06:42:09.949873 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 06:42:09.949884 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Dec 13 06:42:09.949900 kernel: Normal empty
Dec 13 06:42:09.949912 kernel: Movable zone start for each node
Dec 13 06:42:09.949924 kernel: Early memory node ranges
Dec 13 06:42:09.949935 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Dec 13 06:42:09.949947 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Dec 13 06:42:09.949959 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Dec 13 06:42:09.949970 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 06:42:09.949982 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 13 06:42:09.949993 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Dec 13 06:42:09.950009 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 13 06:42:09.950020 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 13 06:42:09.950032 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 13 06:42:09.950044 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 13 06:42:09.950078 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 13 06:42:09.950092 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 06:42:09.950104 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 13 06:42:09.950116 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 13 06:42:09.950127 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 06:42:09.950144 kernel: TSC deadline timer available
Dec 13 06:42:09.950155 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs
Dec 13 06:42:09.950167 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Dec 13 06:42:09.950179 kernel: Booting paravirtualized kernel on KVM
Dec 13 06:42:09.950191 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 06:42:09.950203 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:16 nr_node_ids:1
Dec 13 06:42:09.950215 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u262144
Dec 13 06:42:09.950226 kernel: pcpu-alloc: s188696 r8192 d32488 u262144 alloc=1*2097152
Dec 13 06:42:09.950238 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Dec 13 06:42:09.950254 kernel: kvm-guest: stealtime: cpu 0, msr 7da1c0c0
Dec 13 06:42:09.950265 kernel: kvm-guest: PV spinlocks enabled
Dec 13 06:42:09.950277 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 13 06:42:09.950289 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804
Dec 13 06:42:09.950300 kernel: Policy zone: DMA32
Dec 13 06:42:09.950313 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c
Dec 13 06:42:09.950326 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 06:42:09.950338 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 06:42:09.950354 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 13 06:42:09.950365 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 06:42:09.950378 kernel: Memory: 1903832K/2096616K available (12294K kernel code, 2275K rwdata, 13716K rodata, 47476K init, 4108K bss, 192524K reserved, 0K cma-reserved)
Dec 13 06:42:09.950389 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Dec 13 06:42:09.950401 kernel: Kernel/User page tables isolation: enabled
Dec 13 06:42:09.950413 kernel: ftrace: allocating 34549 entries in 135 pages
Dec 13 06:42:09.950424 kernel: ftrace: allocated 135 pages with 4 groups
Dec 13 06:42:09.950436 kernel: rcu: Hierarchical RCU implementation.
Dec 13 06:42:09.950448 kernel: rcu: RCU event tracing is enabled.
Dec 13 06:42:09.950464 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Dec 13 06:42:09.950476 kernel: Rude variant of Tasks RCU enabled.
Dec 13 06:42:09.950488 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 06:42:09.950500 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 06:42:09.950511 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Dec 13 06:42:09.950523 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Dec 13 06:42:09.950535 kernel: random: crng init done
Dec 13 06:42:09.950559 kernel: Console: colour VGA+ 80x25
Dec 13 06:42:09.950571 kernel: printk: console [tty0] enabled
Dec 13 06:42:09.950583 kernel: printk: console [ttyS0] enabled
Dec 13 06:42:09.950595 kernel: ACPI: Core revision 20210730
Dec 13 06:42:09.950608 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 06:42:09.950624 kernel: x2apic enabled
Dec 13 06:42:09.950646 kernel: Switched APIC routing to physical x2apic.
Dec 13 06:42:09.950659 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Dec 13 06:42:09.950671 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Dec 13 06:42:09.950684 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 13 06:42:09.950701 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Dec 13 06:42:09.950713 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Dec 13 06:42:09.950725 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 06:42:09.950737 kernel: Spectre V2 : Mitigation: Retpolines
Dec 13 06:42:09.950749 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 06:42:09.950761 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 13 06:42:09.950773 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Dec 13 06:42:09.950785 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 13 06:42:09.950797 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Dec 13 06:42:09.950809 kernel: MDS: Mitigation: Clear CPU buffers
Dec 13 06:42:09.950821 kernel: MMIO Stale Data: Unknown: No mitigations
Dec 13 06:42:09.950837 kernel: SRBDS: Unknown: Dependent on hypervisor status
Dec 13 06:42:09.950850 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 06:42:09.950862 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 06:42:09.950874 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 06:42:09.950886 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 13 06:42:09.950898 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Dec 13 06:42:09.950910 kernel: Freeing SMP alternatives memory: 32K
Dec 13 06:42:09.950922 kernel: pid_max: default: 32768 minimum: 301
Dec 13 06:42:09.950934 kernel: LSM: Security Framework initializing
Dec 13 06:42:09.950946 kernel: SELinux: Initializing.
Dec 13 06:42:09.950958 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 06:42:09.950975 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 06:42:09.950987 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9)
Dec 13 06:42:09.950999 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only.
Dec 13 06:42:09.951011 kernel: signal: max sigframe size: 1776
Dec 13 06:42:09.951024 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 06:42:09.951036 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Dec 13 06:42:09.951048 kernel: smp: Bringing up secondary CPUs ...
Dec 13 06:42:09.951078 kernel: x86: Booting SMP configuration:
Dec 13 06:42:09.951091 kernel: .... node #0, CPUs: #1
Dec 13 06:42:09.951108 kernel: kvm-clock: cpu 1, msr 2119b041, secondary cpu clock
Dec 13 06:42:09.951121 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Dec 13 06:42:09.951133 kernel: kvm-guest: stealtime: cpu 1, msr 7da5c0c0
Dec 13 06:42:09.951145 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 06:42:09.951157 kernel: smpboot: Max logical packages: 16
Dec 13 06:42:09.951170 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Dec 13 06:42:09.951182 kernel: devtmpfs: initialized
Dec 13 06:42:09.951194 kernel: x86/mm: Memory block size: 128MB
Dec 13 06:42:09.951207 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 06:42:09.951219 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Dec 13 06:42:09.951236 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 06:42:09.951248 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 06:42:09.951260 kernel: audit: initializing netlink subsys (disabled)
Dec 13 06:42:09.951273 kernel: audit: type=2000 audit(1734072128.684:1): state=initialized audit_enabled=0 res=1
Dec 13 06:42:09.951285 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 06:42:09.951297 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 06:42:09.951309 kernel: cpuidle: using governor menu
Dec 13 06:42:09.951322 kernel: ACPI: bus type PCI registered
Dec 13 06:42:09.951334 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 06:42:09.951350 kernel: dca service started, version 1.12.1
Dec 13 06:42:09.951363 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Dec 13 06:42:09.951375 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820
Dec 13 06:42:09.951387 kernel: PCI: Using configuration type 1 for base access
Dec 13 06:42:09.951399 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 06:42:09.951412 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 06:42:09.951424 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 06:42:09.951436 kernel: ACPI: Added _OSI(Module Device)
Dec 13 06:42:09.951452 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 06:42:09.951465 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 06:42:09.951477 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 06:42:09.951489 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Dec 13 06:42:09.951501 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Dec 13 06:42:09.951513 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Dec 13 06:42:09.951526 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 06:42:09.951538 kernel: ACPI: Interpreter enabled
Dec 13 06:42:09.951550 kernel: ACPI: PM: (supports S0 S5)
Dec 13 06:42:09.951562 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 06:42:09.951578 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 06:42:09.951591 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Dec 13 06:42:09.951603 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 06:42:09.951867 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 06:42:09.952028 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Dec 13 06:42:09.955520 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Dec 13 06:42:09.955544 kernel: PCI host bridge to bus 0000:00
Dec 13 06:42:09.955736 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 13 06:42:09.955879 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 13 06:42:09.956019 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 06:42:09.956173 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Dec 13 06:42:09.956312 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec 13 06:42:09.956451 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Dec 13 06:42:09.956590 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 06:42:09.956816 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Dec 13 06:42:09.956985 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000
Dec 13 06:42:09.957158 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref]
Dec 13 06:42:09.957314 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff]
Dec 13 06:42:09.957467 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref]
Dec 13 06:42:09.957619 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 13 06:42:09.957803 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Dec 13 06:42:09.957959 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff]
Dec 13 06:42:09.960994 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Dec 13 06:42:09.961185 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff]
Dec 13 06:42:09.961356 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Dec 13 06:42:09.961515 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff]
Dec 13 06:42:09.961702 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Dec 13 06:42:09.961860 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff]
Dec 13 06:42:09.962021 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Dec 13 06:42:09.962193 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff]
Dec 13 06:42:09.962358 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Dec 13 06:42:09.962513 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff]
Dec 13 06:42:09.962719 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Dec 13 06:42:09.962875 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff]
Dec 13 06:42:09.963036 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Dec 13 06:42:09.963203 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff]
Dec 13 06:42:09.963367 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Dec 13 06:42:09.963521 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df]
Dec 13 06:42:09.963688 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff]
Dec 13 06:42:09.963851 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Dec 13 06:42:09.964009 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref]
Dec 13 06:42:09.964196 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Dec 13 06:42:09.964353 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Dec 13 06:42:09.964507 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff]
Dec 13 06:42:09.964677 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref]
Dec 13 06:42:09.964842 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Dec 13 06:42:09.965005 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Dec 13 06:42:09.969232 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Dec 13 06:42:09.969401 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff]
Dec 13 06:42:09.969561 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff]
Dec 13 06:42:09.969742 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Dec 13 06:42:09.969900 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Dec 13 06:42:09.970100 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400
Dec 13 06:42:09.970266 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit]
Dec 13 06:42:09.970424 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Dec 13 06:42:09.970576 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Dec 13 06:42:09.970743 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Dec 13 06:42:09.970939 kernel: pci_bus 0000:02: extended config space not accessible
Dec 13 06:42:09.971140 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000
Dec 13 06:42:09.971310 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f]
Dec 13 06:42:09.971469 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Dec 13 06:42:09.971637 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Dec 13 06:42:09.971810 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330
Dec 13 06:42:09.971970 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit]
Dec 13 06:42:09.972139 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Dec 13 06:42:09.972300 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Dec 13 06:42:09.972451 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Dec 13 06:42:09.972620 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00
Dec 13 06:42:09.972796 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Dec 13 06:42:09.972951 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Dec 13 06:42:09.973115 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Dec 13 06:42:09.973274 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Dec 13 06:42:09.973429 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Dec 13 06:42:09.973589 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Dec 13 06:42:09.973756 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Dec 13 06:42:09.973911 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Dec 13 06:42:09.974078 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Dec 13 06:42:09.974246 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Dec 13 06:42:09.974403 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Dec 13 06:42:09.974578 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Dec 13 06:42:09.974775 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Dec 13 06:42:09.974961 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Dec 13 06:42:09.975172 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Dec 13 06:42:09.975346 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Dec 13 06:42:09.975507 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Dec 13 06:42:09.975692 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Dec 13 06:42:09.975864 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Dec 13 06:42:09.975883 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 13 06:42:09.975904 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 13 06:42:09.975924 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 06:42:09.975937 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 13 06:42:09.975950 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Dec 13 06:42:09.975983 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Dec 13 06:42:09.975996 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Dec 13 06:42:09.976009 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Dec 13 06:42:09.976021 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Dec 13 06:42:09.976037 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Dec 13 06:42:09.984096 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Dec 13 06:42:09.984118 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Dec 13 06:42:09.984132 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Dec 13 06:42:09.984145 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Dec 13 06:42:09.984158 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Dec 13 06:42:09.984158 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Dec 13 06:42:09.984171 kernel: iommu: Default domain type: Translated
Dec 13 06:42:09.984184 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 06:42:09.984373 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Dec 13 06:42:09.984533 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 13 06:42:09.984714 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Dec 13 06:42:09.984734 kernel: vgaarb: loaded
Dec 13 06:42:09.984747 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 13 06:42:09.984760 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Dec 13 06:42:09.984773 kernel: PTP clock support registered
Dec 13 06:42:09.984785 kernel: PCI: Using ACPI for IRQ routing
Dec 13 06:42:09.984798 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 06:42:09.984811 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 13 06:42:09.984830 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Dec 13 06:42:09.984842 kernel: clocksource: Switched to clocksource kvm-clock
Dec 13 06:42:09.984855 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 06:42:09.984868 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 06:42:09.984881 kernel: pnp: PnP ACPI init
Dec 13 06:42:09.985084 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Dec 13 06:42:09.985107 kernel: pnp: PnP ACPI: found 5 devices
Dec 13 06:42:09.985120 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 06:42:09.985139 kernel: NET: Registered PF_INET protocol family
Dec 13 06:42:09.985152 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 06:42:09.985165 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Dec 13 06:42:09.985178 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 06:42:09.985190 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 06:42:09.985203 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
Dec 13 06:42:09.985215 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Dec 13 06:42:09.985228 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 06:42:09.985241 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 06:42:09.985258 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 06:42:09.985271 kernel: NET: Registered PF_XDP protocol family
Dec 13 06:42:09.985425 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000
Dec 13 06:42:09.985580 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Dec 13 06:42:09.985749 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Dec 13 06:42:09.985903 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Dec 13 06:42:09.986068 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Dec 13 06:42:09.986231 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Dec 13 06:42:09.986384 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Dec 13 06:42:09.986534 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Dec 13 06:42:09.986699 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Dec 13 06:42:09.986853 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Dec 13 06:42:09.987006 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Dec 13 06:42:09.987178 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Dec 13 06:42:09.987352 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Dec 13 06:42:09.987503 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Dec 13 06:42:09.987668 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Dec 13 06:42:09.987821 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Dec 13 06:42:09.987981 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Dec 13 06:42:09.988154 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Dec 13 06:42:09.988329 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Dec 13 06:42:09.988484 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Dec 13 06:42:09.988659 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Dec 13 06:42:09.988832 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Dec 13 06:42:09.989005 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Dec 13 06:42:09.989174 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Dec 13 06:42:09.989328 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Dec 13 06:42:09.989482 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Dec 13 06:42:09.989647 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Dec 13 06:42:09.989804 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Dec 13 06:42:09.989966 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Dec 13 06:42:09.990135 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Dec 13 06:42:09.990287 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Dec 13 06:42:09.990440 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Dec 13 06:42:09.990592 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Dec 13 06:42:09.990766 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Dec 13 06:42:09.990926 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Dec 13 06:42:10.003938 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Dec 13 06:42:10.004156 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Dec 13 06:42:10.004316 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Dec 13 06:42:10.004473 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Dec 13 06:42:10.004639 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Dec 13 06:42:10.004799 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Dec 13 06:42:10.004952 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Dec 13 06:42:10.005124 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Dec 13 06:42:10.005291 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Dec 13 06:42:10.005442 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Dec 13 06:42:10.005594 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Dec 13 06:42:10.005763 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Dec 13 06:42:10.005916 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Dec 13 06:42:10.006087 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Dec 13 06:42:10.006241 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Dec 13 06:42:10.006388 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 13 06:42:10.006527 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 13 06:42:10.006696 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 06:42:10.006836 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Dec 13 06:42:10.006975 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Dec 13 06:42:10.007128 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Dec 13 06:42:10.007317 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Dec 13 06:42:10.007468 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Dec 13 06:42:10.007614 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Dec 13 06:42:10.007789 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Dec 13 06:42:10.007973 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff]
Dec 13 06:42:10.008144 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Dec 13 06:42:10.008294 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Dec 13 06:42:10.008472 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff]
Dec 13 06:42:10.008622 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Dec 13 06:42:10.008782 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Dec 13 06:42:10.008951 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Dec 13 06:42:10.009117 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Dec 13 06:42:10.009278 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Dec 13 06:42:10.009446 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff]
Dec 13 06:42:10.009622 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Dec 13 06:42:10.009780 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Dec 13 06:42:10.009951 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff]
Dec 13 06:42:10.010126 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Dec 13 06:42:10.010287 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Dec 13 06:42:10.010456 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff]
Dec 13 06:42:10.010622 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Dec 13 06:42:10.010792 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Dec 13 06:42:10.010981 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff]
Dec 13 06:42:10.011153 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff]
Dec 13 06:42:10.011313 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Dec 13 06:42:10.011334 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Dec 13 06:42:10.011348 kernel: PCI: CLS 0 bytes, 
default 64 Dec 13 06:42:10.011361 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Dec 13 06:42:10.011382 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Dec 13 06:42:10.011395 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Dec 13 06:42:10.011409 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Dec 13 06:42:10.011423 kernel: Initialise system trusted keyrings Dec 13 06:42:10.011436 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Dec 13 06:42:10.011449 kernel: Key type asymmetric registered Dec 13 06:42:10.011462 kernel: Asymmetric key parser 'x509' registered Dec 13 06:42:10.011475 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Dec 13 06:42:10.011488 kernel: io scheduler mq-deadline registered Dec 13 06:42:10.011505 kernel: io scheduler kyber registered Dec 13 06:42:10.011519 kernel: io scheduler bfq registered Dec 13 06:42:10.011694 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Dec 13 06:42:10.011852 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Dec 13 06:42:10.012020 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 06:42:10.012266 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Dec 13 06:42:10.012420 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Dec 13 06:42:10.012579 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 06:42:10.012747 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Dec 13 06:42:10.012899 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Dec 13 06:42:10.013050 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- 
IbPresDis- LLActRep+ Dec 13 06:42:10.013219 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Dec 13 06:42:10.013371 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Dec 13 06:42:10.013531 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 06:42:10.013699 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Dec 13 06:42:10.013851 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Dec 13 06:42:10.014003 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 06:42:10.014189 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Dec 13 06:42:10.014341 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Dec 13 06:42:10.014500 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 06:42:10.014665 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Dec 13 06:42:10.014817 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Dec 13 06:42:10.014968 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 06:42:10.015134 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Dec 13 06:42:10.015286 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Dec 13 06:42:10.015445 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 06:42:10.015465 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 13 06:42:10.015479 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Dec 13 06:42:10.015493 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Dec 13 06:42:10.015506 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 06:42:10.015520 
kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 13 06:42:10.015533 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 13 06:42:10.015546 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 13 06:42:10.015566 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 13 06:42:10.015579 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 13 06:42:10.015756 kernel: rtc_cmos 00:03: RTC can wake from S4 Dec 13 06:42:10.015906 kernel: rtc_cmos 00:03: registered as rtc0 Dec 13 06:42:10.016072 kernel: rtc_cmos 00:03: setting system clock to 2024-12-13T06:42:09 UTC (1734072129) Dec 13 06:42:10.016238 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Dec 13 06:42:10.016257 kernel: intel_pstate: CPU model not supported Dec 13 06:42:10.016277 kernel: NET: Registered PF_INET6 protocol family Dec 13 06:42:10.016291 kernel: Segment Routing with IPv6 Dec 13 06:42:10.016304 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 06:42:10.016317 kernel: NET: Registered PF_PACKET protocol family Dec 13 06:42:10.016331 kernel: Key type dns_resolver registered Dec 13 06:42:10.016343 kernel: IPI shorthand broadcast: enabled Dec 13 06:42:10.016357 kernel: sched_clock: Marking stable (977500728, 224763269)->(1486621137, -284357140) Dec 13 06:42:10.016370 kernel: registered taskstats version 1 Dec 13 06:42:10.016383 kernel: Loading compiled-in X.509 certificates Dec 13 06:42:10.016396 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: d9defb0205602bee9bb670636cbe5c74194fdb5e' Dec 13 06:42:10.016413 kernel: Key type .fscrypt registered Dec 13 06:42:10.016426 kernel: Key type fscrypt-provisioning registered Dec 13 06:42:10.016440 kernel: ima: No TPM chip found, activating TPM-bypass! 
Dec 13 06:42:10.016453 kernel: ima: Allocated hash algorithm: sha1 Dec 13 06:42:10.016466 kernel: ima: No architecture policies found Dec 13 06:42:10.016479 kernel: clk: Disabling unused clocks Dec 13 06:42:10.016492 kernel: Freeing unused kernel image (initmem) memory: 47476K Dec 13 06:42:10.016505 kernel: Write protecting the kernel read-only data: 28672k Dec 13 06:42:10.016523 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Dec 13 06:42:10.016537 kernel: Freeing unused kernel image (rodata/data gap) memory: 620K Dec 13 06:42:10.016550 kernel: Run /init as init process Dec 13 06:42:10.016563 kernel: with arguments: Dec 13 06:42:10.016576 kernel: /init Dec 13 06:42:10.016589 kernel: with environment: Dec 13 06:42:10.016602 kernel: HOME=/ Dec 13 06:42:10.016615 kernel: TERM=linux Dec 13 06:42:10.016639 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 06:42:10.016663 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 06:42:10.016687 systemd[1]: Detected virtualization kvm. Dec 13 06:42:10.016701 systemd[1]: Detected architecture x86-64. Dec 13 06:42:10.016715 systemd[1]: Running in initrd. Dec 13 06:42:10.016729 systemd[1]: No hostname configured, using default hostname. Dec 13 06:42:10.016742 systemd[1]: Hostname set to . Dec 13 06:42:10.016757 systemd[1]: Initializing machine ID from VM UUID. Dec 13 06:42:10.016775 systemd[1]: Queued start job for default target initrd.target. Dec 13 06:42:10.016789 systemd[1]: Started systemd-ask-password-console.path. Dec 13 06:42:10.016803 systemd[1]: Reached target cryptsetup.target. Dec 13 06:42:10.016817 systemd[1]: Reached target paths.target. Dec 13 06:42:10.016831 systemd[1]: Reached target slices.target. 
Dec 13 06:42:10.016848 systemd[1]: Reached target swap.target. Dec 13 06:42:10.016863 systemd[1]: Reached target timers.target. Dec 13 06:42:10.016878 systemd[1]: Listening on iscsid.socket. Dec 13 06:42:10.016896 systemd[1]: Listening on iscsiuio.socket. Dec 13 06:42:10.016910 systemd[1]: Listening on systemd-journald-audit.socket. Dec 13 06:42:10.016928 systemd[1]: Listening on systemd-journald-dev-log.socket. Dec 13 06:42:10.016942 systemd[1]: Listening on systemd-journald.socket. Dec 13 06:42:10.016957 systemd[1]: Listening on systemd-networkd.socket. Dec 13 06:42:10.016971 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 06:42:10.016984 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 06:42:10.016999 systemd[1]: Reached target sockets.target. Dec 13 06:42:10.017013 systemd[1]: Starting kmod-static-nodes.service... Dec 13 06:42:10.017032 systemd[1]: Finished network-cleanup.service. Dec 13 06:42:10.017046 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 06:42:10.023549 systemd[1]: Starting systemd-journald.service... Dec 13 06:42:10.023571 systemd[1]: Starting systemd-modules-load.service... Dec 13 06:42:10.023586 systemd[1]: Starting systemd-resolved.service... Dec 13 06:42:10.023600 systemd[1]: Starting systemd-vconsole-setup.service... Dec 13 06:42:10.023614 systemd[1]: Finished kmod-static-nodes.service. Dec 13 06:42:10.023641 kernel: audit: type=1130 audit(1734072129.945:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:42:10.023656 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 06:42:10.023678 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Dec 13 06:42:10.023704 systemd-journald[201]: Journal started Dec 13 06:42:10.023789 systemd-journald[201]: Runtime Journal (/run/log/journal/e1ebb03d864345799b5fbbf6e26ff086) is 4.7M, max 38.1M, 33.3M free. Dec 13 06:42:09.945000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:42:09.952105 systemd-modules-load[202]: Inserted module 'overlay' Dec 13 06:42:10.057081 kernel: Bridge firewalling registered Dec 13 06:42:10.057116 systemd[1]: Started systemd-resolved.service. Dec 13 06:42:10.057138 kernel: audit: type=1130 audit(1734072130.050:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:42:10.050000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:42:10.001568 systemd-resolved[203]: Positive Trust Anchors: Dec 13 06:42:10.065335 systemd[1]: Started systemd-journald.service. Dec 13 06:42:10.065367 kernel: audit: type=1130 audit(1734072130.057:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:42:10.065388 kernel: SCSI subsystem initialized Dec 13 06:42:10.057000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:42:10.001587 systemd-resolved[203]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 06:42:10.071446 kernel: audit: type=1130 audit(1734072130.065:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:42:10.065000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:42:10.001645 systemd-resolved[203]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 06:42:10.082827 kernel: audit: type=1130 audit(1734072130.071:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:42:10.082859 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 06:42:10.082878 kernel: device-mapper: uevent: version 1.0.3 Dec 13 06:42:10.082896 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Dec 13 06:42:10.071000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:42:10.009650 systemd-resolved[203]: Defaulting to hostname 'linux'. 
Dec 13 06:42:10.029975 systemd-modules-load[202]: Inserted module 'br_netfilter' Dec 13 06:42:10.066345 systemd[1]: Finished systemd-vconsole-setup.service. Dec 13 06:42:10.072289 systemd[1]: Reached target nss-lookup.target. Dec 13 06:42:10.087908 systemd[1]: Starting dracut-cmdline-ask.service... Dec 13 06:42:10.089635 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Dec 13 06:42:10.098000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:42:10.104072 kernel: audit: type=1130 audit(1734072130.098:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:42:10.098019 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Dec 13 06:42:10.105388 systemd-modules-load[202]: Inserted module 'dm_multipath' Dec 13 06:42:10.106396 systemd[1]: Finished systemd-modules-load.service. Dec 13 06:42:10.123141 kernel: audit: type=1130 audit(1734072130.107:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:42:10.107000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:42:10.113686 systemd[1]: Starting systemd-sysctl.service... Dec 13 06:42:10.129040 kernel: audit: type=1130 audit(1734072130.123:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 06:42:10.123000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:42:10.122855 systemd[1]: Finished dracut-cmdline-ask.service. Dec 13 06:42:10.149179 kernel: audit: type=1130 audit(1734072130.143:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:42:10.143000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:42:10.123843 systemd[1]: Finished systemd-sysctl.service. Dec 13 06:42:10.148791 systemd[1]: Starting dracut-cmdline.service... Dec 13 06:42:10.162092 dracut-cmdline[224]: dracut-dracut-053 Dec 13 06:42:10.165461 dracut-cmdline[224]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c Dec 13 06:42:10.249110 kernel: Loading iSCSI transport class v2.0-870. Dec 13 06:42:10.271114 kernel: iscsi: registered transport (tcp) Dec 13 06:42:10.300558 kernel: iscsi: registered transport (qla4xxx) Dec 13 06:42:10.300656 kernel: QLogic iSCSI HBA Driver Dec 13 06:42:10.352215 systemd[1]: Finished dracut-cmdline.service. Dec 13 06:42:10.352000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 06:42:10.354298 systemd[1]: Starting dracut-pre-udev.service... Dec 13 06:42:10.416131 kernel: raid6: sse2x4 gen() 13671 MB/s Dec 13 06:42:10.434126 kernel: raid6: sse2x4 xor() 7704 MB/s Dec 13 06:42:10.452127 kernel: raid6: sse2x2 gen() 9435 MB/s Dec 13 06:42:10.470129 kernel: raid6: sse2x2 xor() 7888 MB/s Dec 13 06:42:10.488127 kernel: raid6: sse2x1 gen() 9678 MB/s Dec 13 06:42:10.506803 kernel: raid6: sse2x1 xor() 7179 MB/s Dec 13 06:42:10.506891 kernel: raid6: using algorithm sse2x4 gen() 13671 MB/s Dec 13 06:42:10.506910 kernel: raid6: .... xor() 7704 MB/s, rmw enabled Dec 13 06:42:10.508122 kernel: raid6: using ssse3x2 recovery algorithm Dec 13 06:42:10.525095 kernel: xor: automatically using best checksumming function avx Dec 13 06:42:10.641095 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Dec 13 06:42:10.655355 systemd[1]: Finished dracut-pre-udev.service. Dec 13 06:42:10.656000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:42:10.656000 audit: BPF prog-id=7 op=LOAD Dec 13 06:42:10.656000 audit: BPF prog-id=8 op=LOAD Dec 13 06:42:10.657335 systemd[1]: Starting systemd-udevd.service... Dec 13 06:42:10.674689 systemd-udevd[401]: Using default interface naming scheme 'v252'. Dec 13 06:42:10.683731 systemd[1]: Started systemd-udevd.service. Dec 13 06:42:10.685000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:42:10.689571 systemd[1]: Starting dracut-pre-trigger.service... Dec 13 06:42:10.707700 dracut-pre-trigger[411]: rd.md=0: removing MD RAID activation Dec 13 06:42:10.750111 systemd[1]: Finished dracut-pre-trigger.service. 
Dec 13 06:42:10.750000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:42:10.751954 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 06:42:10.846000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:42:10.845805 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 06:42:10.934081 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Dec 13 06:42:10.984909 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 06:42:10.984936 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 06:42:10.984954 kernel: GPT:17805311 != 125829119 Dec 13 06:42:10.984971 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 06:42:10.984998 kernel: GPT:17805311 != 125829119 Dec 13 06:42:10.985015 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 06:42:10.985031 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 06:42:10.985048 kernel: ACPI: bus type USB registered Dec 13 06:42:10.985087 kernel: usbcore: registered new interface driver usbfs Dec 13 06:42:10.985105 kernel: usbcore: registered new interface driver hub Dec 13 06:42:10.985122 kernel: usbcore: registered new device driver usb Dec 13 06:42:10.989084 kernel: libata version 3.00 loaded. Dec 13 06:42:10.999855 kernel: AVX version of gcm_enc/dec engaged. 
Dec 13 06:42:10.999936 kernel: AES CTR mode by8 optimization enabled Dec 13 06:42:11.025088 kernel: ahci 0000:00:1f.2: version 3.0 Dec 13 06:42:11.084502 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Dec 13 06:42:11.084532 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Dec 13 06:42:11.084746 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Dec 13 06:42:11.084963 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (446) Dec 13 06:42:11.084984 kernel: scsi host0: ahci Dec 13 06:42:11.085250 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Dec 13 06:42:11.085452 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Dec 13 06:42:11.085640 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Dec 13 06:42:11.085813 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Dec 13 06:42:11.086000 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Dec 13 06:42:11.086189 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Dec 13 06:42:11.086360 kernel: hub 1-0:1.0: USB hub found Dec 13 06:42:11.086560 kernel: hub 1-0:1.0: 4 ports detected Dec 13 06:42:11.088297 kernel: scsi host1: ahci Dec 13 06:42:11.088507 kernel: scsi host2: ahci Dec 13 06:42:11.088772 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. 
Dec 13 06:42:11.089295 kernel: hub 2-0:1.0: USB hub found Dec 13 06:42:11.089506 kernel: hub 2-0:1.0: 4 ports detected Dec 13 06:42:11.090047 kernel: scsi host3: ahci Dec 13 06:42:11.090312 kernel: scsi host4: ahci Dec 13 06:42:11.090496 kernel: scsi host5: ahci Dec 13 06:42:11.090707 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 35 Dec 13 06:42:11.090728 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 35 Dec 13 06:42:11.090746 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 35 Dec 13 06:42:11.090763 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 35 Dec 13 06:42:11.090780 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 35 Dec 13 06:42:11.090803 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 35 Dec 13 06:42:11.031325 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Dec 13 06:42:11.046248 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Dec 13 06:42:11.166836 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Dec 13 06:42:11.173423 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Dec 13 06:42:11.180348 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 06:42:11.182250 systemd[1]: Starting disk-uuid.service... Dec 13 06:42:11.188856 disk-uuid[528]: Primary Header is updated. Dec 13 06:42:11.188856 disk-uuid[528]: Secondary Entries is updated. Dec 13 06:42:11.188856 disk-uuid[528]: Secondary Header is updated. 
Dec 13 06:42:11.194083 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 06:42:11.201099 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 06:42:11.209104 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 06:42:11.317140 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Dec 13 06:42:11.402339 kernel: ata3: SATA link down (SStatus 0 SControl 300) Dec 13 06:42:11.402434 kernel: ata1: SATA link down (SStatus 0 SControl 300) Dec 13 06:42:11.404078 kernel: ata2: SATA link down (SStatus 0 SControl 300) Dec 13 06:42:11.407559 kernel: ata4: SATA link down (SStatus 0 SControl 300) Dec 13 06:42:11.407598 kernel: ata6: SATA link down (SStatus 0 SControl 300) Dec 13 06:42:11.409170 kernel: ata5: SATA link down (SStatus 0 SControl 300) Dec 13 06:42:11.458098 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 13 06:42:11.464875 kernel: usbcore: registered new interface driver usbhid Dec 13 06:42:11.464923 kernel: usbhid: USB HID core driver Dec 13 06:42:11.474326 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Dec 13 06:42:11.474369 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Dec 13 06:42:12.212015 disk-uuid[529]: The operation has completed successfully. Dec 13 06:42:12.213107 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 06:42:12.262465 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 06:42:12.262624 systemd[1]: Finished disk-uuid.service. Dec 13 06:42:12.263000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:42:12.263000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 06:42:12.269260 systemd[1]: Starting verity-setup.service... Dec 13 06:42:12.292103 kernel: device-mapper: verity: sha256 using implementation "sha256-avx" Dec 13 06:42:12.350917 systemd[1]: Found device dev-mapper-usr.device. Dec 13 06:42:12.352749 systemd[1]: Mounting sysusr-usr.mount... Dec 13 06:42:12.354763 systemd[1]: Finished verity-setup.service. Dec 13 06:42:12.355000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:42:12.449092 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Dec 13 06:42:12.450017 systemd[1]: Mounted sysusr-usr.mount. Dec 13 06:42:12.450848 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Dec 13 06:42:12.451833 systemd[1]: Starting ignition-setup.service... Dec 13 06:42:12.455640 systemd[1]: Starting parse-ip-for-networkd.service... Dec 13 06:42:12.472888 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 06:42:12.472974 kernel: BTRFS info (device vda6): using free space tree Dec 13 06:42:12.473009 kernel: BTRFS info (device vda6): has skinny extents Dec 13 06:42:12.490846 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 06:42:12.500267 systemd[1]: Finished ignition-setup.service. Dec 13 06:42:12.500000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:42:12.502156 systemd[1]: Starting ignition-fetch-offline.service... Dec 13 06:42:12.589499 systemd[1]: Finished parse-ip-for-networkd.service. 
Dec 13 06:42:12.590000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:42:12.591000 audit: BPF prog-id=9 op=LOAD Dec 13 06:42:12.592433 systemd[1]: Starting systemd-networkd.service... Dec 13 06:42:12.634731 systemd-networkd[710]: lo: Link UP Dec 13 06:42:12.634745 systemd-networkd[710]: lo: Gained carrier Dec 13 06:42:12.636248 systemd-networkd[710]: Enumeration completed Dec 13 06:42:12.637014 systemd-networkd[710]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 06:42:12.652000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:42:12.637194 systemd[1]: Started systemd-networkd.service. Dec 13 06:42:12.641981 systemd-networkd[710]: eth0: Link UP Dec 13 06:42:12.641987 systemd-networkd[710]: eth0: Gained carrier Dec 13 06:42:12.652894 systemd[1]: Reached target network.target. Dec 13 06:42:12.657298 systemd[1]: Starting iscsiuio.service... Dec 13 06:42:12.668655 systemd[1]: Started iscsiuio.service. Dec 13 06:42:12.670000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:42:12.671873 systemd[1]: Starting iscsid.service... Dec 13 06:42:12.677124 iscsid[715]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Dec 13 06:42:12.677124 iscsid[715]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. 
Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Dec 13 06:42:12.677124 iscsid[715]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Dec 13 06:42:12.677124 iscsid[715]: If using hardware iscsi like qla4xxx this message can be ignored. Dec 13 06:42:12.682743 iscsid[715]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Dec 13 06:42:12.682743 iscsid[715]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Dec 13 06:42:12.680699 systemd[1]: Started iscsid.service. Dec 13 06:42:12.685000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:42:12.687169 systemd[1]: Starting dracut-initqueue.service... Dec 13 06:42:12.702645 systemd-networkd[710]: eth0: DHCPv4 address 10.244.18.198/30, gateway 10.244.18.197 acquired from 10.244.18.197 Dec 13 06:42:12.712000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:42:12.711241 systemd[1]: Finished dracut-initqueue.service. Dec 13 06:42:12.712288 systemd[1]: Reached target remote-fs-pre.target. Dec 13 06:42:12.712952 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 06:42:12.713738 systemd[1]: Reached target remote-fs.target. Dec 13 06:42:12.716803 systemd[1]: Starting dracut-pre-mount.service... Dec 13 06:42:12.733000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:42:12.732407 systemd[1]: Finished dracut-pre-mount.service. 
Dec 13 06:42:12.735204 ignition[637]: Ignition 2.14.0 Dec 13 06:42:12.735238 ignition[637]: Stage: fetch-offline Dec 13 06:42:12.735369 ignition[637]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 06:42:12.735415 ignition[637]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 06:42:12.739446 systemd[1]: Finished ignition-fetch-offline.service. Dec 13 06:42:12.737341 ignition[637]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 06:42:12.740000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:42:12.737505 ignition[637]: parsed url from cmdline: "" Dec 13 06:42:12.741728 systemd[1]: Starting ignition-fetch.service... Dec 13 06:42:12.737512 ignition[637]: no config URL provided Dec 13 06:42:12.737523 ignition[637]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 06:42:12.737540 ignition[637]: no config at "/usr/lib/ignition/user.ign" Dec 13 06:42:12.737550 ignition[637]: failed to fetch config: resource requires networking Dec 13 06:42:12.738017 ignition[637]: Ignition finished successfully Dec 13 06:42:12.755402 ignition[729]: Ignition 2.14.0 Dec 13 06:42:12.755417 ignition[729]: Stage: fetch Dec 13 06:42:12.755722 ignition[729]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 06:42:12.755763 ignition[729]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 06:42:12.757210 ignition[729]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 06:42:12.757362 ignition[729]: parsed url from cmdline: "" Dec 13 06:42:12.757370 ignition[729]: no config URL provided Dec 13 06:42:12.757379 
ignition[729]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 06:42:12.757395 ignition[729]: no config at "/usr/lib/ignition/user.ign" Dec 13 06:42:12.763916 ignition[729]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Dec 13 06:42:12.763939 ignition[729]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Dec 13 06:42:12.764006 ignition[729]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Dec 13 06:42:12.786027 ignition[729]: GET result: OK Dec 13 06:42:12.786212 ignition[729]: parsing config with SHA512: 0b99d9666d692bbaf93d29f6e8bceea4c5a8f5ca5cacb9c4dc98b9bdbee2a1b9db285b85f7ce13a1095b068f3f19bdb603c9e8e410f7dc61eb570ac2c4f9199b Dec 13 06:42:12.797843 unknown[729]: fetched base config from "system" Dec 13 06:42:12.798808 unknown[729]: fetched base config from "system" Dec 13 06:42:12.799603 unknown[729]: fetched user config from "openstack" Dec 13 06:42:12.800937 ignition[729]: fetch: fetch complete Dec 13 06:42:12.801673 ignition[729]: fetch: fetch passed Dec 13 06:42:12.802416 ignition[729]: Ignition finished successfully Dec 13 06:42:12.804703 systemd[1]: Finished ignition-fetch.service. Dec 13 06:42:12.805000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:42:12.806716 systemd[1]: Starting ignition-kargs.service... 
Dec 13 06:42:12.820193 ignition[734]: Ignition 2.14.0 Dec 13 06:42:12.820217 ignition[734]: Stage: kargs Dec 13 06:42:12.820384 ignition[734]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 06:42:12.820420 ignition[734]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 06:42:12.822164 ignition[734]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 06:42:12.823736 ignition[734]: kargs: kargs passed Dec 13 06:42:12.823817 ignition[734]: Ignition finished successfully Dec 13 06:42:12.825046 systemd[1]: Finished ignition-kargs.service. Dec 13 06:42:12.826000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:42:12.827401 systemd[1]: Starting ignition-disks.service... Dec 13 06:42:12.839170 ignition[739]: Ignition 2.14.0 Dec 13 06:42:12.840120 ignition[739]: Stage: disks Dec 13 06:42:12.840324 ignition[739]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 06:42:12.840373 ignition[739]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 06:42:12.841767 ignition[739]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 06:42:12.843469 ignition[739]: disks: disks passed Dec 13 06:42:12.843566 ignition[739]: Ignition finished successfully Dec 13 06:42:12.845000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:42:12.844502 systemd[1]: Finished ignition-disks.service. Dec 13 06:42:12.845363 systemd[1]: Reached target initrd-root-device.target. 
Dec 13 06:42:12.846747 systemd[1]: Reached target local-fs-pre.target. Dec 13 06:42:12.848002 systemd[1]: Reached target local-fs.target. Dec 13 06:42:12.849324 systemd[1]: Reached target sysinit.target. Dec 13 06:42:12.850805 systemd[1]: Reached target basic.target. Dec 13 06:42:12.853379 systemd[1]: Starting systemd-fsck-root.service... Dec 13 06:42:12.873585 systemd-fsck[746]: ROOT: clean, 621/1628000 files, 124058/1617920 blocks Dec 13 06:42:12.878379 systemd[1]: Finished systemd-fsck-root.service. Dec 13 06:42:12.880153 systemd[1]: Mounting sysroot.mount... Dec 13 06:42:12.879000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:42:12.891073 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Dec 13 06:42:12.891840 systemd[1]: Mounted sysroot.mount. Dec 13 06:42:12.892700 systemd[1]: Reached target initrd-root-fs.target. Dec 13 06:42:12.895208 systemd[1]: Mounting sysroot-usr.mount... Dec 13 06:42:12.896372 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Dec 13 06:42:12.897241 systemd[1]: Starting flatcar-openstack-hostname.service... Dec 13 06:42:12.897995 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 06:42:12.898090 systemd[1]: Reached target ignition-diskful.target. Dec 13 06:42:12.900048 systemd[1]: Mounted sysroot-usr.mount. Dec 13 06:42:12.901572 systemd[1]: Starting initrd-setup-root.service... 
Dec 13 06:42:12.911736 initrd-setup-root[757]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 06:42:12.923084 initrd-setup-root[765]: cut: /sysroot/etc/group: No such file or directory Dec 13 06:42:12.930037 initrd-setup-root[773]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 06:42:12.940692 initrd-setup-root[781]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 06:42:13.016000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:42:13.015319 systemd[1]: Finished initrd-setup-root.service. Dec 13 06:42:13.018200 systemd[1]: Starting ignition-mount.service... Dec 13 06:42:13.023810 systemd[1]: Starting sysroot-boot.service... Dec 13 06:42:13.035668 bash[800]: umount: /sysroot/usr/share/oem: not mounted. Dec 13 06:42:13.060966 ignition[801]: INFO : Ignition 2.14.0 Dec 13 06:42:13.062186 ignition[801]: INFO : Stage: mount Dec 13 06:42:13.063083 coreos-metadata[752]: Dec 13 06:42:13.062 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Dec 13 06:42:13.065046 ignition[801]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 06:42:13.066127 ignition[801]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 06:42:13.069901 systemd[1]: Finished sysroot-boot.service. Dec 13 06:42:13.070000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 06:42:13.071800 ignition[801]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 06:42:13.074884 ignition[801]: INFO : mount: mount passed Dec 13 06:42:13.075706 ignition[801]: INFO : Ignition finished successfully Dec 13 06:42:13.077434 systemd[1]: Finished ignition-mount.service. Dec 13 06:42:13.078000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:42:13.081698 coreos-metadata[752]: Dec 13 06:42:13.081 INFO Fetch successful Dec 13 06:42:13.082808 coreos-metadata[752]: Dec 13 06:42:13.082 INFO wrote hostname srv-7lx2b.gb1.brightbox.com to /sysroot/etc/hostname Dec 13 06:42:13.094315 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Dec 13 06:42:13.098000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:42:13.098000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:42:13.094454 systemd[1]: Finished flatcar-openstack-hostname.service. Dec 13 06:42:13.375195 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 06:42:13.387110 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (810) Dec 13 06:42:13.391641 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 06:42:13.391726 kernel: BTRFS info (device vda6): using free space tree Dec 13 06:42:13.391745 kernel: BTRFS info (device vda6): has skinny extents Dec 13 06:42:13.398956 systemd[1]: Mounted sysroot-usr-share-oem.mount. Dec 13 06:42:13.400894 systemd[1]: Starting ignition-files.service... 
Dec 13 06:42:13.422663 ignition[830]: INFO : Ignition 2.14.0 Dec 13 06:42:13.423942 ignition[830]: INFO : Stage: files Dec 13 06:42:13.424865 ignition[830]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 06:42:13.425838 ignition[830]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 06:42:13.428579 ignition[830]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 06:42:13.430864 ignition[830]: DEBUG : files: compiled without relabeling support, skipping Dec 13 06:42:13.431883 ignition[830]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 06:42:13.431883 ignition[830]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 06:42:13.435767 ignition[830]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 06:42:13.437143 ignition[830]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 06:42:13.439554 unknown[830]: wrote ssh authorized keys file for user: core Dec 13 06:42:13.441183 ignition[830]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 06:42:13.442195 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 06:42:13.442195 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Dec 13 06:42:13.643388 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 13 06:42:13.753274 systemd-networkd[710]: eth0: Gained IPv6LL Dec 13 06:42:13.910710 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 06:42:13.912220 
ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 13 06:42:13.912220 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Dec 13 06:42:14.527930 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Dec 13 06:42:14.895744 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 13 06:42:14.895744 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Dec 13 06:42:14.898042 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 06:42:14.898042 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 06:42:14.898042 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 06:42:14.898042 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 06:42:14.898042 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 06:42:14.898042 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 06:42:14.898042 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 06:42:14.898042 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 06:42:14.898042 
ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 06:42:14.898042 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 06:42:14.898042 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 06:42:14.898042 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 06:42:14.898042 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Dec 13 06:42:15.262816 systemd-networkd[710]: eth0: Ignoring DHCPv6 address 2a02:1348:17d:4b1:24:19ff:fef4:12c6/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17d:4b1:24:19ff:fef4:12c6/64 assigned by NDisc. Dec 13 06:42:15.262833 systemd-networkd[710]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. 
Dec 13 06:42:15.388689 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Dec 13 06:42:16.607071 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 06:42:16.609104 ignition[830]: INFO : files: op(c): [started] processing unit "coreos-metadata-sshkeys@.service" Dec 13 06:42:16.610137 ignition[830]: INFO : files: op(c): [finished] processing unit "coreos-metadata-sshkeys@.service" Dec 13 06:42:16.611169 ignition[830]: INFO : files: op(d): [started] processing unit "prepare-helm.service" Dec 13 06:42:16.612776 ignition[830]: INFO : files: op(d): op(e): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 06:42:16.615120 ignition[830]: INFO : files: op(d): op(e): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 06:42:16.615120 ignition[830]: INFO : files: op(d): [finished] processing unit "prepare-helm.service" Dec 13 06:42:16.615120 ignition[830]: INFO : files: op(f): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Dec 13 06:42:16.615120 ignition[830]: INFO : files: op(f): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Dec 13 06:42:16.615120 ignition[830]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Dec 13 06:42:16.615120 ignition[830]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 06:42:16.624422 ignition[830]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 06:42:16.625579 ignition[830]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 06:42:16.625579 ignition[830]: INFO : files: files passed Dec 13 06:42:16.625579 
ignition[830]: INFO : Ignition finished successfully Dec 13 06:42:16.627939 systemd[1]: Finished ignition-files.service. Dec 13 06:42:16.638191 kernel: kauditd_printk_skb: 28 callbacks suppressed Dec 13 06:42:16.638232 kernel: audit: type=1130 audit(1734072136.630:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:42:16.630000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:42:16.631550 systemd[1]: Starting initrd-setup-root-after-ignition.service... Dec 13 06:42:16.638155 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Dec 13 06:42:16.639287 systemd[1]: Starting ignition-quench.service... Dec 13 06:42:16.644565 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 06:42:16.644722 systemd[1]: Finished ignition-quench.service. Dec 13 06:42:16.656335 kernel: audit: type=1130 audit(1734072136.646:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:42:16.656370 kernel: audit: type=1131 audit(1734072136.646:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:42:16.646000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 06:42:16.646000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:42:16.656495 initrd-setup-root-after-ignition[855]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 06:42:16.662868 kernel: audit: type=1130 audit(1734072136.656:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:42:16.656000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:42:16.652876 systemd[1]: Finished initrd-setup-root-after-ignition.service. Dec 13 06:42:16.657147 systemd[1]: Reached target ignition-complete.target. Dec 13 06:42:16.664606 systemd[1]: Starting initrd-parse-etc.service... Dec 13 06:42:16.683870 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 06:42:16.684048 systemd[1]: Finished initrd-parse-etc.service. Dec 13 06:42:16.695614 kernel: audit: type=1130 audit(1734072136.685:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:42:16.695669 kernel: audit: type=1131 audit(1734072136.685:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:42:16.685000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 06:42:16.685000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:42:16.685665 systemd[1]: Reached target initrd-fs.target. Dec 13 06:42:16.696279 systemd[1]: Reached target initrd.target. Dec 13 06:42:16.698307 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Dec 13 06:42:16.699527 systemd[1]: Starting dracut-pre-pivot.service... Dec 13 06:42:16.717399 systemd[1]: Finished dracut-pre-pivot.service. Dec 13 06:42:16.723636 kernel: audit: type=1130 audit(1734072136.718:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:42:16.718000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:42:16.719233 systemd[1]: Starting initrd-cleanup.service... Dec 13 06:42:16.733130 systemd[1]: Stopped target nss-lookup.target. Dec 13 06:42:16.734081 systemd[1]: Stopped target remote-cryptsetup.target. Dec 13 06:42:16.744852 kernel: audit: type=1131 audit(1734072136.737:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:42:16.737000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:42:16.735577 systemd[1]: Stopped target timers.target. Dec 13 06:42:16.736325 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 06:42:16.736574 systemd[1]: Stopped dracut-pre-pivot.service. 
Dec 13 06:42:16.737548 systemd[1]: Stopped target initrd.target. Dec 13 06:42:16.738324 systemd[1]: Stopped target basic.target. Dec 13 06:42:16.744404 systemd[1]: Stopped target ignition-complete.target. Dec 13 06:42:16.745738 systemd[1]: Stopped target ignition-diskful.target. Dec 13 06:42:16.762975 kernel: audit: type=1131 audit(1734072136.757:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:42:16.757000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:42:16.748168 systemd[1]: Stopped target initrd-root-device.target. Dec 13 06:42:16.749621 systemd[1]: Stopped target remote-fs.target. Dec 13 06:42:16.791376 kernel: audit: type=1131 audit(1734072136.764:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:42:16.764000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:42:16.766000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:42:16.767000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:42:16.784000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 06:42:16.785000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:42:16.750947 systemd[1]: Stopped target remote-fs-pre.target. Dec 13 06:42:16.792538 iscsid[715]: iscsid shutting down. Dec 13 06:42:16.751971 systemd[1]: Stopped target sysinit.target. Dec 13 06:42:16.798000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:42:16.800553 ignition[868]: INFO : Ignition 2.14.0 Dec 13 06:42:16.800553 ignition[868]: INFO : Stage: umount Dec 13 06:42:16.800553 ignition[868]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 06:42:16.800553 ignition[868]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 06:42:16.800553 ignition[868]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 06:42:16.800553 ignition[868]: INFO : umount: umount passed Dec 13 06:42:16.800553 ignition[868]: INFO : Ignition finished successfully Dec 13 06:42:16.808000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:42:16.810000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:42:16.810000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success'
Dec 13 06:42:16.811000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:42:16.812000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:42:16.753319 systemd[1]: Stopped target local-fs.target.
Dec 13 06:42:16.814000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:42:16.754660 systemd[1]: Stopped target local-fs-pre.target.
Dec 13 06:42:16.815000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:42:16.755462 systemd[1]: Stopped target swap.target.
Dec 13 06:42:16.756173 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 06:42:16.817000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:42:16.756431 systemd[1]: Stopped dracut-pre-mount.service.
Dec 13 06:42:16.757480 systemd[1]: Stopped target cryptsetup.target.
Dec 13 06:42:16.763879 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 06:42:16.764163 systemd[1]: Stopped dracut-initqueue.service.
Dec 13 06:42:16.765193 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 06:42:16.765451 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Dec 13 06:42:16.766353 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 06:42:16.766651 systemd[1]: Stopped ignition-files.service.
Dec 13 06:42:16.768894 systemd[1]: Stopping ignition-mount.service...
Dec 13 06:42:16.769957 systemd[1]: Stopping iscsid.service...
Dec 13 06:42:16.772237 systemd[1]: Stopping sysroot-boot.service...
Dec 13 06:42:16.782887 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 06:42:16.830000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:42:16.783266 systemd[1]: Stopped systemd-udev-trigger.service.
Dec 13 06:42:16.784240 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 06:42:16.784467 systemd[1]: Stopped dracut-pre-trigger.service.
Dec 13 06:42:16.792246 systemd[1]: iscsid.service: Deactivated successfully.
Dec 13 06:42:16.792433 systemd[1]: Stopped iscsid.service.
Dec 13 06:42:16.837000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:42:16.799313 systemd[1]: Stopping iscsiuio.service...
Dec 13 06:42:16.807195 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 06:42:16.840000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:42:16.807923 systemd[1]: iscsiuio.service: Deactivated successfully.
Dec 13 06:42:16.842000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:42:16.808094 systemd[1]: Stopped iscsiuio.service.
Dec 13 06:42:16.809252 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 06:42:16.809386 systemd[1]: Finished initrd-cleanup.service.
Dec 13 06:42:16.845000 audit: BPF prog-id=6 op=UNLOAD
Dec 13 06:42:16.845000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:42:16.810527 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 06:42:16.810658 systemd[1]: Stopped ignition-mount.service.
Dec 13 06:42:16.812237 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 06:42:16.851000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:42:16.812307 systemd[1]: Stopped ignition-disks.service.
Dec 13 06:42:16.852000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:42:16.853000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:42:16.812972 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 06:42:16.813031 systemd[1]: Stopped ignition-kargs.service.
Dec 13 06:42:16.814247 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 13 06:42:16.814326 systemd[1]: Stopped ignition-fetch.service.
Dec 13 06:42:16.815576 systemd[1]: Stopped target network.target.
Dec 13 06:42:16.816793 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 06:42:16.816867 systemd[1]: Stopped ignition-fetch-offline.service.
Dec 13 06:42:16.818141 systemd[1]: Stopped target paths.target.
Dec 13 06:42:16.819463 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 06:42:16.823153 systemd[1]: Stopped systemd-ask-password-console.path.
Dec 13 06:42:16.824541 systemd[1]: Stopped target slices.target.
Dec 13 06:42:16.825774 systemd[1]: Stopped target sockets.target.
Dec 13 06:42:16.827114 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 06:42:16.827185 systemd[1]: Closed iscsid.socket.
Dec 13 06:42:16.828369 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 06:42:16.867000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:42:16.828428 systemd[1]: Closed iscsiuio.socket.
Dec 13 06:42:16.829526 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 06:42:16.829605 systemd[1]: Stopped ignition-setup.service.
Dec 13 06:42:16.831472 systemd[1]: Stopping systemd-networkd.service...
Dec 13 06:42:16.832682 systemd[1]: Stopping systemd-resolved.service...
Dec 13 06:42:16.834134 systemd-networkd[710]: eth0: DHCPv6 lease lost
Dec 13 06:42:16.872000 audit: BPF prog-id=9 op=UNLOAD
Dec 13 06:42:16.835685 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 06:42:16.874000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:42:16.835843 systemd[1]: Stopped systemd-networkd.service.
Dec 13 06:42:16.875000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:42:16.838203 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 06:42:16.876000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:42:16.838344 systemd[1]: Stopped sysroot-boot.service.
Dec 13 06:42:16.841249 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 06:42:16.841390 systemd[1]: Stopped systemd-resolved.service.
Dec 13 06:42:16.843242 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 06:42:16.843296 systemd[1]: Closed systemd-networkd.socket.
Dec 13 06:42:16.844411 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 06:42:16.844479 systemd[1]: Stopped initrd-setup-root.service.
Dec 13 06:42:16.846884 systemd[1]: Stopping network-cleanup.service...
Dec 13 06:42:16.847941 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 06:42:16.848034 systemd[1]: Stopped parse-ip-for-networkd.service.
Dec 13 06:42:16.851303 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 06:42:16.889000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:42:16.851397 systemd[1]: Stopped systemd-sysctl.service.
Dec 13 06:42:16.904000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:42:16.905000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:42:16.852576 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 06:42:16.852638 systemd[1]: Stopped systemd-modules-load.service.
Dec 13 06:42:16.860199 systemd[1]: Stopping systemd-udevd.service...
Dec 13 06:42:16.863137 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 13 06:42:16.909000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:42:16.866471 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 06:42:16.910000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:42:16.910000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:42:16.866773 systemd[1]: Stopped systemd-udevd.service.
Dec 13 06:42:16.868398 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 06:42:16.868446 systemd[1]: Closed systemd-udevd-control.socket.
Dec 13 06:42:16.871909 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 06:42:16.871968 systemd[1]: Closed systemd-udevd-kernel.socket.
Dec 13 06:42:16.873210 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 06:42:16.873277 systemd[1]: Stopped dracut-pre-udev.service.
Dec 13 06:42:16.874490 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 06:42:16.874566 systemd[1]: Stopped dracut-cmdline.service.
Dec 13 06:42:16.875794 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 06:42:16.875855 systemd[1]: Stopped dracut-cmdline-ask.service.
Dec 13 06:42:16.878146 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Dec 13 06:42:16.879125 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 13 06:42:16.879206 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service.
Dec 13 06:42:16.889771 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 06:42:16.889868 systemd[1]: Stopped kmod-static-nodes.service.
Dec 13 06:42:16.904871 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 06:42:16.904975 systemd[1]: Stopped systemd-vconsole-setup.service.
Dec 13 06:42:16.907074 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Dec 13 06:42:16.907917 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 06:42:16.908074 systemd[1]: Stopped network-cleanup.service.
Dec 13 06:42:16.909716 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 06:42:16.909849 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Dec 13 06:42:16.911050 systemd[1]: Reached target initrd-switch-root.target.
Dec 13 06:42:16.913297 systemd[1]: Starting initrd-switch-root.service...
Dec 13 06:42:16.931542 systemd[1]: Switching root.
Dec 13 06:42:16.952283 systemd-journald[201]: Journal stopped
Dec 13 06:42:20.973502 systemd-journald[201]: Received SIGTERM from PID 1 (systemd).
Dec 13 06:42:20.973672 kernel: SELinux: Class mctp_socket not defined in policy.
Dec 13 06:42:20.973712 kernel: SELinux: Class anon_inode not defined in policy.
Dec 13 06:42:20.973747 kernel: SELinux: the above unknown classes and permissions will be allowed
Dec 13 06:42:20.973784 kernel: SELinux: policy capability network_peer_controls=1
Dec 13 06:42:20.973817 kernel: SELinux: policy capability open_perms=1
Dec 13 06:42:20.973854 kernel: SELinux: policy capability extended_socket_class=1
Dec 13 06:42:20.973881 kernel: SELinux: policy capability always_check_network=0
Dec 13 06:42:20.973906 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 13 06:42:20.973931 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 13 06:42:20.973968 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 13 06:42:20.973988 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 13 06:42:20.974022 systemd[1]: Successfully loaded SELinux policy in 59.355ms.
Dec 13 06:42:20.974113 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 15.459ms.
Dec 13 06:42:20.974143 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 06:42:20.974188 systemd[1]: Detected virtualization kvm.
Dec 13 06:42:20.974222 systemd[1]: Detected architecture x86-64.
Dec 13 06:42:20.974251 systemd[1]: Detected first boot.
Dec 13 06:42:20.974289 systemd[1]: Hostname set to .
Dec 13 06:42:20.974324 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 06:42:20.974351 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Dec 13 06:42:20.974371 systemd[1]: Populated /etc with preset unit settings.
Dec 13 06:42:20.974392 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 06:42:20.974437 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 06:42:20.974478 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 06:42:20.974516 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 13 06:42:20.974552 systemd[1]: Stopped initrd-switch-root.service.
Dec 13 06:42:20.974574 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 13 06:42:20.974602 systemd[1]: Created slice system-addon\x2dconfig.slice.
Dec 13 06:42:20.974629 systemd[1]: Created slice system-addon\x2drun.slice.
Dec 13 06:42:20.974657 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
Dec 13 06:42:20.974684 systemd[1]: Created slice system-getty.slice.
Dec 13 06:42:20.974719 systemd[1]: Created slice system-modprobe.slice.
Dec 13 06:42:20.974742 systemd[1]: Created slice system-serial\x2dgetty.slice.
Dec 13 06:42:20.974779 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Dec 13 06:42:20.974808 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Dec 13 06:42:20.974835 systemd[1]: Created slice user.slice.
Dec 13 06:42:20.974862 systemd[1]: Started systemd-ask-password-console.path.
Dec 13 06:42:20.974904 systemd[1]: Started systemd-ask-password-wall.path.
Dec 13 06:42:20.974930 systemd[1]: Set up automount boot.automount.
Dec 13 06:42:20.974951 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Dec 13 06:42:20.974984 systemd[1]: Stopped target initrd-switch-root.target.
Dec 13 06:42:20.975006 systemd[1]: Stopped target initrd-fs.target.
Dec 13 06:42:20.975027 systemd[1]: Stopped target initrd-root-fs.target.
Dec 13 06:42:20.975047 systemd[1]: Reached target integritysetup.target.
Dec 13 06:42:20.975121 systemd[1]: Reached target remote-cryptsetup.target.
Dec 13 06:42:20.975152 systemd[1]: Reached target remote-fs.target.
Dec 13 06:42:20.975181 systemd[1]: Reached target slices.target.
Dec 13 06:42:20.975202 systemd[1]: Reached target swap.target.
Dec 13 06:42:20.975227 systemd[1]: Reached target torcx.target.
Dec 13 06:42:20.975259 systemd[1]: Reached target veritysetup.target.
Dec 13 06:42:20.975291 systemd[1]: Listening on systemd-coredump.socket.
Dec 13 06:42:20.975311 systemd[1]: Listening on systemd-initctl.socket.
Dec 13 06:42:20.975343 systemd[1]: Listening on systemd-networkd.socket.
Dec 13 06:42:20.975369 systemd[1]: Listening on systemd-udevd-control.socket.
Dec 13 06:42:20.975409 systemd[1]: Listening on systemd-udevd-kernel.socket.
Dec 13 06:42:20.982969 systemd[1]: Listening on systemd-userdbd.socket.
Dec 13 06:42:20.983014 systemd[1]: Mounting dev-hugepages.mount...
Dec 13 06:42:20.983038 systemd[1]: Mounting dev-mqueue.mount...
Dec 13 06:42:20.983103 systemd[1]: Mounting media.mount...
Dec 13 06:42:20.983129 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 06:42:20.983157 systemd[1]: Mounting sys-kernel-debug.mount...
Dec 13 06:42:20.983180 systemd[1]: Mounting sys-kernel-tracing.mount...
Dec 13 06:42:20.983200 systemd[1]: Mounting tmp.mount...
Dec 13 06:42:20.983221 systemd[1]: Starting flatcar-tmpfiles.service...
Dec 13 06:42:20.983248 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 06:42:20.983270 systemd[1]: Starting kmod-static-nodes.service...
Dec 13 06:42:20.983291 systemd[1]: Starting modprobe@configfs.service...
Dec 13 06:42:20.983323 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 06:42:20.983346 systemd[1]: Starting modprobe@drm.service...
Dec 13 06:42:20.983373 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 06:42:20.983401 systemd[1]: Starting modprobe@fuse.service...
Dec 13 06:42:20.983423 systemd[1]: Starting modprobe@loop.service...
Dec 13 06:42:20.983460 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 13 06:42:20.983485 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 13 06:42:20.983519 kernel: fuse: init (API version 7.34)
Dec 13 06:42:20.983548 systemd[1]: Stopped systemd-fsck-root.service.
Dec 13 06:42:20.983581 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 13 06:42:20.983604 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 13 06:42:20.983625 systemd[1]: Stopped systemd-journald.service.
Dec 13 06:42:20.983646 systemd[1]: Starting systemd-journald.service...
Dec 13 06:42:20.983667 systemd[1]: Starting systemd-modules-load.service...
Dec 13 06:42:20.983687 systemd[1]: Starting systemd-network-generator.service...
Dec 13 06:42:20.983713 systemd[1]: Starting systemd-remount-fs.service...
Dec 13 06:42:20.983736 systemd[1]: Starting systemd-udev-trigger.service...
Dec 13 06:42:20.983757 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 13 06:42:20.983795 systemd[1]: Stopped verity-setup.service.
Dec 13 06:42:20.983818 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 06:42:20.983845 kernel: loop: module loaded
Dec 13 06:42:20.983865 systemd[1]: Mounted dev-hugepages.mount.
Dec 13 06:42:20.983886 systemd[1]: Mounted dev-mqueue.mount.
Dec 13 06:42:20.983912 systemd[1]: Mounted media.mount.
Dec 13 06:42:20.983934 systemd[1]: Mounted sys-kernel-debug.mount.
Dec 13 06:42:20.983961 systemd[1]: Mounted sys-kernel-tracing.mount.
Dec 13 06:42:20.983989 systemd[1]: Mounted tmp.mount.
Dec 13 06:42:20.984014 systemd-journald[977]: Journal started
Dec 13 06:42:20.984314 systemd-journald[977]: Runtime Journal (/run/log/journal/e1ebb03d864345799b5fbbf6e26ff086) is 4.7M, max 38.1M, 33.3M free.
Dec 13 06:42:20.984385 systemd[1]: Finished kmod-static-nodes.service.
Dec 13 06:42:17.135000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 13 06:42:17.213000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Dec 13 06:42:17.213000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Dec 13 06:42:17.213000 audit: BPF prog-id=10 op=LOAD
Dec 13 06:42:17.213000 audit: BPF prog-id=10 op=UNLOAD
Dec 13 06:42:17.213000 audit: BPF prog-id=11 op=LOAD
Dec 13 06:42:17.213000 audit: BPF prog-id=11 op=UNLOAD
Dec 13 06:42:17.336000 audit[900]: AVC avc: denied { associate } for pid=900 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Dec 13 06:42:17.336000 audit[900]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c00017f8d2 a1=c00002ae40 a2=c000029100 a3=32 items=0 ppid=883 pid=900 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 06:42:17.336000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Dec 13 06:42:17.339000 audit[900]: AVC avc: denied { associate } for pid=900 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Dec 13 06:42:17.339000 audit[900]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c00017f9a9 a2=1ed a3=0 items=2 ppid=883 pid=900 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 06:42:17.339000 audit: CWD cwd="/"
Dec 13 06:42:17.339000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:42:17.339000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:42:17.339000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Dec 13 06:42:20.725000 audit: BPF prog-id=12 op=LOAD
Dec 13 06:42:20.725000 audit: BPF prog-id=3 op=UNLOAD
Dec 13 06:42:20.726000 audit: BPF prog-id=13 op=LOAD
Dec 13 06:42:20.726000 audit: BPF prog-id=14 op=LOAD
Dec 13 06:42:20.726000 audit: BPF prog-id=4 op=UNLOAD
Dec 13 06:42:20.726000 audit: BPF prog-id=5 op=UNLOAD
Dec 13 06:42:20.727000 audit: BPF prog-id=15 op=LOAD
Dec 13 06:42:20.727000 audit: BPF prog-id=12 op=UNLOAD
Dec 13 06:42:20.727000 audit: BPF prog-id=16 op=LOAD
Dec 13 06:42:20.727000 audit: BPF prog-id=17 op=LOAD
Dec 13 06:42:20.727000 audit: BPF prog-id=13 op=UNLOAD
Dec 13 06:42:20.727000 audit: BPF prog-id=14 op=UNLOAD
Dec 13 06:42:20.728000 audit: BPF prog-id=18 op=LOAD
Dec 13 06:42:20.728000 audit: BPF prog-id=15 op=UNLOAD
Dec 13 06:42:20.728000 audit: BPF prog-id=19 op=LOAD
Dec 13 06:42:20.728000 audit: BPF prog-id=20 op=LOAD
Dec 13 06:42:20.728000 audit: BPF prog-id=16 op=UNLOAD
Dec 13 06:42:20.728000 audit: BPF prog-id=17 op=UNLOAD
Dec 13 06:42:20.729000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:42:20.735000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:42:20.735000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:42:20.738000 audit: BPF prog-id=18 op=UNLOAD
Dec 13 06:42:20.909000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:42:20.916000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:42:20.918000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:42:20.918000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:42:20.919000 audit: BPF prog-id=21 op=LOAD
Dec 13 06:42:20.919000 audit: BPF prog-id=22 op=LOAD
Dec 13 06:42:20.920000 audit: BPF prog-id=23 op=LOAD
Dec 13 06:42:20.920000 audit: BPF prog-id=19 op=UNLOAD
Dec 13 06:42:20.920000 audit: BPF prog-id=20 op=UNLOAD
Dec 13 06:42:20.950000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:42:20.970000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Dec 13 06:42:20.970000 audit[977]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffcbfdb7090 a2=4000 a3=7ffcbfdb712c items=0 ppid=1 pid=977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 06:42:20.970000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Dec 13 06:42:20.986000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:42:17.330420 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-12-13T06:42:17Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 06:42:20.721409 systemd[1]: Queued start job for default target multi-user.target.
Dec 13 06:42:17.331365 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-12-13T06:42:17Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Dec 13 06:42:20.721440 systemd[1]: Unnecessary job was removed for dev-vda6.device.
Dec 13 06:42:17.331415 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-12-13T06:42:17Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Dec 13 06:42:20.729523 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 13 06:42:17.331481 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-12-13T06:42:17Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Dec 13 06:42:17.331499 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-12-13T06:42:17Z" level=debug msg="skipped missing lower profile" missing profile=oem
Dec 13 06:42:20.991257 systemd[1]: Started systemd-journald.service.
Dec 13 06:42:20.989000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:42:17.331570 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-12-13T06:42:17Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Dec 13 06:42:20.990476 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 06:42:17.331607 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-12-13T06:42:17Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Dec 13 06:42:20.990668 systemd[1]: Finished modprobe@configfs.service.
Dec 13 06:42:17.331974 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-12-13T06:42:17Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Dec 13 06:42:20.993000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:42:20.993000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:42:17.332029 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-12-13T06:42:17Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Dec 13 06:42:17.332075 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-12-13T06:42:17Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Dec 13 06:42:17.335567 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-12-13T06:42:17Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Dec 13 06:42:20.994000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:42:20.994000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:42:17.335632 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-12-13T06:42:17Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Dec 13 06:42:20.993957 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 06:42:17.335666 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-12-13T06:42:17Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.6: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.6
Dec 13 06:42:20.994150 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 06:42:17.335693 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-12-13T06:42:17Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Dec 13 06:42:20.995437 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 06:42:20.996000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:42:20.996000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:42:20.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:42:20.997000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:42:20.998000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:42:20.999000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:42:21.000000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:42:21.000000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:42:21.001000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:42:17.335725 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-12-13T06:42:17Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.6: no such file or directory" path=/var/lib/torcx/store/3510.3.6
Dec 13 06:42:20.995636 systemd[1]: Finished modprobe@drm.service.
Dec 13 06:42:17.335750 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-12-13T06:42:17Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Dec 13 06:42:20.996806 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 06:42:21.004000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:42:20.120577 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-12-13T06:42:20Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 06:42:20.997020 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 06:42:20.120984 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-12-13T06:42:20Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 06:42:20.998214 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 13 06:42:21.007000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:42:20.121201 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-12-13T06:42:20Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 06:42:20.998441 systemd[1]: Finished modprobe@fuse.service.
Dec 13 06:42:20.121654 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-12-13T06:42:20Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 06:42:20.999547 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 06:42:20.121746 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-12-13T06:42:20Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Dec 13 06:42:20.999756 systemd[1]: Finished modprobe@loop.service.
Dec 13 06:42:20.121863 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-12-13T06:42:20Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Dec 13 06:42:21.000875 systemd[1]: Finished systemd-modules-load.service.
Dec 13 06:42:21.004311 systemd[1]: Finished systemd-network-generator.service.
Dec 13 06:42:21.006478 systemd[1]: Finished systemd-remount-fs.service.
Dec 13 06:42:21.008026 systemd[1]: Reached target network-pre.target.
Dec 13 06:42:21.010764 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Dec 13 06:42:21.013116 systemd[1]: Mounting sys-kernel-config.mount...
Dec 13 06:42:21.017862 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 13 06:42:21.020263 systemd[1]: Starting systemd-hwdb-update.service...
Dec 13 06:42:21.023285 systemd[1]: Starting systemd-journal-flush.service...
Dec 13 06:42:21.024142 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 06:42:21.026304 systemd[1]: Starting systemd-random-seed.service... Dec 13 06:42:21.028711 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 06:42:21.033836 systemd[1]: Starting systemd-sysctl.service... Dec 13 06:42:21.036993 systemd[1]: Mounted sys-fs-fuse-connections.mount. Dec 13 06:42:21.037838 systemd[1]: Mounted sys-kernel-config.mount. Dec 13 06:42:21.047862 systemd-journald[977]: Time spent on flushing to /var/log/journal/e1ebb03d864345799b5fbbf6e26ff086 is 47.583ms for 1302 entries. Dec 13 06:42:21.047862 systemd-journald[977]: System Journal (/var/log/journal/e1ebb03d864345799b5fbbf6e26ff086) is 8.0M, max 584.8M, 576.8M free. Dec 13 06:42:21.103391 systemd-journald[977]: Received client request to flush runtime journal. Dec 13 06:42:21.053000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:42:21.062000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:42:21.078000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:42:21.053330 systemd[1]: Finished flatcar-tmpfiles.service. Dec 13 06:42:21.055738 systemd[1]: Starting systemd-sysusers.service... Dec 13 06:42:21.061388 systemd[1]: Finished systemd-random-seed.service. Dec 13 06:42:21.062263 systemd[1]: Reached target first-boot-complete.target. 
Dec 13 06:42:21.078345 systemd[1]: Finished systemd-sysctl.service. Dec 13 06:42:21.107000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:42:21.107250 systemd[1]: Finished systemd-journal-flush.service. Dec 13 06:42:21.123000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:42:21.122405 systemd[1]: Finished systemd-sysusers.service. Dec 13 06:42:21.125745 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Dec 13 06:42:21.176000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:42:21.175932 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Dec 13 06:42:21.191000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:42:21.190923 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 06:42:21.193539 systemd[1]: Starting systemd-udev-settle.service... Dec 13 06:42:21.205782 udevadm[1011]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Dec 13 06:42:21.718780 systemd[1]: Finished systemd-hwdb-update.service. 
Dec 13 06:42:21.727485 kernel: kauditd_printk_skb: 108 callbacks suppressed Dec 13 06:42:21.727643 kernel: audit: type=1130 audit(1734072141.720:148): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:42:21.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:42:21.729605 kernel: audit: type=1334 audit(1734072141.721:149): prog-id=24 op=LOAD Dec 13 06:42:21.721000 audit: BPF prog-id=24 op=LOAD Dec 13 06:42:21.728644 systemd[1]: Starting systemd-udevd.service... Dec 13 06:42:21.727000 audit: BPF prog-id=25 op=LOAD Dec 13 06:42:21.730075 kernel: audit: type=1334 audit(1734072141.727:150): prog-id=25 op=LOAD Dec 13 06:42:21.727000 audit: BPF prog-id=7 op=UNLOAD Dec 13 06:42:21.727000 audit: BPF prog-id=8 op=UNLOAD Dec 13 06:42:21.732097 kernel: audit: type=1334 audit(1734072141.727:151): prog-id=7 op=UNLOAD Dec 13 06:42:21.732149 kernel: audit: type=1334 audit(1734072141.727:152): prog-id=8 op=UNLOAD Dec 13 06:42:21.759334 systemd-udevd[1012]: Using default interface naming scheme 'v252'. Dec 13 06:42:21.790481 systemd[1]: Started systemd-udevd.service. Dec 13 06:42:21.791000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:42:21.801092 kernel: audit: type=1130 audit(1734072141.791:153): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 06:42:21.802000 audit: BPF prog-id=26 op=LOAD Dec 13 06:42:21.803835 systemd[1]: Starting systemd-networkd.service... Dec 13 06:42:21.809343 kernel: audit: type=1334 audit(1734072141.802:154): prog-id=26 op=LOAD Dec 13 06:42:21.827144 kernel: audit: type=1334 audit(1734072141.819:155): prog-id=27 op=LOAD Dec 13 06:42:21.827240 kernel: audit: type=1334 audit(1734072141.819:156): prog-id=28 op=LOAD Dec 13 06:42:21.827287 kernel: audit: type=1334 audit(1734072141.819:157): prog-id=29 op=LOAD Dec 13 06:42:21.819000 audit: BPF prog-id=27 op=LOAD Dec 13 06:42:21.819000 audit: BPF prog-id=28 op=LOAD Dec 13 06:42:21.819000 audit: BPF prog-id=29 op=LOAD Dec 13 06:42:21.824392 systemd[1]: Starting systemd-userdbd.service... Dec 13 06:42:21.884000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:42:21.883451 systemd[1]: Started systemd-userdbd.service. Dec 13 06:42:21.910625 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Dec 13 06:42:21.975000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:42:21.974165 systemd-networkd[1026]: lo: Link UP Dec 13 06:42:21.974179 systemd-networkd[1026]: lo: Gained carrier Dec 13 06:42:21.975020 systemd-networkd[1026]: Enumeration completed Dec 13 06:42:21.975176 systemd[1]: Started systemd-networkd.service. Dec 13 06:42:21.976896 systemd-networkd[1026]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Dec 13 06:42:21.979294 systemd-networkd[1026]: eth0: Link UP Dec 13 06:42:21.979416 systemd-networkd[1026]: eth0: Gained carrier Dec 13 06:42:22.006117 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Dec 13 06:42:22.006500 systemd-networkd[1026]: eth0: DHCPv4 address 10.244.18.198/30, gateway 10.244.18.197 acquired from 10.244.18.197 Dec 13 06:42:22.014095 kernel: ACPI: button: Power Button [PWRF] Dec 13 06:42:22.032561 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 06:42:22.031787 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 06:42:22.073000 audit[1023]: AVC avc: denied { confidentiality } for pid=1023 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Dec 13 06:42:22.073000 audit[1023]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55ea47e0b430 a1=337fc a2=7f6cc53b7bc5 a3=5 items=110 ppid=1012 pid=1023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 06:42:22.073000 audit: CWD cwd="/" Dec 13 06:42:22.073000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PATH item=1 name=(null) inode=15605 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PATH item=2 name=(null) inode=15605 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PATH item=3 name=(null) inode=15606 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PATH item=4 name=(null) inode=15605 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PATH item=5 name=(null) inode=15607 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PATH item=6 name=(null) inode=15605 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PATH item=7 name=(null) inode=15608 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PATH item=8 name=(null) inode=15608 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PATH item=9 name=(null) inode=15609 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PATH item=10 name=(null) inode=15608 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PATH item=11 name=(null) inode=15610 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PATH item=12 name=(null) inode=15608 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PATH item=13 name=(null) inode=15611 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PATH item=14 name=(null) inode=15608 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PATH item=15 name=(null) inode=15612 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PATH item=16 name=(null) inode=15608 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PATH item=17 name=(null) inode=15613 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PATH item=18 name=(null) inode=15605 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PATH item=19 name=(null) inode=15614 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PATH item=20 name=(null) inode=15614 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PATH item=21 name=(null) inode=15615 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PATH item=22 name=(null) inode=15614 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PATH item=23 name=(null) inode=15616 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PATH item=24 name=(null) inode=15614 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PATH item=25 name=(null) inode=15617 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PATH item=26 name=(null) inode=15614 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PATH item=27 name=(null) inode=15618 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PATH item=28 name=(null) inode=15614 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PATH item=29 name=(null) inode=15619 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PATH item=30 name=(null) inode=15605 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 
06:42:22.073000 audit: PATH item=31 name=(null) inode=15620 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PATH item=32 name=(null) inode=15620 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PATH item=33 name=(null) inode=15621 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PATH item=34 name=(null) inode=15620 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PATH item=35 name=(null) inode=15622 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PATH item=36 name=(null) inode=15620 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PATH item=37 name=(null) inode=15623 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PATH item=38 name=(null) inode=15620 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PATH item=39 name=(null) inode=15624 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PATH item=40 
name=(null) inode=15620 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PATH item=41 name=(null) inode=15625 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PATH item=42 name=(null) inode=15605 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PATH item=43 name=(null) inode=15626 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PATH item=44 name=(null) inode=15626 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PATH item=45 name=(null) inode=15627 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PATH item=46 name=(null) inode=15626 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PATH item=47 name=(null) inode=15628 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PATH item=48 name=(null) inode=15626 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PATH item=49 name=(null) inode=15629 dev=00:0b 
mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PATH item=50 name=(null) inode=15626 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PATH item=51 name=(null) inode=15630 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PATH item=52 name=(null) inode=15626 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PATH item=53 name=(null) inode=15631 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PATH item=55 name=(null) inode=15632 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PATH item=56 name=(null) inode=15632 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PATH item=57 name=(null) inode=15633 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PATH item=58 name=(null) inode=15632 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PATH item=59 name=(null) inode=15634 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PATH item=60 name=(null) inode=15632 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PATH item=61 name=(null) inode=15635 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PATH item=62 name=(null) inode=15635 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PATH item=63 name=(null) inode=15636 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PATH item=64 name=(null) inode=15635 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PATH item=65 name=(null) inode=15637 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PATH item=66 name=(null) inode=15635 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PATH item=67 name=(null) inode=15638 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PATH item=68 name=(null) inode=15635 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PATH item=69 name=(null) inode=15639 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PATH item=70 name=(null) inode=15635 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PATH item=71 name=(null) inode=15640 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PATH item=72 name=(null) inode=15632 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PATH item=73 name=(null) inode=15641 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PATH item=74 name=(null) inode=15641 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PATH item=75 name=(null) inode=15642 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PATH item=76 name=(null) inode=15641 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PATH item=77 name=(null) inode=15643 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PATH item=78 name=(null) inode=15641 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PATH item=79 name=(null) inode=15644 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PATH item=80 name=(null) inode=15641 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PATH item=81 name=(null) inode=15645 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PATH item=82 name=(null) inode=15641 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PATH item=83 name=(null) inode=15646 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PATH item=84 name=(null) inode=15632 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PATH item=85 name=(null) inode=15647 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 
06:42:22.073000 audit: PATH item=86 name=(null) inode=15647 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PATH item=87 name=(null) inode=15648 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PATH item=88 name=(null) inode=15647 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PATH item=89 name=(null) inode=15649 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PATH item=90 name=(null) inode=15647 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PATH item=91 name=(null) inode=15650 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PATH item=92 name=(null) inode=15647 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PATH item=93 name=(null) inode=15651 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PATH item=94 name=(null) inode=15647 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PATH item=95 
name=(null) inode=15652 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PATH item=96 name=(null) inode=15632 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PATH item=97 name=(null) inode=15653 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PATH item=98 name=(null) inode=15653 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PATH item=99 name=(null) inode=15654 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PATH item=100 name=(null) inode=15653 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PATH item=101 name=(null) inode=15655 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PATH item=102 name=(null) inode=15653 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PATH item=103 name=(null) inode=15656 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PATH item=104 name=(null) inode=15653 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PATH item=105 name=(null) inode=15657 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PATH item=106 name=(null) inode=15653 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PATH item=107 name=(null) inode=15658 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PATH item=109 name=(null) inode=15659 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:42:22.073000 audit: PROCTITLE proctitle="(udev-worker)" Dec 13 06:42:22.122088 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Dec 13 06:42:22.126094 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Dec 13 06:42:22.133823 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Dec 13 06:42:22.134165 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Dec 13 06:42:22.310194 systemd[1]: Finished systemd-udev-settle.service. Dec 13 06:42:22.310000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 06:42:22.313169 systemd[1]: Starting lvm2-activation-early.service... Dec 13 06:42:22.340085 lvm[1041]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 06:42:22.375877 systemd[1]: Finished lvm2-activation-early.service. Dec 13 06:42:22.376000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:42:22.376842 systemd[1]: Reached target cryptsetup.target. Dec 13 06:42:22.379460 systemd[1]: Starting lvm2-activation.service... Dec 13 06:42:22.385154 lvm[1042]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 06:42:22.410815 systemd[1]: Finished lvm2-activation.service. Dec 13 06:42:22.411749 systemd[1]: Reached target local-fs-pre.target. Dec 13 06:42:22.411000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:42:22.412453 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 06:42:22.412504 systemd[1]: Reached target local-fs.target. Dec 13 06:42:22.413121 systemd[1]: Reached target machines.target. Dec 13 06:42:22.415680 systemd[1]: Starting ldconfig.service... Dec 13 06:42:22.417047 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 06:42:22.417129 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 06:42:22.419118 systemd[1]: Starting systemd-boot-update.service... Dec 13 06:42:22.422660 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... 
Dec 13 06:42:22.428706 systemd[1]: Starting systemd-machine-id-commit.service... Dec 13 06:42:22.432694 systemd[1]: Starting systemd-sysext.service... Dec 13 06:42:22.435204 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1044 (bootctl) Dec 13 06:42:22.438417 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Dec 13 06:42:22.449140 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Dec 13 06:42:22.449000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:42:22.481493 systemd[1]: Unmounting usr-share-oem.mount... Dec 13 06:42:22.496908 systemd[1]: usr-share-oem.mount: Deactivated successfully. Dec 13 06:42:22.497221 systemd[1]: Unmounted usr-share-oem.mount. Dec 13 06:42:22.500301 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 06:42:22.501506 systemd[1]: Finished systemd-machine-id-commit.service. Dec 13 06:42:22.502000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:42:22.522190 kernel: loop0: detected capacity change from 0 to 210664 Dec 13 06:42:22.544536 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 06:42:22.566100 kernel: loop1: detected capacity change from 0 to 210664 Dec 13 06:42:22.583069 (sd-sysext)[1056]: Using extensions 'kubernetes'. Dec 13 06:42:22.584655 (sd-sysext)[1056]: Merged extensions into '/usr'. Dec 13 06:42:22.594716 systemd-fsck[1052]: fsck.fat 4.2 (2021-01-31) Dec 13 06:42:22.594716 systemd-fsck[1052]: /dev/vda1: 789 files, 119291/258078 clusters Dec 13 06:42:22.599247 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. 
Dec 13 06:42:22.600000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:42:22.601989 systemd[1]: Mounting boot.mount... Dec 13 06:42:22.632332 systemd[1]: Mounted boot.mount. Dec 13 06:42:22.634746 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 06:42:22.638035 systemd[1]: Mounting usr-share-oem.mount... Dec 13 06:42:22.639100 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 06:42:22.641437 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 06:42:22.645132 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 06:42:22.648960 systemd[1]: Starting modprobe@loop.service... Dec 13 06:42:22.650372 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 06:42:22.650614 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 06:42:22.650842 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 06:42:22.653684 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 06:42:22.654225 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 06:42:22.656000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:42:22.656000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Dec 13 06:42:22.660814 systemd[1]: Mounted usr-share-oem.mount. Dec 13 06:42:22.662133 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 06:42:22.662348 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 06:42:22.663000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:42:22.663000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:42:22.664000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:42:22.664000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:42:22.663817 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 06:42:22.663993 systemd[1]: Finished modprobe@loop.service. Dec 13 06:42:22.665372 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 06:42:22.665554 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 06:42:22.669239 systemd[1]: Finished systemd-sysext.service. Dec 13 06:42:22.670000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 06:42:22.674124 systemd[1]: Starting ensure-sysext.service... Dec 13 06:42:22.676600 systemd[1]: Starting systemd-tmpfiles-setup.service... Dec 13 06:42:22.691301 systemd[1]: Finished systemd-boot-update.service. Dec 13 06:42:22.692000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:42:22.692925 systemd[1]: Reloading. Dec 13 06:42:22.698040 systemd-tmpfiles[1064]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Dec 13 06:42:22.702796 systemd-tmpfiles[1064]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 06:42:22.707342 systemd-tmpfiles[1064]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 06:42:22.795330 /usr/lib/systemd/system-generators/torcx-generator[1083]: time="2024-12-13T06:42:22Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 06:42:22.795918 /usr/lib/systemd/system-generators/torcx-generator[1083]: time="2024-12-13T06:42:22Z" level=info msg="torcx already run" Dec 13 06:42:22.958698 ldconfig[1043]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 06:42:22.968398 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 06:42:22.968436 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Dec 13 06:42:22.996284 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 06:42:23.074000 audit: BPF prog-id=30 op=LOAD Dec 13 06:42:23.074000 audit: BPF prog-id=21 op=UNLOAD Dec 13 06:42:23.074000 audit: BPF prog-id=31 op=LOAD Dec 13 06:42:23.074000 audit: BPF prog-id=32 op=LOAD Dec 13 06:42:23.075000 audit: BPF prog-id=22 op=UNLOAD Dec 13 06:42:23.075000 audit: BPF prog-id=23 op=UNLOAD Dec 13 06:42:23.077000 audit: BPF prog-id=33 op=LOAD Dec 13 06:42:23.077000 audit: BPF prog-id=27 op=UNLOAD Dec 13 06:42:23.077000 audit: BPF prog-id=34 op=LOAD Dec 13 06:42:23.077000 audit: BPF prog-id=35 op=LOAD Dec 13 06:42:23.077000 audit: BPF prog-id=28 op=UNLOAD Dec 13 06:42:23.077000 audit: BPF prog-id=29 op=UNLOAD Dec 13 06:42:23.078000 audit: BPF prog-id=36 op=LOAD Dec 13 06:42:23.078000 audit: BPF prog-id=37 op=LOAD Dec 13 06:42:23.078000 audit: BPF prog-id=24 op=UNLOAD Dec 13 06:42:23.078000 audit: BPF prog-id=25 op=UNLOAD Dec 13 06:42:23.079000 audit: BPF prog-id=38 op=LOAD Dec 13 06:42:23.079000 audit: BPF prog-id=26 op=UNLOAD Dec 13 06:42:23.087031 systemd[1]: Finished ldconfig.service. Dec 13 06:42:23.088000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:42:23.090048 systemd[1]: Finished systemd-tmpfiles-setup.service. Dec 13 06:42:23.090000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:42:23.095461 systemd[1]: Starting audit-rules.service... Dec 13 06:42:23.098031 systemd[1]: Starting clean-ca-certificates.service... 
Dec 13 06:42:23.101154 systemd[1]: Starting systemd-journal-catalog-update.service... Dec 13 06:42:23.106000 audit: BPF prog-id=39 op=LOAD Dec 13 06:42:23.111000 audit: BPF prog-id=40 op=LOAD Dec 13 06:42:23.119000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:42:23.109875 systemd[1]: Starting systemd-resolved.service... Dec 13 06:42:23.113224 systemd[1]: Starting systemd-timesyncd.service... Dec 13 06:42:23.117566 systemd[1]: Starting systemd-update-utmp.service... Dec 13 06:42:23.119280 systemd[1]: Finished clean-ca-certificates.service. Dec 13 06:42:23.124000 audit[1142]: SYSTEM_BOOT pid=1142 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 13 06:42:23.128689 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 06:42:23.151000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:42:23.151000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:42:23.153000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 06:42:23.153000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:42:23.133875 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 06:42:23.158000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:42:23.136753 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 06:42:23.140495 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 06:42:23.144508 systemd[1]: Starting modprobe@loop.service... Dec 13 06:42:23.145289 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 06:42:23.145857 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 06:42:23.146235 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 06:42:23.150619 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 06:42:23.150852 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 06:42:23.152382 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 06:42:23.152595 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 06:42:23.158146 systemd[1]: Finished systemd-update-utmp.service. Dec 13 06:42:23.162879 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. 
Dec 13 06:42:23.173000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:42:23.173000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:42:23.164743 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 06:42:23.168129 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 06:42:23.169285 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 06:42:23.169504 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 06:42:23.169734 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 06:42:23.172275 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 06:42:23.172486 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 06:42:23.178970 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 06:42:23.179498 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 06:42:23.181000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:42:23.181000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 06:42:23.182637 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 06:42:23.184918 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 06:42:23.189162 systemd[1]: Starting modprobe@drm.service... Dec 13 06:42:23.190050 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 06:42:23.190420 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 06:42:23.193533 systemd[1]: Starting systemd-networkd-wait-online.service... Dec 13 06:42:23.194400 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 06:42:23.194710 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 06:42:23.198812 systemd[1]: Finished systemd-journal-catalog-update.service. Dec 13 06:42:23.199000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:42:23.201000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:42:23.201000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:42:23.200565 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 06:42:23.200766 systemd[1]: Finished modprobe@loop.service. 
Dec 13 06:42:23.202014 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 06:42:23.202765 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 06:42:23.203000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:42:23.203000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:42:23.204537 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 06:42:23.204718 systemd[1]: Finished modprobe@drm.service. Dec 13 06:42:23.205000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:42:23.205000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:42:23.206611 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 06:42:23.209639 systemd[1]: Starting systemd-update-done.service... Dec 13 06:42:23.211000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:42:23.211172 systemd[1]: Finished ensure-sysext.service. Dec 13 06:42:23.226595 systemd[1]: Finished systemd-update-done.service. 
Dec 13 06:42:23.231000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:42:23.243000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 13 06:42:23.243000 audit[1160]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffcb57cc1c0 a2=420 a3=0 items=0 ppid=1131 pid=1160 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 06:42:23.243000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 13 06:42:23.243545 augenrules[1160]: No rules Dec 13 06:42:23.244654 systemd[1]: Finished audit-rules.service. Dec 13 06:42:23.270629 systemd-resolved[1137]: Positive Trust Anchors: Dec 13 06:42:23.271107 systemd-resolved[1137]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 06:42:23.271279 systemd-resolved[1137]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 06:42:23.276301 systemd[1]: Started systemd-timesyncd.service. Dec 13 06:42:23.277136 systemd[1]: Reached target time-set.target. Dec 13 06:42:23.279152 systemd-resolved[1137]: Using system hostname 'srv-7lx2b.gb1.brightbox.com'. Dec 13 06:42:23.291798 systemd[1]: Started systemd-resolved.service. 
Dec 13 06:42:23.292644 systemd[1]: Reached target network.target. Dec 13 06:42:23.293273 systemd[1]: Reached target nss-lookup.target. Dec 13 06:42:23.293932 systemd[1]: Reached target sysinit.target. Dec 13 06:42:23.294657 systemd[1]: Started motdgen.path. Dec 13 06:42:23.295296 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Dec 13 06:42:23.296268 systemd[1]: Started logrotate.timer. Dec 13 06:42:23.297010 systemd[1]: Started mdadm.timer. Dec 13 06:42:23.297606 systemd[1]: Started systemd-tmpfiles-clean.timer. Dec 13 06:42:23.298258 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 06:42:23.298307 systemd[1]: Reached target paths.target. Dec 13 06:42:23.298899 systemd[1]: Reached target timers.target. Dec 13 06:42:23.299955 systemd[1]: Listening on dbus.socket. Dec 13 06:42:23.302144 systemd[1]: Starting docker.socket... Dec 13 06:42:23.306454 systemd[1]: Listening on sshd.socket. Dec 13 06:42:23.307222 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 06:42:23.307842 systemd[1]: Listening on docker.socket. Dec 13 06:42:23.308632 systemd[1]: Reached target sockets.target. Dec 13 06:42:23.309256 systemd[1]: Reached target basic.target. Dec 13 06:42:23.309909 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 06:42:23.309963 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 06:42:23.311430 systemd[1]: Starting containerd.service... Dec 13 06:42:23.314571 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Dec 13 06:42:23.316827 systemd[1]: Starting dbus.service... Dec 13 06:42:23.319368 systemd[1]: Starting enable-oem-cloudinit.service... 
Dec 13 06:42:23.323526 systemd[1]: Starting extend-filesystems.service... Dec 13 06:42:23.325256 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Dec 13 06:42:23.330121 systemd[1]: Starting motdgen.service... Dec 13 06:42:23.333441 systemd[1]: Starting prepare-helm.service... Dec 13 06:42:23.338904 systemd[1]: Starting ssh-key-proc-cmdline.service... Dec 13 06:42:23.341936 systemd[1]: Starting sshd-keygen.service... Dec 13 06:42:23.349094 systemd[1]: Starting systemd-logind.service... Dec 13 06:42:23.351852 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 06:42:23.352011 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 06:42:23.352799 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 06:42:23.353999 systemd[1]: Starting update-engine.service... Dec 13 06:42:23.358706 systemd[1]: Starting update-ssh-keys-after-ignition.service... Dec 13 06:42:23.366801 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 06:42:23.367552 systemd[1]: Finished ssh-key-proc-cmdline.service. Dec 13 06:42:23.388876 jq[1173]: false Dec 13 06:42:23.389451 jq[1185]: true Dec 13 06:42:23.390593 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 06:42:23.390862 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. 
Dec 13 06:42:23.420229 tar[1187]: linux-amd64/helm Dec 13 06:42:23.420876 extend-filesystems[1174]: Found loop1 Dec 13 06:42:23.422608 extend-filesystems[1174]: Found vda Dec 13 06:42:23.424008 extend-filesystems[1174]: Found vda1 Dec 13 06:42:23.425359 extend-filesystems[1174]: Found vda2 Dec 13 06:42:23.426174 extend-filesystems[1174]: Found vda3 Dec 13 06:42:23.428042 extend-filesystems[1174]: Found usr Dec 13 06:42:23.431971 extend-filesystems[1174]: Found vda4 Dec 13 06:42:23.434077 jq[1198]: true Dec 13 06:42:23.435020 extend-filesystems[1174]: Found vda6 Dec 13 06:42:23.435020 extend-filesystems[1174]: Found vda7 Dec 13 06:42:23.435020 extend-filesystems[1174]: Found vda9 Dec 13 06:42:23.441902 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 06:42:23.447492 extend-filesystems[1174]: Checking size of /dev/vda9 Dec 13 06:42:23.442179 systemd[1]: Finished motdgen.service. Dec 13 06:42:24.019824 systemd-resolved[1137]: Clock change detected. Flushing caches. Dec 13 06:42:24.020070 systemd-timesyncd[1138]: Contacted time server 77.104.162.218:123 (0.flatcar.pool.ntp.org). Dec 13 06:42:24.020307 systemd-timesyncd[1138]: Initial clock synchronization to Fri 2024-12-13 06:42:24.019422 UTC. Dec 13 06:42:24.036347 dbus-daemon[1170]: [system] SELinux support is enabled Dec 13 06:42:24.036624 systemd[1]: Started dbus.service. Dec 13 06:42:24.040274 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 06:42:24.040321 systemd[1]: Reached target system-config.target. Dec 13 06:42:24.041710 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 06:42:24.041764 systemd[1]: Reached target user-config.target. 
Dec 13 06:42:24.043416 dbus-daemon[1170]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1026 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Dec 13 06:42:24.061724 systemd[1]: Starting systemd-hostnamed.service... Dec 13 06:42:24.079336 systemd[1]: Created slice system-sshd.slice. Dec 13 06:42:24.080081 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 06:42:24.080132 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 06:42:24.083556 extend-filesystems[1174]: Resized partition /dev/vda9 Dec 13 06:42:24.105904 extend-filesystems[1225]: resize2fs 1.46.5 (30-Dec-2021) Dec 13 06:42:24.112128 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks Dec 13 06:42:24.133598 update_engine[1184]: I1213 06:42:24.132570 1184 main.cc:92] Flatcar Update Engine starting Dec 13 06:42:24.141305 systemd[1]: Started update-engine.service. Dec 13 06:42:24.141651 update_engine[1184]: I1213 06:42:24.141610 1184 update_check_scheduler.cc:74] Next update check in 4m28s Dec 13 06:42:24.144809 systemd[1]: Started locksmithd.service. Dec 13 06:42:24.176775 systemd-logind[1181]: Watching system buttons on /dev/input/event2 (Power Button) Dec 13 06:42:24.176827 systemd-logind[1181]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 06:42:24.177160 systemd-logind[1181]: New seat seat0. Dec 13 06:42:24.180450 systemd[1]: Started systemd-logind.service. Dec 13 06:42:24.185139 bash[1224]: Updated "/home/core/.ssh/authorized_keys" Dec 13 06:42:24.185651 systemd[1]: Finished update-ssh-keys-after-ignition.service. 
Dec 13 06:42:24.266453 env[1193]: time="2024-12-13T06:42:24.266318611Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Dec 13 06:42:24.326732 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Dec 13 06:42:24.306615 systemd-networkd[1026]: eth0: Gained IPv6LL Dec 13 06:42:24.309532 systemd[1]: Finished systemd-networkd-wait-online.service. Dec 13 06:42:24.310713 systemd[1]: Reached target network-online.target. Dec 13 06:42:24.315494 systemd[1]: Starting kubelet.service... Dec 13 06:42:24.331189 extend-filesystems[1225]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 13 06:42:24.331189 extend-filesystems[1225]: old_desc_blocks = 1, new_desc_blocks = 8 Dec 13 06:42:24.331189 extend-filesystems[1225]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Dec 13 06:42:24.335217 extend-filesystems[1174]: Resized filesystem in /dev/vda9 Dec 13 06:42:24.331616 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 06:42:24.331853 systemd[1]: Finished extend-filesystems.service. Dec 13 06:42:24.354538 env[1193]: time="2024-12-13T06:42:24.354483156Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 06:42:24.361881 env[1193]: time="2024-12-13T06:42:24.361834710Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 06:42:24.365189 env[1193]: time="2024-12-13T06:42:24.365137393Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 06:42:24.365189 env[1193]: time="2024-12-13T06:42:24.365184379Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Dec 13 06:42:24.365508 env[1193]: time="2024-12-13T06:42:24.365469058Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 06:42:24.365508 env[1193]: time="2024-12-13T06:42:24.365504730Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 06:42:24.365613 env[1193]: time="2024-12-13T06:42:24.365526587Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Dec 13 06:42:24.365613 env[1193]: time="2024-12-13T06:42:24.365543562Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 06:42:24.365704 env[1193]: time="2024-12-13T06:42:24.365664188Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 06:42:24.366288 env[1193]: time="2024-12-13T06:42:24.366241787Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 06:42:24.366474 env[1193]: time="2024-12-13T06:42:24.366436872Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 06:42:24.366474 env[1193]: time="2024-12-13T06:42:24.366471083Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Dec 13 06:42:24.366593 env[1193]: time="2024-12-13T06:42:24.366564010Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Dec 13 06:42:24.366668 env[1193]: time="2024-12-13T06:42:24.366604454Z" level=info msg="metadata content store policy set" policy=shared Dec 13 06:42:24.367414 dbus-daemon[1170]: [system] Successfully activated service 'org.freedesktop.hostname1' Dec 13 06:42:24.367904 systemd[1]: Started systemd-hostnamed.service. Dec 13 06:42:24.368220 dbus-daemon[1170]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1217 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Dec 13 06:42:24.372733 systemd[1]: Starting polkit.service... Dec 13 06:42:24.374807 env[1193]: time="2024-12-13T06:42:24.373648510Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 06:42:24.374807 env[1193]: time="2024-12-13T06:42:24.373703084Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 06:42:24.374807 env[1193]: time="2024-12-13T06:42:24.373729898Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 06:42:24.374807 env[1193]: time="2024-12-13T06:42:24.373792638Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 06:42:24.374807 env[1193]: time="2024-12-13T06:42:24.373824322Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 06:42:24.374807 env[1193]: time="2024-12-13T06:42:24.373848559Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Dec 13 06:42:24.374807 env[1193]: time="2024-12-13T06:42:24.373876997Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 06:42:24.374807 env[1193]: time="2024-12-13T06:42:24.373899505Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 06:42:24.374807 env[1193]: time="2024-12-13T06:42:24.373934854Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Dec 13 06:42:24.374807 env[1193]: time="2024-12-13T06:42:24.373957180Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 06:42:24.374807 env[1193]: time="2024-12-13T06:42:24.373977220Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 06:42:24.374807 env[1193]: time="2024-12-13T06:42:24.374005555Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 06:42:24.374807 env[1193]: time="2024-12-13T06:42:24.374162178Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 06:42:24.374807 env[1193]: time="2024-12-13T06:42:24.374332282Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 06:42:24.377839 env[1193]: time="2024-12-13T06:42:24.375901783Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 06:42:24.377839 env[1193]: time="2024-12-13T06:42:24.376023379Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 06:42:24.377839 env[1193]: time="2024-12-13T06:42:24.376074416Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." 
type=io.containerd.internal.v1 Dec 13 06:42:24.377839 env[1193]: time="2024-12-13T06:42:24.376169411Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 06:42:24.377839 env[1193]: time="2024-12-13T06:42:24.376195511Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 06:42:24.377839 env[1193]: time="2024-12-13T06:42:24.376221703Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 06:42:24.377839 env[1193]: time="2024-12-13T06:42:24.376247163Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 06:42:24.377839 env[1193]: time="2024-12-13T06:42:24.376281435Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 06:42:24.377839 env[1193]: time="2024-12-13T06:42:24.376301596Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 06:42:24.377839 env[1193]: time="2024-12-13T06:42:24.376320227Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 06:42:24.377839 env[1193]: time="2024-12-13T06:42:24.376338443Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 06:42:24.377839 env[1193]: time="2024-12-13T06:42:24.376361624Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 06:42:24.377839 env[1193]: time="2024-12-13T06:42:24.376578154Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 06:42:24.377839 env[1193]: time="2024-12-13T06:42:24.376604380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Dec 13 06:42:24.377839 env[1193]: time="2024-12-13T06:42:24.376624365Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 06:42:24.378973 env[1193]: time="2024-12-13T06:42:24.376642841Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 06:42:24.378973 env[1193]: time="2024-12-13T06:42:24.376664639Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Dec 13 06:42:24.378973 env[1193]: time="2024-12-13T06:42:24.376682469Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 06:42:24.378973 env[1193]: time="2024-12-13T06:42:24.376749467Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Dec 13 06:42:24.378973 env[1193]: time="2024-12-13T06:42:24.376826012Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Dec 13 06:42:24.379222 env[1193]: time="2024-12-13T06:42:24.377115005Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd 
ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 06:42:24.379222 env[1193]: time="2024-12-13T06:42:24.377207497Z" level=info msg="Connect containerd service" Dec 13 06:42:24.379222 env[1193]: time="2024-12-13T06:42:24.377289861Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 06:42:24.381591 env[1193]: time="2024-12-13T06:42:24.381380477Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 06:42:24.381880 env[1193]: time="2024-12-13T06:42:24.381834966Z" level=info msg="Start subscribing containerd event" Dec 13 06:42:24.382066 env[1193]: time="2024-12-13T06:42:24.382034228Z" level=info msg="Start recovering state" Dec 13 06:42:24.382293 env[1193]: time="2024-12-13T06:42:24.382253734Z" level=info msg="Start event monitor" Dec 13 06:42:24.382431 env[1193]: time="2024-12-13T06:42:24.382398601Z" level=info msg="Start snapshots syncer" Dec 13 06:42:24.382563 env[1193]: time="2024-12-13T06:42:24.382533581Z" level=info msg="Start cni network conf syncer for default" Dec 13 06:42:24.382673 env[1193]: time="2024-12-13T06:42:24.382646164Z" level=info msg="Start streaming server" Dec 13 06:42:24.383391 env[1193]: time="2024-12-13T06:42:24.383361479Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 06:42:24.384057 env[1193]: time="2024-12-13T06:42:24.384027112Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 06:42:24.384306 env[1193]: time="2024-12-13T06:42:24.384277110Z" level=info msg="containerd successfully booted in 0.124183s" Dec 13 06:42:24.384359 systemd[1]: Started containerd.service. 
Dec 13 06:42:24.403640 polkitd[1234]: Started polkitd version 121 Dec 13 06:42:24.431942 polkitd[1234]: Loading rules from directory /etc/polkit-1/rules.d Dec 13 06:42:24.432059 polkitd[1234]: Loading rules from directory /usr/share/polkit-1/rules.d Dec 13 06:42:24.445890 polkitd[1234]: Finished loading, compiling and executing 2 rules Dec 13 06:42:24.446514 dbus-daemon[1170]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Dec 13 06:42:24.447302 systemd[1]: Started polkit.service. Dec 13 06:42:24.448444 polkitd[1234]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Dec 13 06:42:24.475243 systemd-hostnamed[1217]: Hostname set to (static) Dec 13 06:42:24.954037 locksmithd[1226]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 06:42:25.121686 tar[1187]: linux-amd64/LICENSE Dec 13 06:42:25.122458 tar[1187]: linux-amd64/README.md Dec 13 06:42:25.129659 systemd[1]: Finished prepare-helm.service. Dec 13 06:42:25.194290 sshd_keygen[1197]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 06:42:25.226557 systemd[1]: Finished sshd-keygen.service. Dec 13 06:42:25.230021 systemd[1]: Starting issuegen.service... Dec 13 06:42:25.232498 systemd[1]: Started sshd@0-10.244.18.198:22-139.178.89.65:37870.service. Dec 13 06:42:25.240774 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 06:42:25.241053 systemd[1]: Finished issuegen.service. Dec 13 06:42:25.244496 systemd[1]: Starting systemd-user-sessions.service... Dec 13 06:42:25.258157 systemd[1]: Finished systemd-user-sessions.service. Dec 13 06:42:25.261387 systemd[1]: Started getty@tty1.service. Dec 13 06:42:25.265548 systemd[1]: Started serial-getty@ttyS0.service. Dec 13 06:42:25.266691 systemd[1]: Reached target getty.target. Dec 13 06:42:25.549365 systemd[1]: Started kubelet.service. 
Dec 13 06:42:25.819157 systemd-networkd[1026]: eth0: Ignoring DHCPv6 address 2a02:1348:17d:4b1:24:19ff:fef4:12c6/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17d:4b1:24:19ff:fef4:12c6/64 assigned by NDisc. Dec 13 06:42:25.819172 systemd-networkd[1026]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Dec 13 06:42:26.165109 sshd[1257]: Accepted publickey for core from 139.178.89.65 port 37870 ssh2: RSA SHA256:dQnQ6z9Pj/RNX8sNR4TqdGn8nHqynNIoEP6sXMH78jY Dec 13 06:42:26.167770 sshd[1257]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 06:42:26.195310 systemd-logind[1181]: New session 1 of user core. Dec 13 06:42:26.198033 systemd[1]: Created slice user-500.slice. Dec 13 06:42:26.206091 systemd[1]: Starting user-runtime-dir@500.service... Dec 13 06:42:26.231192 systemd[1]: Finished user-runtime-dir@500.service. Dec 13 06:42:26.237968 systemd[1]: Starting user@500.service... Dec 13 06:42:26.248013 (systemd)[1274]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 06:42:26.362958 kubelet[1266]: E1213 06:42:26.362844 1266 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 06:42:26.365630 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 06:42:26.365854 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 06:42:26.366321 systemd[1]: kubelet.service: Consumed 1.131s CPU time. Dec 13 06:42:26.371432 systemd[1274]: Queued start job for default target default.target. Dec 13 06:42:26.372378 systemd[1274]: Reached target paths.target. 
Dec 13 06:42:26.372418 systemd[1274]: Reached target sockets.target. Dec 13 06:42:26.372441 systemd[1274]: Reached target timers.target. Dec 13 06:42:26.372461 systemd[1274]: Reached target basic.target. Dec 13 06:42:26.372537 systemd[1274]: Reached target default.target. Dec 13 06:42:26.372590 systemd[1274]: Startup finished in 112ms. Dec 13 06:42:26.372624 systemd[1]: Started user@500.service. Dec 13 06:42:26.374971 systemd[1]: Started session-1.scope. Dec 13 06:42:27.003652 systemd[1]: Started sshd@1-10.244.18.198:22-139.178.89.65:37874.service. Dec 13 06:42:27.896413 sshd[1284]: Accepted publickey for core from 139.178.89.65 port 37874 ssh2: RSA SHA256:dQnQ6z9Pj/RNX8sNR4TqdGn8nHqynNIoEP6sXMH78jY Dec 13 06:42:27.897233 sshd[1284]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 06:42:27.904347 systemd-logind[1181]: New session 2 of user core. Dec 13 06:42:27.905195 systemd[1]: Started session-2.scope. Dec 13 06:42:28.519467 sshd[1284]: pam_unix(sshd:session): session closed for user core Dec 13 06:42:28.523495 systemd-logind[1181]: Session 2 logged out. Waiting for processes to exit. Dec 13 06:42:28.524033 systemd[1]: sshd@1-10.244.18.198:22-139.178.89.65:37874.service: Deactivated successfully. Dec 13 06:42:28.525184 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 06:42:28.528480 systemd-logind[1181]: Removed session 2. Dec 13 06:42:28.664218 systemd[1]: Started sshd@2-10.244.18.198:22-139.178.89.65:56452.service. Dec 13 06:42:29.551855 sshd[1290]: Accepted publickey for core from 139.178.89.65 port 56452 ssh2: RSA SHA256:dQnQ6z9Pj/RNX8sNR4TqdGn8nHqynNIoEP6sXMH78jY Dec 13 06:42:29.553980 sshd[1290]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 06:42:29.561622 systemd[1]: Started session-3.scope. Dec 13 06:42:29.562999 systemd-logind[1181]: New session 3 of user core. 
Dec 13 06:42:30.170535 sshd[1290]: pam_unix(sshd:session): session closed for user core Dec 13 06:42:30.175297 systemd[1]: sshd@2-10.244.18.198:22-139.178.89.65:56452.service: Deactivated successfully. Dec 13 06:42:30.176347 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 06:42:30.177225 systemd-logind[1181]: Session 3 logged out. Waiting for processes to exit. Dec 13 06:42:30.178970 systemd-logind[1181]: Removed session 3. Dec 13 06:42:31.024609 coreos-metadata[1169]: Dec 13 06:42:31.024 WARN failed to locate config-drive, using the metadata service API instead Dec 13 06:42:31.089951 coreos-metadata[1169]: Dec 13 06:42:31.089 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Dec 13 06:42:31.124897 coreos-metadata[1169]: Dec 13 06:42:31.124 INFO Fetch successful Dec 13 06:42:31.125319 coreos-metadata[1169]: Dec 13 06:42:31.125 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Dec 13 06:42:31.165758 coreos-metadata[1169]: Dec 13 06:42:31.165 INFO Fetch successful Dec 13 06:42:31.167519 unknown[1169]: wrote ssh authorized keys file for user: core Dec 13 06:42:31.180517 update-ssh-keys[1297]: Updated "/home/core/.ssh/authorized_keys" Dec 13 06:42:31.181024 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Dec 13 06:42:31.181554 systemd[1]: Reached target multi-user.target. Dec 13 06:42:31.183522 systemd[1]: Starting systemd-update-utmp-runlevel.service... Dec 13 06:42:31.194987 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Dec 13 06:42:31.195245 systemd[1]: Finished systemd-update-utmp-runlevel.service. Dec 13 06:42:31.199082 systemd[1]: Startup finished in 1.164s (kernel) + 7.364s (initrd) + 13.565s (userspace) = 22.094s. Dec 13 06:42:36.617241 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 06:42:36.617561 systemd[1]: Stopped kubelet.service. 
Dec 13 06:42:36.617639 systemd[1]: kubelet.service: Consumed 1.131s CPU time. Dec 13 06:42:36.619778 systemd[1]: Starting kubelet.service... Dec 13 06:42:36.774131 systemd[1]: Started kubelet.service. Dec 13 06:42:36.845422 kubelet[1303]: E1213 06:42:36.845321 1303 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 06:42:36.849779 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 06:42:36.850097 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 06:42:40.318222 systemd[1]: Started sshd@3-10.244.18.198:22-139.178.89.65:42408.service. Dec 13 06:42:41.213768 sshd[1310]: Accepted publickey for core from 139.178.89.65 port 42408 ssh2: RSA SHA256:dQnQ6z9Pj/RNX8sNR4TqdGn8nHqynNIoEP6sXMH78jY Dec 13 06:42:41.217434 sshd[1310]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 06:42:41.227532 systemd[1]: Started session-4.scope. Dec 13 06:42:41.228480 systemd-logind[1181]: New session 4 of user core. Dec 13 06:42:41.836330 sshd[1310]: pam_unix(sshd:session): session closed for user core Dec 13 06:42:41.840405 systemd[1]: sshd@3-10.244.18.198:22-139.178.89.65:42408.service: Deactivated successfully. Dec 13 06:42:41.841510 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 06:42:41.842466 systemd-logind[1181]: Session 4 logged out. Waiting for processes to exit. Dec 13 06:42:41.844126 systemd-logind[1181]: Removed session 4. Dec 13 06:42:41.983360 systemd[1]: Started sshd@4-10.244.18.198:22-139.178.89.65:42422.service. 
Dec 13 06:42:42.871747 sshd[1316]: Accepted publickey for core from 139.178.89.65 port 42422 ssh2: RSA SHA256:dQnQ6z9Pj/RNX8sNR4TqdGn8nHqynNIoEP6sXMH78jY Dec 13 06:42:42.874494 sshd[1316]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 06:42:42.883361 systemd[1]: Started session-5.scope. Dec 13 06:42:42.883859 systemd-logind[1181]: New session 5 of user core. Dec 13 06:42:43.483559 sshd[1316]: pam_unix(sshd:session): session closed for user core Dec 13 06:42:43.486887 systemd[1]: sshd@4-10.244.18.198:22-139.178.89.65:42422.service: Deactivated successfully. Dec 13 06:42:43.487889 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 06:42:43.488769 systemd-logind[1181]: Session 5 logged out. Waiting for processes to exit. Dec 13 06:42:43.490502 systemd-logind[1181]: Removed session 5. Dec 13 06:42:43.634557 systemd[1]: Started sshd@5-10.244.18.198:22-139.178.89.65:42426.service. Dec 13 06:42:44.556366 sshd[1322]: Accepted publickey for core from 139.178.89.65 port 42426 ssh2: RSA SHA256:dQnQ6z9Pj/RNX8sNR4TqdGn8nHqynNIoEP6sXMH78jY Dec 13 06:42:44.558415 sshd[1322]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 06:42:44.566106 systemd-logind[1181]: New session 6 of user core. Dec 13 06:42:44.567115 systemd[1]: Started session-6.scope. Dec 13 06:42:45.183166 sshd[1322]: pam_unix(sshd:session): session closed for user core Dec 13 06:42:45.188299 systemd[1]: sshd@5-10.244.18.198:22-139.178.89.65:42426.service: Deactivated successfully. Dec 13 06:42:45.189455 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 06:42:45.190541 systemd-logind[1181]: Session 6 logged out. Waiting for processes to exit. Dec 13 06:42:45.192184 systemd-logind[1181]: Removed session 6. Dec 13 06:42:45.332284 systemd[1]: Started sshd@6-10.244.18.198:22-139.178.89.65:42442.service. 
Dec 13 06:42:46.225414 sshd[1328]: Accepted publickey for core from 139.178.89.65 port 42442 ssh2: RSA SHA256:dQnQ6z9Pj/RNX8sNR4TqdGn8nHqynNIoEP6sXMH78jY Dec 13 06:42:46.228314 sshd[1328]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 06:42:46.236447 systemd-logind[1181]: New session 7 of user core. Dec 13 06:42:46.237600 systemd[1]: Started session-7.scope. Dec 13 06:42:46.716112 sudo[1331]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 06:42:46.716522 sudo[1331]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 06:42:46.760122 systemd[1]: Starting docker.service... Dec 13 06:42:46.829587 env[1341]: time="2024-12-13T06:42:46.829488218Z" level=info msg="Starting up" Dec 13 06:42:46.836699 env[1341]: time="2024-12-13T06:42:46.836646412Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 13 06:42:46.836970 env[1341]: time="2024-12-13T06:42:46.836938826Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 13 06:42:46.837176 env[1341]: time="2024-12-13T06:42:46.837141172Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Dec 13 06:42:46.837315 env[1341]: time="2024-12-13T06:42:46.837284962Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 13 06:42:46.854308 env[1341]: time="2024-12-13T06:42:46.854229641Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 13 06:42:46.854308 env[1341]: time="2024-12-13T06:42:46.854285735Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 13 06:42:46.854308 env[1341]: time="2024-12-13T06:42:46.854318973Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Dec 13 06:42:46.854969 env[1341]: time="2024-12-13T06:42:46.854368048Z" level=info 
msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 13 06:42:46.870523 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport4268203846-merged.mount: Deactivated successfully. Dec 13 06:42:46.872930 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 06:42:46.873139 systemd[1]: Stopped kubelet.service. Dec 13 06:42:46.876609 systemd[1]: Starting kubelet.service... Dec 13 06:42:47.012080 systemd[1]: Started kubelet.service. Dec 13 06:42:47.089171 env[1341]: time="2024-12-13T06:42:47.086545683Z" level=info msg="Loading containers: start." Dec 13 06:42:47.108302 kubelet[1355]: E1213 06:42:47.108248 1355 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 06:42:47.111845 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 06:42:47.112181 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 06:42:47.279013 kernel: Initializing XFRM netlink socket Dec 13 06:42:47.332063 env[1341]: time="2024-12-13T06:42:47.331999137Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Dec 13 06:42:47.427533 systemd-networkd[1026]: docker0: Link UP Dec 13 06:42:47.449556 env[1341]: time="2024-12-13T06:42:47.449502866Z" level=info msg="Loading containers: done." 
Dec 13 06:42:47.468754 env[1341]: time="2024-12-13T06:42:47.468685868Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 06:42:47.469073 env[1341]: time="2024-12-13T06:42:47.469045840Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Dec 13 06:42:47.469272 env[1341]: time="2024-12-13T06:42:47.469235767Z" level=info msg="Daemon has completed initialization" Dec 13 06:42:47.488078 systemd[1]: Started docker.service. Dec 13 06:42:47.498154 env[1341]: time="2024-12-13T06:42:47.498069026Z" level=info msg="API listen on /run/docker.sock" Dec 13 06:42:48.931463 env[1193]: time="2024-12-13T06:42:48.931363706Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\"" Dec 13 06:42:49.771155 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1583674473.mount: Deactivated successfully. Dec 13 06:42:52.474334 env[1193]: time="2024-12-13T06:42:52.472592985Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:42:52.477404 env[1193]: time="2024-12-13T06:42:52.477336390Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:42:52.479722 env[1193]: time="2024-12-13T06:42:52.479686177Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:42:52.482170 env[1193]: time="2024-12-13T06:42:52.482116870Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:42:52.483565 env[1193]: time="2024-12-13T06:42:52.483525882Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\" returns image reference \"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\"" Dec 13 06:42:52.499227 env[1193]: time="2024-12-13T06:42:52.499169903Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\"" Dec 13 06:42:55.464285 env[1193]: time="2024-12-13T06:42:55.464196726Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:42:55.466272 env[1193]: time="2024-12-13T06:42:55.466236436Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:42:55.468685 env[1193]: time="2024-12-13T06:42:55.468645071Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:42:55.471238 env[1193]: time="2024-12-13T06:42:55.471188688Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:42:55.472605 env[1193]: time="2024-12-13T06:42:55.472564490Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\" returns image reference \"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\"" Dec 13 06:42:55.488543 env[1193]: 
time="2024-12-13T06:42:55.488470087Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\"" Dec 13 06:42:55.837945 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Dec 13 06:42:57.363691 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 13 06:42:57.364101 systemd[1]: Stopped kubelet.service. Dec 13 06:42:57.368123 systemd[1]: Starting kubelet.service... Dec 13 06:42:57.557403 systemd[1]: Started kubelet.service. Dec 13 06:42:57.642866 kubelet[1500]: E1213 06:42:57.642174 1500 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 06:42:57.644325 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 06:42:57.644578 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
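Editor's note: the kubelet crash loop above (restart counters 2 and 3, each failing on the missing /var/lib/kubelet/config.yaml, which is normally written by `kubeadm init`/`kubeadm join`) can be spotted mechanically in the journal. A minimal sketch, using the literal lines from this log; the regex is an assumption about nothing beyond the wording shown here:

```python
import re

# systemd lines copied from the journal above: the kubelet unit is in a
# crash loop because /var/lib/kubelet/config.yaml does not exist yet.
lines = [
    "Dec 13 06:42:46.872930 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.",
    "Dec 13 06:42:57.363691 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.",
    "Dec 13 06:43:07.756034 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.",
]

PATTERN = re.compile(r"restart counter is at (\d+)\.")

def restart_counters(journal_lines):
    """Extract the systemd restart counter from each matching journal line."""
    return [int(m.group(1)) for line in journal_lines
            if (m := PATTERN.search(line))]

# Consecutive counters with ~10s spacing are the signature of a crash loop.
print(restart_counters(lines))  # → [2, 3, 4]
```

The loop resolves later in this log once `kubeadm` has written the config and kubelet[1692] starts successfully.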
Dec 13 06:42:58.205772 env[1193]: time="2024-12-13T06:42:58.205692867Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:42:58.208745 env[1193]: time="2024-12-13T06:42:58.208709998Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:42:58.211344 env[1193]: time="2024-12-13T06:42:58.211303256Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:42:58.214565 env[1193]: time="2024-12-13T06:42:58.214528380Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:42:58.215808 env[1193]: time="2024-12-13T06:42:58.215758795Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\" returns image reference \"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\"" Dec 13 06:42:58.232544 env[1193]: time="2024-12-13T06:42:58.232467435Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\"" Dec 13 06:43:00.030832 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4128609195.mount: Deactivated successfully. 
Dec 13 06:43:01.005152 env[1193]: time="2024-12-13T06:43:01.005080603Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:43:01.007528 env[1193]: time="2024-12-13T06:43:01.007489323Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:43:01.009252 env[1193]: time="2024-12-13T06:43:01.009207761Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:43:01.010819 env[1193]: time="2024-12-13T06:43:01.010780807Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:43:01.011817 env[1193]: time="2024-12-13T06:43:01.011774739Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\"" Dec 13 06:43:01.026337 env[1193]: time="2024-12-13T06:43:01.026261829Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 06:43:02.096982 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount368220015.mount: Deactivated successfully. 
Dec 13 06:43:03.601989 env[1193]: time="2024-12-13T06:43:03.601903770Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:43:03.603999 env[1193]: time="2024-12-13T06:43:03.603955589Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:43:03.606487 env[1193]: time="2024-12-13T06:43:03.606450575Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:43:03.609020 env[1193]: time="2024-12-13T06:43:03.608961958Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:43:03.610327 env[1193]: time="2024-12-13T06:43:03.610285890Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Dec 13 06:43:03.624379 env[1193]: time="2024-12-13T06:43:03.624324409Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Dec 13 06:43:04.186541 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2046126513.mount: Deactivated successfully. 
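Editor's note: each completed pull above ends with a `PullImage "<tag>" returns image reference "<sha256:...>"` event, which maps a registry tag to the local image ID. A small sketch of extracting that mapping; the sample strings are the message payloads from this log with the journal's `\"` escaping undone for readability:

```python
import re

# Un-escaped msg payloads of two "PullImage ... returns image reference ..."
# events from the containerd log above.
lines = [
    'PullImage "registry.k8s.io/kube-apiserver:v1.30.8" returns image reference "sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd"',
    'PullImage "registry.k8s.io/pause:3.9" returns image reference "sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"',
]

PULL_RE = re.compile(r'PullImage "([^"]+)" returns image reference "(sha256:[0-9a-f]+)"')

def pulled_images(journal_lines):
    """Map pulled image tag -> local image ID for completed pulls."""
    return {m.group(1): m.group(2)
            for line in journal_lines
            if (m := PULL_RE.search(line))}

for tag, image_id in pulled_images(lines).items():
    print(tag, "->", image_id[:19])
```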
Dec 13 06:43:04.192617 env[1193]: time="2024-12-13T06:43:04.192558630Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:43:04.194248 env[1193]: time="2024-12-13T06:43:04.194205257Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:43:04.211705 env[1193]: time="2024-12-13T06:43:04.211616954Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:43:04.214532 env[1193]: time="2024-12-13T06:43:04.214474875Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:43:04.215530 env[1193]: time="2024-12-13T06:43:04.215488371Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Dec 13 06:43:04.229948 env[1193]: time="2024-12-13T06:43:04.229871457Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Dec 13 06:43:04.865150 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount358707681.mount: Deactivated successfully. Dec 13 06:43:07.756034 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Dec 13 06:43:07.757074 systemd[1]: Stopped kubelet.service. Dec 13 06:43:07.760470 systemd[1]: Starting kubelet.service... Dec 13 06:43:08.297428 systemd[1]: Started kubelet.service. 
Dec 13 06:43:08.392127 kubelet[1533]: E1213 06:43:08.392041 1533 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 06:43:08.394599 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 06:43:08.394877 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 06:43:08.680139 env[1193]: time="2024-12-13T06:43:08.679497352Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:43:08.683431 env[1193]: time="2024-12-13T06:43:08.683382873Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:43:08.686937 env[1193]: time="2024-12-13T06:43:08.686870551Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:43:08.694117 env[1193]: time="2024-12-13T06:43:08.694039153Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Dec 13 06:43:08.695123 env[1193]: time="2024-12-13T06:43:08.695087142Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:43:09.119760 update_engine[1184]: I1213 06:43:09.119096 1184 update_attempter.cc:509] Updating boot flags... 
Dec 13 06:43:13.593073 systemd[1]: Stopped kubelet.service. Dec 13 06:43:13.597963 systemd[1]: Starting kubelet.service... Dec 13 06:43:13.633039 systemd[1]: Reloading. Dec 13 06:43:13.776729 /usr/lib/systemd/system-generators/torcx-generator[1640]: time="2024-12-13T06:43:13Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 06:43:13.776800 /usr/lib/systemd/system-generators/torcx-generator[1640]: time="2024-12-13T06:43:13Z" level=info msg="torcx already run" Dec 13 06:43:13.900003 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 06:43:13.901079 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 06:43:13.929673 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 06:43:14.063496 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 06:43:14.064049 systemd[1]: Stopped kubelet.service. Dec 13 06:43:14.067566 systemd[1]: Starting kubelet.service... Dec 13 06:43:14.238440 systemd[1]: Started kubelet.service. Dec 13 06:43:14.320477 kubelet[1692]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 06:43:14.321215 kubelet[1692]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. Dec 13 06:43:14.321358 kubelet[1692]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 06:43:14.323079 kubelet[1692]: I1213 06:43:14.322988 1692 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 06:43:14.829599 kubelet[1692]: I1213 06:43:14.829529 1692 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Dec 13 06:43:14.829599 kubelet[1692]: I1213 06:43:14.829575 1692 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 06:43:14.829905 kubelet[1692]: I1213 06:43:14.829872 1692 server.go:927] "Client rotation is on, will bootstrap in background" Dec 13 06:43:14.853774 kubelet[1692]: I1213 06:43:14.853326 1692 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 06:43:14.854804 kubelet[1692]: E1213 06:43:14.854771 1692 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.244.18.198:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.244.18.198:6443: connect: connection refused Dec 13 06:43:14.874098 kubelet[1692]: I1213 06:43:14.873593 1692 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 06:43:14.877703 kubelet[1692]: I1213 06:43:14.877611 1692 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 06:43:14.877941 kubelet[1692]: I1213 06:43:14.877681 1692 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-7lx2b.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 06:43:14.878194 kubelet[1692]: I1213 06:43:14.877977 1692 topology_manager.go:138] "Creating topology manager with none policy" 
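Editor's note: the `nodeConfig={...}` blob logged by container_manager_linux.go above is plain JSON, so the hard eviction thresholds it carries can be read back directly. A sketch using a trimmed copy of that JSON (only the fields used here are kept; `imagefs.inodesFree` is omitted for brevity):

```python
import json

# Trimmed copy of the nodeConfig JSON from the kubelet log line above.
node_config = json.loads("""
{
  "CgroupDriver": "systemd",
  "HardEvictionThresholds": [
    {"Signal": "nodefs.available",  "Operator": "LessThan",
     "Value": {"Quantity": null,    "Percentage": 0.1},  "GracePeriod": 0},
    {"Signal": "nodefs.inodesFree", "Operator": "LessThan",
     "Value": {"Quantity": null,    "Percentage": 0.05}, "GracePeriod": 0},
    {"Signal": "imagefs.available", "Operator": "LessThan",
     "Value": {"Quantity": null,    "Percentage": 0.15}, "GracePeriod": 0},
    {"Signal": "memory.available",  "Operator": "LessThan",
     "Value": {"Quantity": "100Mi", "Percentage": 0},    "GracePeriod": 0}
  ]
}
""")

def eviction_summary(cfg):
    """Render each hard eviction threshold as 'signal < limit'."""
    out = []
    for t in cfg["HardEvictionThresholds"]:
        v = t["Value"]
        # A threshold carries either an absolute Quantity or a Percentage.
        limit = v["Quantity"] or f'{v["Percentage"]:.0%}'
        out.append(f'{t["Signal"]} < {limit}')
    return out

for line in eviction_summary(node_config):
    print(line)
```

These match the kubelet defaults (100Mi memory, 10% nodefs, 15% imagefs), consistent with no eviction overrides being set on this node.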
Dec 13 06:43:14.878194 kubelet[1692]: I1213 06:43:14.877998 1692 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 06:43:14.878385 kubelet[1692]: I1213 06:43:14.878197 1692 state_mem.go:36] "Initialized new in-memory state store" Dec 13 06:43:14.879364 kubelet[1692]: I1213 06:43:14.879320 1692 kubelet.go:400] "Attempting to sync node with API server" Dec 13 06:43:14.879364 kubelet[1692]: I1213 06:43:14.879364 1692 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 06:43:14.879542 kubelet[1692]: I1213 06:43:14.879418 1692 kubelet.go:312] "Adding apiserver pod source" Dec 13 06:43:14.879542 kubelet[1692]: I1213 06:43:14.879453 1692 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 06:43:14.885552 kubelet[1692]: I1213 06:43:14.885515 1692 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 06:43:14.885720 kubelet[1692]: W1213 06:43:14.885506 1692 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.244.18.198:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-7lx2b.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.244.18.198:6443: connect: connection refused Dec 13 06:43:14.885878 kubelet[1692]: E1213 06:43:14.885848 1692 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.244.18.198:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-7lx2b.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.244.18.198:6443: connect: connection refused Dec 13 06:43:14.893037 kubelet[1692]: I1213 06:43:14.892993 1692 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 06:43:14.893184 kubelet[1692]: W1213 06:43:14.893136 1692 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. 
Recreating. Dec 13 06:43:14.894104 kubelet[1692]: I1213 06:43:14.894076 1692 server.go:1264] "Started kubelet" Dec 13 06:43:14.894307 kubelet[1692]: W1213 06:43:14.894241 1692 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.244.18.198:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.244.18.198:6443: connect: connection refused Dec 13 06:43:14.894401 kubelet[1692]: E1213 06:43:14.894315 1692 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.244.18.198:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.244.18.198:6443: connect: connection refused Dec 13 06:43:14.909273 kubelet[1692]: I1213 06:43:14.909191 1692 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 06:43:14.910223 kubelet[1692]: I1213 06:43:14.910143 1692 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 06:43:14.910769 kubelet[1692]: I1213 06:43:14.910731 1692 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 06:43:14.911150 kubelet[1692]: E1213 06:43:14.910987 1692 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.244.18.198:6443/api/v1/namespaces/default/events\": dial tcp 10.244.18.198:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-7lx2b.gb1.brightbox.com.1810a979478a7780 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-7lx2b.gb1.brightbox.com,UID:srv-7lx2b.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-7lx2b.gb1.brightbox.com,},FirstTimestamp:2024-12-13 06:43:14.894034816 +0000 UTC m=+0.640554874,LastTimestamp:2024-12-13 06:43:14.894034816 +0000 UTC 
m=+0.640554874,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-7lx2b.gb1.brightbox.com,}" Dec 13 06:43:14.911636 kubelet[1692]: I1213 06:43:14.911597 1692 server.go:455] "Adding debug handlers to kubelet server" Dec 13 06:43:14.918464 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Dec 13 06:43:14.918802 kubelet[1692]: I1213 06:43:14.918774 1692 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 06:43:14.926898 kubelet[1692]: I1213 06:43:14.926868 1692 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 06:43:14.927875 kubelet[1692]: I1213 06:43:14.927186 1692 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Dec 13 06:43:14.928007 kubelet[1692]: I1213 06:43:14.927980 1692 reconciler.go:26] "Reconciler: start to sync state" Dec 13 06:43:14.928412 kubelet[1692]: E1213 06:43:14.928333 1692 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.18.198:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-7lx2b.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.18.198:6443: connect: connection refused" interval="200ms" Dec 13 06:43:14.929159 kubelet[1692]: W1213 06:43:14.929108 1692 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.244.18.198:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.244.18.198:6443: connect: connection refused Dec 13 06:43:14.929776 kubelet[1692]: E1213 06:43:14.929725 1692 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.244.18.198:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.244.18.198:6443: connect: connection refused Dec 13 06:43:14.929949 kubelet[1692]: I1213 06:43:14.929560 1692 
factory.go:221] Registration of the systemd container factory successfully Dec 13 06:43:14.930197 kubelet[1692]: I1213 06:43:14.930163 1692 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 06:43:14.930714 kubelet[1692]: E1213 06:43:14.929534 1692 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 06:43:14.932056 kubelet[1692]: I1213 06:43:14.932033 1692 factory.go:221] Registration of the containerd container factory successfully Dec 13 06:43:14.961990 kubelet[1692]: I1213 06:43:14.961951 1692 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 06:43:14.961990 kubelet[1692]: I1213 06:43:14.961981 1692 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 06:43:14.962242 kubelet[1692]: I1213 06:43:14.962017 1692 state_mem.go:36] "Initialized new in-memory state store" Dec 13 06:43:14.963838 kubelet[1692]: I1213 06:43:14.963801 1692 policy_none.go:49] "None policy: Start" Dec 13 06:43:14.964552 kubelet[1692]: I1213 06:43:14.964524 1692 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 06:43:14.964645 kubelet[1692]: I1213 06:43:14.964567 1692 state_mem.go:35] "Initializing new in-memory state store" Dec 13 06:43:14.976290 systemd[1]: Created slice kubepods.slice. Dec 13 06:43:14.986073 systemd[1]: Created slice kubepods-burstable.slice. Dec 13 06:43:14.990037 systemd[1]: Created slice kubepods-besteffort.slice. 
Dec 13 06:43:14.998171 kubelet[1692]: I1213 06:43:14.998128 1692 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 06:43:14.998697 kubelet[1692]: I1213 06:43:14.998628 1692 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 06:43:14.998937 kubelet[1692]: I1213 06:43:14.998867 1692 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 06:43:14.999259 kubelet[1692]: I1213 06:43:14.999236 1692 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 06:43:15.007098 kubelet[1692]: E1213 06:43:15.007057 1692 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-7lx2b.gb1.brightbox.com\" not found" Dec 13 06:43:15.009184 kubelet[1692]: I1213 06:43:15.009128 1692 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 06:43:15.009184 kubelet[1692]: I1213 06:43:15.009184 1692 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 06:43:15.009346 kubelet[1692]: I1213 06:43:15.009218 1692 kubelet.go:2337] "Starting kubelet main sync loop" Dec 13 06:43:15.009346 kubelet[1692]: E1213 06:43:15.009303 1692 kubelet.go:2361] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Dec 13 06:43:15.010386 kubelet[1692]: W1213 06:43:15.010315 1692 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.244.18.198:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.244.18.198:6443: connect: connection refused Dec 13 06:43:15.010534 kubelet[1692]: E1213 06:43:15.010508 1692 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.244.18.198:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": 
dial tcp 10.244.18.198:6443: connect: connection refused Dec 13 06:43:15.030837 kubelet[1692]: I1213 06:43:15.030791 1692 kubelet_node_status.go:73] "Attempting to register node" node="srv-7lx2b.gb1.brightbox.com" Dec 13 06:43:15.031512 kubelet[1692]: E1213 06:43:15.031478 1692 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.244.18.198:6443/api/v1/nodes\": dial tcp 10.244.18.198:6443: connect: connection refused" node="srv-7lx2b.gb1.brightbox.com" Dec 13 06:43:15.110834 kubelet[1692]: I1213 06:43:15.109703 1692 topology_manager.go:215] "Topology Admit Handler" podUID="debcd7dfb0f857651ce26972f8eadd99" podNamespace="kube-system" podName="kube-scheduler-srv-7lx2b.gb1.brightbox.com" Dec 13 06:43:15.113250 kubelet[1692]: I1213 06:43:15.113213 1692 topology_manager.go:215] "Topology Admit Handler" podUID="56478b0be813688bbdf86526403e663a" podNamespace="kube-system" podName="kube-apiserver-srv-7lx2b.gb1.brightbox.com" Dec 13 06:43:15.115724 kubelet[1692]: I1213 06:43:15.115685 1692 topology_manager.go:215] "Topology Admit Handler" podUID="1d90b6826950f1275d5ac5d9c191f182" podNamespace="kube-system" podName="kube-controller-manager-srv-7lx2b.gb1.brightbox.com" Dec 13 06:43:15.124014 systemd[1]: Created slice kubepods-burstable-poddebcd7dfb0f857651ce26972f8eadd99.slice. 
Dec 13 06:43:15.129149 kubelet[1692]: I1213 06:43:15.129090 1692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1d90b6826950f1275d5ac5d9c191f182-ca-certs\") pod \"kube-controller-manager-srv-7lx2b.gb1.brightbox.com\" (UID: \"1d90b6826950f1275d5ac5d9c191f182\") " pod="kube-system/kube-controller-manager-srv-7lx2b.gb1.brightbox.com" Dec 13 06:43:15.129149 kubelet[1692]: I1213 06:43:15.129147 1692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/debcd7dfb0f857651ce26972f8eadd99-kubeconfig\") pod \"kube-scheduler-srv-7lx2b.gb1.brightbox.com\" (UID: \"debcd7dfb0f857651ce26972f8eadd99\") " pod="kube-system/kube-scheduler-srv-7lx2b.gb1.brightbox.com" Dec 13 06:43:15.129387 kubelet[1692]: I1213 06:43:15.129177 1692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/56478b0be813688bbdf86526403e663a-k8s-certs\") pod \"kube-apiserver-srv-7lx2b.gb1.brightbox.com\" (UID: \"56478b0be813688bbdf86526403e663a\") " pod="kube-system/kube-apiserver-srv-7lx2b.gb1.brightbox.com" Dec 13 06:43:15.129387 kubelet[1692]: I1213 06:43:15.129210 1692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1d90b6826950f1275d5ac5d9c191f182-flexvolume-dir\") pod \"kube-controller-manager-srv-7lx2b.gb1.brightbox.com\" (UID: \"1d90b6826950f1275d5ac5d9c191f182\") " pod="kube-system/kube-controller-manager-srv-7lx2b.gb1.brightbox.com" Dec 13 06:43:15.129387 kubelet[1692]: I1213 06:43:15.129239 1692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1d90b6826950f1275d5ac5d9c191f182-k8s-certs\") pod 
\"kube-controller-manager-srv-7lx2b.gb1.brightbox.com\" (UID: \"1d90b6826950f1275d5ac5d9c191f182\") " pod="kube-system/kube-controller-manager-srv-7lx2b.gb1.brightbox.com" Dec 13 06:43:15.129387 kubelet[1692]: I1213 06:43:15.129266 1692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1d90b6826950f1275d5ac5d9c191f182-kubeconfig\") pod \"kube-controller-manager-srv-7lx2b.gb1.brightbox.com\" (UID: \"1d90b6826950f1275d5ac5d9c191f182\") " pod="kube-system/kube-controller-manager-srv-7lx2b.gb1.brightbox.com" Dec 13 06:43:15.129387 kubelet[1692]: I1213 06:43:15.129295 1692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1d90b6826950f1275d5ac5d9c191f182-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-7lx2b.gb1.brightbox.com\" (UID: \"1d90b6826950f1275d5ac5d9c191f182\") " pod="kube-system/kube-controller-manager-srv-7lx2b.gb1.brightbox.com" Dec 13 06:43:15.129687 kubelet[1692]: I1213 06:43:15.129324 1692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/56478b0be813688bbdf86526403e663a-ca-certs\") pod \"kube-apiserver-srv-7lx2b.gb1.brightbox.com\" (UID: \"56478b0be813688bbdf86526403e663a\") " pod="kube-system/kube-apiserver-srv-7lx2b.gb1.brightbox.com" Dec 13 06:43:15.129687 kubelet[1692]: I1213 06:43:15.129373 1692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/56478b0be813688bbdf86526403e663a-usr-share-ca-certificates\") pod \"kube-apiserver-srv-7lx2b.gb1.brightbox.com\" (UID: \"56478b0be813688bbdf86526403e663a\") " pod="kube-system/kube-apiserver-srv-7lx2b.gb1.brightbox.com" Dec 13 06:43:15.129687 kubelet[1692]: E1213 06:43:15.129601 
1692 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.18.198:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-7lx2b.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.18.198:6443: connect: connection refused" interval="400ms" Dec 13 06:43:15.138198 systemd[1]: Created slice kubepods-burstable-pod56478b0be813688bbdf86526403e663a.slice. Dec 13 06:43:15.144235 systemd[1]: Created slice kubepods-burstable-pod1d90b6826950f1275d5ac5d9c191f182.slice. Dec 13 06:43:15.235635 kubelet[1692]: I1213 06:43:15.235595 1692 kubelet_node_status.go:73] "Attempting to register node" node="srv-7lx2b.gb1.brightbox.com" Dec 13 06:43:15.236432 kubelet[1692]: E1213 06:43:15.236397 1692 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.244.18.198:6443/api/v1/nodes\": dial tcp 10.244.18.198:6443: connect: connection refused" node="srv-7lx2b.gb1.brightbox.com" Dec 13 06:43:15.439756 env[1193]: time="2024-12-13T06:43:15.438380977Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-7lx2b.gb1.brightbox.com,Uid:debcd7dfb0f857651ce26972f8eadd99,Namespace:kube-system,Attempt:0,}" Dec 13 06:43:15.443895 env[1193]: time="2024-12-13T06:43:15.443846128Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-7lx2b.gb1.brightbox.com,Uid:56478b0be813688bbdf86526403e663a,Namespace:kube-system,Attempt:0,}" Dec 13 06:43:15.447956 env[1193]: time="2024-12-13T06:43:15.447640515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-7lx2b.gb1.brightbox.com,Uid:1d90b6826950f1275d5ac5d9c191f182,Namespace:kube-system,Attempt:0,}" Dec 13 06:43:15.530632 kubelet[1692]: E1213 06:43:15.530561 1692 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.18.198:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-7lx2b.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.18.198:6443: 
connect: connection refused" interval="800ms" Dec 13 06:43:15.639094 kubelet[1692]: I1213 06:43:15.639055 1692 kubelet_node_status.go:73] "Attempting to register node" node="srv-7lx2b.gb1.brightbox.com" Dec 13 06:43:15.639492 kubelet[1692]: E1213 06:43:15.639445 1692 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.244.18.198:6443/api/v1/nodes\": dial tcp 10.244.18.198:6443: connect: connection refused" node="srv-7lx2b.gb1.brightbox.com" Dec 13 06:43:15.882155 kubelet[1692]: E1213 06:43:15.881955 1692 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.244.18.198:6443/api/v1/namespaces/default/events\": dial tcp 10.244.18.198:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-7lx2b.gb1.brightbox.com.1810a979478a7780 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-7lx2b.gb1.brightbox.com,UID:srv-7lx2b.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-7lx2b.gb1.brightbox.com,},FirstTimestamp:2024-12-13 06:43:14.894034816 +0000 UTC m=+0.640554874,LastTimestamp:2024-12-13 06:43:14.894034816 +0000 UTC m=+0.640554874,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-7lx2b.gb1.brightbox.com,}" Dec 13 06:43:15.980859 kubelet[1692]: W1213 06:43:15.980737 1692 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.244.18.198:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-7lx2b.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.244.18.198:6443: connect: connection refused Dec 13 06:43:15.980859 kubelet[1692]: E1213 06:43:15.980819 1692 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
"https://10.244.18.198:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-7lx2b.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.244.18.198:6443: connect: connection refused Dec 13 06:43:16.075684 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3675168251.mount: Deactivated successfully. Dec 13 06:43:16.081998 env[1193]: time="2024-12-13T06:43:16.081952179Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:43:16.084720 env[1193]: time="2024-12-13T06:43:16.084632515Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:43:16.087073 env[1193]: time="2024-12-13T06:43:16.087012954Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:43:16.088955 env[1193]: time="2024-12-13T06:43:16.088893599Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:43:16.089890 env[1193]: time="2024-12-13T06:43:16.089848752Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:43:16.091720 env[1193]: time="2024-12-13T06:43:16.091684414Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:43:16.094691 env[1193]: time="2024-12-13T06:43:16.094657179Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:43:16.098392 env[1193]: time="2024-12-13T06:43:16.098349860Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:43:16.101305 env[1193]: time="2024-12-13T06:43:16.101258757Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:43:16.103986 env[1193]: time="2024-12-13T06:43:16.103940740Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:43:16.106675 env[1193]: time="2024-12-13T06:43:16.106636120Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:43:16.107955 env[1193]: time="2024-12-13T06:43:16.107889958Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:43:16.150994 env[1193]: time="2024-12-13T06:43:16.150553745Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 06:43:16.150994 env[1193]: time="2024-12-13T06:43:16.150642046Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 06:43:16.150994 env[1193]: time="2024-12-13T06:43:16.150661456Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 06:43:16.152188 env[1193]: time="2024-12-13T06:43:16.151591959Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 06:43:16.152188 env[1193]: time="2024-12-13T06:43:16.151646081Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 06:43:16.152188 env[1193]: time="2024-12-13T06:43:16.151662911Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 06:43:16.152188 env[1193]: time="2024-12-13T06:43:16.151842147Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ad51a883a8c955a36d834727e58874ff616183feae0469249cf0a980bee3518c pid=1744 runtime=io.containerd.runc.v2 Dec 13 06:43:16.152463 env[1193]: time="2024-12-13T06:43:16.150958867Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/732ec9d6eb54197ddda9ed1b62b0b3ae8454d09e0801f8f1e6c4e92bffd1b198 pid=1743 runtime=io.containerd.runc.v2 Dec 13 06:43:16.156355 env[1193]: time="2024-12-13T06:43:16.156273755Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 06:43:16.156464 env[1193]: time="2024-12-13T06:43:16.156379837Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 06:43:16.156555 env[1193]: time="2024-12-13T06:43:16.156459919Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 06:43:16.156751 env[1193]: time="2024-12-13T06:43:16.156683363Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/21cccd25f2fcb3317e242acdd2ea07988c19f50fdcc8c115b783a807a4d3dd06 pid=1757 runtime=io.containerd.runc.v2 Dec 13 06:43:16.184438 systemd[1]: Started cri-containerd-ad51a883a8c955a36d834727e58874ff616183feae0469249cf0a980bee3518c.scope. Dec 13 06:43:16.193676 kubelet[1692]: W1213 06:43:16.193626 1692 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.244.18.198:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.244.18.198:6443: connect: connection refused Dec 13 06:43:16.193676 kubelet[1692]: E1213 06:43:16.193681 1692 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.244.18.198:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.244.18.198:6443: connect: connection refused Dec 13 06:43:16.205231 systemd[1]: Started cri-containerd-21cccd25f2fcb3317e242acdd2ea07988c19f50fdcc8c115b783a807a4d3dd06.scope. Dec 13 06:43:16.224135 systemd[1]: Started cri-containerd-732ec9d6eb54197ddda9ed1b62b0b3ae8454d09e0801f8f1e6c4e92bffd1b198.scope. 
Dec 13 06:43:16.289574 env[1193]: time="2024-12-13T06:43:16.289503645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-7lx2b.gb1.brightbox.com,Uid:56478b0be813688bbdf86526403e663a,Namespace:kube-system,Attempt:0,} returns sandbox id \"21cccd25f2fcb3317e242acdd2ea07988c19f50fdcc8c115b783a807a4d3dd06\"" Dec 13 06:43:16.302360 env[1193]: time="2024-12-13T06:43:16.302309217Z" level=info msg="CreateContainer within sandbox \"21cccd25f2fcb3317e242acdd2ea07988c19f50fdcc8c115b783a807a4d3dd06\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 06:43:16.322708 env[1193]: time="2024-12-13T06:43:16.322649101Z" level=info msg="CreateContainer within sandbox \"21cccd25f2fcb3317e242acdd2ea07988c19f50fdcc8c115b783a807a4d3dd06\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1f9c816324d08413fd10b742512b1bfe28c2be7044a9b6d6325ce9074565ce48\"" Dec 13 06:43:16.323716 env[1193]: time="2024-12-13T06:43:16.323664512Z" level=info msg="StartContainer for \"1f9c816324d08413fd10b742512b1bfe28c2be7044a9b6d6325ce9074565ce48\"" Dec 13 06:43:16.331840 kubelet[1692]: E1213 06:43:16.331789 1692 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.18.198:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-7lx2b.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.18.198:6443: connect: connection refused" interval="1.6s" Dec 13 06:43:16.360184 env[1193]: time="2024-12-13T06:43:16.360127234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-7lx2b.gb1.brightbox.com,Uid:1d90b6826950f1275d5ac5d9c191f182,Namespace:kube-system,Attempt:0,} returns sandbox id \"732ec9d6eb54197ddda9ed1b62b0b3ae8454d09e0801f8f1e6c4e92bffd1b198\"" Dec 13 06:43:16.363896 env[1193]: time="2024-12-13T06:43:16.363856744Z" level=info msg="CreateContainer within sandbox \"732ec9d6eb54197ddda9ed1b62b0b3ae8454d09e0801f8f1e6c4e92bffd1b198\" for container 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 06:43:16.371187 env[1193]: time="2024-12-13T06:43:16.371142621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-7lx2b.gb1.brightbox.com,Uid:debcd7dfb0f857651ce26972f8eadd99,Namespace:kube-system,Attempt:0,} returns sandbox id \"ad51a883a8c955a36d834727e58874ff616183feae0469249cf0a980bee3518c\"" Dec 13 06:43:16.374934 env[1193]: time="2024-12-13T06:43:16.374872523Z" level=info msg="CreateContainer within sandbox \"ad51a883a8c955a36d834727e58874ff616183feae0469249cf0a980bee3518c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 06:43:16.385573 env[1193]: time="2024-12-13T06:43:16.385508273Z" level=info msg="CreateContainer within sandbox \"732ec9d6eb54197ddda9ed1b62b0b3ae8454d09e0801f8f1e6c4e92bffd1b198\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6c09d6757fbf33844df45bb3e0bc3b49b4158b92d652da14ad759b20658f3e0d\"" Dec 13 06:43:16.386561 env[1193]: time="2024-12-13T06:43:16.386525856Z" level=info msg="StartContainer for \"6c09d6757fbf33844df45bb3e0bc3b49b4158b92d652da14ad759b20658f3e0d\"" Dec 13 06:43:16.395197 env[1193]: time="2024-12-13T06:43:16.395130960Z" level=info msg="CreateContainer within sandbox \"ad51a883a8c955a36d834727e58874ff616183feae0469249cf0a980bee3518c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"311dba191e23c192c074d93a38823ffec2793ff2c4f175d5bd1388fc4e4e2dc1\"" Dec 13 06:43:16.398362 systemd[1]: Started cri-containerd-1f9c816324d08413fd10b742512b1bfe28c2be7044a9b6d6325ce9074565ce48.scope. 
Dec 13 06:43:16.405298 kubelet[1692]: W1213 06:43:16.405185 1692 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.244.18.198:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.244.18.198:6443: connect: connection refused Dec 13 06:43:16.405298 kubelet[1692]: E1213 06:43:16.405266 1692 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.244.18.198:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.244.18.198:6443: connect: connection refused Dec 13 06:43:16.405505 env[1193]: time="2024-12-13T06:43:16.405375020Z" level=info msg="StartContainer for \"311dba191e23c192c074d93a38823ffec2793ff2c4f175d5bd1388fc4e4e2dc1\"" Dec 13 06:43:16.434397 kubelet[1692]: W1213 06:43:16.434278 1692 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.244.18.198:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.244.18.198:6443: connect: connection refused Dec 13 06:43:16.434397 kubelet[1692]: E1213 06:43:16.434359 1692 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.244.18.198:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.244.18.198:6443: connect: connection refused Dec 13 06:43:16.444674 kubelet[1692]: I1213 06:43:16.444235 1692 kubelet_node_status.go:73] "Attempting to register node" node="srv-7lx2b.gb1.brightbox.com" Dec 13 06:43:16.444674 kubelet[1692]: E1213 06:43:16.444626 1692 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.244.18.198:6443/api/v1/nodes\": dial tcp 10.244.18.198:6443: connect: connection refused" node="srv-7lx2b.gb1.brightbox.com" Dec 13 06:43:16.453050 systemd[1]: Started 
cri-containerd-6c09d6757fbf33844df45bb3e0bc3b49b4158b92d652da14ad759b20658f3e0d.scope. Dec 13 06:43:16.465138 systemd[1]: Started cri-containerd-311dba191e23c192c074d93a38823ffec2793ff2c4f175d5bd1388fc4e4e2dc1.scope. Dec 13 06:43:16.493986 env[1193]: time="2024-12-13T06:43:16.493906664Z" level=info msg="StartContainer for \"1f9c816324d08413fd10b742512b1bfe28c2be7044a9b6d6325ce9074565ce48\" returns successfully" Dec 13 06:43:16.558719 env[1193]: time="2024-12-13T06:43:16.558660390Z" level=info msg="StartContainer for \"6c09d6757fbf33844df45bb3e0bc3b49b4158b92d652da14ad759b20658f3e0d\" returns successfully" Dec 13 06:43:16.594326 env[1193]: time="2024-12-13T06:43:16.594265422Z" level=info msg="StartContainer for \"311dba191e23c192c074d93a38823ffec2793ff2c4f175d5bd1388fc4e4e2dc1\" returns successfully" Dec 13 06:43:16.979213 kubelet[1692]: E1213 06:43:16.979166 1692 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.244.18.198:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.244.18.198:6443: connect: connection refused Dec 13 06:43:18.048079 kubelet[1692]: I1213 06:43:18.047437 1692 kubelet_node_status.go:73] "Attempting to register node" node="srv-7lx2b.gb1.brightbox.com" Dec 13 06:43:19.593185 kubelet[1692]: E1213 06:43:19.593076 1692 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-7lx2b.gb1.brightbox.com\" not found" node="srv-7lx2b.gb1.brightbox.com" Dec 13 06:43:19.670644 kubelet[1692]: I1213 06:43:19.670586 1692 kubelet_node_status.go:76] "Successfully registered node" node="srv-7lx2b.gb1.brightbox.com" Dec 13 06:43:19.888733 kubelet[1692]: I1213 06:43:19.888550 1692 apiserver.go:52] "Watching apiserver" Dec 13 06:43:19.928240 kubelet[1692]: I1213 06:43:19.928177 1692 desired_state_of_world_populator.go:157] "Finished populating initial 
desired state of world" Dec 13 06:43:21.710891 systemd[1]: Reloading. Dec 13 06:43:21.851547 /usr/lib/systemd/system-generators/torcx-generator[1988]: time="2024-12-13T06:43:21Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 06:43:21.851607 /usr/lib/systemd/system-generators/torcx-generator[1988]: time="2024-12-13T06:43:21Z" level=info msg="torcx already run" Dec 13 06:43:21.939790 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 06:43:21.940128 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 06:43:21.969693 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 06:43:22.150398 kubelet[1692]: I1213 06:43:22.150354 1692 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 06:43:22.153340 systemd[1]: Stopping kubelet.service... Dec 13 06:43:22.168670 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 06:43:22.168978 systemd[1]: Stopped kubelet.service. Dec 13 06:43:22.169049 systemd[1]: kubelet.service: Consumed 1.148s CPU time. Dec 13 06:43:22.171880 systemd[1]: Starting kubelet.service... Dec 13 06:43:23.321043 systemd[1]: Started kubelet.service. Dec 13 06:43:23.448274 kubelet[2036]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 06:43:23.448810 kubelet[2036]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 06:43:23.448928 kubelet[2036]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 06:43:23.450779 sudo[2047]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Dec 13 06:43:23.451222 sudo[2047]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Dec 13 06:43:23.452481 kubelet[2036]: I1213 06:43:23.452431 2036 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 06:43:23.461883 kubelet[2036]: I1213 06:43:23.461838 2036 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Dec 13 06:43:23.462140 kubelet[2036]: I1213 06:43:23.462116 2036 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 06:43:23.462700 kubelet[2036]: I1213 06:43:23.462663 2036 server.go:927] "Client rotation is on, will bootstrap in background" Dec 13 06:43:23.464732 kubelet[2036]: I1213 06:43:23.464704 2036 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 06:43:23.469170 kubelet[2036]: I1213 06:43:23.469137 2036 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 06:43:23.484035 kubelet[2036]: I1213 06:43:23.483977 2036 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 06:43:23.484726 kubelet[2036]: I1213 06:43:23.484678 2036 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 06:43:23.485128 kubelet[2036]: I1213 06:43:23.484846 2036 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-7lx2b.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 06:43:23.485361 kubelet[2036]: I1213 06:43:23.485336 2036 topology_manager.go:138] "Creating topology manager with none policy" 
Dec 13 06:43:23.485481 kubelet[2036]: I1213 06:43:23.485459 2036 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 06:43:23.485697 kubelet[2036]: I1213 06:43:23.485675 2036 state_mem.go:36] "Initialized new in-memory state store" Dec 13 06:43:23.485996 kubelet[2036]: I1213 06:43:23.485975 2036 kubelet.go:400] "Attempting to sync node with API server" Dec 13 06:43:23.486702 kubelet[2036]: I1213 06:43:23.486679 2036 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 06:43:23.486862 kubelet[2036]: I1213 06:43:23.486839 2036 kubelet.go:312] "Adding apiserver pod source" Dec 13 06:43:23.493652 kubelet[2036]: I1213 06:43:23.493624 2036 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 06:43:23.497118 kubelet[2036]: I1213 06:43:23.497092 2036 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 06:43:23.498419 kubelet[2036]: I1213 06:43:23.498395 2036 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 06:43:23.498981 kubelet[2036]: I1213 06:43:23.498958 2036 server.go:1264] "Started kubelet" Dec 13 06:43:23.515607 kubelet[2036]: I1213 06:43:23.515522 2036 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 06:43:23.517155 kubelet[2036]: I1213 06:43:23.517128 2036 server.go:455] "Adding debug handlers to kubelet server" Dec 13 06:43:23.520671 kubelet[2036]: I1213 06:43:23.520616 2036 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 06:43:23.521132 kubelet[2036]: I1213 06:43:23.521108 2036 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 06:43:23.522689 kubelet[2036]: I1213 06:43:23.522663 2036 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 06:43:23.539060 kubelet[2036]: I1213 06:43:23.539026 2036 
volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 06:43:23.543983 kubelet[2036]: I1213 06:43:23.543955 2036 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Dec 13 06:43:23.544368 kubelet[2036]: I1213 06:43:23.544345 2036 reconciler.go:26] "Reconciler: start to sync state" Dec 13 06:43:23.554888 kubelet[2036]: I1213 06:43:23.554809 2036 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 06:43:23.557371 kubelet[2036]: I1213 06:43:23.557345 2036 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 06:43:23.557559 kubelet[2036]: I1213 06:43:23.557535 2036 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 06:43:23.557769 kubelet[2036]: I1213 06:43:23.557745 2036 kubelet.go:2337] "Starting kubelet main sync loop" Dec 13 06:43:23.558141 kubelet[2036]: E1213 06:43:23.558069 2036 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 06:43:23.559323 kubelet[2036]: I1213 06:43:23.559282 2036 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 06:43:23.560525 kubelet[2036]: E1213 06:43:23.560067 2036 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 06:43:23.566032 kubelet[2036]: I1213 06:43:23.565938 2036 factory.go:221] Registration of the containerd container factory successfully Dec 13 06:43:23.566997 kubelet[2036]: I1213 06:43:23.566950 2036 factory.go:221] Registration of the systemd container factory successfully Dec 13 06:43:23.639764 kubelet[2036]: I1213 06:43:23.639648 2036 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 06:43:23.640013 kubelet[2036]: I1213 06:43:23.639986 2036 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 06:43:23.640155 kubelet[2036]: I1213 06:43:23.640133 2036 state_mem.go:36] "Initialized new in-memory state store" Dec 13 06:43:23.640540 kubelet[2036]: I1213 06:43:23.640514 2036 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 06:43:23.640695 kubelet[2036]: I1213 06:43:23.640655 2036 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 06:43:23.640823 kubelet[2036]: I1213 06:43:23.640800 2036 policy_none.go:49] "None policy: Start" Dec 13 06:43:23.641738 kubelet[2036]: I1213 06:43:23.641713 2036 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 06:43:23.641895 kubelet[2036]: I1213 06:43:23.641873 2036 state_mem.go:35] "Initializing new in-memory state store" Dec 13 06:43:23.642305 kubelet[2036]: I1213 06:43:23.642281 2036 state_mem.go:75] "Updated machine memory state" Dec 13 06:43:23.648259 kubelet[2036]: I1213 06:43:23.648234 2036 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 06:43:23.648647 kubelet[2036]: I1213 06:43:23.648573 2036 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 06:43:23.649204 kubelet[2036]: I1213 06:43:23.649182 2036 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 06:43:23.660020 kubelet[2036]: I1213 06:43:23.659973 2036 
topology_manager.go:215] "Topology Admit Handler" podUID="56478b0be813688bbdf86526403e663a" podNamespace="kube-system" podName="kube-apiserver-srv-7lx2b.gb1.brightbox.com" Dec 13 06:43:23.660446 kubelet[2036]: I1213 06:43:23.660418 2036 topology_manager.go:215] "Topology Admit Handler" podUID="1d90b6826950f1275d5ac5d9c191f182" podNamespace="kube-system" podName="kube-controller-manager-srv-7lx2b.gb1.brightbox.com" Dec 13 06:43:23.661999 kubelet[2036]: I1213 06:43:23.661970 2036 topology_manager.go:215] "Topology Admit Handler" podUID="debcd7dfb0f857651ce26972f8eadd99" podNamespace="kube-system" podName="kube-scheduler-srv-7lx2b.gb1.brightbox.com" Dec 13 06:43:23.670736 kubelet[2036]: W1213 06:43:23.670684 2036 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 06:43:23.673096 kubelet[2036]: W1213 06:43:23.673071 2036 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 06:43:23.673495 kubelet[2036]: W1213 06:43:23.673469 2036 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 06:43:23.688844 kubelet[2036]: I1213 06:43:23.688814 2036 kubelet_node_status.go:73] "Attempting to register node" node="srv-7lx2b.gb1.brightbox.com" Dec 13 06:43:23.699218 kubelet[2036]: I1213 06:43:23.699183 2036 kubelet_node_status.go:112] "Node was previously registered" node="srv-7lx2b.gb1.brightbox.com" Dec 13 06:43:23.700506 kubelet[2036]: I1213 06:43:23.699272 2036 kubelet_node_status.go:76] "Successfully registered node" node="srv-7lx2b.gb1.brightbox.com" Dec 13 06:43:23.845866 kubelet[2036]: I1213 06:43:23.845809 2036 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/1d90b6826950f1275d5ac5d9c191f182-k8s-certs\") pod \"kube-controller-manager-srv-7lx2b.gb1.brightbox.com\" (UID: \"1d90b6826950f1275d5ac5d9c191f182\") " pod="kube-system/kube-controller-manager-srv-7lx2b.gb1.brightbox.com" Dec 13 06:43:23.846188 kubelet[2036]: I1213 06:43:23.846145 2036 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1d90b6826950f1275d5ac5d9c191f182-kubeconfig\") pod \"kube-controller-manager-srv-7lx2b.gb1.brightbox.com\" (UID: \"1d90b6826950f1275d5ac5d9c191f182\") " pod="kube-system/kube-controller-manager-srv-7lx2b.gb1.brightbox.com" Dec 13 06:43:23.846343 kubelet[2036]: I1213 06:43:23.846312 2036 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1d90b6826950f1275d5ac5d9c191f182-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-7lx2b.gb1.brightbox.com\" (UID: \"1d90b6826950f1275d5ac5d9c191f182\") " pod="kube-system/kube-controller-manager-srv-7lx2b.gb1.brightbox.com" Dec 13 06:43:23.846509 kubelet[2036]: I1213 06:43:23.846482 2036 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/56478b0be813688bbdf86526403e663a-usr-share-ca-certificates\") pod \"kube-apiserver-srv-7lx2b.gb1.brightbox.com\" (UID: \"56478b0be813688bbdf86526403e663a\") " pod="kube-system/kube-apiserver-srv-7lx2b.gb1.brightbox.com" Dec 13 06:43:23.846690 kubelet[2036]: I1213 06:43:23.846663 2036 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1d90b6826950f1275d5ac5d9c191f182-ca-certs\") pod \"kube-controller-manager-srv-7lx2b.gb1.brightbox.com\" (UID: \"1d90b6826950f1275d5ac5d9c191f182\") " 
pod="kube-system/kube-controller-manager-srv-7lx2b.gb1.brightbox.com" Dec 13 06:43:23.846871 kubelet[2036]: I1213 06:43:23.846843 2036 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1d90b6826950f1275d5ac5d9c191f182-flexvolume-dir\") pod \"kube-controller-manager-srv-7lx2b.gb1.brightbox.com\" (UID: \"1d90b6826950f1275d5ac5d9c191f182\") " pod="kube-system/kube-controller-manager-srv-7lx2b.gb1.brightbox.com" Dec 13 06:43:23.847094 kubelet[2036]: I1213 06:43:23.847057 2036 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/debcd7dfb0f857651ce26972f8eadd99-kubeconfig\") pod \"kube-scheduler-srv-7lx2b.gb1.brightbox.com\" (UID: \"debcd7dfb0f857651ce26972f8eadd99\") " pod="kube-system/kube-scheduler-srv-7lx2b.gb1.brightbox.com" Dec 13 06:43:23.847274 kubelet[2036]: I1213 06:43:23.847247 2036 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/56478b0be813688bbdf86526403e663a-ca-certs\") pod \"kube-apiserver-srv-7lx2b.gb1.brightbox.com\" (UID: \"56478b0be813688bbdf86526403e663a\") " pod="kube-system/kube-apiserver-srv-7lx2b.gb1.brightbox.com" Dec 13 06:43:23.847435 kubelet[2036]: I1213 06:43:23.847408 2036 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/56478b0be813688bbdf86526403e663a-k8s-certs\") pod \"kube-apiserver-srv-7lx2b.gb1.brightbox.com\" (UID: \"56478b0be813688bbdf86526403e663a\") " pod="kube-system/kube-apiserver-srv-7lx2b.gb1.brightbox.com" Dec 13 06:43:24.420063 sudo[2047]: pam_unix(sudo:session): session closed for user root Dec 13 06:43:24.494761 kubelet[2036]: I1213 06:43:24.494696 2036 apiserver.go:52] "Watching apiserver" Dec 13 06:43:24.545258 kubelet[2036]: 
I1213 06:43:24.545201 2036 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Dec 13 06:43:24.637347 kubelet[2036]: W1213 06:43:24.637283 2036 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 06:43:24.637638 kubelet[2036]: E1213 06:43:24.637409 2036 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-srv-7lx2b.gb1.brightbox.com\" already exists" pod="kube-system/kube-controller-manager-srv-7lx2b.gb1.brightbox.com" Dec 13 06:43:24.638530 kubelet[2036]: W1213 06:43:24.638488 2036 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 06:43:24.638651 kubelet[2036]: E1213 06:43:24.638559 2036 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-srv-7lx2b.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-7lx2b.gb1.brightbox.com" Dec 13 06:43:24.654609 kubelet[2036]: I1213 06:43:24.654494 2036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-7lx2b.gb1.brightbox.com" podStartSLOduration=1.654421215 podStartE2EDuration="1.654421215s" podCreationTimestamp="2024-12-13 06:43:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 06:43:24.652125351 +0000 UTC m=+1.305080451" watchObservedRunningTime="2024-12-13 06:43:24.654421215 +0000 UTC m=+1.307376316" Dec 13 06:43:24.664294 kubelet[2036]: I1213 06:43:24.664211 2036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-7lx2b.gb1.brightbox.com" podStartSLOduration=1.664188046 podStartE2EDuration="1.664188046s" podCreationTimestamp="2024-12-13 06:43:23 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 06:43:24.664061166 +0000 UTC m=+1.317016275" watchObservedRunningTime="2024-12-13 06:43:24.664188046 +0000 UTC m=+1.317143140" Dec 13 06:43:24.697242 kubelet[2036]: I1213 06:43:24.697040 2036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-7lx2b.gb1.brightbox.com" podStartSLOduration=1.697014891 podStartE2EDuration="1.697014891s" podCreationTimestamp="2024-12-13 06:43:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 06:43:24.679717203 +0000 UTC m=+1.332672315" watchObservedRunningTime="2024-12-13 06:43:24.697014891 +0000 UTC m=+1.349970011" Dec 13 06:43:26.358755 sudo[1331]: pam_unix(sudo:session): session closed for user root Dec 13 06:43:26.503722 sshd[1328]: pam_unix(sshd:session): session closed for user core Dec 13 06:43:26.507999 systemd[1]: sshd@6-10.244.18.198:22-139.178.89.65:42442.service: Deactivated successfully. Dec 13 06:43:26.509050 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 06:43:26.509311 systemd[1]: session-7.scope: Consumed 6.942s CPU time. Dec 13 06:43:26.510058 systemd-logind[1181]: Session 7 logged out. Waiting for processes to exit. Dec 13 06:43:26.512012 systemd-logind[1181]: Removed session 7. Dec 13 06:43:37.224149 kubelet[2036]: I1213 06:43:37.224106 2036 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 06:43:37.225505 env[1193]: time="2024-12-13T06:43:37.225422008Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Dec 13 06:43:37.225968 kubelet[2036]: I1213 06:43:37.225750 2036 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 06:43:37.796107 kubelet[2036]: I1213 06:43:37.796031 2036 topology_manager.go:215] "Topology Admit Handler" podUID="9d649087-60af-42f0-8f53-3fe4a25cf89f" podNamespace="kube-system" podName="kube-proxy-d9kdb" Dec 13 06:43:37.804053 systemd[1]: Created slice kubepods-besteffort-pod9d649087_60af_42f0_8f53_3fe4a25cf89f.slice. Dec 13 06:43:37.813824 kubelet[2036]: I1213 06:43:37.813751 2036 topology_manager.go:215] "Topology Admit Handler" podUID="7c0ef689-b9f8-48ea-8e4d-ab890c660759" podNamespace="kube-system" podName="cilium-lbhk2" Dec 13 06:43:37.822372 systemd[1]: Created slice kubepods-burstable-pod7c0ef689_b9f8_48ea_8e4d_ab890c660759.slice. Dec 13 06:43:37.840991 kubelet[2036]: I1213 06:43:37.840934 2036 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7c0ef689-b9f8-48ea-8e4d-ab890c660759-hostproc\") pod \"cilium-lbhk2\" (UID: \"7c0ef689-b9f8-48ea-8e4d-ab890c660759\") " pod="kube-system/cilium-lbhk2" Dec 13 06:43:37.840991 kubelet[2036]: I1213 06:43:37.840994 2036 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7c0ef689-b9f8-48ea-8e4d-ab890c660759-cni-path\") pod \"cilium-lbhk2\" (UID: \"7c0ef689-b9f8-48ea-8e4d-ab890c660759\") " pod="kube-system/cilium-lbhk2" Dec 13 06:43:37.841306 kubelet[2036]: I1213 06:43:37.841024 2036 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7c0ef689-b9f8-48ea-8e4d-ab890c660759-host-proc-sys-kernel\") pod \"cilium-lbhk2\" (UID: \"7c0ef689-b9f8-48ea-8e4d-ab890c660759\") " pod="kube-system/cilium-lbhk2" Dec 13 06:43:37.841306 kubelet[2036]: I1213 06:43:37.841054 2036 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7c0ef689-b9f8-48ea-8e4d-ab890c660759-cilium-run\") pod \"cilium-lbhk2\" (UID: \"7c0ef689-b9f8-48ea-8e4d-ab890c660759\") " pod="kube-system/cilium-lbhk2" Dec 13 06:43:37.841306 kubelet[2036]: I1213 06:43:37.841098 2036 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7c0ef689-b9f8-48ea-8e4d-ab890c660759-cilium-config-path\") pod \"cilium-lbhk2\" (UID: \"7c0ef689-b9f8-48ea-8e4d-ab890c660759\") " pod="kube-system/cilium-lbhk2" Dec 13 06:43:37.841306 kubelet[2036]: I1213 06:43:37.841125 2036 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4qxr\" (UniqueName: \"kubernetes.io/projected/7c0ef689-b9f8-48ea-8e4d-ab890c660759-kube-api-access-f4qxr\") pod \"cilium-lbhk2\" (UID: \"7c0ef689-b9f8-48ea-8e4d-ab890c660759\") " pod="kube-system/cilium-lbhk2" Dec 13 06:43:37.841306 kubelet[2036]: I1213 06:43:37.841157 2036 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9d649087-60af-42f0-8f53-3fe4a25cf89f-kube-proxy\") pod \"kube-proxy-d9kdb\" (UID: \"9d649087-60af-42f0-8f53-3fe4a25cf89f\") " pod="kube-system/kube-proxy-d9kdb" Dec 13 06:43:37.841601 kubelet[2036]: I1213 06:43:37.841184 2036 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5s6rz\" (UniqueName: \"kubernetes.io/projected/9d649087-60af-42f0-8f53-3fe4a25cf89f-kube-api-access-5s6rz\") pod \"kube-proxy-d9kdb\" (UID: \"9d649087-60af-42f0-8f53-3fe4a25cf89f\") " pod="kube-system/kube-proxy-d9kdb" Dec 13 06:43:37.841601 kubelet[2036]: I1213 06:43:37.841213 2036 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7c0ef689-b9f8-48ea-8e4d-ab890c660759-lib-modules\") pod \"cilium-lbhk2\" (UID: \"7c0ef689-b9f8-48ea-8e4d-ab890c660759\") " pod="kube-system/cilium-lbhk2" Dec 13 06:43:37.841601 kubelet[2036]: I1213 06:43:37.841240 2036 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7c0ef689-b9f8-48ea-8e4d-ab890c660759-clustermesh-secrets\") pod \"cilium-lbhk2\" (UID: \"7c0ef689-b9f8-48ea-8e4d-ab890c660759\") " pod="kube-system/cilium-lbhk2" Dec 13 06:43:37.841601 kubelet[2036]: I1213 06:43:37.841265 2036 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7c0ef689-b9f8-48ea-8e4d-ab890c660759-cilium-cgroup\") pod \"cilium-lbhk2\" (UID: \"7c0ef689-b9f8-48ea-8e4d-ab890c660759\") " pod="kube-system/cilium-lbhk2" Dec 13 06:43:37.841601 kubelet[2036]: I1213 06:43:37.841295 2036 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7c0ef689-b9f8-48ea-8e4d-ab890c660759-etc-cni-netd\") pod \"cilium-lbhk2\" (UID: \"7c0ef689-b9f8-48ea-8e4d-ab890c660759\") " pod="kube-system/cilium-lbhk2" Dec 13 06:43:37.841938 kubelet[2036]: I1213 06:43:37.841321 2036 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7c0ef689-b9f8-48ea-8e4d-ab890c660759-host-proc-sys-net\") pod \"cilium-lbhk2\" (UID: \"7c0ef689-b9f8-48ea-8e4d-ab890c660759\") " pod="kube-system/cilium-lbhk2" Dec 13 06:43:37.841938 kubelet[2036]: I1213 06:43:37.841349 2036 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9d649087-60af-42f0-8f53-3fe4a25cf89f-xtables-lock\") pod 
\"kube-proxy-d9kdb\" (UID: \"9d649087-60af-42f0-8f53-3fe4a25cf89f\") " pod="kube-system/kube-proxy-d9kdb" Dec 13 06:43:37.841938 kubelet[2036]: I1213 06:43:37.841379 2036 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7c0ef689-b9f8-48ea-8e4d-ab890c660759-bpf-maps\") pod \"cilium-lbhk2\" (UID: \"7c0ef689-b9f8-48ea-8e4d-ab890c660759\") " pod="kube-system/cilium-lbhk2" Dec 13 06:43:37.841938 kubelet[2036]: I1213 06:43:37.841407 2036 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9d649087-60af-42f0-8f53-3fe4a25cf89f-lib-modules\") pod \"kube-proxy-d9kdb\" (UID: \"9d649087-60af-42f0-8f53-3fe4a25cf89f\") " pod="kube-system/kube-proxy-d9kdb" Dec 13 06:43:37.841938 kubelet[2036]: I1213 06:43:37.841435 2036 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7c0ef689-b9f8-48ea-8e4d-ab890c660759-xtables-lock\") pod \"cilium-lbhk2\" (UID: \"7c0ef689-b9f8-48ea-8e4d-ab890c660759\") " pod="kube-system/cilium-lbhk2" Dec 13 06:43:37.841938 kubelet[2036]: I1213 06:43:37.841462 2036 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7c0ef689-b9f8-48ea-8e4d-ab890c660759-hubble-tls\") pod \"cilium-lbhk2\" (UID: \"7c0ef689-b9f8-48ea-8e4d-ab890c660759\") " pod="kube-system/cilium-lbhk2" Dec 13 06:43:38.116206 env[1193]: time="2024-12-13T06:43:38.115216134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-d9kdb,Uid:9d649087-60af-42f0-8f53-3fe4a25cf89f,Namespace:kube-system,Attempt:0,}" Dec 13 06:43:38.131593 env[1193]: time="2024-12-13T06:43:38.131510290Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-lbhk2,Uid:7c0ef689-b9f8-48ea-8e4d-ab890c660759,Namespace:kube-system,Attempt:0,}" Dec 13 06:43:38.160989 env[1193]: time="2024-12-13T06:43:38.160713018Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 06:43:38.160989 env[1193]: time="2024-12-13T06:43:38.160778901Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 06:43:38.160989 env[1193]: time="2024-12-13T06:43:38.160797254Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 06:43:38.161421 env[1193]: time="2024-12-13T06:43:38.161072436Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/38fb1f2e49178fffab4a3d4cb23a67387c251b9a74ac4c04e58e0402acfdfad0 pid=2115 runtime=io.containerd.runc.v2 Dec 13 06:43:38.193002 env[1193]: time="2024-12-13T06:43:38.192853117Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 06:43:38.193534 env[1193]: time="2024-12-13T06:43:38.193484070Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 06:43:38.193747 env[1193]: time="2024-12-13T06:43:38.193692134Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 06:43:38.196476 env[1193]: time="2024-12-13T06:43:38.196418696Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9cba52b07a62963c95351f6ec2300a11ae06b5f4a388c81ac7abcfae88cdb111 pid=2134 runtime=io.containerd.runc.v2 Dec 13 06:43:38.229626 systemd[1]: Started cri-containerd-38fb1f2e49178fffab4a3d4cb23a67387c251b9a74ac4c04e58e0402acfdfad0.scope. Dec 13 06:43:38.244258 systemd[1]: Started cri-containerd-9cba52b07a62963c95351f6ec2300a11ae06b5f4a388c81ac7abcfae88cdb111.scope. Dec 13 06:43:38.272691 kubelet[2036]: I1213 06:43:38.271589 2036 topology_manager.go:215] "Topology Admit Handler" podUID="d5c32872-421c-4b54-b402-4d33a265259a" podNamespace="kube-system" podName="cilium-operator-599987898-qvd79" Dec 13 06:43:38.280413 systemd[1]: Created slice kubepods-besteffort-podd5c32872_421c_4b54_b402_4d33a265259a.slice. Dec 13 06:43:38.376734 env[1193]: time="2024-12-13T06:43:38.376532689Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lbhk2,Uid:7c0ef689-b9f8-48ea-8e4d-ab890c660759,Namespace:kube-system,Attempt:0,} returns sandbox id \"9cba52b07a62963c95351f6ec2300a11ae06b5f4a388c81ac7abcfae88cdb111\"" Dec 13 06:43:38.384091 env[1193]: time="2024-12-13T06:43:38.384031359Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 13 06:43:38.444512 kubelet[2036]: I1213 06:43:38.444451 2036 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvw2n\" (UniqueName: \"kubernetes.io/projected/d5c32872-421c-4b54-b402-4d33a265259a-kube-api-access-rvw2n\") pod \"cilium-operator-599987898-qvd79\" (UID: \"d5c32872-421c-4b54-b402-4d33a265259a\") " pod="kube-system/cilium-operator-599987898-qvd79" Dec 13 06:43:38.445188 kubelet[2036]: I1213 06:43:38.445161 2036 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d5c32872-421c-4b54-b402-4d33a265259a-cilium-config-path\") pod \"cilium-operator-599987898-qvd79\" (UID: \"d5c32872-421c-4b54-b402-4d33a265259a\") " pod="kube-system/cilium-operator-599987898-qvd79" Dec 13 06:43:38.455702 env[1193]: time="2024-12-13T06:43:38.455612266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-d9kdb,Uid:9d649087-60af-42f0-8f53-3fe4a25cf89f,Namespace:kube-system,Attempt:0,} returns sandbox id \"38fb1f2e49178fffab4a3d4cb23a67387c251b9a74ac4c04e58e0402acfdfad0\"" Dec 13 06:43:38.461389 env[1193]: time="2024-12-13T06:43:38.461334541Z" level=info msg="CreateContainer within sandbox \"38fb1f2e49178fffab4a3d4cb23a67387c251b9a74ac4c04e58e0402acfdfad0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 06:43:38.482555 env[1193]: time="2024-12-13T06:43:38.482458045Z" level=info msg="CreateContainer within sandbox \"38fb1f2e49178fffab4a3d4cb23a67387c251b9a74ac4c04e58e0402acfdfad0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"15fa89de3d6b270922ef662aa6c124644e335b78d6395d4dea498a38f08b5123\"" Dec 13 06:43:38.485739 env[1193]: time="2024-12-13T06:43:38.485640844Z" level=info msg="StartContainer for \"15fa89de3d6b270922ef662aa6c124644e335b78d6395d4dea498a38f08b5123\"" Dec 13 06:43:38.520581 systemd[1]: Started cri-containerd-15fa89de3d6b270922ef662aa6c124644e335b78d6395d4dea498a38f08b5123.scope. 
Dec 13 06:43:38.587877 env[1193]: time="2024-12-13T06:43:38.587221719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-qvd79,Uid:d5c32872-421c-4b54-b402-4d33a265259a,Namespace:kube-system,Attempt:0,}" Dec 13 06:43:38.591582 env[1193]: time="2024-12-13T06:43:38.591540764Z" level=info msg="StartContainer for \"15fa89de3d6b270922ef662aa6c124644e335b78d6395d4dea498a38f08b5123\" returns successfully" Dec 13 06:43:38.624978 env[1193]: time="2024-12-13T06:43:38.624769568Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 06:43:38.624978 env[1193]: time="2024-12-13T06:43:38.624855155Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 06:43:38.624978 env[1193]: time="2024-12-13T06:43:38.624873970Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 06:43:38.625964 env[1193]: time="2024-12-13T06:43:38.625214638Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0a3138c587f62f941b9df1d0177fdd5656a33eea3a8cb2f45851e6aa03c5b138 pid=2229 runtime=io.containerd.runc.v2 Dec 13 06:43:38.649863 systemd[1]: Started cri-containerd-0a3138c587f62f941b9df1d0177fdd5656a33eea3a8cb2f45851e6aa03c5b138.scope. 
Dec 13 06:43:38.674907 kubelet[2036]: I1213 06:43:38.674809 2036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-d9kdb" podStartSLOduration=1.674778375 podStartE2EDuration="1.674778375s" podCreationTimestamp="2024-12-13 06:43:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 06:43:38.673091168 +0000 UTC m=+15.326046275" watchObservedRunningTime="2024-12-13 06:43:38.674778375 +0000 UTC m=+15.327733483" Dec 13 06:43:38.761922 env[1193]: time="2024-12-13T06:43:38.761824820Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-qvd79,Uid:d5c32872-421c-4b54-b402-4d33a265259a,Namespace:kube-system,Attempt:0,} returns sandbox id \"0a3138c587f62f941b9df1d0177fdd5656a33eea3a8cb2f45851e6aa03c5b138\"" Dec 13 06:43:47.005943 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2738324492.mount: Deactivated successfully. 
Dec 13 06:43:51.552824 env[1193]: time="2024-12-13T06:43:51.552676952Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:43:51.555519 env[1193]: time="2024-12-13T06:43:51.555477986Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:43:51.560093 env[1193]: time="2024-12-13T06:43:51.558479409Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:43:51.560506 env[1193]: time="2024-12-13T06:43:51.559565042Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Dec 13 06:43:51.562329 env[1193]: time="2024-12-13T06:43:51.562289224Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 06:43:51.564885 env[1193]: time="2024-12-13T06:43:51.564454543Z" level=info msg="CreateContainer within sandbox \"9cba52b07a62963c95351f6ec2300a11ae06b5f4a388c81ac7abcfae88cdb111\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 06:43:51.594826 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount416847843.mount: Deactivated successfully. Dec 13 06:43:51.603359 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2260942837.mount: Deactivated successfully. 
Dec 13 06:43:51.607289 env[1193]: time="2024-12-13T06:43:51.607152496Z" level=info msg="CreateContainer within sandbox \"9cba52b07a62963c95351f6ec2300a11ae06b5f4a388c81ac7abcfae88cdb111\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"fef2a353b548c1264b6a267c90229da9c06ce1142713e94fd4e8885dd2352141\"" Dec 13 06:43:51.609850 env[1193]: time="2024-12-13T06:43:51.607987730Z" level=info msg="StartContainer for \"fef2a353b548c1264b6a267c90229da9c06ce1142713e94fd4e8885dd2352141\"" Dec 13 06:43:51.654035 systemd[1]: Started cri-containerd-fef2a353b548c1264b6a267c90229da9c06ce1142713e94fd4e8885dd2352141.scope. Dec 13 06:43:51.716848 env[1193]: time="2024-12-13T06:43:51.716788066Z" level=info msg="StartContainer for \"fef2a353b548c1264b6a267c90229da9c06ce1142713e94fd4e8885dd2352141\" returns successfully" Dec 13 06:43:51.732190 systemd[1]: cri-containerd-fef2a353b548c1264b6a267c90229da9c06ce1142713e94fd4e8885dd2352141.scope: Deactivated successfully. Dec 13 06:43:51.854421 env[1193]: time="2024-12-13T06:43:51.854257754Z" level=info msg="shim disconnected" id=fef2a353b548c1264b6a267c90229da9c06ce1142713e94fd4e8885dd2352141 Dec 13 06:43:51.854833 env[1193]: time="2024-12-13T06:43:51.854800000Z" level=warning msg="cleaning up after shim disconnected" id=fef2a353b548c1264b6a267c90229da9c06ce1142713e94fd4e8885dd2352141 namespace=k8s.io Dec 13 06:43:51.854998 env[1193]: time="2024-12-13T06:43:51.854969236Z" level=info msg="cleaning up dead shim" Dec 13 06:43:51.867702 env[1193]: time="2024-12-13T06:43:51.867623763Z" level=warning msg="cleanup warnings time=\"2024-12-13T06:43:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2440 runtime=io.containerd.runc.v2\n" Dec 13 06:43:52.591678 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fef2a353b548c1264b6a267c90229da9c06ce1142713e94fd4e8885dd2352141-rootfs.mount: Deactivated successfully. 
Dec 13 06:43:52.714412 env[1193]: time="2024-12-13T06:43:52.714342298Z" level=info msg="CreateContainer within sandbox \"9cba52b07a62963c95351f6ec2300a11ae06b5f4a388c81ac7abcfae88cdb111\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 06:43:52.731836 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2299529773.mount: Deactivated successfully. Dec 13 06:43:52.742182 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount382697439.mount: Deactivated successfully. Dec 13 06:43:52.744701 env[1193]: time="2024-12-13T06:43:52.744627410Z" level=info msg="CreateContainer within sandbox \"9cba52b07a62963c95351f6ec2300a11ae06b5f4a388c81ac7abcfae88cdb111\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"92c7214fe74ca1211dc6817c3bd5904d395d04b745646e45260a484e1ca90230\"" Dec 13 06:43:52.746266 env[1193]: time="2024-12-13T06:43:52.746223377Z" level=info msg="StartContainer for \"92c7214fe74ca1211dc6817c3bd5904d395d04b745646e45260a484e1ca90230\"" Dec 13 06:43:52.775188 systemd[1]: Started cri-containerd-92c7214fe74ca1211dc6817c3bd5904d395d04b745646e45260a484e1ca90230.scope. Dec 13 06:43:52.835508 env[1193]: time="2024-12-13T06:43:52.835435535Z" level=info msg="StartContainer for \"92c7214fe74ca1211dc6817c3bd5904d395d04b745646e45260a484e1ca90230\" returns successfully" Dec 13 06:43:52.854695 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 06:43:52.855073 systemd[1]: Stopped systemd-sysctl.service. Dec 13 06:43:52.856146 systemd[1]: Stopping systemd-sysctl.service... Dec 13 06:43:52.859438 systemd[1]: Starting systemd-sysctl.service... Dec 13 06:43:52.864268 systemd[1]: cri-containerd-92c7214fe74ca1211dc6817c3bd5904d395d04b745646e45260a484e1ca90230.scope: Deactivated successfully. Dec 13 06:43:52.878041 systemd[1]: Finished systemd-sysctl.service. 
Dec 13 06:43:52.905348 env[1193]: time="2024-12-13T06:43:52.905269159Z" level=info msg="shim disconnected" id=92c7214fe74ca1211dc6817c3bd5904d395d04b745646e45260a484e1ca90230 Dec 13 06:43:52.905348 env[1193]: time="2024-12-13T06:43:52.905337996Z" level=warning msg="cleaning up after shim disconnected" id=92c7214fe74ca1211dc6817c3bd5904d395d04b745646e45260a484e1ca90230 namespace=k8s.io Dec 13 06:43:52.905348 env[1193]: time="2024-12-13T06:43:52.905356093Z" level=info msg="cleaning up dead shim" Dec 13 06:43:52.918749 env[1193]: time="2024-12-13T06:43:52.918663434Z" level=warning msg="cleanup warnings time=\"2024-12-13T06:43:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2502 runtime=io.containerd.runc.v2\n" Dec 13 06:43:53.722655 env[1193]: time="2024-12-13T06:43:53.722150852Z" level=info msg="CreateContainer within sandbox \"9cba52b07a62963c95351f6ec2300a11ae06b5f4a388c81ac7abcfae88cdb111\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 06:43:53.748393 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2945650621.mount: Deactivated successfully. Dec 13 06:43:53.779845 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3635734113.mount: Deactivated successfully. Dec 13 06:43:53.788982 env[1193]: time="2024-12-13T06:43:53.788899904Z" level=info msg="CreateContainer within sandbox \"9cba52b07a62963c95351f6ec2300a11ae06b5f4a388c81ac7abcfae88cdb111\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7829bffcf661275a950406c4f44e5dd62e3685b98c0c0ae2c1abfce217797788\"" Dec 13 06:43:53.793136 env[1193]: time="2024-12-13T06:43:53.793080019Z" level=info msg="StartContainer for \"7829bffcf661275a950406c4f44e5dd62e3685b98c0c0ae2c1abfce217797788\"" Dec 13 06:43:53.856312 systemd[1]: Started cri-containerd-7829bffcf661275a950406c4f44e5dd62e3685b98c0c0ae2c1abfce217797788.scope. 
Dec 13 06:43:53.950248 env[1193]: time="2024-12-13T06:43:53.950176935Z" level=info msg="StartContainer for \"7829bffcf661275a950406c4f44e5dd62e3685b98c0c0ae2c1abfce217797788\" returns successfully" Dec 13 06:43:53.956187 systemd[1]: cri-containerd-7829bffcf661275a950406c4f44e5dd62e3685b98c0c0ae2c1abfce217797788.scope: Deactivated successfully. Dec 13 06:43:54.034489 env[1193]: time="2024-12-13T06:43:54.034413887Z" level=info msg="shim disconnected" id=7829bffcf661275a950406c4f44e5dd62e3685b98c0c0ae2c1abfce217797788 Dec 13 06:43:54.034888 env[1193]: time="2024-12-13T06:43:54.034854398Z" level=warning msg="cleaning up after shim disconnected" id=7829bffcf661275a950406c4f44e5dd62e3685b98c0c0ae2c1abfce217797788 namespace=k8s.io Dec 13 06:43:54.035063 env[1193]: time="2024-12-13T06:43:54.035033502Z" level=info msg="cleaning up dead shim" Dec 13 06:43:54.054611 env[1193]: time="2024-12-13T06:43:54.054552268Z" level=warning msg="cleanup warnings time=\"2024-12-13T06:43:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2559 runtime=io.containerd.runc.v2\n" Dec 13 06:43:54.733571 env[1193]: time="2024-12-13T06:43:54.733497386Z" level=info msg="CreateContainer within sandbox \"9cba52b07a62963c95351f6ec2300a11ae06b5f4a388c81ac7abcfae88cdb111\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 06:43:54.773749 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1444624480.mount: Deactivated successfully. Dec 13 06:43:54.784393 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1985902910.mount: Deactivated successfully. 
Dec 13 06:43:54.791410 env[1193]: time="2024-12-13T06:43:54.791353449Z" level=info msg="CreateContainer within sandbox \"9cba52b07a62963c95351f6ec2300a11ae06b5f4a388c81ac7abcfae88cdb111\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1c483e67de125025805d895fd87e69a5ce2837a77108fc7e292ab01d776731cd\""
Dec 13 06:43:54.799362 env[1193]: time="2024-12-13T06:43:54.799300823Z" level=info msg="StartContainer for \"1c483e67de125025805d895fd87e69a5ce2837a77108fc7e292ab01d776731cd\""
Dec 13 06:43:54.843589 systemd[1]: Started cri-containerd-1c483e67de125025805d895fd87e69a5ce2837a77108fc7e292ab01d776731cd.scope.
Dec 13 06:43:54.901328 systemd[1]: cri-containerd-1c483e67de125025805d895fd87e69a5ce2837a77108fc7e292ab01d776731cd.scope: Deactivated successfully.
Dec 13 06:43:54.903795 env[1193]: time="2024-12-13T06:43:54.903325617Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7c0ef689_b9f8_48ea_8e4d_ab890c660759.slice/cri-containerd-1c483e67de125025805d895fd87e69a5ce2837a77108fc7e292ab01d776731cd.scope/memory.events\": no such file or directory"
Dec 13 06:43:54.912645 env[1193]: time="2024-12-13T06:43:54.912594849Z" level=info msg="StartContainer for \"1c483e67de125025805d895fd87e69a5ce2837a77108fc7e292ab01d776731cd\" returns successfully"
Dec 13 06:43:55.056768 env[1193]: time="2024-12-13T06:43:55.056703187Z" level=info msg="shim disconnected" id=1c483e67de125025805d895fd87e69a5ce2837a77108fc7e292ab01d776731cd
Dec 13 06:43:55.056768 env[1193]: time="2024-12-13T06:43:55.056765543Z" level=warning msg="cleaning up after shim disconnected" id=1c483e67de125025805d895fd87e69a5ce2837a77108fc7e292ab01d776731cd namespace=k8s.io
Dec 13 06:43:55.057418 env[1193]: time="2024-12-13T06:43:55.056782512Z" level=info msg="cleaning up dead shim"
Dec 13 06:43:55.076976 env[1193]: time="2024-12-13T06:43:55.076891959Z" level=warning msg="cleanup warnings time=\"2024-12-13T06:43:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2615 runtime=io.containerd.runc.v2\n"
Dec 13 06:43:55.284747 env[1193]: time="2024-12-13T06:43:55.284671834Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 06:43:55.286480 env[1193]: time="2024-12-13T06:43:55.286435983Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 06:43:55.289109 env[1193]: time="2024-12-13T06:43:55.289064337Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 06:43:55.290029 env[1193]: time="2024-12-13T06:43:55.289985647Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Dec 13 06:43:55.296098 env[1193]: time="2024-12-13T06:43:55.296032934Z" level=info msg="CreateContainer within sandbox \"0a3138c587f62f941b9df1d0177fdd5656a33eea3a8cb2f45851e6aa03c5b138\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Dec 13 06:43:55.318417 env[1193]: time="2024-12-13T06:43:55.318259662Z" level=info msg="CreateContainer within sandbox \"0a3138c587f62f941b9df1d0177fdd5656a33eea3a8cb2f45851e6aa03c5b138\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"7a9909511906f94d01fffd435dbd023f9eecc4f2fc497801fff76b81cc22d020\""
Dec 13 06:43:55.321864 env[1193]: time="2024-12-13T06:43:55.321812752Z" level=info msg="StartContainer for \"7a9909511906f94d01fffd435dbd023f9eecc4f2fc497801fff76b81cc22d020\""
Dec 13 06:43:55.349578 systemd[1]: Started cri-containerd-7a9909511906f94d01fffd435dbd023f9eecc4f2fc497801fff76b81cc22d020.scope.
Dec 13 06:43:55.405193 env[1193]: time="2024-12-13T06:43:55.405113868Z" level=info msg="StartContainer for \"7a9909511906f94d01fffd435dbd023f9eecc4f2fc497801fff76b81cc22d020\" returns successfully"
Dec 13 06:43:55.732327 env[1193]: time="2024-12-13T06:43:55.732167302Z" level=info msg="CreateContainer within sandbox \"9cba52b07a62963c95351f6ec2300a11ae06b5f4a388c81ac7abcfae88cdb111\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 06:43:55.756253 env[1193]: time="2024-12-13T06:43:55.756199802Z" level=info msg="CreateContainer within sandbox \"9cba52b07a62963c95351f6ec2300a11ae06b5f4a388c81ac7abcfae88cdb111\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b5c3648267635afb66663e7c28902607388785916947d57705f4e9c49c3f59ad\""
Dec 13 06:43:55.757784 env[1193]: time="2024-12-13T06:43:55.757746779Z" level=info msg="StartContainer for \"b5c3648267635afb66663e7c28902607388785916947d57705f4e9c49c3f59ad\""
Dec 13 06:43:55.800841 systemd[1]: Started cri-containerd-b5c3648267635afb66663e7c28902607388785916947d57705f4e9c49c3f59ad.scope.
Dec 13 06:43:55.925073 env[1193]: time="2024-12-13T06:43:55.925016019Z" level=info msg="StartContainer for \"b5c3648267635afb66663e7c28902607388785916947d57705f4e9c49c3f59ad\" returns successfully"
Dec 13 06:43:56.403838 kubelet[2036]: I1213 06:43:56.402652 2036 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Dec 13 06:43:56.480857 kubelet[2036]: I1213 06:43:56.480774 2036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-qvd79" podStartSLOduration=1.9536386000000001 podStartE2EDuration="18.48075109s" podCreationTimestamp="2024-12-13 06:43:38 +0000 UTC" firstStartedPulling="2024-12-13 06:43:38.764564181 +0000 UTC m=+15.417519269" lastFinishedPulling="2024-12-13 06:43:55.291676659 +0000 UTC m=+31.944631759" observedRunningTime="2024-12-13 06:43:55.915379738 +0000 UTC m=+32.568334844" watchObservedRunningTime="2024-12-13 06:43:56.48075109 +0000 UTC m=+33.133706198"
Dec 13 06:43:56.481480 kubelet[2036]: I1213 06:43:56.481445 2036 topology_manager.go:215] "Topology Admit Handler" podUID="81a75eef-a081-4d99-9925-49bb3435da6f" podNamespace="kube-system" podName="coredns-7db6d8ff4d-fncxx"
Dec 13 06:43:56.485342 kubelet[2036]: I1213 06:43:56.485308 2036 topology_manager.go:215] "Topology Admit Handler" podUID="3d984822-f885-4425-955a-e510756dae90" podNamespace="kube-system" podName="coredns-7db6d8ff4d-2hb5l"
Dec 13 06:43:56.490650 systemd[1]: Created slice kubepods-burstable-pod81a75eef_a081_4d99_9925_49bb3435da6f.slice.
Dec 13 06:43:56.501264 systemd[1]: Created slice kubepods-burstable-pod3d984822_f885_4425_955a_e510756dae90.slice.
Dec 13 06:43:56.518323 kubelet[2036]: W1213 06:43:56.518207 2036 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:srv-7lx2b.gb1.brightbox.com" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'srv-7lx2b.gb1.brightbox.com' and this object
Dec 13 06:43:56.518323 kubelet[2036]: E1213 06:43:56.518320 2036 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:srv-7lx2b.gb1.brightbox.com" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'srv-7lx2b.gb1.brightbox.com' and this object
Dec 13 06:43:56.599886 kubelet[2036]: I1213 06:43:56.599817 2036 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3d984822-f885-4425-955a-e510756dae90-config-volume\") pod \"coredns-7db6d8ff4d-2hb5l\" (UID: \"3d984822-f885-4425-955a-e510756dae90\") " pod="kube-system/coredns-7db6d8ff4d-2hb5l"
Dec 13 06:43:56.599886 kubelet[2036]: I1213 06:43:56.599888 2036 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqrnm\" (UniqueName: \"kubernetes.io/projected/3d984822-f885-4425-955a-e510756dae90-kube-api-access-vqrnm\") pod \"coredns-7db6d8ff4d-2hb5l\" (UID: \"3d984822-f885-4425-955a-e510756dae90\") " pod="kube-system/coredns-7db6d8ff4d-2hb5l"
Dec 13 06:43:56.600166 kubelet[2036]: I1213 06:43:56.599948 2036 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/81a75eef-a081-4d99-9925-49bb3435da6f-config-volume\") pod \"coredns-7db6d8ff4d-fncxx\" (UID: \"81a75eef-a081-4d99-9925-49bb3435da6f\") " pod="kube-system/coredns-7db6d8ff4d-fncxx"
Dec 13 06:43:56.600166 kubelet[2036]: I1213 06:43:56.599983 2036 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7mzk\" (UniqueName: \"kubernetes.io/projected/81a75eef-a081-4d99-9925-49bb3435da6f-kube-api-access-k7mzk\") pod \"coredns-7db6d8ff4d-fncxx\" (UID: \"81a75eef-a081-4d99-9925-49bb3435da6f\") " pod="kube-system/coredns-7db6d8ff4d-fncxx"
Dec 13 06:43:56.823424 kubelet[2036]: I1213 06:43:56.823341 2036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-lbhk2" podStartSLOduration=6.644402779 podStartE2EDuration="19.823292492s" podCreationTimestamp="2024-12-13 06:43:37 +0000 UTC" firstStartedPulling="2024-12-13 06:43:38.382472764 +0000 UTC m=+15.035427852" lastFinishedPulling="2024-12-13 06:43:51.561362467 +0000 UTC m=+28.214317565" observedRunningTime="2024-12-13 06:43:56.818782502 +0000 UTC m=+33.471737612" watchObservedRunningTime="2024-12-13 06:43:56.823292492 +0000 UTC m=+33.476247595"
Dec 13 06:43:57.702648 kubelet[2036]: E1213 06:43:57.702581 2036 configmap.go:199] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
Dec 13 06:43:57.703577 kubelet[2036]: E1213 06:43:57.702857 2036 configmap.go:199] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
Dec 13 06:43:57.704612 kubelet[2036]: E1213 06:43:57.704569 2036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3d984822-f885-4425-955a-e510756dae90-config-volume podName:3d984822-f885-4425-955a-e510756dae90 nodeName:}" failed. No retries permitted until 2024-12-13 06:43:58.202675607 +0000 UTC m=+34.855630708 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/3d984822-f885-4425-955a-e510756dae90-config-volume") pod "coredns-7db6d8ff4d-2hb5l" (UID: "3d984822-f885-4425-955a-e510756dae90") : failed to sync configmap cache: timed out waiting for the condition
Dec 13 06:43:57.704776 kubelet[2036]: E1213 06:43:57.704618 2036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/81a75eef-a081-4d99-9925-49bb3435da6f-config-volume podName:81a75eef-a081-4d99-9925-49bb3435da6f nodeName:}" failed. No retries permitted until 2024-12-13 06:43:58.204605423 +0000 UTC m=+34.857560517 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/81a75eef-a081-4d99-9925-49bb3435da6f-config-volume") pod "coredns-7db6d8ff4d-fncxx" (UID: "81a75eef-a081-4d99-9925-49bb3435da6f") : failed to sync configmap cache: timed out waiting for the condition
Dec 13 06:43:58.298306 env[1193]: time="2024-12-13T06:43:58.298227713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-fncxx,Uid:81a75eef-a081-4d99-9925-49bb3435da6f,Namespace:kube-system,Attempt:0,}"
Dec 13 06:43:58.305748 env[1193]: time="2024-12-13T06:43:58.305671028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-2hb5l,Uid:3d984822-f885-4425-955a-e510756dae90,Namespace:kube-system,Attempt:0,}"
Dec 13 06:43:59.232031 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready
Dec 13 06:43:59.235267 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Dec 13 06:43:59.244102 systemd-networkd[1026]: cilium_host: Link UP
Dec 13 06:43:59.246173 systemd-networkd[1026]: cilium_net: Link UP
Dec 13 06:43:59.248750 systemd-networkd[1026]: cilium_net: Gained carrier
Dec 13 06:43:59.250290 systemd-networkd[1026]: cilium_host: Gained carrier
Dec 13 06:43:59.420025 systemd-networkd[1026]: cilium_vxlan: Link UP
Dec 13 06:43:59.420039 systemd-networkd[1026]: cilium_vxlan: Gained carrier
Dec 13 06:43:59.450237 systemd-networkd[1026]: cilium_net: Gained IPv6LL
Dec 13 06:43:59.997208 kernel: NET: Registered PF_ALG protocol family
Dec 13 06:44:00.050455 systemd-networkd[1026]: cilium_host: Gained IPv6LL
Dec 13 06:44:01.138333 systemd-networkd[1026]: cilium_vxlan: Gained IPv6LL
Dec 13 06:44:01.172802 systemd-networkd[1026]: lxc_health: Link UP
Dec 13 06:44:01.189330 systemd-networkd[1026]: lxc_health: Gained carrier
Dec 13 06:44:01.190072 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Dec 13 06:44:01.908107 systemd-networkd[1026]: lxc78dba6ba8b63: Link UP
Dec 13 06:44:01.913384 systemd-networkd[1026]: lxc91122f386958: Link UP
Dec 13 06:44:01.923172 kernel: eth0: renamed from tmp03359
Dec 13 06:44:01.941942 kernel: eth0: renamed from tmp735d7
Dec 13 06:44:01.949951 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc78dba6ba8b63: link becomes ready
Dec 13 06:44:01.955317 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc91122f386958: link becomes ready
Dec 13 06:44:01.950161 systemd-networkd[1026]: lxc78dba6ba8b63: Gained carrier
Dec 13 06:44:01.959300 systemd-networkd[1026]: lxc91122f386958: Gained carrier
Dec 13 06:44:02.482281 systemd-networkd[1026]: lxc_health: Gained IPv6LL
Dec 13 06:44:03.058235 systemd-networkd[1026]: lxc91122f386958: Gained IPv6LL
Dec 13 06:44:03.250315 systemd-networkd[1026]: lxc78dba6ba8b63: Gained IPv6LL
Dec 13 06:44:05.495516 kubelet[2036]: I1213 06:44:05.493844 2036 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 13 06:44:07.708440 env[1193]: time="2024-12-13T06:44:07.708053755Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 06:44:07.708440 env[1193]: time="2024-12-13T06:44:07.708402059Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 06:44:07.709384 env[1193]: time="2024-12-13T06:44:07.708481685Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 06:44:07.709384 env[1193]: time="2024-12-13T06:44:07.708973958Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/735d7db559561b5a6bb71f54ed86d99953618dd8cc514d1c6a8705d02c64ca0f pid=3195 runtime=io.containerd.runc.v2
Dec 13 06:44:07.749980 systemd[1]: run-containerd-runc-k8s.io-735d7db559561b5a6bb71f54ed86d99953618dd8cc514d1c6a8705d02c64ca0f-runc.S5vOEn.mount: Deactivated successfully.
Dec 13 06:44:07.762069 systemd[1]: Started cri-containerd-735d7db559561b5a6bb71f54ed86d99953618dd8cc514d1c6a8705d02c64ca0f.scope.
Dec 13 06:44:07.791047 env[1193]: time="2024-12-13T06:44:07.790207171Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 06:44:07.791047 env[1193]: time="2024-12-13T06:44:07.790304997Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 06:44:07.791047 env[1193]: time="2024-12-13T06:44:07.790337045Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 06:44:07.791047 env[1193]: time="2024-12-13T06:44:07.790724620Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/03359d32611a42206dd080dd66b0b426a181e8cb7a7c3a7e60e41473096bce91 pid=3222 runtime=io.containerd.runc.v2
Dec 13 06:44:07.819837 systemd[1]: Started cri-containerd-03359d32611a42206dd080dd66b0b426a181e8cb7a7c3a7e60e41473096bce91.scope.
Dec 13 06:44:07.917957 env[1193]: time="2024-12-13T06:44:07.917843495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-2hb5l,Uid:3d984822-f885-4425-955a-e510756dae90,Namespace:kube-system,Attempt:0,} returns sandbox id \"03359d32611a42206dd080dd66b0b426a181e8cb7a7c3a7e60e41473096bce91\""
Dec 13 06:44:07.933387 env[1193]: time="2024-12-13T06:44:07.933316646Z" level=info msg="CreateContainer within sandbox \"03359d32611a42206dd080dd66b0b426a181e8cb7a7c3a7e60e41473096bce91\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 13 06:44:07.956079 env[1193]: time="2024-12-13T06:44:07.956018389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-fncxx,Uid:81a75eef-a081-4d99-9925-49bb3435da6f,Namespace:kube-system,Attempt:0,} returns sandbox id \"735d7db559561b5a6bb71f54ed86d99953618dd8cc514d1c6a8705d02c64ca0f\""
Dec 13 06:44:07.960813 env[1193]: time="2024-12-13T06:44:07.959743202Z" level=info msg="CreateContainer within sandbox \"735d7db559561b5a6bb71f54ed86d99953618dd8cc514d1c6a8705d02c64ca0f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 13 06:44:07.979312 env[1193]: time="2024-12-13T06:44:07.978993661Z" level=info msg="CreateContainer within sandbox \"03359d32611a42206dd080dd66b0b426a181e8cb7a7c3a7e60e41473096bce91\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"82321002acea39770fa40a36da5e06e65d3473090d1486b1a5496e075bbcacec\""
Dec 13 06:44:07.981272 env[1193]: time="2024-12-13T06:44:07.981212245Z" level=info msg="StartContainer for \"82321002acea39770fa40a36da5e06e65d3473090d1486b1a5496e075bbcacec\""
Dec 13 06:44:07.984861 env[1193]: time="2024-12-13T06:44:07.984806067Z" level=info msg="CreateContainer within sandbox \"735d7db559561b5a6bb71f54ed86d99953618dd8cc514d1c6a8705d02c64ca0f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4fee344b9175936a0f1bf03e7ece216337e4584d38f25f2712225e2ff08506a7\""
Dec 13 06:44:07.985443 env[1193]: time="2024-12-13T06:44:07.985400976Z" level=info msg="StartContainer for \"4fee344b9175936a0f1bf03e7ece216337e4584d38f25f2712225e2ff08506a7\""
Dec 13 06:44:08.026480 systemd[1]: Started cri-containerd-4fee344b9175936a0f1bf03e7ece216337e4584d38f25f2712225e2ff08506a7.scope.
Dec 13 06:44:08.042121 systemd[1]: Started cri-containerd-82321002acea39770fa40a36da5e06e65d3473090d1486b1a5496e075bbcacec.scope.
Dec 13 06:44:08.101906 env[1193]: time="2024-12-13T06:44:08.101851831Z" level=info msg="StartContainer for \"82321002acea39770fa40a36da5e06e65d3473090d1486b1a5496e075bbcacec\" returns successfully"
Dec 13 06:44:08.104217 env[1193]: time="2024-12-13T06:44:08.104126643Z" level=info msg="StartContainer for \"4fee344b9175936a0f1bf03e7ece216337e4584d38f25f2712225e2ff08506a7\" returns successfully"
Dec 13 06:44:08.802104 kubelet[2036]: I1213 06:44:08.801965 2036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-fncxx" podStartSLOduration=30.80190154 podStartE2EDuration="30.80190154s" podCreationTimestamp="2024-12-13 06:43:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 06:44:08.801306512 +0000 UTC m=+45.454261606" watchObservedRunningTime="2024-12-13 06:44:08.80190154 +0000 UTC m=+45.454856649"
Dec 13 06:44:08.862688 kubelet[2036]: I1213 06:44:08.862605 2036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-2hb5l" podStartSLOduration=30.862549742 podStartE2EDuration="30.862549742s" podCreationTimestamp="2024-12-13 06:43:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 06:44:08.862474225 +0000 UTC m=+45.515429326" watchObservedRunningTime="2024-12-13 06:44:08.862549742 +0000 UTC m=+45.515504850"
Dec 13 06:44:34.098421 systemd[1]: Started sshd@7-10.244.18.198:22-139.178.89.65:59216.service.
Dec 13 06:44:35.021011 sshd[3360]: Accepted publickey for core from 139.178.89.65 port 59216 ssh2: RSA SHA256:dQnQ6z9Pj/RNX8sNR4TqdGn8nHqynNIoEP6sXMH78jY
Dec 13 06:44:35.023936 sshd[3360]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 06:44:35.033523 systemd[1]: Started session-8.scope.
Dec 13 06:44:35.034532 systemd-logind[1181]: New session 8 of user core.
Dec 13 06:44:35.875658 sshd[3360]: pam_unix(sshd:session): session closed for user core
Dec 13 06:44:35.879403 systemd[1]: sshd@7-10.244.18.198:22-139.178.89.65:59216.service: Deactivated successfully.
Dec 13 06:44:35.880522 systemd[1]: session-8.scope: Deactivated successfully.
Dec 13 06:44:35.881414 systemd-logind[1181]: Session 8 logged out. Waiting for processes to exit.
Dec 13 06:44:35.882613 systemd-logind[1181]: Removed session 8.
Dec 13 06:44:41.023497 systemd[1]: Started sshd@8-10.244.18.198:22-139.178.89.65:37250.service.
Dec 13 06:44:41.940270 sshd[3377]: Accepted publickey for core from 139.178.89.65 port 37250 ssh2: RSA SHA256:dQnQ6z9Pj/RNX8sNR4TqdGn8nHqynNIoEP6sXMH78jY
Dec 13 06:44:41.942551 sshd[3377]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 06:44:41.950487 systemd[1]: Started session-9.scope.
Dec 13 06:44:41.951079 systemd-logind[1181]: New session 9 of user core.
Dec 13 06:44:42.686354 sshd[3377]: pam_unix(sshd:session): session closed for user core
Dec 13 06:44:42.690713 systemd[1]: sshd@8-10.244.18.198:22-139.178.89.65:37250.service: Deactivated successfully.
Dec 13 06:44:42.691800 systemd[1]: session-9.scope: Deactivated successfully.
Dec 13 06:44:42.692638 systemd-logind[1181]: Session 9 logged out. Waiting for processes to exit.
Dec 13 06:44:42.693976 systemd-logind[1181]: Removed session 9.
Dec 13 06:44:47.830507 systemd[1]: Started sshd@9-10.244.18.198:22-139.178.89.65:37258.service.
Dec 13 06:44:48.722363 sshd[3389]: Accepted publickey for core from 139.178.89.65 port 37258 ssh2: RSA SHA256:dQnQ6z9Pj/RNX8sNR4TqdGn8nHqynNIoEP6sXMH78jY
Dec 13 06:44:48.725422 sshd[3389]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 06:44:48.736243 systemd[1]: Started session-10.scope.
Dec 13 06:44:48.737054 systemd-logind[1181]: New session 10 of user core.
Dec 13 06:44:49.447986 sshd[3389]: pam_unix(sshd:session): session closed for user core
Dec 13 06:44:49.452802 systemd[1]: sshd@9-10.244.18.198:22-139.178.89.65:37258.service: Deactivated successfully.
Dec 13 06:44:49.454131 systemd[1]: session-10.scope: Deactivated successfully.
Dec 13 06:44:49.455131 systemd-logind[1181]: Session 10 logged out. Waiting for processes to exit.
Dec 13 06:44:49.456837 systemd-logind[1181]: Removed session 10.
Dec 13 06:44:54.596689 systemd[1]: Started sshd@10-10.244.18.198:22-139.178.89.65:35756.service.
Dec 13 06:44:55.496472 sshd[3402]: Accepted publickey for core from 139.178.89.65 port 35756 ssh2: RSA SHA256:dQnQ6z9Pj/RNX8sNR4TqdGn8nHqynNIoEP6sXMH78jY
Dec 13 06:44:55.502257 sshd[3402]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 06:44:55.510175 systemd[1]: Started session-11.scope.
Dec 13 06:44:55.511052 systemd-logind[1181]: New session 11 of user core.
Dec 13 06:44:56.234075 sshd[3402]: pam_unix(sshd:session): session closed for user core
Dec 13 06:44:56.240515 systemd[1]: sshd@10-10.244.18.198:22-139.178.89.65:35756.service: Deactivated successfully.
Dec 13 06:44:56.241774 systemd[1]: session-11.scope: Deactivated successfully.
Dec 13 06:44:56.242859 systemd-logind[1181]: Session 11 logged out. Waiting for processes to exit.
Dec 13 06:44:56.244710 systemd-logind[1181]: Removed session 11.
Dec 13 06:45:01.391408 systemd[1]: Started sshd@11-10.244.18.198:22-139.178.89.65:54668.service.
Dec 13 06:45:02.296485 sshd[3415]: Accepted publickey for core from 139.178.89.65 port 54668 ssh2: RSA SHA256:dQnQ6z9Pj/RNX8sNR4TqdGn8nHqynNIoEP6sXMH78jY
Dec 13 06:45:02.303308 sshd[3415]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 06:45:02.312846 systemd[1]: Started session-12.scope.
Dec 13 06:45:02.313864 systemd-logind[1181]: New session 12 of user core.
Dec 13 06:45:03.044885 sshd[3415]: pam_unix(sshd:session): session closed for user core
Dec 13 06:45:03.049984 systemd[1]: sshd@11-10.244.18.198:22-139.178.89.65:54668.service: Deactivated successfully.
Dec 13 06:45:03.051252 systemd[1]: session-12.scope: Deactivated successfully.
Dec 13 06:45:03.052111 systemd-logind[1181]: Session 12 logged out. Waiting for processes to exit.
Dec 13 06:45:03.054016 systemd-logind[1181]: Removed session 12.
Dec 13 06:45:03.191600 systemd[1]: Started sshd@12-10.244.18.198:22-139.178.89.65:54680.service.
Dec 13 06:45:04.077997 sshd[3427]: Accepted publickey for core from 139.178.89.65 port 54680 ssh2: RSA SHA256:dQnQ6z9Pj/RNX8sNR4TqdGn8nHqynNIoEP6sXMH78jY
Dec 13 06:45:04.079970 sshd[3427]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 06:45:04.086700 systemd-logind[1181]: New session 13 of user core.
Dec 13 06:45:04.088019 systemd[1]: Started session-13.scope.
Dec 13 06:45:04.864811 sshd[3427]: pam_unix(sshd:session): session closed for user core
Dec 13 06:45:04.869170 systemd[1]: sshd@12-10.244.18.198:22-139.178.89.65:54680.service: Deactivated successfully.
Dec 13 06:45:04.870506 systemd[1]: session-13.scope: Deactivated successfully.
Dec 13 06:45:04.871573 systemd-logind[1181]: Session 13 logged out. Waiting for processes to exit.
Dec 13 06:45:04.873592 systemd-logind[1181]: Removed session 13.
Dec 13 06:45:05.015047 systemd[1]: Started sshd@13-10.244.18.198:22-139.178.89.65:54684.service.
Dec 13 06:45:05.912445 sshd[3436]: Accepted publickey for core from 139.178.89.65 port 54684 ssh2: RSA SHA256:dQnQ6z9Pj/RNX8sNR4TqdGn8nHqynNIoEP6sXMH78jY
Dec 13 06:45:05.914369 sshd[3436]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 06:45:05.922653 systemd-logind[1181]: New session 14 of user core.
Dec 13 06:45:05.923389 systemd[1]: Started session-14.scope.
Dec 13 06:45:06.636605 sshd[3436]: pam_unix(sshd:session): session closed for user core
Dec 13 06:45:06.641946 systemd[1]: sshd@13-10.244.18.198:22-139.178.89.65:54684.service: Deactivated successfully.
Dec 13 06:45:06.643290 systemd[1]: session-14.scope: Deactivated successfully.
Dec 13 06:45:06.645552 systemd-logind[1181]: Session 14 logged out. Waiting for processes to exit.
Dec 13 06:45:06.648342 systemd-logind[1181]: Removed session 14.
Dec 13 06:45:11.783524 systemd[1]: Started sshd@14-10.244.18.198:22-139.178.89.65:35876.service.
Dec 13 06:45:12.679654 sshd[3451]: Accepted publickey for core from 139.178.89.65 port 35876 ssh2: RSA SHA256:dQnQ6z9Pj/RNX8sNR4TqdGn8nHqynNIoEP6sXMH78jY
Dec 13 06:45:12.682263 sshd[3451]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 06:45:12.688748 systemd-logind[1181]: New session 15 of user core.
Dec 13 06:45:12.689745 systemd[1]: Started session-15.scope.
Dec 13 06:45:13.379314 sshd[3451]: pam_unix(sshd:session): session closed for user core
Dec 13 06:45:13.383310 systemd-logind[1181]: Session 15 logged out. Waiting for processes to exit.
Dec 13 06:45:13.384219 systemd[1]: sshd@14-10.244.18.198:22-139.178.89.65:35876.service: Deactivated successfully.
Dec 13 06:45:13.385185 systemd[1]: session-15.scope: Deactivated successfully.
Dec 13 06:45:13.386458 systemd-logind[1181]: Removed session 15.
Dec 13 06:45:18.528611 systemd[1]: Started sshd@15-10.244.18.198:22-139.178.89.65:53176.service.
Dec 13 06:45:19.420728 sshd[3463]: Accepted publickey for core from 139.178.89.65 port 53176 ssh2: RSA SHA256:dQnQ6z9Pj/RNX8sNR4TqdGn8nHqynNIoEP6sXMH78jY
Dec 13 06:45:19.423471 sshd[3463]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 06:45:19.431754 systemd-logind[1181]: New session 16 of user core.
Dec 13 06:45:19.433044 systemd[1]: Started session-16.scope.
Dec 13 06:45:20.145385 sshd[3463]: pam_unix(sshd:session): session closed for user core
Dec 13 06:45:20.148889 systemd[1]: sshd@15-10.244.18.198:22-139.178.89.65:53176.service: Deactivated successfully.
Dec 13 06:45:20.149966 systemd[1]: session-16.scope: Deactivated successfully.
Dec 13 06:45:20.150776 systemd-logind[1181]: Session 16 logged out. Waiting for processes to exit.
Dec 13 06:45:20.152114 systemd-logind[1181]: Removed session 16.
Dec 13 06:45:20.293606 systemd[1]: Started sshd@16-10.244.18.198:22-139.178.89.65:53186.service.
Dec 13 06:45:21.189286 sshd[3475]: Accepted publickey for core from 139.178.89.65 port 53186 ssh2: RSA SHA256:dQnQ6z9Pj/RNX8sNR4TqdGn8nHqynNIoEP6sXMH78jY
Dec 13 06:45:21.191981 sshd[3475]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 06:45:21.200710 systemd-logind[1181]: New session 17 of user core.
Dec 13 06:45:21.201437 systemd[1]: Started session-17.scope.
Dec 13 06:45:22.306237 sshd[3475]: pam_unix(sshd:session): session closed for user core
Dec 13 06:45:22.310939 systemd-logind[1181]: Session 17 logged out. Waiting for processes to exit.
Dec 13 06:45:22.311247 systemd[1]: sshd@16-10.244.18.198:22-139.178.89.65:53186.service: Deactivated successfully.
Dec 13 06:45:22.312214 systemd[1]: session-17.scope: Deactivated successfully.
Dec 13 06:45:22.313672 systemd-logind[1181]: Removed session 17.
Dec 13 06:45:22.454253 systemd[1]: Started sshd@17-10.244.18.198:22-139.178.89.65:53194.service.
Dec 13 06:45:23.368117 sshd[3485]: Accepted publickey for core from 139.178.89.65 port 53194 ssh2: RSA SHA256:dQnQ6z9Pj/RNX8sNR4TqdGn8nHqynNIoEP6sXMH78jY
Dec 13 06:45:23.370768 sshd[3485]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 06:45:23.378261 systemd[1]: Started session-18.scope.
Dec 13 06:45:23.380496 systemd-logind[1181]: New session 18 of user core.
Dec 13 06:45:26.200965 sshd[3485]: pam_unix(sshd:session): session closed for user core
Dec 13 06:45:26.208231 systemd[1]: sshd@17-10.244.18.198:22-139.178.89.65:53194.service: Deactivated successfully.
Dec 13 06:45:26.209493 systemd[1]: session-18.scope: Deactivated successfully.
Dec 13 06:45:26.210820 systemd-logind[1181]: Session 18 logged out. Waiting for processes to exit.
Dec 13 06:45:26.212228 systemd-logind[1181]: Removed session 18.
Dec 13 06:45:26.348836 systemd[1]: Started sshd@18-10.244.18.198:22-139.178.89.65:53198.service.
Dec 13 06:45:27.236536 sshd[3504]: Accepted publickey for core from 139.178.89.65 port 53198 ssh2: RSA SHA256:dQnQ6z9Pj/RNX8sNR4TqdGn8nHqynNIoEP6sXMH78jY
Dec 13 06:45:27.238522 sshd[3504]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 06:45:27.246219 systemd[1]: Started session-19.scope.
Dec 13 06:45:27.246787 systemd-logind[1181]: New session 19 of user core.
Dec 13 06:45:28.214459 sshd[3504]: pam_unix(sshd:session): session closed for user core
Dec 13 06:45:28.218578 systemd[1]: sshd@18-10.244.18.198:22-139.178.89.65:53198.service: Deactivated successfully.
Dec 13 06:45:28.219608 systemd[1]: session-19.scope: Deactivated successfully.
Dec 13 06:45:28.221082 systemd-logind[1181]: Session 19 logged out. Waiting for processes to exit.
Dec 13 06:45:28.222875 systemd-logind[1181]: Removed session 19.
Dec 13 06:45:28.362790 systemd[1]: Started sshd@19-10.244.18.198:22-139.178.89.65:56690.service.
Dec 13 06:45:29.258243 sshd[3515]: Accepted publickey for core from 139.178.89.65 port 56690 ssh2: RSA SHA256:dQnQ6z9Pj/RNX8sNR4TqdGn8nHqynNIoEP6sXMH78jY
Dec 13 06:45:29.261276 sshd[3515]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 06:45:29.269000 systemd[1]: Started session-20.scope.
Dec 13 06:45:29.269825 systemd-logind[1181]: New session 20 of user core.
Dec 13 06:45:29.972966 sshd[3515]: pam_unix(sshd:session): session closed for user core
Dec 13 06:45:29.977339 systemd[1]: sshd@19-10.244.18.198:22-139.178.89.65:56690.service: Deactivated successfully.
Dec 13 06:45:29.978396 systemd[1]: session-20.scope: Deactivated successfully.
Dec 13 06:45:29.979275 systemd-logind[1181]: Session 20 logged out. Waiting for processes to exit.
Dec 13 06:45:29.980651 systemd-logind[1181]: Removed session 20.
Dec 13 06:45:35.121860 systemd[1]: Started sshd@20-10.244.18.198:22-139.178.89.65:56702.service.
Dec 13 06:45:36.015967 sshd[3527]: Accepted publickey for core from 139.178.89.65 port 56702 ssh2: RSA SHA256:dQnQ6z9Pj/RNX8sNR4TqdGn8nHqynNIoEP6sXMH78jY
Dec 13 06:45:36.017510 sshd[3527]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 06:45:36.025233 systemd[1]: Started session-21.scope.
Dec 13 06:45:36.027028 systemd-logind[1181]: New session 21 of user core.
Dec 13 06:45:36.724343 sshd[3527]: pam_unix(sshd:session): session closed for user core
Dec 13 06:45:36.728100 systemd[1]: sshd@20-10.244.18.198:22-139.178.89.65:56702.service: Deactivated successfully.
Dec 13 06:45:36.729167 systemd[1]: session-21.scope: Deactivated successfully.
Dec 13 06:45:36.730143 systemd-logind[1181]: Session 21 logged out. Waiting for processes to exit.
Dec 13 06:45:36.731690 systemd-logind[1181]: Removed session 21.
Dec 13 06:45:41.872013 systemd[1]: Started sshd@21-10.244.18.198:22-139.178.89.65:39966.service.
Dec 13 06:45:42.760328 sshd[3543]: Accepted publickey for core from 139.178.89.65 port 39966 ssh2: RSA SHA256:dQnQ6z9Pj/RNX8sNR4TqdGn8nHqynNIoEP6sXMH78jY
Dec 13 06:45:42.762685 sshd[3543]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 06:45:42.771549 systemd[1]: Started session-22.scope.
Dec 13 06:45:42.772406 systemd-logind[1181]: New session 22 of user core.
Dec 13 06:45:43.459329 sshd[3543]: pam_unix(sshd:session): session closed for user core
Dec 13 06:45:43.463011 systemd-logind[1181]: Session 22 logged out. Waiting for processes to exit.
Dec 13 06:45:43.463519 systemd[1]: sshd@21-10.244.18.198:22-139.178.89.65:39966.service: Deactivated successfully.
Dec 13 06:45:43.464492 systemd[1]: session-22.scope: Deactivated successfully.
Dec 13 06:45:43.465831 systemd-logind[1181]: Removed session 22.
Dec 13 06:45:48.610491 systemd[1]: Started sshd@22-10.244.18.198:22-139.178.89.65:59738.service.
Dec 13 06:45:49.547785 sshd[3555]: Accepted publickey for core from 139.178.89.65 port 59738 ssh2: RSA SHA256:dQnQ6z9Pj/RNX8sNR4TqdGn8nHqynNIoEP6sXMH78jY
Dec 13 06:45:49.549856 sshd[3555]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 06:45:49.557991 systemd-logind[1181]: New session 23 of user core.
Dec 13 06:45:49.558393 systemd[1]: Started session-23.scope.
Dec 13 06:45:50.276279 sshd[3555]: pam_unix(sshd:session): session closed for user core
Dec 13 06:45:50.280237 systemd-logind[1181]: Session 23 logged out. Waiting for processes to exit.
Dec 13 06:45:50.280517 systemd[1]: sshd@22-10.244.18.198:22-139.178.89.65:59738.service: Deactivated successfully.
Dec 13 06:45:50.281503 systemd[1]: session-23.scope: Deactivated successfully.
Dec 13 06:45:50.282666 systemd-logind[1181]: Removed session 23.
Dec 13 06:45:50.423542 systemd[1]: Started sshd@23-10.244.18.198:22-139.178.89.65:59748.service.
Dec 13 06:45:51.323074 sshd[3567]: Accepted publickey for core from 139.178.89.65 port 59748 ssh2: RSA SHA256:dQnQ6z9Pj/RNX8sNR4TqdGn8nHqynNIoEP6sXMH78jY
Dec 13 06:45:51.324975 sshd[3567]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 06:45:51.332042 systemd-logind[1181]: New session 24 of user core.
Dec 13 06:45:51.332826 systemd[1]: Started session-24.scope.
Dec 13 06:45:54.254764 env[1193]: time="2024-12-13T06:45:54.254089143Z" level=info msg="StopContainer for \"7a9909511906f94d01fffd435dbd023f9eecc4f2fc497801fff76b81cc22d020\" with timeout 30 (s)"
Dec 13 06:45:54.255815 env[1193]: time="2024-12-13T06:45:54.255655160Z" level=info msg="Stop container \"7a9909511906f94d01fffd435dbd023f9eecc4f2fc497801fff76b81cc22d020\" with signal terminated"
Dec 13 06:45:54.300991 systemd[1]: cri-containerd-7a9909511906f94d01fffd435dbd023f9eecc4f2fc497801fff76b81cc22d020.scope: Deactivated successfully.
Dec 13 06:45:54.319792 systemd[1]: run-containerd-runc-k8s.io-b5c3648267635afb66663e7c28902607388785916947d57705f4e9c49c3f59ad-runc.z7Fl8q.mount: Deactivated successfully.
Dec 13 06:45:54.346904 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7a9909511906f94d01fffd435dbd023f9eecc4f2fc497801fff76b81cc22d020-rootfs.mount: Deactivated successfully.
Dec 13 06:45:54.356791 env[1193]: time="2024-12-13T06:45:54.356722238Z" level=info msg="shim disconnected" id=7a9909511906f94d01fffd435dbd023f9eecc4f2fc497801fff76b81cc22d020
Dec 13 06:45:54.357274 env[1193]: time="2024-12-13T06:45:54.357234760Z" level=warning msg="cleaning up after shim disconnected" id=7a9909511906f94d01fffd435dbd023f9eecc4f2fc497801fff76b81cc22d020 namespace=k8s.io
Dec 13 06:45:54.357602 env[1193]: time="2024-12-13T06:45:54.357573882Z" level=info msg="cleaning up dead shim"
Dec 13 06:45:54.374006 env[1193]: time="2024-12-13T06:45:54.373859425Z" level=warning msg="cleanup warnings time=\"2024-12-13T06:45:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3608 runtime=io.containerd.runc.v2\n"
Dec 13 06:45:54.376356 env[1193]: time="2024-12-13T06:45:54.376313527Z" level=info msg="StopContainer for \"7a9909511906f94d01fffd435dbd023f9eecc4f2fc497801fff76b81cc22d020\" returns successfully"
Dec 13 06:45:54.377415 env[1193]: time="2024-12-13T06:45:54.377374650Z" level=info msg="StopPodSandbox for \"0a3138c587f62f941b9df1d0177fdd5656a33eea3a8cb2f45851e6aa03c5b138\""
Dec 13 06:45:54.377814 env[1193]: time="2024-12-13T06:45:54.377778469Z" level=info msg="Container to stop \"7a9909511906f94d01fffd435dbd023f9eecc4f2fc497801fff76b81cc22d020\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 06:45:54.383093 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0a3138c587f62f941b9df1d0177fdd5656a33eea3a8cb2f45851e6aa03c5b138-shm.mount: Deactivated successfully.
Dec 13 06:45:54.391333 env[1193]: time="2024-12-13T06:45:54.391262441Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 06:45:54.393496 env[1193]: time="2024-12-13T06:45:54.393450218Z" level=info msg="StopContainer for \"b5c3648267635afb66663e7c28902607388785916947d57705f4e9c49c3f59ad\" with timeout 2 (s)"
Dec 13 06:45:54.393991 env[1193]: time="2024-12-13T06:45:54.393946849Z" level=info msg="Stop container \"b5c3648267635afb66663e7c28902607388785916947d57705f4e9c49c3f59ad\" with signal terminated"
Dec 13 06:45:54.403690 systemd[1]: cri-containerd-0a3138c587f62f941b9df1d0177fdd5656a33eea3a8cb2f45851e6aa03c5b138.scope: Deactivated successfully.
Dec 13 06:45:54.414432 systemd-networkd[1026]: lxc_health: Link DOWN
Dec 13 06:45:54.414444 systemd-networkd[1026]: lxc_health: Lost carrier
Dec 13 06:45:54.470065 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0a3138c587f62f941b9df1d0177fdd5656a33eea3a8cb2f45851e6aa03c5b138-rootfs.mount: Deactivated successfully.
Dec 13 06:45:54.476159 env[1193]: time="2024-12-13T06:45:54.476078521Z" level=info msg="shim disconnected" id=0a3138c587f62f941b9df1d0177fdd5656a33eea3a8cb2f45851e6aa03c5b138
Dec 13 06:45:54.476497 env[1193]: time="2024-12-13T06:45:54.476461443Z" level=warning msg="cleaning up after shim disconnected" id=0a3138c587f62f941b9df1d0177fdd5656a33eea3a8cb2f45851e6aa03c5b138 namespace=k8s.io
Dec 13 06:45:54.476646 env[1193]: time="2024-12-13T06:45:54.476617321Z" level=info msg="cleaning up dead shim"
Dec 13 06:45:54.490334 env[1193]: time="2024-12-13T06:45:54.490278231Z" level=warning msg="cleanup warnings time=\"2024-12-13T06:45:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3652 runtime=io.containerd.runc.v2\n"
Dec 13 06:45:54.491311 env[1193]: time="2024-12-13T06:45:54.491253126Z" level=info msg="TearDown network for sandbox \"0a3138c587f62f941b9df1d0177fdd5656a33eea3a8cb2f45851e6aa03c5b138\" successfully"
Dec 13 06:45:54.491475 env[1193]: time="2024-12-13T06:45:54.491439633Z" level=info msg="StopPodSandbox for \"0a3138c587f62f941b9df1d0177fdd5656a33eea3a8cb2f45851e6aa03c5b138\" returns successfully"
Dec 13 06:45:54.493485 systemd[1]: cri-containerd-b5c3648267635afb66663e7c28902607388785916947d57705f4e9c49c3f59ad.scope: Deactivated successfully.
Dec 13 06:45:54.493938 systemd[1]: cri-containerd-b5c3648267635afb66663e7c28902607388785916947d57705f4e9c49c3f59ad.scope: Consumed 10.448s CPU time.
Dec 13 06:45:54.541806 env[1193]: time="2024-12-13T06:45:54.541733108Z" level=info msg="shim disconnected" id=b5c3648267635afb66663e7c28902607388785916947d57705f4e9c49c3f59ad
Dec 13 06:45:54.542291 env[1193]: time="2024-12-13T06:45:54.542249395Z" level=warning msg="cleaning up after shim disconnected" id=b5c3648267635afb66663e7c28902607388785916947d57705f4e9c49c3f59ad namespace=k8s.io
Dec 13 06:45:54.542632 env[1193]: time="2024-12-13T06:45:54.542397930Z" level=info msg="cleaning up dead shim"
Dec 13 06:45:54.557167 env[1193]: time="2024-12-13T06:45:54.557096164Z" level=warning msg="cleanup warnings time=\"2024-12-13T06:45:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3677 runtime=io.containerd.runc.v2\n"
Dec 13 06:45:54.559760 env[1193]: time="2024-12-13T06:45:54.559712298Z" level=info msg="StopContainer for \"b5c3648267635afb66663e7c28902607388785916947d57705f4e9c49c3f59ad\" returns successfully"
Dec 13 06:45:54.561326 env[1193]: time="2024-12-13T06:45:54.561236115Z" level=info msg="StopPodSandbox for \"9cba52b07a62963c95351f6ec2300a11ae06b5f4a388c81ac7abcfae88cdb111\""
Dec 13 06:45:54.561577 env[1193]: time="2024-12-13T06:45:54.561521871Z" level=info msg="Container to stop \"92c7214fe74ca1211dc6817c3bd5904d395d04b745646e45260a484e1ca90230\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 06:45:54.561734 env[1193]: time="2024-12-13T06:45:54.561700502Z" level=info msg="Container to stop \"1c483e67de125025805d895fd87e69a5ce2837a77108fc7e292ab01d776731cd\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 06:45:54.561892 env[1193]: time="2024-12-13T06:45:54.561858541Z" level=info msg="Container to stop \"b5c3648267635afb66663e7c28902607388785916947d57705f4e9c49c3f59ad\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 06:45:54.562244 env[1193]: time="2024-12-13T06:45:54.562208902Z" level=info msg="Container to stop \"fef2a353b548c1264b6a267c90229da9c06ce1142713e94fd4e8885dd2352141\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 06:45:54.562403 env[1193]: time="2024-12-13T06:45:54.562366982Z" level=info msg="Container to stop \"7829bffcf661275a950406c4f44e5dd62e3685b98c0c0ae2c1abfce217797788\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 06:45:54.572652 systemd[1]: cri-containerd-9cba52b07a62963c95351f6ec2300a11ae06b5f4a388c81ac7abcfae88cdb111.scope: Deactivated successfully.
Dec 13 06:45:54.587684 kubelet[2036]: I1213 06:45:54.587494 2036 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d5c32872-421c-4b54-b402-4d33a265259a-cilium-config-path\") pod \"d5c32872-421c-4b54-b402-4d33a265259a\" (UID: \"d5c32872-421c-4b54-b402-4d33a265259a\") "
Dec 13 06:45:54.587684 kubelet[2036]: I1213 06:45:54.587574 2036 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rvw2n\" (UniqueName: \"kubernetes.io/projected/d5c32872-421c-4b54-b402-4d33a265259a-kube-api-access-rvw2n\") pod \"d5c32872-421c-4b54-b402-4d33a265259a\" (UID: \"d5c32872-421c-4b54-b402-4d33a265259a\") "
Dec 13 06:45:54.604637 kubelet[2036]: I1213 06:45:54.598205 2036 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5c32872-421c-4b54-b402-4d33a265259a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d5c32872-421c-4b54-b402-4d33a265259a" (UID: "d5c32872-421c-4b54-b402-4d33a265259a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 06:45:54.608571 kubelet[2036]: I1213 06:45:54.608480 2036 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5c32872-421c-4b54-b402-4d33a265259a-kube-api-access-rvw2n" (OuterVolumeSpecName: "kube-api-access-rvw2n") pod "d5c32872-421c-4b54-b402-4d33a265259a" (UID: "d5c32872-421c-4b54-b402-4d33a265259a"). InnerVolumeSpecName "kube-api-access-rvw2n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 06:45:54.620595 env[1193]: time="2024-12-13T06:45:54.620518221Z" level=info msg="shim disconnected" id=9cba52b07a62963c95351f6ec2300a11ae06b5f4a388c81ac7abcfae88cdb111
Dec 13 06:45:54.620595 env[1193]: time="2024-12-13T06:45:54.620593482Z" level=warning msg="cleaning up after shim disconnected" id=9cba52b07a62963c95351f6ec2300a11ae06b5f4a388c81ac7abcfae88cdb111 namespace=k8s.io
Dec 13 06:45:54.620932 env[1193]: time="2024-12-13T06:45:54.620612421Z" level=info msg="cleaning up dead shim"
Dec 13 06:45:54.631950 env[1193]: time="2024-12-13T06:45:54.631819991Z" level=warning msg="cleanup warnings time=\"2024-12-13T06:45:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3712 runtime=io.containerd.runc.v2\n"
Dec 13 06:45:54.632579 env[1193]: time="2024-12-13T06:45:54.632493072Z" level=info msg="TearDown network for sandbox \"9cba52b07a62963c95351f6ec2300a11ae06b5f4a388c81ac7abcfae88cdb111\" successfully"
Dec 13 06:45:54.632579 env[1193]: time="2024-12-13T06:45:54.632544473Z" level=info msg="StopPodSandbox for \"9cba52b07a62963c95351f6ec2300a11ae06b5f4a388c81ac7abcfae88cdb111\" returns successfully"
Dec 13 06:45:54.688720 kubelet[2036]: I1213 06:45:54.688651 2036 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7c0ef689-b9f8-48ea-8e4d-ab890c660759-cni-path\") pod \"7c0ef689-b9f8-48ea-8e4d-ab890c660759\" (UID: \"7c0ef689-b9f8-48ea-8e4d-ab890c660759\") "
Dec 13 06:45:54.689407 kubelet[2036]: I1213 06:45:54.689061 2036 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7c0ef689-b9f8-48ea-8e4d-ab890c660759-cilium-config-path\") pod \"7c0ef689-b9f8-48ea-8e4d-ab890c660759\" (UID: \"7c0ef689-b9f8-48ea-8e4d-ab890c660759\") "
Dec 13 06:45:54.689407 kubelet[2036]: I1213 06:45:54.689141 2036 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7c0ef689-b9f8-48ea-8e4d-ab890c660759-xtables-lock\") pod \"7c0ef689-b9f8-48ea-8e4d-ab890c660759\" (UID: \"7c0ef689-b9f8-48ea-8e4d-ab890c660759\") "
Dec 13 06:45:54.689407 kubelet[2036]: I1213 06:45:54.689196 2036 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f4qxr\" (UniqueName: \"kubernetes.io/projected/7c0ef689-b9f8-48ea-8e4d-ab890c660759-kube-api-access-f4qxr\") pod \"7c0ef689-b9f8-48ea-8e4d-ab890c660759\" (UID: \"7c0ef689-b9f8-48ea-8e4d-ab890c660759\") "
Dec 13 06:45:54.689677 kubelet[2036]: I1213 06:45:54.689472 2036 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c0ef689-b9f8-48ea-8e4d-ab890c660759-cni-path" (OuterVolumeSpecName: "cni-path") pod "7c0ef689-b9f8-48ea-8e4d-ab890c660759" (UID: "7c0ef689-b9f8-48ea-8e4d-ab890c660759"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 06:45:54.689677 kubelet[2036]: I1213 06:45:54.689546 2036 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c0ef689-b9f8-48ea-8e4d-ab890c660759-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "7c0ef689-b9f8-48ea-8e4d-ab890c660759" (UID: "7c0ef689-b9f8-48ea-8e4d-ab890c660759"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 06:45:54.689888 kubelet[2036]: I1213 06:45:54.689861 2036 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7c0ef689-b9f8-48ea-8e4d-ab890c660759-cilium-cgroup\") pod \"7c0ef689-b9f8-48ea-8e4d-ab890c660759\" (UID: \"7c0ef689-b9f8-48ea-8e4d-ab890c660759\") "
Dec 13 06:45:54.689991 kubelet[2036]: I1213 06:45:54.689961 2036 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7c0ef689-b9f8-48ea-8e4d-ab890c660759-etc-cni-netd\") pod \"7c0ef689-b9f8-48ea-8e4d-ab890c660759\" (UID: \"7c0ef689-b9f8-48ea-8e4d-ab890c660759\") "
Dec 13 06:45:54.690071 kubelet[2036]: I1213 06:45:54.690015 2036 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7c0ef689-b9f8-48ea-8e4d-ab890c660759-hubble-tls\") pod \"7c0ef689-b9f8-48ea-8e4d-ab890c660759\" (UID: \"7c0ef689-b9f8-48ea-8e4d-ab890c660759\") "
Dec 13 06:45:54.690071 kubelet[2036]: I1213 06:45:54.690044 2036 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7c0ef689-b9f8-48ea-8e4d-ab890c660759-hostproc\") pod \"7c0ef689-b9f8-48ea-8e4d-ab890c660759\" (UID: \"7c0ef689-b9f8-48ea-8e4d-ab890c660759\") "
Dec 13 06:45:54.690227 kubelet[2036]: I1213 06:45:54.690097 2036 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7c0ef689-b9f8-48ea-8e4d-ab890c660759-lib-modules\") pod \"7c0ef689-b9f8-48ea-8e4d-ab890c660759\" (UID: \"7c0ef689-b9f8-48ea-8e4d-ab890c660759\") "
Dec 13 06:45:54.690305 kubelet[2036]: I1213 06:45:54.690240 2036 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7c0ef689-b9f8-48ea-8e4d-ab890c660759-clustermesh-secrets\") pod \"7c0ef689-b9f8-48ea-8e4d-ab890c660759\" (UID: \"7c0ef689-b9f8-48ea-8e4d-ab890c660759\") "
Dec 13 06:45:54.690305 kubelet[2036]: I1213 06:45:54.690273 2036 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7c0ef689-b9f8-48ea-8e4d-ab890c660759-host-proc-sys-kernel\") pod \"7c0ef689-b9f8-48ea-8e4d-ab890c660759\" (UID: \"7c0ef689-b9f8-48ea-8e4d-ab890c660759\") "
Dec 13 06:45:54.690427 kubelet[2036]: I1213 06:45:54.690336 2036 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c0ef689-b9f8-48ea-8e4d-ab890c660759-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "7c0ef689-b9f8-48ea-8e4d-ab890c660759" (UID: "7c0ef689-b9f8-48ea-8e4d-ab890c660759"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 06:45:54.690427 kubelet[2036]: I1213 06:45:54.690373 2036 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c0ef689-b9f8-48ea-8e4d-ab890c660759-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "7c0ef689-b9f8-48ea-8e4d-ab890c660759" (UID: "7c0ef689-b9f8-48ea-8e4d-ab890c660759"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 06:45:54.690573 kubelet[2036]: I1213 06:45:54.690482 2036 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7c0ef689-b9f8-48ea-8e4d-ab890c660759-cilium-run\") pod \"7c0ef689-b9f8-48ea-8e4d-ab890c660759\" (UID: \"7c0ef689-b9f8-48ea-8e4d-ab890c660759\") "
Dec 13 06:45:54.690573 kubelet[2036]: I1213 06:45:54.690515 2036 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7c0ef689-b9f8-48ea-8e4d-ab890c660759-host-proc-sys-net\") pod \"7c0ef689-b9f8-48ea-8e4d-ab890c660759\" (UID: \"7c0ef689-b9f8-48ea-8e4d-ab890c660759\") "
Dec 13 06:45:54.690684 kubelet[2036]: I1213 06:45:54.690542 2036 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7c0ef689-b9f8-48ea-8e4d-ab890c660759-bpf-maps\") pod \"7c0ef689-b9f8-48ea-8e4d-ab890c660759\" (UID: \"7c0ef689-b9f8-48ea-8e4d-ab890c660759\") "
Dec 13 06:45:54.690684 kubelet[2036]: I1213 06:45:54.690676 2036 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7c0ef689-b9f8-48ea-8e4d-ab890c660759-cilium-cgroup\") on node \"srv-7lx2b.gb1.brightbox.com\" DevicePath \"\""
Dec 13 06:45:54.690830 kubelet[2036]: I1213 06:45:54.690699 2036 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7c0ef689-b9f8-48ea-8e4d-ab890c660759-etc-cni-netd\") on node \"srv-7lx2b.gb1.brightbox.com\" DevicePath \"\""
Dec 13 06:45:54.691639 kubelet[2036]: I1213 06:45:54.691183 2036 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c0ef689-b9f8-48ea-8e4d-ab890c660759-hostproc" (OuterVolumeSpecName: "hostproc") pod "7c0ef689-b9f8-48ea-8e4d-ab890c660759" (UID: "7c0ef689-b9f8-48ea-8e4d-ab890c660759"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 06:45:54.691639 kubelet[2036]: I1213 06:45:54.691209 2036 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d5c32872-421c-4b54-b402-4d33a265259a-cilium-config-path\") on node \"srv-7lx2b.gb1.brightbox.com\" DevicePath \"\""
Dec 13 06:45:54.691639 kubelet[2036]: I1213 06:45:54.691247 2036 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7c0ef689-b9f8-48ea-8e4d-ab890c660759-cni-path\") on node \"srv-7lx2b.gb1.brightbox.com\" DevicePath \"\""
Dec 13 06:45:54.691639 kubelet[2036]: I1213 06:45:54.691237 2036 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c0ef689-b9f8-48ea-8e4d-ab890c660759-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "7c0ef689-b9f8-48ea-8e4d-ab890c660759" (UID: "7c0ef689-b9f8-48ea-8e4d-ab890c660759"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 06:45:54.691639 kubelet[2036]: I1213 06:45:54.691266 2036 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7c0ef689-b9f8-48ea-8e4d-ab890c660759-xtables-lock\") on node \"srv-7lx2b.gb1.brightbox.com\" DevicePath \"\""
Dec 13 06:45:54.691639 kubelet[2036]: I1213 06:45:54.691287 2036 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c0ef689-b9f8-48ea-8e4d-ab890c660759-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "7c0ef689-b9f8-48ea-8e4d-ab890c660759" (UID: "7c0ef689-b9f8-48ea-8e4d-ab890c660759"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 06:45:54.692037 kubelet[2036]: I1213 06:45:54.691298 2036 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-rvw2n\" (UniqueName: \"kubernetes.io/projected/d5c32872-421c-4b54-b402-4d33a265259a-kube-api-access-rvw2n\") on node \"srv-7lx2b.gb1.brightbox.com\" DevicePath \"\""
Dec 13 06:45:54.692037 kubelet[2036]: I1213 06:45:54.691360 2036 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c0ef689-b9f8-48ea-8e4d-ab890c660759-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "7c0ef689-b9f8-48ea-8e4d-ab890c660759" (UID: "7c0ef689-b9f8-48ea-8e4d-ab890c660759"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 06:45:54.692037 kubelet[2036]: I1213 06:45:54.691398 2036 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c0ef689-b9f8-48ea-8e4d-ab890c660759-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "7c0ef689-b9f8-48ea-8e4d-ab890c660759" (UID: "7c0ef689-b9f8-48ea-8e4d-ab890c660759"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 06:45:54.692037 kubelet[2036]: I1213 06:45:54.691447 2036 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c0ef689-b9f8-48ea-8e4d-ab890c660759-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "7c0ef689-b9f8-48ea-8e4d-ab890c660759" (UID: "7c0ef689-b9f8-48ea-8e4d-ab890c660759"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 06:45:54.697235 kubelet[2036]: I1213 06:45:54.697175 2036 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c0ef689-b9f8-48ea-8e4d-ab890c660759-kube-api-access-f4qxr" (OuterVolumeSpecName: "kube-api-access-f4qxr") pod "7c0ef689-b9f8-48ea-8e4d-ab890c660759" (UID: "7c0ef689-b9f8-48ea-8e4d-ab890c660759"). InnerVolumeSpecName "kube-api-access-f4qxr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 06:45:54.699486 kubelet[2036]: I1213 06:45:54.699437 2036 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c0ef689-b9f8-48ea-8e4d-ab890c660759-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7c0ef689-b9f8-48ea-8e4d-ab890c660759" (UID: "7c0ef689-b9f8-48ea-8e4d-ab890c660759"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 06:45:54.701115 kubelet[2036]: I1213 06:45:54.701035 2036 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c0ef689-b9f8-48ea-8e4d-ab890c660759-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "7c0ef689-b9f8-48ea-8e4d-ab890c660759" (UID: "7c0ef689-b9f8-48ea-8e4d-ab890c660759"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 06:45:54.703008 kubelet[2036]: I1213 06:45:54.702961 2036 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c0ef689-b9f8-48ea-8e4d-ab890c660759-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "7c0ef689-b9f8-48ea-8e4d-ab890c660759" (UID: "7c0ef689-b9f8-48ea-8e4d-ab890c660759"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 06:45:54.792252 kubelet[2036]: I1213 06:45:54.791960 2036 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-f4qxr\" (UniqueName: \"kubernetes.io/projected/7c0ef689-b9f8-48ea-8e4d-ab890c660759-kube-api-access-f4qxr\") on node \"srv-7lx2b.gb1.brightbox.com\" DevicePath \"\""
Dec 13 06:45:54.792252 kubelet[2036]: I1213 06:45:54.792010 2036 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7c0ef689-b9f8-48ea-8e4d-ab890c660759-lib-modules\") on node \"srv-7lx2b.gb1.brightbox.com\" DevicePath \"\""
Dec 13 06:45:54.792252 kubelet[2036]: I1213 06:45:54.792028 2036 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7c0ef689-b9f8-48ea-8e4d-ab890c660759-clustermesh-secrets\") on node \"srv-7lx2b.gb1.brightbox.com\" DevicePath \"\""
Dec 13 06:45:54.792252 kubelet[2036]: I1213 06:45:54.792047 2036 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7c0ef689-b9f8-48ea-8e4d-ab890c660759-hubble-tls\") on node \"srv-7lx2b.gb1.brightbox.com\" DevicePath \"\""
Dec 13 06:45:54.792252 kubelet[2036]: I1213 06:45:54.792062 2036 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7c0ef689-b9f8-48ea-8e4d-ab890c660759-hostproc\") on node \"srv-7lx2b.gb1.brightbox.com\" DevicePath \"\""
Dec 13 06:45:54.792252 kubelet[2036]: I1213 06:45:54.792076 2036 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7c0ef689-b9f8-48ea-8e4d-ab890c660759-bpf-maps\") on node \"srv-7lx2b.gb1.brightbox.com\" DevicePath \"\""
Dec 13 06:45:54.792252 kubelet[2036]: I1213 06:45:54.792089 2036 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7c0ef689-b9f8-48ea-8e4d-ab890c660759-host-proc-sys-kernel\") on node \"srv-7lx2b.gb1.brightbox.com\" DevicePath \"\""
Dec 13 06:45:54.792252 kubelet[2036]: I1213 06:45:54.792132 2036 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7c0ef689-b9f8-48ea-8e4d-ab890c660759-cilium-run\") on node \"srv-7lx2b.gb1.brightbox.com\" DevicePath \"\""
Dec 13 06:45:54.792973 kubelet[2036]: I1213 06:45:54.792150 2036 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7c0ef689-b9f8-48ea-8e4d-ab890c660759-host-proc-sys-net\") on node \"srv-7lx2b.gb1.brightbox.com\" DevicePath \"\""
Dec 13 06:45:54.792973 kubelet[2036]: I1213 06:45:54.792165 2036 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7c0ef689-b9f8-48ea-8e4d-ab890c660759-cilium-config-path\") on node \"srv-7lx2b.gb1.brightbox.com\" DevicePath \"\""
Dec 13 06:45:55.074808 kubelet[2036]: I1213 06:45:55.074611 2036 scope.go:117] "RemoveContainer" containerID="b5c3648267635afb66663e7c28902607388785916947d57705f4e9c49c3f59ad"
Dec 13 06:45:55.079031 env[1193]: time="2024-12-13T06:45:55.078966015Z" level=info msg="RemoveContainer for \"b5c3648267635afb66663e7c28902607388785916947d57705f4e9c49c3f59ad\""
Dec 13 06:45:55.084303 systemd[1]: Removed slice kubepods-burstable-pod7c0ef689_b9f8_48ea_8e4d_ab890c660759.slice.
Dec 13 06:45:55.084447 systemd[1]: kubepods-burstable-pod7c0ef689_b9f8_48ea_8e4d_ab890c660759.slice: Consumed 10.622s CPU time.
Dec 13 06:45:55.087600 env[1193]: time="2024-12-13T06:45:55.086238933Z" level=info msg="RemoveContainer for \"b5c3648267635afb66663e7c28902607388785916947d57705f4e9c49c3f59ad\" returns successfully"
Dec 13 06:45:55.088796 kubelet[2036]: I1213 06:45:55.088757 2036 scope.go:117] "RemoveContainer" containerID="1c483e67de125025805d895fd87e69a5ce2837a77108fc7e292ab01d776731cd"
Dec 13 06:45:55.092413 systemd[1]: Removed slice kubepods-besteffort-podd5c32872_421c_4b54_b402_4d33a265259a.slice.
Dec 13 06:45:55.105364 env[1193]: time="2024-12-13T06:45:55.104604023Z" level=info msg="RemoveContainer for \"1c483e67de125025805d895fd87e69a5ce2837a77108fc7e292ab01d776731cd\""
Dec 13 06:45:55.114037 env[1193]: time="2024-12-13T06:45:55.113960158Z" level=info msg="RemoveContainer for \"1c483e67de125025805d895fd87e69a5ce2837a77108fc7e292ab01d776731cd\" returns successfully"
Dec 13 06:45:55.117663 kubelet[2036]: I1213 06:45:55.117608 2036 scope.go:117] "RemoveContainer" containerID="7829bffcf661275a950406c4f44e5dd62e3685b98c0c0ae2c1abfce217797788"
Dec 13 06:45:55.121288 env[1193]: time="2024-12-13T06:45:55.120110748Z" level=info msg="RemoveContainer for \"7829bffcf661275a950406c4f44e5dd62e3685b98c0c0ae2c1abfce217797788\""
Dec 13 06:45:55.123945 env[1193]: time="2024-12-13T06:45:55.123869168Z" level=info msg="RemoveContainer for \"7829bffcf661275a950406c4f44e5dd62e3685b98c0c0ae2c1abfce217797788\" returns successfully"
Dec 13 06:45:55.127106 kubelet[2036]: I1213 06:45:55.127067 2036 scope.go:117] "RemoveContainer" containerID="92c7214fe74ca1211dc6817c3bd5904d395d04b745646e45260a484e1ca90230"
Dec 13 06:45:55.132940 env[1193]: time="2024-12-13T06:45:55.132369167Z" level=info msg="RemoveContainer for \"92c7214fe74ca1211dc6817c3bd5904d395d04b745646e45260a484e1ca90230\""
Dec 13 06:45:55.136859 env[1193]: time="2024-12-13T06:45:55.136798660Z" level=info msg="RemoveContainer for \"92c7214fe74ca1211dc6817c3bd5904d395d04b745646e45260a484e1ca90230\" returns successfully"
Dec 13 06:45:55.137292 kubelet[2036]: I1213 06:45:55.137257 2036 scope.go:117] "RemoveContainer" containerID="fef2a353b548c1264b6a267c90229da9c06ce1142713e94fd4e8885dd2352141"
Dec 13 06:45:55.140343 env[1193]: time="2024-12-13T06:45:55.140199086Z" level=info msg="RemoveContainer for \"fef2a353b548c1264b6a267c90229da9c06ce1142713e94fd4e8885dd2352141\""
Dec 13 06:45:55.154373 env[1193]: time="2024-12-13T06:45:55.154285258Z" level=info msg="RemoveContainer for \"fef2a353b548c1264b6a267c90229da9c06ce1142713e94fd4e8885dd2352141\" returns successfully"
Dec 13 06:45:55.155025 kubelet[2036]: I1213 06:45:55.154979 2036 scope.go:117] "RemoveContainer" containerID="b5c3648267635afb66663e7c28902607388785916947d57705f4e9c49c3f59ad"
Dec 13 06:45:55.155849 env[1193]: time="2024-12-13T06:45:55.155569959Z" level=error msg="ContainerStatus for \"b5c3648267635afb66663e7c28902607388785916947d57705f4e9c49c3f59ad\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b5c3648267635afb66663e7c28902607388785916947d57705f4e9c49c3f59ad\": not found"
Dec 13 06:45:55.158677 kubelet[2036]: E1213 06:45:55.158598 2036 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b5c3648267635afb66663e7c28902607388785916947d57705f4e9c49c3f59ad\": not found" containerID="b5c3648267635afb66663e7c28902607388785916947d57705f4e9c49c3f59ad"
Dec 13 06:45:55.159022 kubelet[2036]: I1213 06:45:55.158871 2036 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b5c3648267635afb66663e7c28902607388785916947d57705f4e9c49c3f59ad"} err="failed to get container status \"b5c3648267635afb66663e7c28902607388785916947d57705f4e9c49c3f59ad\": rpc error: code = NotFound desc = an error occurred when try to find container \"b5c3648267635afb66663e7c28902607388785916947d57705f4e9c49c3f59ad\": not found"
Dec 13 06:45:55.159431 kubelet[2036]: I1213 06:45:55.159192 2036 scope.go:117] "RemoveContainer" containerID="1c483e67de125025805d895fd87e69a5ce2837a77108fc7e292ab01d776731cd"
Dec 13 06:45:55.160286 env[1193]: time="2024-12-13T06:45:55.160207888Z" level=error msg="ContainerStatus for \"1c483e67de125025805d895fd87e69a5ce2837a77108fc7e292ab01d776731cd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1c483e67de125025805d895fd87e69a5ce2837a77108fc7e292ab01d776731cd\": not found"
Dec 13 06:45:55.160594 kubelet[2036]: E1213 06:45:55.160562 2036 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1c483e67de125025805d895fd87e69a5ce2837a77108fc7e292ab01d776731cd\": not found" containerID="1c483e67de125025805d895fd87e69a5ce2837a77108fc7e292ab01d776731cd"
Dec 13 06:45:55.160837 kubelet[2036]: I1213 06:45:55.160717 2036 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1c483e67de125025805d895fd87e69a5ce2837a77108fc7e292ab01d776731cd"} err="failed to get container status \"1c483e67de125025805d895fd87e69a5ce2837a77108fc7e292ab01d776731cd\": rpc error: code = NotFound desc = an error occurred when try to find container \"1c483e67de125025805d895fd87e69a5ce2837a77108fc7e292ab01d776731cd\": not found"
Dec 13 06:45:55.161024 kubelet[2036]: I1213 06:45:55.160997 2036 scope.go:117] "RemoveContainer" containerID="7829bffcf661275a950406c4f44e5dd62e3685b98c0c0ae2c1abfce217797788"
Dec 13 06:45:55.161567 env[1193]: time="2024-12-13T06:45:55.161478930Z" level=error msg="ContainerStatus for \"7829bffcf661275a950406c4f44e5dd62e3685b98c0c0ae2c1abfce217797788\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7829bffcf661275a950406c4f44e5dd62e3685b98c0c0ae2c1abfce217797788\": not found"
Dec 13 06:45:55.161798 kubelet[2036]: E1213 06:45:55.161744 2036 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7829bffcf661275a950406c4f44e5dd62e3685b98c0c0ae2c1abfce217797788\": not found" containerID="7829bffcf661275a950406c4f44e5dd62e3685b98c0c0ae2c1abfce217797788"
Dec 13 06:45:55.161998 kubelet[2036]: I1213 06:45:55.161964 2036 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7829bffcf661275a950406c4f44e5dd62e3685b98c0c0ae2c1abfce217797788"} err="failed to get container status \"7829bffcf661275a950406c4f44e5dd62e3685b98c0c0ae2c1abfce217797788\": rpc error: code = NotFound desc = an error occurred when try to find container \"7829bffcf661275a950406c4f44e5dd62e3685b98c0c0ae2c1abfce217797788\": not found"
Dec 13 06:45:55.162142 kubelet[2036]: I1213 06:45:55.162103 2036 scope.go:117] "RemoveContainer" containerID="92c7214fe74ca1211dc6817c3bd5904d395d04b745646e45260a484e1ca90230"
Dec 13 06:45:55.162540 env[1193]: time="2024-12-13T06:45:55.162463955Z" level=error msg="ContainerStatus for \"92c7214fe74ca1211dc6817c3bd5904d395d04b745646e45260a484e1ca90230\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"92c7214fe74ca1211dc6817c3bd5904d395d04b745646e45260a484e1ca90230\": not found"
Dec 13 06:45:55.162766 kubelet[2036]: E1213 06:45:55.162715 2036 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"92c7214fe74ca1211dc6817c3bd5904d395d04b745646e45260a484e1ca90230\": not found" containerID="92c7214fe74ca1211dc6817c3bd5904d395d04b745646e45260a484e1ca90230"
Dec 13 06:45:55.162959 kubelet[2036]: I1213 06:45:55.162781 2036 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"92c7214fe74ca1211dc6817c3bd5904d395d04b745646e45260a484e1ca90230"} err="failed to get container status 
\"92c7214fe74ca1211dc6817c3bd5904d395d04b745646e45260a484e1ca90230\": rpc error: code = NotFound desc = an error occurred when try to find container \"92c7214fe74ca1211dc6817c3bd5904d395d04b745646e45260a484e1ca90230\": not found" Dec 13 06:45:55.162959 kubelet[2036]: I1213 06:45:55.162816 2036 scope.go:117] "RemoveContainer" containerID="fef2a353b548c1264b6a267c90229da9c06ce1142713e94fd4e8885dd2352141" Dec 13 06:45:55.163496 env[1193]: time="2024-12-13T06:45:55.163397139Z" level=error msg="ContainerStatus for \"fef2a353b548c1264b6a267c90229da9c06ce1142713e94fd4e8885dd2352141\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fef2a353b548c1264b6a267c90229da9c06ce1142713e94fd4e8885dd2352141\": not found" Dec 13 06:45:55.164014 kubelet[2036]: E1213 06:45:55.163966 2036 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fef2a353b548c1264b6a267c90229da9c06ce1142713e94fd4e8885dd2352141\": not found" containerID="fef2a353b548c1264b6a267c90229da9c06ce1142713e94fd4e8885dd2352141" Dec 13 06:45:55.164102 kubelet[2036]: I1213 06:45:55.164009 2036 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fef2a353b548c1264b6a267c90229da9c06ce1142713e94fd4e8885dd2352141"} err="failed to get container status \"fef2a353b548c1264b6a267c90229da9c06ce1142713e94fd4e8885dd2352141\": rpc error: code = NotFound desc = an error occurred when try to find container \"fef2a353b548c1264b6a267c90229da9c06ce1142713e94fd4e8885dd2352141\": not found" Dec 13 06:45:55.164102 kubelet[2036]: I1213 06:45:55.164057 2036 scope.go:117] "RemoveContainer" containerID="7a9909511906f94d01fffd435dbd023f9eecc4f2fc497801fff76b81cc22d020" Dec 13 06:45:55.167308 env[1193]: time="2024-12-13T06:45:55.167253552Z" level=info msg="RemoveContainer for \"7a9909511906f94d01fffd435dbd023f9eecc4f2fc497801fff76b81cc22d020\"" Dec 13 
06:45:55.171609 env[1193]: time="2024-12-13T06:45:55.171507855Z" level=info msg="RemoveContainer for \"7a9909511906f94d01fffd435dbd023f9eecc4f2fc497801fff76b81cc22d020\" returns successfully" Dec 13 06:45:55.171896 kubelet[2036]: I1213 06:45:55.171864 2036 scope.go:117] "RemoveContainer" containerID="7a9909511906f94d01fffd435dbd023f9eecc4f2fc497801fff76b81cc22d020" Dec 13 06:45:55.172325 env[1193]: time="2024-12-13T06:45:55.172238692Z" level=error msg="ContainerStatus for \"7a9909511906f94d01fffd435dbd023f9eecc4f2fc497801fff76b81cc22d020\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7a9909511906f94d01fffd435dbd023f9eecc4f2fc497801fff76b81cc22d020\": not found" Dec 13 06:45:55.172522 kubelet[2036]: E1213 06:45:55.172479 2036 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7a9909511906f94d01fffd435dbd023f9eecc4f2fc497801fff76b81cc22d020\": not found" containerID="7a9909511906f94d01fffd435dbd023f9eecc4f2fc497801fff76b81cc22d020" Dec 13 06:45:55.172600 kubelet[2036]: I1213 06:45:55.172526 2036 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7a9909511906f94d01fffd435dbd023f9eecc4f2fc497801fff76b81cc22d020"} err="failed to get container status \"7a9909511906f94d01fffd435dbd023f9eecc4f2fc497801fff76b81cc22d020\": rpc error: code = NotFound desc = an error occurred when try to find container \"7a9909511906f94d01fffd435dbd023f9eecc4f2fc497801fff76b81cc22d020\": not found" Dec 13 06:45:55.312742 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b5c3648267635afb66663e7c28902607388785916947d57705f4e9c49c3f59ad-rootfs.mount: Deactivated successfully. Dec 13 06:45:55.312935 systemd[1]: var-lib-kubelet-pods-d5c32872\x2d421c\x2d4b54\x2db402\x2d4d33a265259a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drvw2n.mount: Deactivated successfully. 
Dec 13 06:45:55.313051 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9cba52b07a62963c95351f6ec2300a11ae06b5f4a388c81ac7abcfae88cdb111-rootfs.mount: Deactivated successfully.
Dec 13 06:45:55.313170 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9cba52b07a62963c95351f6ec2300a11ae06b5f4a388c81ac7abcfae88cdb111-shm.mount: Deactivated successfully.
Dec 13 06:45:55.313282 systemd[1]: var-lib-kubelet-pods-7c0ef689\x2db9f8\x2d48ea\x2d8e4d\x2dab890c660759-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Dec 13 06:45:55.313394 systemd[1]: var-lib-kubelet-pods-7c0ef689\x2db9f8\x2d48ea\x2d8e4d\x2dab890c660759-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2df4qxr.mount: Deactivated successfully.
Dec 13 06:45:55.313494 systemd[1]: var-lib-kubelet-pods-7c0ef689\x2db9f8\x2d48ea\x2d8e4d\x2dab890c660759-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Dec 13 06:45:55.564190 kubelet[2036]: I1213 06:45:55.564098 2036 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c0ef689-b9f8-48ea-8e4d-ab890c660759" path="/var/lib/kubelet/pods/7c0ef689-b9f8-48ea-8e4d-ab890c660759/volumes"
Dec 13 06:45:55.565966 kubelet[2036]: I1213 06:45:55.565939 2036 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5c32872-421c-4b54-b402-4d33a265259a" path="/var/lib/kubelet/pods/d5c32872-421c-4b54-b402-4d33a265259a/volumes"
Dec 13 06:45:56.313328 sshd[3567]: pam_unix(sshd:session): session closed for user core
Dec 13 06:45:56.318703 systemd[1]: sshd@23-10.244.18.198:22-139.178.89.65:59748.service: Deactivated successfully.
Dec 13 06:45:56.320247 systemd[1]: session-24.scope: Deactivated successfully.
Dec 13 06:45:56.320471 systemd[1]: session-24.scope: Consumed 1.629s CPU time.
Dec 13 06:45:56.321212 systemd-logind[1181]: Session 24 logged out. Waiting for processes to exit.
Dec 13 06:45:56.323534 systemd-logind[1181]: Removed session 24.
Dec 13 06:45:56.462449 systemd[1]: Started sshd@24-10.244.18.198:22-139.178.89.65:59752.service.
Dec 13 06:45:57.367334 sshd[3731]: Accepted publickey for core from 139.178.89.65 port 59752 ssh2: RSA SHA256:dQnQ6z9Pj/RNX8sNR4TqdGn8nHqynNIoEP6sXMH78jY
Dec 13 06:45:57.369558 sshd[3731]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 06:45:57.378678 systemd[1]: Started session-25.scope.
Dec 13 06:45:57.379997 systemd-logind[1181]: New session 25 of user core.
Dec 13 06:45:58.698356 kubelet[2036]: E1213 06:45:58.698255 2036 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 13 06:45:58.782230 kubelet[2036]: I1213 06:45:58.782159 2036 topology_manager.go:215] "Topology Admit Handler" podUID="60b4a283-cdf9-4d11-87f4-38449ca70a3a" podNamespace="kube-system" podName="cilium-zxpjc"
Dec 13 06:45:58.782664 kubelet[2036]: E1213 06:45:58.782624 2036 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7c0ef689-b9f8-48ea-8e4d-ab890c660759" containerName="mount-cgroup"
Dec 13 06:45:58.782833 kubelet[2036]: E1213 06:45:58.782808 2036 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7c0ef689-b9f8-48ea-8e4d-ab890c660759" containerName="apply-sysctl-overwrites"
Dec 13 06:45:58.783005 kubelet[2036]: E1213 06:45:58.782981 2036 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7c0ef689-b9f8-48ea-8e4d-ab890c660759" containerName="mount-bpf-fs"
Dec 13 06:45:58.783173 kubelet[2036]: E1213 06:45:58.783150 2036 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d5c32872-421c-4b54-b402-4d33a265259a" containerName="cilium-operator"
Dec 13 06:45:58.783333 kubelet[2036]: E1213 06:45:58.783311 2036 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7c0ef689-b9f8-48ea-8e4d-ab890c660759" containerName="cilium-agent"
Dec 13 06:45:58.783478 kubelet[2036]: E1213 06:45:58.783447 2036 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7c0ef689-b9f8-48ea-8e4d-ab890c660759" containerName="clean-cilium-state"
Dec 13 06:45:58.783739 kubelet[2036]: I1213 06:45:58.783703 2036 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5c32872-421c-4b54-b402-4d33a265259a" containerName="cilium-operator"
Dec 13 06:45:58.783885 kubelet[2036]: I1213 06:45:58.783862 2036 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c0ef689-b9f8-48ea-8e4d-ab890c660759" containerName="cilium-agent"
Dec 13 06:45:58.796332 systemd[1]: Created slice kubepods-burstable-pod60b4a283_cdf9_4d11_87f4_38449ca70a3a.slice.
Dec 13 06:45:58.816510 kubelet[2036]: I1213 06:45:58.816451 2036 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/60b4a283-cdf9-4d11-87f4-38449ca70a3a-cilium-cgroup\") pod \"cilium-zxpjc\" (UID: \"60b4a283-cdf9-4d11-87f4-38449ca70a3a\") " pod="kube-system/cilium-zxpjc"
Dec 13 06:45:58.816796 kubelet[2036]: I1213 06:45:58.816758 2036 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/60b4a283-cdf9-4d11-87f4-38449ca70a3a-cilium-ipsec-secrets\") pod \"cilium-zxpjc\" (UID: \"60b4a283-cdf9-4d11-87f4-38449ca70a3a\") " pod="kube-system/cilium-zxpjc"
Dec 13 06:45:58.817019 kubelet[2036]: I1213 06:45:58.816984 2036 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/60b4a283-cdf9-4d11-87f4-38449ca70a3a-etc-cni-netd\") pod \"cilium-zxpjc\" (UID: \"60b4a283-cdf9-4d11-87f4-38449ca70a3a\") " pod="kube-system/cilium-zxpjc"
Dec 13 06:45:58.817204 kubelet[2036]: I1213 06:45:58.817167 2036 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/60b4a283-cdf9-4d11-87f4-38449ca70a3a-lib-modules\") pod \"cilium-zxpjc\" (UID: \"60b4a283-cdf9-4d11-87f4-38449ca70a3a\") " pod="kube-system/cilium-zxpjc"
Dec 13 06:45:58.817401 kubelet[2036]: I1213 06:45:58.817356 2036 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8dw6\" (UniqueName: \"kubernetes.io/projected/60b4a283-cdf9-4d11-87f4-38449ca70a3a-kube-api-access-c8dw6\") pod \"cilium-zxpjc\" (UID: \"60b4a283-cdf9-4d11-87f4-38449ca70a3a\") " pod="kube-system/cilium-zxpjc"
Dec 13 06:45:58.817614 kubelet[2036]: I1213 06:45:58.817578 2036 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/60b4a283-cdf9-4d11-87f4-38449ca70a3a-clustermesh-secrets\") pod \"cilium-zxpjc\" (UID: \"60b4a283-cdf9-4d11-87f4-38449ca70a3a\") " pod="kube-system/cilium-zxpjc"
Dec 13 06:45:58.818668 kubelet[2036]: I1213 06:45:58.818620 2036 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/60b4a283-cdf9-4d11-87f4-38449ca70a3a-hostproc\") pod \"cilium-zxpjc\" (UID: \"60b4a283-cdf9-4d11-87f4-38449ca70a3a\") " pod="kube-system/cilium-zxpjc"
Dec 13 06:45:58.818888 kubelet[2036]: I1213 06:45:58.818826 2036 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/60b4a283-cdf9-4d11-87f4-38449ca70a3a-cni-path\") pod \"cilium-zxpjc\" (UID: \"60b4a283-cdf9-4d11-87f4-38449ca70a3a\") " pod="kube-system/cilium-zxpjc"
Dec 13 06:45:58.819134 kubelet[2036]: I1213 06:45:58.819062 2036 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/60b4a283-cdf9-4d11-87f4-38449ca70a3a-bpf-maps\") pod \"cilium-zxpjc\" (UID: \"60b4a283-cdf9-4d11-87f4-38449ca70a3a\") " pod="kube-system/cilium-zxpjc"
Dec 13 06:45:58.819365 kubelet[2036]: I1213 06:45:58.819282 2036 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/60b4a283-cdf9-4d11-87f4-38449ca70a3a-hubble-tls\") pod \"cilium-zxpjc\" (UID: \"60b4a283-cdf9-4d11-87f4-38449ca70a3a\") " pod="kube-system/cilium-zxpjc"
Dec 13 06:45:58.819535 kubelet[2036]: I1213 06:45:58.819510 2036 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/60b4a283-cdf9-4d11-87f4-38449ca70a3a-xtables-lock\") pod \"cilium-zxpjc\" (UID: \"60b4a283-cdf9-4d11-87f4-38449ca70a3a\") " pod="kube-system/cilium-zxpjc"
Dec 13 06:45:58.819745 kubelet[2036]: I1213 06:45:58.819688 2036 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/60b4a283-cdf9-4d11-87f4-38449ca70a3a-cilium-run\") pod \"cilium-zxpjc\" (UID: \"60b4a283-cdf9-4d11-87f4-38449ca70a3a\") " pod="kube-system/cilium-zxpjc"
Dec 13 06:45:58.819984 kubelet[2036]: I1213 06:45:58.819882 2036 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/60b4a283-cdf9-4d11-87f4-38449ca70a3a-cilium-config-path\") pod \"cilium-zxpjc\" (UID: \"60b4a283-cdf9-4d11-87f4-38449ca70a3a\") " pod="kube-system/cilium-zxpjc"
Dec 13 06:45:58.820187 kubelet[2036]: I1213 06:45:58.820161 2036 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/60b4a283-cdf9-4d11-87f4-38449ca70a3a-host-proc-sys-net\") pod \"cilium-zxpjc\" (UID: \"60b4a283-cdf9-4d11-87f4-38449ca70a3a\") " pod="kube-system/cilium-zxpjc"
Dec 13 06:45:58.820417 kubelet[2036]: I1213 06:45:58.820369 2036 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/60b4a283-cdf9-4d11-87f4-38449ca70a3a-host-proc-sys-kernel\") pod \"cilium-zxpjc\" (UID: \"60b4a283-cdf9-4d11-87f4-38449ca70a3a\") " pod="kube-system/cilium-zxpjc"
Dec 13 06:45:58.893322 sshd[3731]: pam_unix(sshd:session): session closed for user core
Dec 13 06:45:58.898015 systemd-logind[1181]: Session 25 logged out. Waiting for processes to exit.
Dec 13 06:45:58.898141 systemd[1]: sshd@24-10.244.18.198:22-139.178.89.65:59752.service: Deactivated successfully.
Dec 13 06:45:58.899300 systemd[1]: session-25.scope: Deactivated successfully.
Dec 13 06:45:58.901149 systemd-logind[1181]: Removed session 25.
Dec 13 06:45:59.043241 systemd[1]: Started sshd@25-10.244.18.198:22-139.178.89.65:48992.service.
Dec 13 06:45:59.111729 env[1193]: time="2024-12-13T06:45:59.111670530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zxpjc,Uid:60b4a283-cdf9-4d11-87f4-38449ca70a3a,Namespace:kube-system,Attempt:0,}"
Dec 13 06:45:59.145590 env[1193]: time="2024-12-13T06:45:59.145481137Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 06:45:59.145957 env[1193]: time="2024-12-13T06:45:59.145543259Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 06:45:59.146165 env[1193]: time="2024-12-13T06:45:59.146094363Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 06:45:59.146707 env[1193]: time="2024-12-13T06:45:59.146533419Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/02be1da51462778a7907e2bb7d4907235d48b5a50aae7ded5e92dea0fb5f3fe2 pid=3755 runtime=io.containerd.runc.v2
Dec 13 06:45:59.169089 systemd[1]: Started cri-containerd-02be1da51462778a7907e2bb7d4907235d48b5a50aae7ded5e92dea0fb5f3fe2.scope.
Dec 13 06:45:59.211062 env[1193]: time="2024-12-13T06:45:59.210990093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zxpjc,Uid:60b4a283-cdf9-4d11-87f4-38449ca70a3a,Namespace:kube-system,Attempt:0,} returns sandbox id \"02be1da51462778a7907e2bb7d4907235d48b5a50aae7ded5e92dea0fb5f3fe2\""
Dec 13 06:45:59.217725 env[1193]: time="2024-12-13T06:45:59.217650264Z" level=info msg="CreateContainer within sandbox \"02be1da51462778a7907e2bb7d4907235d48b5a50aae7ded5e92dea0fb5f3fe2\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 06:45:59.231707 env[1193]: time="2024-12-13T06:45:59.231624582Z" level=info msg="CreateContainer within sandbox \"02be1da51462778a7907e2bb7d4907235d48b5a50aae7ded5e92dea0fb5f3fe2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3657190de045dcae5712cca71a88ef70889ebdd93cf4deea672786b1378d4bb1\""
Dec 13 06:45:59.234240 env[1193]: time="2024-12-13T06:45:59.233050518Z" level=info msg="StartContainer for \"3657190de045dcae5712cca71a88ef70889ebdd93cf4deea672786b1378d4bb1\""
Dec 13 06:45:59.256697 systemd[1]: Started cri-containerd-3657190de045dcae5712cca71a88ef70889ebdd93cf4deea672786b1378d4bb1.scope.
Dec 13 06:45:59.293456 systemd[1]: cri-containerd-3657190de045dcae5712cca71a88ef70889ebdd93cf4deea672786b1378d4bb1.scope: Deactivated successfully.
Dec 13 06:45:59.314153 env[1193]: time="2024-12-13T06:45:59.314053498Z" level=info msg="shim disconnected" id=3657190de045dcae5712cca71a88ef70889ebdd93cf4deea672786b1378d4bb1
Dec 13 06:45:59.314516 env[1193]: time="2024-12-13T06:45:59.314482302Z" level=warning msg="cleaning up after shim disconnected" id=3657190de045dcae5712cca71a88ef70889ebdd93cf4deea672786b1378d4bb1 namespace=k8s.io
Dec 13 06:45:59.314674 env[1193]: time="2024-12-13T06:45:59.314644917Z" level=info msg="cleaning up dead shim"
Dec 13 06:45:59.325897 env[1193]: time="2024-12-13T06:45:59.325836102Z" level=warning msg="cleanup warnings time=\"2024-12-13T06:45:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3813 runtime=io.containerd.runc.v2\ntime=\"2024-12-13T06:45:59Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/3657190de045dcae5712cca71a88ef70889ebdd93cf4deea672786b1378d4bb1/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n"
Dec 13 06:45:59.326405 env[1193]: time="2024-12-13T06:45:59.326228291Z" level=error msg="copy shim log" error="read /proc/self/fd/41: file already closed"
Dec 13 06:45:59.326782 env[1193]: time="2024-12-13T06:45:59.326720014Z" level=error msg="Failed to pipe stdout of container \"3657190de045dcae5712cca71a88ef70889ebdd93cf4deea672786b1378d4bb1\"" error="reading from a closed fifo"
Dec 13 06:45:59.327286 env[1193]: time="2024-12-13T06:45:59.327238468Z" level=error msg="Failed to pipe stderr of container \"3657190de045dcae5712cca71a88ef70889ebdd93cf4deea672786b1378d4bb1\"" error="reading from a closed fifo"
Dec 13 06:45:59.328615 env[1193]: time="2024-12-13T06:45:59.328559451Z" level=error msg="StartContainer for \"3657190de045dcae5712cca71a88ef70889ebdd93cf4deea672786b1378d4bb1\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown"
Dec 13 06:45:59.330023 kubelet[2036]: E1213 06:45:59.329761 2036 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="3657190de045dcae5712cca71a88ef70889ebdd93cf4deea672786b1378d4bb1"
Dec 13 06:45:59.332976 kubelet[2036]: E1213 06:45:59.332932 2036 kuberuntime_manager.go:1256] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount;
Dec 13 06:45:59.332976 kubelet[2036]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
Dec 13 06:45:59.332976 kubelet[2036]: rm /hostbin/cilium-mount
Dec 13 06:45:59.333190 kubelet[2036]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c8dw6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-zxpjc_kube-system(60b4a283-cdf9-4d11-87f4-38449ca70a3a): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown
Dec 13 06:45:59.333752 kubelet[2036]: E1213 06:45:59.333709 2036 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-zxpjc" podUID="60b4a283-cdf9-4d11-87f4-38449ca70a3a"
Dec 13 06:45:59.937881 sshd[3746]: Accepted publickey for core from 139.178.89.65 port 48992 ssh2: RSA SHA256:dQnQ6z9Pj/RNX8sNR4TqdGn8nHqynNIoEP6sXMH78jY
Dec 13 06:45:59.939293 sshd[3746]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 06:45:59.948515 systemd[1]: Started session-26.scope.
Dec 13 06:45:59.948896 systemd-logind[1181]: New session 26 of user core.
Dec 13 06:46:00.097862 env[1193]: time="2024-12-13T06:46:00.097801792Z" level=info msg="CreateContainer within sandbox \"02be1da51462778a7907e2bb7d4907235d48b5a50aae7ded5e92dea0fb5f3fe2\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}"
Dec 13 06:46:00.128707 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3573247775.mount: Deactivated successfully.
Dec 13 06:46:00.140268 env[1193]: time="2024-12-13T06:46:00.139831103Z" level=info msg="CreateContainer within sandbox \"02be1da51462778a7907e2bb7d4907235d48b5a50aae7ded5e92dea0fb5f3fe2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"0c859c51ea336020cc051ca3beade30c337385ea5732bace2c198612e56fbddb\""
Dec 13 06:46:00.142722 env[1193]: time="2024-12-13T06:46:00.141574382Z" level=info msg="StartContainer for \"0c859c51ea336020cc051ca3beade30c337385ea5732bace2c198612e56fbddb\""
Dec 13 06:46:00.182994 systemd[1]: Started cri-containerd-0c859c51ea336020cc051ca3beade30c337385ea5732bace2c198612e56fbddb.scope.
Dec 13 06:46:00.196679 systemd[1]: cri-containerd-0c859c51ea336020cc051ca3beade30c337385ea5732bace2c198612e56fbddb.scope: Deactivated successfully.
Dec 13 06:46:00.208864 env[1193]: time="2024-12-13T06:46:00.208796447Z" level=info msg="shim disconnected" id=0c859c51ea336020cc051ca3beade30c337385ea5732bace2c198612e56fbddb
Dec 13 06:46:00.208864 env[1193]: time="2024-12-13T06:46:00.208872150Z" level=warning msg="cleaning up after shim disconnected" id=0c859c51ea336020cc051ca3beade30c337385ea5732bace2c198612e56fbddb namespace=k8s.io
Dec 13 06:46:00.209187 env[1193]: time="2024-12-13T06:46:00.208889886Z" level=info msg="cleaning up dead shim"
Dec 13 06:46:00.220659 env[1193]: time="2024-12-13T06:46:00.220589916Z" level=warning msg="cleanup warnings time=\"2024-12-13T06:46:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3850 runtime=io.containerd.runc.v2\ntime=\"2024-12-13T06:46:00Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/0c859c51ea336020cc051ca3beade30c337385ea5732bace2c198612e56fbddb/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n"
Dec 13 06:46:00.221064 env[1193]: time="2024-12-13T06:46:00.220981232Z" level=error msg="copy shim log" error="read /proc/self/fd/41: file already closed"
Dec 13 06:46:00.221613 env[1193]: time="2024-12-13T06:46:00.221561601Z" level=error msg="Failed to pipe stdout of container \"0c859c51ea336020cc051ca3beade30c337385ea5732bace2c198612e56fbddb\"" error="reading from a closed fifo"
Dec 13 06:46:00.222708 env[1193]: time="2024-12-13T06:46:00.222629924Z" level=error msg="Failed to pipe stderr of container \"0c859c51ea336020cc051ca3beade30c337385ea5732bace2c198612e56fbddb\"" error="reading from a closed fifo"
Dec 13 06:46:00.224229 env[1193]: time="2024-12-13T06:46:00.224172355Z" level=error msg="StartContainer for \"0c859c51ea336020cc051ca3beade30c337385ea5732bace2c198612e56fbddb\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown"
Dec 13 06:46:00.224595 kubelet[2036]: E1213 06:46:00.224534 2036 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="0c859c51ea336020cc051ca3beade30c337385ea5732bace2c198612e56fbddb"
Dec 13 06:46:00.229423 kubelet[2036]: E1213 06:46:00.229382 2036 kuberuntime_manager.go:1256] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount;
Dec 13 06:46:00.229423 kubelet[2036]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
Dec 13 06:46:00.229423 kubelet[2036]: rm /hostbin/cilium-mount
Dec 13 06:46:00.229423 kubelet[2036]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c8dw6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-zxpjc_kube-system(60b4a283-cdf9-4d11-87f4-38449ca70a3a): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown
Dec 13 06:46:00.229816 kubelet[2036]: E1213 06:46:00.229453 2036 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-zxpjc" podUID="60b4a283-cdf9-4d11-87f4-38449ca70a3a"
Dec 13 06:46:00.694662 sshd[3746]: pam_unix(sshd:session): session closed for user core
Dec 13 06:46:00.698985 systemd[1]: sshd@25-10.244.18.198:22-139.178.89.65:48992.service: Deactivated successfully.
Dec 13 06:46:00.700008 systemd[1]: session-26.scope: Deactivated successfully.
Dec 13 06:46:00.700808 systemd-logind[1181]: Session 26 logged out. Waiting for processes to exit.
Dec 13 06:46:00.701931 systemd-logind[1181]: Removed session 26.
Dec 13 06:46:00.842608 systemd[1]: Started sshd@26-10.244.18.198:22-139.178.89.65:49000.service.
Dec 13 06:46:00.941654 systemd[1]: run-containerd-runc-k8s.io-0c859c51ea336020cc051ca3beade30c337385ea5732bace2c198612e56fbddb-runc.iE3Qnl.mount: Deactivated successfully.
Dec 13 06:46:00.942097 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0c859c51ea336020cc051ca3beade30c337385ea5732bace2c198612e56fbddb-rootfs.mount: Deactivated successfully.
Dec 13 06:46:01.099165 kubelet[2036]: I1213 06:46:01.099121 2036 scope.go:117] "RemoveContainer" containerID="3657190de045dcae5712cca71a88ef70889ebdd93cf4deea672786b1378d4bb1"
Dec 13 06:46:01.100367 env[1193]: time="2024-12-13T06:46:01.100217985Z" level=info msg="StopPodSandbox for \"02be1da51462778a7907e2bb7d4907235d48b5a50aae7ded5e92dea0fb5f3fe2\""
Dec 13 06:46:01.100582 env[1193]: time="2024-12-13T06:46:01.100536700Z" level=info msg="Container to stop \"3657190de045dcae5712cca71a88ef70889ebdd93cf4deea672786b1378d4bb1\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 06:46:01.101469 env[1193]: time="2024-12-13T06:46:01.101321194Z" level=info msg="Container to stop \"0c859c51ea336020cc051ca3beade30c337385ea5732bace2c198612e56fbddb\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 06:46:01.106391 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-02be1da51462778a7907e2bb7d4907235d48b5a50aae7ded5e92dea0fb5f3fe2-shm.mount: Deactivated successfully.
Dec 13 06:46:01.108075 env[1193]: time="2024-12-13T06:46:01.107994344Z" level=info msg="RemoveContainer for \"3657190de045dcae5712cca71a88ef70889ebdd93cf4deea672786b1378d4bb1\""
Dec 13 06:46:01.112208 env[1193]: time="2024-12-13T06:46:01.112166029Z" level=info msg="RemoveContainer for \"3657190de045dcae5712cca71a88ef70889ebdd93cf4deea672786b1378d4bb1\" returns successfully"
Dec 13 06:46:01.120501 systemd[1]: cri-containerd-02be1da51462778a7907e2bb7d4907235d48b5a50aae7ded5e92dea0fb5f3fe2.scope: Deactivated successfully.
Dec 13 06:46:01.170677 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-02be1da51462778a7907e2bb7d4907235d48b5a50aae7ded5e92dea0fb5f3fe2-rootfs.mount: Deactivated successfully.
Dec 13 06:46:01.182082 env[1193]: time="2024-12-13T06:46:01.181965416Z" level=info msg="shim disconnected" id=02be1da51462778a7907e2bb7d4907235d48b5a50aae7ded5e92dea0fb5f3fe2
Dec 13 06:46:01.182082 env[1193]: time="2024-12-13T06:46:01.182035101Z" level=warning msg="cleaning up after shim disconnected" id=02be1da51462778a7907e2bb7d4907235d48b5a50aae7ded5e92dea0fb5f3fe2 namespace=k8s.io
Dec 13 06:46:01.182844 env[1193]: time="2024-12-13T06:46:01.182082520Z" level=info msg="cleaning up dead shim"
Dec 13 06:46:01.198409 env[1193]: time="2024-12-13T06:46:01.198346802Z" level=warning msg="cleanup warnings time=\"2024-12-13T06:46:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3893 runtime=io.containerd.runc.v2\n"
Dec 13 06:46:01.199303 env[1193]: time="2024-12-13T06:46:01.199263228Z" level=info msg="TearDown network for sandbox \"02be1da51462778a7907e2bb7d4907235d48b5a50aae7ded5e92dea0fb5f3fe2\" successfully"
Dec 13 06:46:01.199445 env[1193]: time="2024-12-13T06:46:01.199412212Z" level=info msg="StopPodSandbox for \"02be1da51462778a7907e2bb7d4907235d48b5a50aae7ded5e92dea0fb5f3fe2\" returns successfully"
Dec 13 06:46:01.341931 kubelet[2036]: I1213 06:46:01.341832 2036 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/60b4a283-cdf9-4d11-87f4-38449ca70a3a-lib-modules\") pod \"60b4a283-cdf9-4d11-87f4-38449ca70a3a\" (UID: \"60b4a283-cdf9-4d11-87f4-38449ca70a3a\") "
Dec 13 06:46:01.341931 kubelet[2036]: I1213 06:46:01.341936 2036 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/60b4a283-cdf9-4d11-87f4-38449ca70a3a-hostproc\") pod \"60b4a283-cdf9-4d11-87f4-38449ca70a3a\" (UID: \"60b4a283-cdf9-4d11-87f4-38449ca70a3a\") "
Dec 13 06:46:01.342628 kubelet[2036]: I1213 06:46:01.341969 2036 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/60b4a283-cdf9-4d11-87f4-38449ca70a3a-cni-path\") pod \"60b4a283-cdf9-4d11-87f4-38449ca70a3a\" (UID: \"60b4a283-cdf9-4d11-87f4-38449ca70a3a\") "
Dec 13 06:46:01.342628 kubelet[2036]: I1213 06:46:01.342026 2036 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/60b4a283-cdf9-4d11-87f4-38449ca70a3a-hubble-tls\") pod \"60b4a283-cdf9-4d11-87f4-38449ca70a3a\" (UID: \"60b4a283-cdf9-4d11-87f4-38449ca70a3a\") "
Dec 13 06:46:01.342628 kubelet[2036]: I1213 06:46:01.342127 2036 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/60b4a283-cdf9-4d11-87f4-38449ca70a3a-host-proc-sys-net\") pod \"60b4a283-cdf9-4d11-87f4-38449ca70a3a\" (UID: \"60b4a283-cdf9-4d11-87f4-38449ca70a3a\") "
Dec 13 06:46:01.342628 kubelet[2036]: I1213 06:46:01.342161 2036 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/60b4a283-cdf9-4d11-87f4-38449ca70a3a-host-proc-sys-kernel\") pod \"60b4a283-cdf9-4d11-87f4-38449ca70a3a\" (UID: \"60b4a283-cdf9-4d11-87f4-38449ca70a3a\") "
Dec 13 06:46:01.342628 kubelet[2036]: I1213 06:46:01.342213 2036 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/60b4a283-cdf9-4d11-87f4-38449ca70a3a-cilium-ipsec-secrets\") pod \"60b4a283-cdf9-4d11-87f4-38449ca70a3a\" (UID: \"60b4a283-cdf9-4d11-87f4-38449ca70a3a\") "
Dec 13 06:46:01.342628 kubelet[2036]: I1213 06:46:01.342240 2036 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/60b4a283-cdf9-4d11-87f4-38449ca70a3a-cilium-run\") pod \"60b4a283-cdf9-4d11-87f4-38449ca70a3a\" (UID: \"60b4a283-cdf9-4d11-87f4-38449ca70a3a\") "
Dec 13 06:46:01.342628 kubelet[2036]: I1213 06:46:01.342285 2036 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/60b4a283-cdf9-4d11-87f4-38449ca70a3a-etc-cni-netd\") pod \"60b4a283-cdf9-4d11-87f4-38449ca70a3a\" (UID: \"60b4a283-cdf9-4d11-87f4-38449ca70a3a\") "
Dec 13 06:46:01.342628 kubelet[2036]: I1213 06:46:01.342314 2036 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/60b4a283-cdf9-4d11-87f4-38449ca70a3a-clustermesh-secrets\") pod \"60b4a283-cdf9-4d11-87f4-38449ca70a3a\" (UID: \"60b4a283-cdf9-4d11-87f4-38449ca70a3a\") "
Dec 13 06:46:01.342628 kubelet[2036]: I1213 06:46:01.342357 2036 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/60b4a283-cdf9-4d11-87f4-38449ca70a3a-bpf-maps\") pod \"60b4a283-cdf9-4d11-87f4-38449ca70a3a\" (UID: \"60b4a283-cdf9-4d11-87f4-38449ca70a3a\") "
Dec 13 06:46:01.342628 kubelet[2036]: I1213 06:46:01.342385 2036 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/60b4a283-cdf9-4d11-87f4-38449ca70a3a-cilium-cgroup\") pod \"60b4a283-cdf9-4d11-87f4-38449ca70a3a\" (UID: \"60b4a283-cdf9-4d11-87f4-38449ca70a3a\") "
Dec 13 06:46:01.342628 kubelet[2036]: I1213 06:46:01.342410 2036 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/60b4a283-cdf9-4d11-87f4-38449ca70a3a-xtables-lock\") pod \"60b4a283-cdf9-4d11-87f4-38449ca70a3a\" (UID: \"60b4a283-cdf9-4d11-87f4-38449ca70a3a\") "
Dec 13 06:46:01.342628 kubelet[2036]: I1213 06:46:01.342463 2036 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/60b4a283-cdf9-4d11-87f4-38449ca70a3a-cilium-config-path\") pod \"60b4a283-cdf9-4d11-87f4-38449ca70a3a\" (UID: \"60b4a283-cdf9-4d11-87f4-38449ca70a3a\") "
Dec 13 06:46:01.342628 kubelet[2036]: I1213 06:46:01.342493 2036 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c8dw6\" (UniqueName: \"kubernetes.io/projected/60b4a283-cdf9-4d11-87f4-38449ca70a3a-kube-api-access-c8dw6\") pod \"60b4a283-cdf9-4d11-87f4-38449ca70a3a\" (UID: \"60b4a283-cdf9-4d11-87f4-38449ca70a3a\") "
Dec 13 06:46:01.343516 kubelet[2036]: I1213 06:46:01.343480 2036 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/60b4a283-cdf9-4d11-87f4-38449ca70a3a-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "60b4a283-cdf9-4d11-87f4-38449ca70a3a" (UID: "60b4a283-cdf9-4d11-87f4-38449ca70a3a"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 06:46:01.343594 kubelet[2036]: I1213 06:46:01.343548 2036 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/60b4a283-cdf9-4d11-87f4-38449ca70a3a-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "60b4a283-cdf9-4d11-87f4-38449ca70a3a" (UID: "60b4a283-cdf9-4d11-87f4-38449ca70a3a"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 06:46:01.343722 kubelet[2036]: I1213 06:46:01.343689 2036 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/60b4a283-cdf9-4d11-87f4-38449ca70a3a-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "60b4a283-cdf9-4d11-87f4-38449ca70a3a" (UID: "60b4a283-cdf9-4d11-87f4-38449ca70a3a"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 06:46:01.343880 kubelet[2036]: I1213 06:46:01.343851 2036 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/60b4a283-cdf9-4d11-87f4-38449ca70a3a-hostproc" (OuterVolumeSpecName: "hostproc") pod "60b4a283-cdf9-4d11-87f4-38449ca70a3a" (UID: "60b4a283-cdf9-4d11-87f4-38449ca70a3a"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 06:46:01.344097 kubelet[2036]: I1213 06:46:01.344031 2036 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/60b4a283-cdf9-4d11-87f4-38449ca70a3a-cni-path" (OuterVolumeSpecName: "cni-path") pod "60b4a283-cdf9-4d11-87f4-38449ca70a3a" (UID: "60b4a283-cdf9-4d11-87f4-38449ca70a3a"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 06:46:01.344281 kubelet[2036]: I1213 06:46:01.344254 2036 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/60b4a283-cdf9-4d11-87f4-38449ca70a3a-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "60b4a283-cdf9-4d11-87f4-38449ca70a3a" (UID: "60b4a283-cdf9-4d11-87f4-38449ca70a3a"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 06:46:01.346407 kubelet[2036]: I1213 06:46:01.346371 2036 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/60b4a283-cdf9-4d11-87f4-38449ca70a3a-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "60b4a283-cdf9-4d11-87f4-38449ca70a3a" (UID: "60b4a283-cdf9-4d11-87f4-38449ca70a3a"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 06:46:01.346509 kubelet[2036]: I1213 06:46:01.346429 2036 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/60b4a283-cdf9-4d11-87f4-38449ca70a3a-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "60b4a283-cdf9-4d11-87f4-38449ca70a3a" (UID: "60b4a283-cdf9-4d11-87f4-38449ca70a3a"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 06:46:01.346509 kubelet[2036]: I1213 06:46:01.346465 2036 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/60b4a283-cdf9-4d11-87f4-38449ca70a3a-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "60b4a283-cdf9-4d11-87f4-38449ca70a3a" (UID: "60b4a283-cdf9-4d11-87f4-38449ca70a3a"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 06:46:01.351283 kubelet[2036]: I1213 06:46:01.349238 2036 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/60b4a283-cdf9-4d11-87f4-38449ca70a3a-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "60b4a283-cdf9-4d11-87f4-38449ca70a3a" (UID: "60b4a283-cdf9-4d11-87f4-38449ca70a3a"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 06:46:01.353835 systemd[1]: var-lib-kubelet-pods-60b4a283\x2dcdf9\x2d4d11\x2d87f4\x2d38449ca70a3a-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Dec 13 06:46:01.358262 systemd[1]: var-lib-kubelet-pods-60b4a283\x2dcdf9\x2d4d11\x2d87f4\x2d38449ca70a3a-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Dec 13 06:46:01.360014 kubelet[2036]: I1213 06:46:01.350488 2036 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60b4a283-cdf9-4d11-87f4-38449ca70a3a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "60b4a283-cdf9-4d11-87f4-38449ca70a3a" (UID: "60b4a283-cdf9-4d11-87f4-38449ca70a3a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 06:46:01.360164 kubelet[2036]: I1213 06:46:01.360124 2036 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60b4a283-cdf9-4d11-87f4-38449ca70a3a-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "60b4a283-cdf9-4d11-87f4-38449ca70a3a" (UID: "60b4a283-cdf9-4d11-87f4-38449ca70a3a"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 06:46:01.360272 kubelet[2036]: I1213 06:46:01.360252 2036 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60b4a283-cdf9-4d11-87f4-38449ca70a3a-kube-api-access-c8dw6" (OuterVolumeSpecName: "kube-api-access-c8dw6") pod "60b4a283-cdf9-4d11-87f4-38449ca70a3a" (UID: "60b4a283-cdf9-4d11-87f4-38449ca70a3a"). InnerVolumeSpecName "kube-api-access-c8dw6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 06:46:01.360375 kubelet[2036]: I1213 06:46:01.360334 2036 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60b4a283-cdf9-4d11-87f4-38449ca70a3a-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "60b4a283-cdf9-4d11-87f4-38449ca70a3a" (UID: "60b4a283-cdf9-4d11-87f4-38449ca70a3a"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 06:46:01.362730 kubelet[2036]: I1213 06:46:01.362675 2036 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60b4a283-cdf9-4d11-87f4-38449ca70a3a-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "60b4a283-cdf9-4d11-87f4-38449ca70a3a" (UID: "60b4a283-cdf9-4d11-87f4-38449ca70a3a"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 06:46:01.443179 kubelet[2036]: I1213 06:46:01.443127 2036 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/60b4a283-cdf9-4d11-87f4-38449ca70a3a-cilium-cgroup\") on node \"srv-7lx2b.gb1.brightbox.com\" DevicePath \"\""
Dec 13 06:46:01.443614 kubelet[2036]: I1213 06:46:01.443589 2036 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/60b4a283-cdf9-4d11-87f4-38449ca70a3a-xtables-lock\") on node \"srv-7lx2b.gb1.brightbox.com\" DevicePath \"\""
Dec 13 06:46:01.443784 kubelet[2036]: I1213 06:46:01.443759 2036 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/60b4a283-cdf9-4d11-87f4-38449ca70a3a-cilium-config-path\") on node \"srv-7lx2b.gb1.brightbox.com\" DevicePath \"\""
Dec 13 06:46:01.443974 kubelet[2036]: I1213 06:46:01.443949 2036 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-c8dw6\" (UniqueName: \"kubernetes.io/projected/60b4a283-cdf9-4d11-87f4-38449ca70a3a-kube-api-access-c8dw6\") on node \"srv-7lx2b.gb1.brightbox.com\" DevicePath \"\""
Dec 13 06:46:01.444194 kubelet[2036]: I1213 06:46:01.444168 2036 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/60b4a283-cdf9-4d11-87f4-38449ca70a3a-lib-modules\") on node \"srv-7lx2b.gb1.brightbox.com\" DevicePath \"\""
Dec 13 06:46:01.444350 kubelet[2036]: I1213 06:46:01.444324 2036 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/60b4a283-cdf9-4d11-87f4-38449ca70a3a-hostproc\") on node \"srv-7lx2b.gb1.brightbox.com\" DevicePath \"\""
Dec 13 06:46:01.444513 kubelet[2036]: I1213 06:46:01.444489 2036 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/60b4a283-cdf9-4d11-87f4-38449ca70a3a-cni-path\") on node \"srv-7lx2b.gb1.brightbox.com\" DevicePath \"\""
Dec 13 06:46:01.444688 kubelet[2036]: I1213 06:46:01.444665 2036 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/60b4a283-cdf9-4d11-87f4-38449ca70a3a-hubble-tls\") on node \"srv-7lx2b.gb1.brightbox.com\" DevicePath \"\""
Dec 13 06:46:01.444836 kubelet[2036]: I1213 06:46:01.444812 2036 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/60b4a283-cdf9-4d11-87f4-38449ca70a3a-host-proc-sys-net\") on node \"srv-7lx2b.gb1.brightbox.com\" DevicePath \"\""
Dec 13 06:46:01.445115 kubelet[2036]: I1213 06:46:01.445091 2036 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/60b4a283-cdf9-4d11-87f4-38449ca70a3a-host-proc-sys-kernel\") on node \"srv-7lx2b.gb1.brightbox.com\" DevicePath \"\""
Dec 13 06:46:01.445265 kubelet[2036]: I1213 06:46:01.445241 2036 reconciler_common.go:289] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/60b4a283-cdf9-4d11-87f4-38449ca70a3a-cilium-ipsec-secrets\") on node \"srv-7lx2b.gb1.brightbox.com\" DevicePath \"\""
Dec 13 06:46:01.445434 kubelet[2036]: I1213 06:46:01.445410 2036 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/60b4a283-cdf9-4d11-87f4-38449ca70a3a-cilium-run\") on node \"srv-7lx2b.gb1.brightbox.com\" DevicePath \"\""
Dec 13 06:46:01.445595 kubelet[2036]: I1213 06:46:01.445572 2036 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/60b4a283-cdf9-4d11-87f4-38449ca70a3a-etc-cni-netd\") on node \"srv-7lx2b.gb1.brightbox.com\" DevicePath \"\""
Dec 13 06:46:01.445750 kubelet[2036]: I1213 06:46:01.445726 2036 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/60b4a283-cdf9-4d11-87f4-38449ca70a3a-clustermesh-secrets\") on node \"srv-7lx2b.gb1.brightbox.com\" DevicePath \"\""
Dec 13 06:46:01.445886 kubelet[2036]: I1213 06:46:01.445863 2036 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/60b4a283-cdf9-4d11-87f4-38449ca70a3a-bpf-maps\") on node \"srv-7lx2b.gb1.brightbox.com\" DevicePath \"\""
Dec 13 06:46:01.568570 systemd[1]: Removed slice kubepods-burstable-pod60b4a283_cdf9_4d11_87f4_38449ca70a3a.slice.
Dec 13 06:46:01.739708 sshd[3872]: Accepted publickey for core from 139.178.89.65 port 49000 ssh2: RSA SHA256:dQnQ6z9Pj/RNX8sNR4TqdGn8nHqynNIoEP6sXMH78jY
Dec 13 06:46:01.738797 sshd[3872]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 06:46:01.748590 systemd[1]: Started session-27.scope.
Dec 13 06:46:01.749580 systemd-logind[1181]: New session 27 of user core.
Dec 13 06:46:01.941513 systemd[1]: var-lib-kubelet-pods-60b4a283\x2dcdf9\x2d4d11\x2d87f4\x2d38449ca70a3a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dc8dw6.mount: Deactivated successfully.
Dec 13 06:46:01.941671 systemd[1]: var-lib-kubelet-pods-60b4a283\x2dcdf9\x2d4d11\x2d87f4\x2d38449ca70a3a-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Dec 13 06:46:02.104272 kubelet[2036]: I1213 06:46:02.104232 2036 scope.go:117] "RemoveContainer" containerID="0c859c51ea336020cc051ca3beade30c337385ea5732bace2c198612e56fbddb"
Dec 13 06:46:02.107166 env[1193]: time="2024-12-13T06:46:02.106892848Z" level=info msg="RemoveContainer for \"0c859c51ea336020cc051ca3beade30c337385ea5732bace2c198612e56fbddb\""
Dec 13 06:46:02.114010 env[1193]: time="2024-12-13T06:46:02.113939794Z" level=info msg="RemoveContainer for \"0c859c51ea336020cc051ca3beade30c337385ea5732bace2c198612e56fbddb\" returns successfully"
Dec 13 06:46:02.197483 kubelet[2036]: I1213 06:46:02.197408 2036 topology_manager.go:215] "Topology Admit Handler" podUID="23062f03-f69a-44b6-82da-b8f3c52f7a6d" podNamespace="kube-system" podName="cilium-75wkz"
Dec 13 06:46:02.197950 kubelet[2036]: E1213 06:46:02.197892 2036 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="60b4a283-cdf9-4d11-87f4-38449ca70a3a" containerName="mount-cgroup"
Dec 13 06:46:02.198233 kubelet[2036]: I1213 06:46:02.198207 2036 memory_manager.go:354] "RemoveStaleState removing state" podUID="60b4a283-cdf9-4d11-87f4-38449ca70a3a" containerName="mount-cgroup"
Dec 13 06:46:02.198423 kubelet[2036]: E1213 06:46:02.198385 2036 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="60b4a283-cdf9-4d11-87f4-38449ca70a3a" containerName="mount-cgroup"
Dec 13 06:46:02.198603 kubelet[2036]: I1213 06:46:02.198567 2036 memory_manager.go:354] "RemoveStaleState removing state" podUID="60b4a283-cdf9-4d11-87f4-38449ca70a3a" containerName="mount-cgroup"
Dec 13 06:46:02.215790 systemd[1]: Created slice kubepods-burstable-pod23062f03_f69a_44b6_82da_b8f3c52f7a6d.slice.
Dec 13 06:46:02.262331 kubelet[2036]: I1213 06:46:02.262245 2036 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/23062f03-f69a-44b6-82da-b8f3c52f7a6d-xtables-lock\") pod \"cilium-75wkz\" (UID: \"23062f03-f69a-44b6-82da-b8f3c52f7a6d\") " pod="kube-system/cilium-75wkz"
Dec 13 06:46:02.262728 kubelet[2036]: I1213 06:46:02.262695 2036 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/23062f03-f69a-44b6-82da-b8f3c52f7a6d-host-proc-sys-kernel\") pod \"cilium-75wkz\" (UID: \"23062f03-f69a-44b6-82da-b8f3c52f7a6d\") " pod="kube-system/cilium-75wkz"
Dec 13 06:46:02.262886 kubelet[2036]: I1213 06:46:02.262854 2036 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k27d7\" (UniqueName: \"kubernetes.io/projected/23062f03-f69a-44b6-82da-b8f3c52f7a6d-kube-api-access-k27d7\") pod \"cilium-75wkz\" (UID: \"23062f03-f69a-44b6-82da-b8f3c52f7a6d\") " pod="kube-system/cilium-75wkz"
Dec 13 06:46:02.263115 kubelet[2036]: I1213 06:46:02.263086 2036 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/23062f03-f69a-44b6-82da-b8f3c52f7a6d-hostproc\") pod \"cilium-75wkz\" (UID: \"23062f03-f69a-44b6-82da-b8f3c52f7a6d\") " pod="kube-system/cilium-75wkz"
Dec 13 06:46:02.263290 kubelet[2036]: I1213 06:46:02.263250 2036 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/23062f03-f69a-44b6-82da-b8f3c52f7a6d-cni-path\") pod \"cilium-75wkz\" (UID: \"23062f03-f69a-44b6-82da-b8f3c52f7a6d\") " pod="kube-system/cilium-75wkz"
Dec 13 06:46:02.263466 kubelet[2036]: I1213 06:46:02.263437 2036 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/23062f03-f69a-44b6-82da-b8f3c52f7a6d-hubble-tls\") pod \"cilium-75wkz\" (UID: \"23062f03-f69a-44b6-82da-b8f3c52f7a6d\") " pod="kube-system/cilium-75wkz"
Dec 13 06:46:02.263618 kubelet[2036]: I1213 06:46:02.263586 2036 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/23062f03-f69a-44b6-82da-b8f3c52f7a6d-cilium-run\") pod \"cilium-75wkz\" (UID: \"23062f03-f69a-44b6-82da-b8f3c52f7a6d\") " pod="kube-system/cilium-75wkz"
Dec 13 06:46:02.263755 kubelet[2036]: I1213 06:46:02.263726 2036 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/23062f03-f69a-44b6-82da-b8f3c52f7a6d-cilium-config-path\") pod \"cilium-75wkz\" (UID: \"23062f03-f69a-44b6-82da-b8f3c52f7a6d\") " pod="kube-system/cilium-75wkz"
Dec 13 06:46:02.263902 kubelet[2036]: I1213 06:46:02.263871 2036 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/23062f03-f69a-44b6-82da-b8f3c52f7a6d-cilium-cgroup\") pod \"cilium-75wkz\" (UID: \"23062f03-f69a-44b6-82da-b8f3c52f7a6d\") " pod="kube-system/cilium-75wkz"
Dec 13 06:46:02.264088 kubelet[2036]: I1213 06:46:02.264059 2036 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/23062f03-f69a-44b6-82da-b8f3c52f7a6d-host-proc-sys-net\") pod \"cilium-75wkz\" (UID: \"23062f03-f69a-44b6-82da-b8f3c52f7a6d\") " pod="kube-system/cilium-75wkz"
Dec 13 06:46:02.264276 kubelet[2036]: I1213 06:46:02.264248 2036 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/23062f03-f69a-44b6-82da-b8f3c52f7a6d-etc-cni-netd\") pod \"cilium-75wkz\" (UID: \"23062f03-f69a-44b6-82da-b8f3c52f7a6d\") " pod="kube-system/cilium-75wkz"
Dec 13 06:46:02.264439 kubelet[2036]: I1213 06:46:02.264411 2036 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/23062f03-f69a-44b6-82da-b8f3c52f7a6d-clustermesh-secrets\") pod \"cilium-75wkz\" (UID: \"23062f03-f69a-44b6-82da-b8f3c52f7a6d\") " pod="kube-system/cilium-75wkz"
Dec 13 06:46:02.264626 kubelet[2036]: I1213 06:46:02.264557 2036 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/23062f03-f69a-44b6-82da-b8f3c52f7a6d-lib-modules\") pod \"cilium-75wkz\" (UID: \"23062f03-f69a-44b6-82da-b8f3c52f7a6d\") " pod="kube-system/cilium-75wkz"
Dec 13 06:46:02.264771 kubelet[2036]: I1213 06:46:02.264743 2036 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/23062f03-f69a-44b6-82da-b8f3c52f7a6d-cilium-ipsec-secrets\") pod \"cilium-75wkz\" (UID: \"23062f03-f69a-44b6-82da-b8f3c52f7a6d\") " pod="kube-system/cilium-75wkz"
Dec 13 06:46:02.264926 kubelet[2036]: I1213 06:46:02.264881 2036 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/23062f03-f69a-44b6-82da-b8f3c52f7a6d-bpf-maps\") pod \"cilium-75wkz\" (UID: \"23062f03-f69a-44b6-82da-b8f3c52f7a6d\") " pod="kube-system/cilium-75wkz"
Dec 13 06:46:02.440268 kubelet[2036]: W1213 06:46:02.438268 2036 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod60b4a283_cdf9_4d11_87f4_38449ca70a3a.slice/cri-containerd-3657190de045dcae5712cca71a88ef70889ebdd93cf4deea672786b1378d4bb1.scope WatchSource:0}: container "3657190de045dcae5712cca71a88ef70889ebdd93cf4deea672786b1378d4bb1" in namespace "k8s.io": not found
Dec 13 06:46:02.521442 env[1193]: time="2024-12-13T06:46:02.521372819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-75wkz,Uid:23062f03-f69a-44b6-82da-b8f3c52f7a6d,Namespace:kube-system,Attempt:0,}"
Dec 13 06:46:02.537599 env[1193]: time="2024-12-13T06:46:02.537477688Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 06:46:02.537599 env[1193]: time="2024-12-13T06:46:02.537542159Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 06:46:02.537599 env[1193]: time="2024-12-13T06:46:02.537560359Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 06:46:02.538323 env[1193]: time="2024-12-13T06:46:02.538237086Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/35ee62d91045ff9210982d87db45c656c7517610cd41a7fe1f7c5a84511460c2 pid=3930 runtime=io.containerd.runc.v2
Dec 13 06:46:02.554628 systemd[1]: Started cri-containerd-35ee62d91045ff9210982d87db45c656c7517610cd41a7fe1f7c5a84511460c2.scope.
Dec 13 06:46:02.599429 env[1193]: time="2024-12-13T06:46:02.599371858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-75wkz,Uid:23062f03-f69a-44b6-82da-b8f3c52f7a6d,Namespace:kube-system,Attempt:0,} returns sandbox id \"35ee62d91045ff9210982d87db45c656c7517610cd41a7fe1f7c5a84511460c2\""
Dec 13 06:46:02.605212 env[1193]: time="2024-12-13T06:46:02.605157674Z" level=info msg="CreateContainer within sandbox \"35ee62d91045ff9210982d87db45c656c7517610cd41a7fe1f7c5a84511460c2\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 06:46:02.618933 env[1193]: time="2024-12-13T06:46:02.618838261Z" level=info msg="CreateContainer within sandbox \"35ee62d91045ff9210982d87db45c656c7517610cd41a7fe1f7c5a84511460c2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"bedf63b76abc328acdb7dd29d7dc593ad9e791a212a26c3f1d41da2b9b4545dd\""
Dec 13 06:46:02.621342 env[1193]: time="2024-12-13T06:46:02.619652829Z" level=info msg="StartContainer for \"bedf63b76abc328acdb7dd29d7dc593ad9e791a212a26c3f1d41da2b9b4545dd\""
Dec 13 06:46:02.643173 systemd[1]: Started cri-containerd-bedf63b76abc328acdb7dd29d7dc593ad9e791a212a26c3f1d41da2b9b4545dd.scope.
Dec 13 06:46:02.691902 env[1193]: time="2024-12-13T06:46:02.690811987Z" level=info msg="StartContainer for \"bedf63b76abc328acdb7dd29d7dc593ad9e791a212a26c3f1d41da2b9b4545dd\" returns successfully"
Dec 13 06:46:02.708179 systemd[1]: cri-containerd-bedf63b76abc328acdb7dd29d7dc593ad9e791a212a26c3f1d41da2b9b4545dd.scope: Deactivated successfully.
Dec 13 06:46:02.742515 env[1193]: time="2024-12-13T06:46:02.742446864Z" level=info msg="shim disconnected" id=bedf63b76abc328acdb7dd29d7dc593ad9e791a212a26c3f1d41da2b9b4545dd
Dec 13 06:46:02.742781 env[1193]: time="2024-12-13T06:46:02.742511730Z" level=warning msg="cleaning up after shim disconnected" id=bedf63b76abc328acdb7dd29d7dc593ad9e791a212a26c3f1d41da2b9b4545dd namespace=k8s.io
Dec 13 06:46:02.742781 env[1193]: time="2024-12-13T06:46:02.742550027Z" level=info msg="cleaning up dead shim"
Dec 13 06:46:02.754482 env[1193]: time="2024-12-13T06:46:02.754399108Z" level=warning msg="cleanup warnings time=\"2024-12-13T06:46:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4013 runtime=io.containerd.runc.v2\n"
Dec 13 06:46:03.111350 env[1193]: time="2024-12-13T06:46:03.111269638Z" level=info msg="CreateContainer within sandbox \"35ee62d91045ff9210982d87db45c656c7517610cd41a7fe1f7c5a84511460c2\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 13 06:46:03.149169 env[1193]: time="2024-12-13T06:46:03.149079318Z" level=info msg="CreateContainer within sandbox \"35ee62d91045ff9210982d87db45c656c7517610cd41a7fe1f7c5a84511460c2\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"53dedd7c83bfd5e209b8105c9e73ba6724c9a28eb3b1c1ae45c8534e9a4bbaea\""
Dec 13 06:46:03.157660 env[1193]: time="2024-12-13T06:46:03.151398075Z" level=info msg="StartContainer for \"53dedd7c83bfd5e209b8105c9e73ba6724c9a28eb3b1c1ae45c8534e9a4bbaea\""
Dec 13 06:46:03.196658 systemd[1]: Started cri-containerd-53dedd7c83bfd5e209b8105c9e73ba6724c9a28eb3b1c1ae45c8534e9a4bbaea.scope.
Dec 13 06:46:03.251326 env[1193]: time="2024-12-13T06:46:03.251254292Z" level=info msg="StartContainer for \"53dedd7c83bfd5e209b8105c9e73ba6724c9a28eb3b1c1ae45c8534e9a4bbaea\" returns successfully"
Dec 13 06:46:03.282593 systemd[1]: cri-containerd-53dedd7c83bfd5e209b8105c9e73ba6724c9a28eb3b1c1ae45c8534e9a4bbaea.scope: Deactivated successfully.
Dec 13 06:46:03.336531 env[1193]: time="2024-12-13T06:46:03.336463605Z" level=info msg="shim disconnected" id=53dedd7c83bfd5e209b8105c9e73ba6724c9a28eb3b1c1ae45c8534e9a4bbaea
Dec 13 06:46:03.337060 env[1193]: time="2024-12-13T06:46:03.336990920Z" level=warning msg="cleaning up after shim disconnected" id=53dedd7c83bfd5e209b8105c9e73ba6724c9a28eb3b1c1ae45c8534e9a4bbaea namespace=k8s.io
Dec 13 06:46:03.337207 env[1193]: time="2024-12-13T06:46:03.337177531Z" level=info msg="cleaning up dead shim"
Dec 13 06:46:03.352845 env[1193]: time="2024-12-13T06:46:03.352783621Z" level=warning msg="cleanup warnings time=\"2024-12-13T06:46:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4076 runtime=io.containerd.runc.v2\n"
Dec 13 06:46:03.566612 kubelet[2036]: I1213 06:46:03.566560 2036 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60b4a283-cdf9-4d11-87f4-38449ca70a3a" path="/var/lib/kubelet/pods/60b4a283-cdf9-4d11-87f4-38449ca70a3a/volumes"
Dec 13 06:46:03.700006 kubelet[2036]: E1213 06:46:03.699907 2036 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 13 06:46:03.941746 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-53dedd7c83bfd5e209b8105c9e73ba6724c9a28eb3b1c1ae45c8534e9a4bbaea-rootfs.mount: Deactivated successfully.
Dec 13 06:46:04.123767 env[1193]: time="2024-12-13T06:46:04.123356456Z" level=info msg="CreateContainer within sandbox \"35ee62d91045ff9210982d87db45c656c7517610cd41a7fe1f7c5a84511460c2\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 13 06:46:04.145071 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1573973200.mount: Deactivated successfully.
Dec 13 06:46:04.158214 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3400621045.mount: Deactivated successfully.
Dec 13 06:46:04.172251 env[1193]: time="2024-12-13T06:46:04.172156444Z" level=info msg="CreateContainer within sandbox \"35ee62d91045ff9210982d87db45c656c7517610cd41a7fe1f7c5a84511460c2\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"5af0acf04331b3adb19bd3385fdb5c31c749ae66e125581d87a382c066af90d6\""
Dec 13 06:46:04.175760 env[1193]: time="2024-12-13T06:46:04.173688844Z" level=info msg="StartContainer for \"5af0acf04331b3adb19bd3385fdb5c31c749ae66e125581d87a382c066af90d6\""
Dec 13 06:46:04.212008 systemd[1]: Started cri-containerd-5af0acf04331b3adb19bd3385fdb5c31c749ae66e125581d87a382c066af90d6.scope.
Dec 13 06:46:04.267470 env[1193]: time="2024-12-13T06:46:04.267397077Z" level=info msg="StartContainer for \"5af0acf04331b3adb19bd3385fdb5c31c749ae66e125581d87a382c066af90d6\" returns successfully"
Dec 13 06:46:04.274819 systemd[1]: cri-containerd-5af0acf04331b3adb19bd3385fdb5c31c749ae66e125581d87a382c066af90d6.scope: Deactivated successfully.
Dec 13 06:46:04.310507 env[1193]: time="2024-12-13T06:46:04.310429851Z" level=info msg="shim disconnected" id=5af0acf04331b3adb19bd3385fdb5c31c749ae66e125581d87a382c066af90d6
Dec 13 06:46:04.310507 env[1193]: time="2024-12-13T06:46:04.310492799Z" level=warning msg="cleaning up after shim disconnected" id=5af0acf04331b3adb19bd3385fdb5c31c749ae66e125581d87a382c066af90d6 namespace=k8s.io
Dec 13 06:46:04.310507 env[1193]: time="2024-12-13T06:46:04.310510453Z" level=info msg="cleaning up dead shim"
Dec 13 06:46:04.345458 env[1193]: time="2024-12-13T06:46:04.345382745Z" level=warning msg="cleanup warnings time=\"2024-12-13T06:46:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4132 runtime=io.containerd.runc.v2\n"
Dec 13 06:46:05.126437 env[1193]: time="2024-12-13T06:46:05.126369970Z" level=info msg="CreateContainer within sandbox \"35ee62d91045ff9210982d87db45c656c7517610cd41a7fe1f7c5a84511460c2\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 06:46:05.146832 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2150407661.mount: Deactivated successfully.
Dec 13 06:46:05.154728 env[1193]: time="2024-12-13T06:46:05.154665525Z" level=info msg="CreateContainer within sandbox \"35ee62d91045ff9210982d87db45c656c7517610cd41a7fe1f7c5a84511460c2\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"712da418262d5862bf4853d4b613906558801a4fdd5c46e71a43826afa7eae58\""
Dec 13 06:46:05.155969 env[1193]: time="2024-12-13T06:46:05.155933065Z" level=info msg="StartContainer for \"712da418262d5862bf4853d4b613906558801a4fdd5c46e71a43826afa7eae58\""
Dec 13 06:46:05.200675 systemd[1]: Started cri-containerd-712da418262d5862bf4853d4b613906558801a4fdd5c46e71a43826afa7eae58.scope.
Dec 13 06:46:05.248372 systemd[1]: cri-containerd-712da418262d5862bf4853d4b613906558801a4fdd5c46e71a43826afa7eae58.scope: Deactivated successfully.
Dec 13 06:46:05.249861 env[1193]: time="2024-12-13T06:46:05.249775517Z" level=info msg="StartContainer for \"712da418262d5862bf4853d4b613906558801a4fdd5c46e71a43826afa7eae58\" returns successfully"
Dec 13 06:46:05.283214 env[1193]: time="2024-12-13T06:46:05.283136398Z" level=info msg="shim disconnected" id=712da418262d5862bf4853d4b613906558801a4fdd5c46e71a43826afa7eae58
Dec 13 06:46:05.283214 env[1193]: time="2024-12-13T06:46:05.283210108Z" level=warning msg="cleaning up after shim disconnected" id=712da418262d5862bf4853d4b613906558801a4fdd5c46e71a43826afa7eae58 namespace=k8s.io
Dec 13 06:46:05.283583 env[1193]: time="2024-12-13T06:46:05.283227447Z" level=info msg="cleaning up dead shim"
Dec 13 06:46:05.301957 env[1193]: time="2024-12-13T06:46:05.301861481Z" level=warning msg="cleanup warnings time=\"2024-12-13T06:46:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4189 runtime=io.containerd.runc.v2\n"
Dec 13 06:46:05.942029 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-712da418262d5862bf4853d4b613906558801a4fdd5c46e71a43826afa7eae58-rootfs.mount: Deactivated successfully.
Dec 13 06:46:06.131582 env[1193]: time="2024-12-13T06:46:06.131506572Z" level=info msg="CreateContainer within sandbox \"35ee62d91045ff9210982d87db45c656c7517610cd41a7fe1f7c5a84511460c2\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 06:46:06.174280 env[1193]: time="2024-12-13T06:46:06.174197652Z" level=info msg="CreateContainer within sandbox \"35ee62d91045ff9210982d87db45c656c7517610cd41a7fe1f7c5a84511460c2\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"114068484e44bb0b7a33e75fb0c0cef41f4b8e1fa51517df69286b8ecc49eb08\""
Dec 13 06:46:06.175457 env[1193]: time="2024-12-13T06:46:06.175416880Z" level=info msg="StartContainer for \"114068484e44bb0b7a33e75fb0c0cef41f4b8e1fa51517df69286b8ecc49eb08\""
Dec 13 06:46:06.225602 systemd[1]: Started cri-containerd-114068484e44bb0b7a33e75fb0c0cef41f4b8e1fa51517df69286b8ecc49eb08.scope.
Dec 13 06:46:06.281333 env[1193]: time="2024-12-13T06:46:06.281270177Z" level=info msg="StartContainer for \"114068484e44bb0b7a33e75fb0c0cef41f4b8e1fa51517df69286b8ecc49eb08\" returns successfully"
Dec 13 06:46:06.942276 systemd[1]: run-containerd-runc-k8s.io-114068484e44bb0b7a33e75fb0c0cef41f4b8e1fa51517df69286b8ecc49eb08-runc.RQ2fle.mount: Deactivated successfully.
Dec 13 06:46:07.172491 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Dec 13 06:46:07.430940 kubelet[2036]: I1213 06:46:07.429784 2036 setters.go:580] "Node became not ready" node="srv-7lx2b.gb1.brightbox.com" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T06:46:07Z","lastTransitionTime":"2024-12-13T06:46:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Dec 13 06:46:10.970377 systemd-networkd[1026]: lxc_health: Link UP
Dec 13 06:46:10.984269 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Dec 13 06:46:10.983824 systemd-networkd[1026]: lxc_health: Gained carrier
Dec 13 06:46:12.557783 kubelet[2036]: I1213 06:46:12.557673 2036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-75wkz" podStartSLOduration=10.557628849 podStartE2EDuration="10.557628849s" podCreationTimestamp="2024-12-13 06:46:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 06:46:07.164362195 +0000 UTC m=+163.817317303" watchObservedRunningTime="2024-12-13 06:46:12.557628849 +0000 UTC m=+169.210583957"
Dec 13 06:46:12.850228 systemd-networkd[1026]: lxc_health: Gained IPv6LL
Dec 13 06:46:13.415399 kubelet[2036]: E1213 06:46:13.415120 2036 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:59684->127.0.0.1:36303: write tcp 127.0.0.1:59684->127.0.0.1:36303: write: connection reset by peer
Dec 13 06:46:17.897883 systemd[1]: run-containerd-runc-k8s.io-114068484e44bb0b7a33e75fb0c0cef41f4b8e1fa51517df69286b8ecc49eb08-runc.hSnCV6.mount: Deactivated successfully.
Dec 13 06:46:18.147950 sshd[3872]: pam_unix(sshd:session): session closed for user core
Dec 13 06:46:18.154071 systemd[1]: sshd@26-10.244.18.198:22-139.178.89.65:49000.service: Deactivated successfully.
Dec 13 06:46:18.155360 systemd[1]: session-27.scope: Deactivated successfully.
Dec 13 06:46:18.157628 systemd-logind[1181]: Session 27 logged out. Waiting for processes to exit.
Dec 13 06:46:18.159835 systemd-logind[1181]: Removed session 27.