Mar 17 21:48:52.918086 kernel: Linux version 5.15.179-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Mar 17 17:12:34 -00 2025 Mar 17 21:48:52.918129 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=249ccd113f901380672c0d31e18f792e8e0344094c0e39eedc449f039418b31a Mar 17 21:48:52.918149 kernel: BIOS-provided physical RAM map: Mar 17 21:48:52.918159 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Mar 17 21:48:52.918169 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Mar 17 21:48:52.918178 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Mar 17 21:48:52.918190 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable Mar 17 21:48:52.918200 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved Mar 17 21:48:52.918210 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Mar 17 21:48:52.918219 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Mar 17 21:48:52.918233 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Mar 17 21:48:52.918254 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Mar 17 21:48:52.918266 kernel: NX (Execute Disable) protection: active Mar 17 21:48:52.918276 kernel: SMBIOS 2.8 present. Mar 17 21:48:52.918289 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014 Mar 17 21:48:52.918300 kernel: Hypervisor detected: KVM Mar 17 21:48:52.918315 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Mar 17 21:48:52.918326 kernel: kvm-clock: cpu 0, msr 7619a001, primary cpu clock Mar 17 21:48:52.922400 kernel: kvm-clock: using sched offset of 4693864558 cycles Mar 17 21:48:52.922416 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Mar 17 21:48:52.922428 kernel: tsc: Detected 2500.032 MHz processor Mar 17 21:48:52.922439 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Mar 17 21:48:52.922451 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Mar 17 21:48:52.922462 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000 Mar 17 21:48:52.922473 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Mar 17 21:48:52.922493 kernel: Using GB pages for direct mapping Mar 17 21:48:52.922504 kernel: ACPI: Early table checksum verification disabled Mar 17 21:48:52.922515 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS ) Mar 17 21:48:52.922526 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 17 21:48:52.922537 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Mar 17 21:48:52.922555 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 17 21:48:52.922566 kernel: ACPI: FACS 0x000000007FFDFD40 000040 Mar 17 21:48:52.922577 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 17 21:48:52.922588 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 17 21:48:52.922610 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 
00000001) Mar 17 21:48:52.922621 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 17 21:48:52.922632 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480] Mar 17 21:48:52.922643 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c] Mar 17 21:48:52.922654 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f] Mar 17 21:48:52.922665 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570] Mar 17 21:48:52.922682 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740] Mar 17 21:48:52.922697 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c] Mar 17 21:48:52.922708 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4] Mar 17 21:48:52.922727 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Mar 17 21:48:52.922739 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Mar 17 21:48:52.922750 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0 Mar 17 21:48:52.922762 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0 Mar 17 21:48:52.922774 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0 Mar 17 21:48:52.922797 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0 Mar 17 21:48:52.922809 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0 Mar 17 21:48:52.922820 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0 Mar 17 21:48:52.922832 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0 Mar 17 21:48:52.922849 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0 Mar 17 21:48:52.922860 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0 Mar 17 21:48:52.922872 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0 Mar 17 21:48:52.922884 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0 Mar 17 21:48:52.922895 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0 Mar 17 21:48:52.922907 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0 Mar 17 21:48:52.922922 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0 Mar 17 21:48:52.922934 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Mar 17 21:48:52.922954 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Mar 17 21:48:52.922966 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug Mar 17 21:48:52.922978 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff] Mar 17 21:48:52.922989 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff] Mar 17 21:48:52.923001 kernel: Zone ranges: Mar 17 21:48:52.923013 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Mar 17 21:48:52.923025 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff] Mar 17 21:48:52.923040 kernel: Normal empty Mar 17 21:48:52.923052 kernel: Movable zone start for each node Mar 17 21:48:52.923063 kernel: Early memory node ranges Mar 17 21:48:52.923075 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Mar 17 21:48:52.923087 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff] Mar 17 21:48:52.923098 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff] Mar 17 21:48:52.923110 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Mar 17 21:48:52.923121 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Mar 17 21:48:52.923133 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges Mar 17 21:48:52.923149 kernel: ACPI: PM-Timer IO Port: 0x608 Mar 17 21:48:52.923161 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Mar 17 21:48:52.923172 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Mar 17 21:48:52.923184 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Mar 17 21:48:52.923195 
kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Mar 17 21:48:52.923207 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Mar 17 21:48:52.923219 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Mar 17 21:48:52.923231 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Mar 17 21:48:52.923255 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Mar 17 21:48:52.923274 kernel: TSC deadline timer available Mar 17 21:48:52.923287 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs Mar 17 21:48:52.923298 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Mar 17 21:48:52.923310 kernel: Booting paravirtualized kernel on KVM Mar 17 21:48:52.923321 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Mar 17 21:48:52.923349 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:16 nr_node_ids:1 Mar 17 21:48:52.926376 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u262144 Mar 17 21:48:52.926392 kernel: pcpu-alloc: s188696 r8192 d32488 u262144 alloc=1*2097152 Mar 17 21:48:52.926405 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Mar 17 21:48:52.926423 kernel: kvm-guest: stealtime: cpu 0, msr 7da1c0c0 Mar 17 21:48:52.926436 kernel: kvm-guest: PV spinlocks enabled Mar 17 21:48:52.926448 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Mar 17 21:48:52.926460 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804 Mar 17 21:48:52.926471 kernel: Policy zone: DMA32 Mar 17 21:48:52.926485 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=249ccd113f901380672c0d31e18f792e8e0344094c0e39eedc449f039418b31a Mar 17 21:48:52.926498 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Mar 17 21:48:52.926509 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Mar 17 21:48:52.926526 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Mar 17 21:48:52.926545 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Mar 17 21:48:52.926557 kernel: Memory: 1903832K/2096616K available (12294K kernel code, 2278K rwdata, 13724K rodata, 47472K init, 4108K bss, 192524K reserved, 0K cma-reserved) Mar 17 21:48:52.926569 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Mar 17 21:48:52.926581 kernel: Kernel/User page tables isolation: enabled Mar 17 21:48:52.926593 kernel: ftrace: allocating 34580 entries in 136 pages Mar 17 21:48:52.926607 kernel: ftrace: allocated 136 pages with 2 groups Mar 17 21:48:52.926619 kernel: rcu: Hierarchical RCU implementation. Mar 17 21:48:52.926631 kernel: rcu: RCU event tracing is enabled. Mar 17 21:48:52.926647 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Mar 17 21:48:52.926659 kernel: Rude variant of Tasks RCU enabled. Mar 17 21:48:52.926671 kernel: Tracing variant of Tasks RCU enabled. Mar 17 21:48:52.926683 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Mar 17 21:48:52.926695 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Mar 17 21:48:52.926706 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16 Mar 17 21:48:52.926718 kernel: random: crng init done Mar 17 21:48:52.926743 kernel: Console: colour VGA+ 80x25 Mar 17 21:48:52.926756 kernel: printk: console [tty0] enabled Mar 17 21:48:52.926768 kernel: printk: console [ttyS0] enabled Mar 17 21:48:52.926780 kernel: ACPI: Core revision 20210730 Mar 17 21:48:52.926792 kernel: APIC: Switch to symmetric I/O mode setup Mar 17 21:48:52.926812 kernel: x2apic enabled Mar 17 21:48:52.926824 kernel: Switched APIC routing to physical x2apic. Mar 17 21:48:52.926837 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240957bf147, max_idle_ns: 440795216753 ns Mar 17 21:48:52.926849 kernel: Calibrating delay loop (skipped) preset value.. 5000.06 BogoMIPS (lpj=2500032) Mar 17 21:48:52.926861 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Mar 17 21:48:52.926887 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Mar 17 21:48:52.926900 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Mar 17 21:48:52.926912 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Mar 17 21:48:52.926924 kernel: Spectre V2 : Mitigation: Retpolines Mar 17 21:48:52.926944 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Mar 17 21:48:52.926956 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Mar 17 21:48:52.926968 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Mar 17 21:48:52.926980 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Mar 17 21:48:52.926992 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Mar 17 21:48:52.927006 kernel: MDS: Mitigation: Clear CPU buffers Mar 17 21:48:52.927018 kernel: MMIO Stale Data: Unknown: No mitigations Mar 17 21:48:52.927037 kernel: SRBDS: Unknown: Dependent on hypervisor status Mar 17 21:48:52.927049 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Mar 17 21:48:52.927061 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Mar 17 21:48:52.927073 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Mar 17 21:48:52.927085 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Mar 17 21:48:52.927097 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Mar 17 21:48:52.927109 kernel: Freeing SMP alternatives memory: 32K Mar 17 21:48:52.927121 kernel: pid_max: default: 32768 minimum: 301 Mar 17 21:48:52.927133 kernel: LSM: Security Framework initializing Mar 17 21:48:52.927145 kernel: SELinux: Initializing. Mar 17 21:48:52.927157 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Mar 17 21:48:52.927173 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Mar 17 21:48:52.927186 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9) Mar 17 21:48:52.927198 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only. Mar 17 21:48:52.927210 kernel: signal: max sigframe size: 1776 Mar 17 21:48:52.927223 kernel: rcu: Hierarchical SRCU implementation. 
Mar 17 21:48:52.927235 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Mar 17 21:48:52.927261 kernel: smp: Bringing up secondary CPUs ... Mar 17 21:48:52.927274 kernel: x86: Booting SMP configuration: Mar 17 21:48:52.927286 kernel: .... node #0, CPUs: #1 Mar 17 21:48:52.927304 kernel: kvm-clock: cpu 1, msr 7619a041, secondary cpu clock Mar 17 21:48:52.927316 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Mar 17 21:48:52.927340 kernel: kvm-guest: stealtime: cpu 1, msr 7da5c0c0 Mar 17 21:48:52.927354 kernel: smp: Brought up 1 node, 2 CPUs Mar 17 21:48:52.927367 kernel: smpboot: Max logical packages: 16 Mar 17 21:48:52.927379 kernel: smpboot: Total of 2 processors activated (10000.12 BogoMIPS) Mar 17 21:48:52.927391 kernel: devtmpfs: initialized Mar 17 21:48:52.927404 kernel: x86/mm: Memory block size: 128MB Mar 17 21:48:52.927416 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Mar 17 21:48:52.927428 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Mar 17 21:48:52.927446 kernel: pinctrl core: initialized pinctrl subsystem Mar 17 21:48:52.927458 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Mar 17 21:48:52.927471 kernel: audit: initializing netlink subsys (disabled) Mar 17 21:48:52.927483 kernel: audit: type=2000 audit(1742248131.363:1): state=initialized audit_enabled=0 res=1 Mar 17 21:48:52.927495 kernel: thermal_sys: Registered thermal governor 'step_wise' Mar 17 21:48:52.927507 kernel: thermal_sys: Registered thermal governor 'user_space' Mar 17 21:48:52.927519 kernel: cpuidle: using governor menu Mar 17 21:48:52.927531 kernel: ACPI: bus type PCI registered Mar 17 21:48:52.927544 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Mar 17 21:48:52.927560 kernel: dca service started, version 1.12.1 Mar 17 21:48:52.927572 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Mar 17 21:48:52.927585 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820 Mar 17 21:48:52.927597 kernel: PCI: Using configuration type 1 for base access Mar 17 21:48:52.927609 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Mar 17 21:48:52.927622 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Mar 17 21:48:52.927643 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Mar 17 21:48:52.927655 kernel: ACPI: Added _OSI(Module Device) Mar 17 21:48:52.927671 kernel: ACPI: Added _OSI(Processor Device) Mar 17 21:48:52.927684 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Mar 17 21:48:52.927696 kernel: ACPI: Added _OSI(Processor Aggregator Device) Mar 17 21:48:52.927711 kernel: ACPI: Added _OSI(Linux-Dell-Video) Mar 17 21:48:52.927723 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Mar 17 21:48:52.927735 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Mar 17 21:48:52.927747 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Mar 17 21:48:52.927760 kernel: ACPI: Interpreter enabled Mar 17 21:48:52.927772 kernel: ACPI: PM: (supports S0 S5) Mar 17 21:48:52.927784 kernel: ACPI: Using IOAPIC for interrupt routing Mar 17 21:48:52.927808 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Mar 17 21:48:52.927821 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Mar 17 21:48:52.927833 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Mar 17 21:48:52.928157 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Mar 17 21:48:52.928372 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Mar 17 21:48:52.928532 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Mar 17 21:48:52.928551 kernel: PCI host bridge to bus 0000:00 Mar 17 21:48:52.928738 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Mar 17 21:48:52.928886 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Mar 17 21:48:52.929028 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Mar 17 21:48:52.929182 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window] Mar 17 21:48:52.929406 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Mar 17 21:48:52.929554 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window] Mar 17 21:48:52.929699 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Mar 17 21:48:52.929903 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Mar 17 21:48:52.930117 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000 Mar 17 21:48:52.930301 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref] Mar 17 21:48:52.936577 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff] Mar 17 21:48:52.936792 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref] Mar 17 21:48:52.936982 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Mar 17 21:48:52.937181 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 Mar 17 21:48:52.937385 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff] Mar 17 21:48:52.937596 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 Mar 17 21:48:52.937758 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff] Mar 17 21:48:52.937935 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 Mar 17 21:48:52.938115 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff] Mar 17 21:48:52.938306 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 Mar 17 21:48:52.938518 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff] Mar 17 21:48:52.938791 kernel: pci 0000:00:02.4: 
[1b36:000c] type 01 class 0x060400 Mar 17 21:48:52.938956 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff] Mar 17 21:48:52.939142 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 Mar 17 21:48:52.939316 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff] Mar 17 21:48:52.939552 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 Mar 17 21:48:52.939709 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff] Mar 17 21:48:52.939871 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 Mar 17 21:48:52.940034 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff] Mar 17 21:48:52.940198 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Mar 17 21:48:52.940397 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df] Mar 17 21:48:52.940554 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff] Mar 17 21:48:52.940715 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref] Mar 17 21:48:52.940867 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref] Mar 17 21:48:52.941031 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Mar 17 21:48:52.941187 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Mar 17 21:48:52.942407 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff] Mar 17 21:48:52.942581 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref] Mar 17 21:48:52.942781 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Mar 17 21:48:52.942955 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Mar 17 21:48:52.943131 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Mar 17 21:48:52.943304 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff] Mar 17 21:48:52.944518 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff] Mar 17 21:48:52.944694 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Mar 17 21:48:52.944853 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Mar 17 21:48:52.945036 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 Mar 17 21:48:52.945202 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit] Mar 17 21:48:52.945415 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Mar 17 21:48:52.945573 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Mar 17 21:48:52.945729 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Mar 17 21:48:52.945903 kernel: pci_bus 0000:02: extended config space not accessible Mar 17 21:48:52.946094 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000 Mar 17 21:48:52.946279 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f] Mar 17 21:48:52.946461 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Mar 17 21:48:52.946623 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Mar 17 21:48:52.946806 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330 Mar 17 21:48:52.946973 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit] Mar 17 21:48:52.947132 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Mar 17 21:48:52.947309 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Mar 17 21:48:52.947482 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Mar 17 21:48:52.947682 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00 Mar 17 21:48:52.947849 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref] Mar 17 21:48:52.948016 kernel: pci 
0000:00:02.2: PCI bridge to [bus 04] Mar 17 21:48:52.948185 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Mar 17 21:48:52.948377 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Mar 17 21:48:52.948549 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Mar 17 21:48:52.948715 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Mar 17 21:48:52.948882 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Mar 17 21:48:52.949055 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Mar 17 21:48:52.949223 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Mar 17 21:48:52.954542 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Mar 17 21:48:52.954724 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Mar 17 21:48:52.954883 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Mar 17 21:48:52.955038 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Mar 17 21:48:52.955206 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Mar 17 21:48:52.957427 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Mar 17 21:48:52.957601 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Mar 17 21:48:52.957766 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Mar 17 21:48:52.957925 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Mar 17 21:48:52.958081 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Mar 17 21:48:52.958100 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Mar 17 21:48:52.958114 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Mar 17 21:48:52.958134 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Mar 17 21:48:52.958147 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Mar 17 21:48:52.958159 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Mar 17 21:48:52.958172 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Mar 17 21:48:52.958185 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Mar 17 21:48:52.958197 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Mar 17 21:48:52.958209 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Mar 17 21:48:52.958222 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Mar 17 21:48:52.958234 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Mar 17 21:48:52.958265 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Mar 17 21:48:52.958278 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Mar 17 21:48:52.958290 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Mar 17 21:48:52.958302 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Mar 17 21:48:52.958315 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Mar 17 21:48:52.958327 kernel: iommu: Default domain type: Translated Mar 17 21:48:52.958363 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Mar 17 21:48:52.958522 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Mar 17 21:48:52.958678 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Mar 17 21:48:52.958838 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Mar 17 21:48:52.958857 kernel: vgaarb: loaded Mar 17 21:48:52.958870 kernel: pps_core: LinuxPPS API ver. 1 registered Mar 17 21:48:52.958883 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Mar 17 21:48:52.958895 kernel: PTP clock support registered Mar 17 21:48:52.958908 kernel: PCI: Using ACPI for IRQ routing Mar 17 21:48:52.958920 kernel: PCI: pci_cache_line_size set to 64 bytes Mar 17 21:48:52.958933 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Mar 17 21:48:52.958951 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff] Mar 17 21:48:52.958963 kernel: clocksource: Switched to clocksource kvm-clock Mar 17 21:48:52.958976 kernel: VFS: Disk quotas dquot_6.6.0 Mar 17 21:48:52.958989 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Mar 17 21:48:52.959001 kernel: pnp: PnP ACPI init Mar 17 21:48:52.959194 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Mar 17 21:48:52.959217 kernel: pnp: PnP ACPI: found 5 devices Mar 17 21:48:52.959230 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Mar 17 21:48:52.959269 kernel: NET: Registered PF_INET protocol family Mar 17 21:48:52.959283 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Mar 17 21:48:52.959296 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Mar 17 21:48:52.959308 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Mar 17 21:48:52.959321 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Mar 17 21:48:52.959345 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear) Mar 17 21:48:52.959358 kernel: TCP: Hash tables configured (established 16384 bind 16384) Mar 17 21:48:52.959371 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Mar 17 21:48:52.959383 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Mar 17 21:48:52.959401 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Mar 17 21:48:52.959414 kernel: NET: Registered PF_XDP protocol family Mar 17 21:48:52.959572 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000 Mar 17 21:48:52.959732 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Mar 17 21:48:52.959891 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Mar 17 21:48:52.960050 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Mar 17 21:48:52.960207 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Mar 17 21:48:52.960398 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Mar 17 21:48:52.960556 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Mar 17 21:48:52.960711 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Mar 17 21:48:52.960867 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff] Mar 17 21:48:52.961021 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff] Mar 17 21:48:52.961175 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff] Mar 17 21:48:52.967422 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff] Mar 17 21:48:52.967628 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff] Mar 17 21:48:52.967796 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff] Mar 17 21:48:52.967959 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff] Mar 17 21:48:52.968134 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff] Mar 17 
21:48:52.968353 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Mar 17 21:48:52.968523 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Mar 17 21:48:52.968682 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Mar 17 21:48:52.968836 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] Mar 17 21:48:52.968999 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Mar 17 21:48:52.969175 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Mar 17 21:48:52.969393 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Mar 17 21:48:52.969563 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] Mar 17 21:48:52.969733 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Mar 17 21:48:52.969892 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Mar 17 21:48:52.970062 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Mar 17 21:48:52.970231 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] Mar 17 21:48:52.970506 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Mar 17 21:48:52.970661 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Mar 17 21:48:52.970824 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Mar 17 21:48:52.970976 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] Mar 17 21:48:52.971129 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Mar 17 21:48:52.971296 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Mar 17 21:48:52.971475 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Mar 17 21:48:52.971637 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] Mar 17 21:48:52.971790 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Mar 17 21:48:52.971943 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Mar 17 21:48:52.972096 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Mar 17 21:48:52.972262 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] Mar 17 21:48:52.972434 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Mar 17 21:48:52.972606 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Mar 17 21:48:52.972760 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Mar 17 21:48:52.972925 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff] Mar 17 21:48:52.973079 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Mar 17 21:48:52.973232 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Mar 17 21:48:52.977470 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Mar 17 21:48:52.977646 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] Mar 17 21:48:52.977816 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Mar 17 21:48:52.977977 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Mar 17 21:48:52.978132 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Mar 17 21:48:52.978290 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Mar 17 21:48:52.978452 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Mar 17 21:48:52.978597 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window] Mar 17 21:48:52.978741 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Mar 17 21:48:52.978883 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window] Mar 17 21:48:52.979062 kernel: pci_bus 0000:01: resource 0 [io 
0x1000-0x1fff] Mar 17 21:48:52.979213 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff] Mar 17 21:48:52.979393 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref] Mar 17 21:48:52.979558 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff] Mar 17 21:48:52.979722 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff] Mar 17 21:48:52.979875 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff] Mar 17 21:48:52.980025 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref] Mar 17 21:48:52.980195 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff] Mar 17 21:48:52.980375 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff] Mar 17 21:48:52.980525 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref] Mar 17 21:48:52.980687 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff] Mar 17 21:48:52.980838 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff] Mar 17 21:48:52.981001 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref] Mar 17 21:48:52.981199 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff] Mar 17 21:48:52.982534 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff] Mar 17 21:48:52.982692 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref] Mar 17 21:48:52.982869 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff] Mar 17 21:48:52.983021 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff] Mar 17 21:48:52.983169 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] Mar 17 21:48:52.983361 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff] Mar 17 21:48:52.983522 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff] Mar 17 21:48:52.983671 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Mar 17 21:48:52.983842 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff] Mar 17 21:48:52.984001 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff] Mar 17 21:48:52.984153 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Mar 17 21:48:52.984174 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Mar 17 21:48:52.984188 kernel: PCI: CLS 0 bytes, default 64 Mar 17 21:48:52.984201 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Mar 17 21:48:52.984221 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Mar 17 21:48:52.984235 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Mar 17 21:48:52.984259 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240957bf147, max_idle_ns: 440795216753 ns Mar 17 21:48:52.984274 kernel: Initialise system trusted keyrings Mar 17 21:48:52.984287 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Mar 17 21:48:52.984300 kernel: Key type asymmetric registered Mar 17 21:48:52.984313 kernel: Asymmetric key parser 'x509' registered Mar 17 21:48:52.984325 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Mar 17 21:48:52.984433 kernel: io scheduler mq-deadline registered Mar 17 21:48:52.984453 kernel: io scheduler kyber registered Mar 17 21:48:52.984466 kernel: io scheduler bfq registered Mar 17 21:48:52.984631 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Mar 17 21:48:52.984789 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Mar 17 21:48:52.984945 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ 
Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 17 21:48:52.985102 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Mar 17 21:48:52.985270 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Mar 17 21:48:52.985455 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 17 21:48:52.985615 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Mar 17 21:48:52.985770 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Mar 17 21:48:52.985932 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 17 21:48:52.986089 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Mar 17 21:48:52.986255 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Mar 17 21:48:52.986437 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 17 21:48:52.986597 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Mar 17 21:48:52.986751 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Mar 17 21:48:52.986907 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 17 21:48:52.987075 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Mar 17 21:48:52.987232 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Mar 17 21:48:52.987424 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 17 21:48:52.987584 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Mar 17 21:48:52.987739 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Mar 17 21:48:52.987895 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 17 21:48:52.988051 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Mar 17 21:48:52.988208 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Mar 17 21:48:52.988412 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 17 21:48:52.988434 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Mar 17 21:48:52.988448 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Mar 17 21:48:52.988461 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Mar 17 21:48:52.988474 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 17 21:48:52.988488 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Mar 17 21:48:52.988501 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Mar 17 21:48:52.988514 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Mar 17 21:48:52.988534 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Mar 17 21:48:52.988706 kernel: rtc_cmos 00:03: RTC can wake from S4 Mar 17 21:48:52.988728 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Mar 17 21:48:52.988874 kernel: rtc_cmos 00:03: registered as rtc0 Mar 17 21:48:52.989022 kernel: rtc_cmos 00:03: setting system clock to 2025-03-17T21:48:52 UTC (1742248132) Mar 17 21:48:52.989169 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Mar 17 21:48:52.989188 kernel: intel_pstate: CPU model not supported Mar 17 21:48:52.989216 kernel: 
NET: Registered PF_INET6 protocol family Mar 17 21:48:52.989230 kernel: Segment Routing with IPv6 Mar 17 21:48:52.989254 kernel: In-situ OAM (IOAM) with IPv6 Mar 17 21:48:52.989268 kernel: NET: Registered PF_PACKET protocol family Mar 17 21:48:52.989285 kernel: Key type dns_resolver registered Mar 17 21:48:52.989298 kernel: IPI shorthand broadcast: enabled Mar 17 21:48:52.989312 kernel: sched_clock: Marking stable (1011072082, 232769816)->(1538750861, -294908963) Mar 17 21:48:52.989325 kernel: registered taskstats version 1 Mar 17 21:48:52.989350 kernel: Loading compiled-in X.509 certificates Mar 17 21:48:52.989363 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.179-flatcar: d5b956bbabb2d386c0246a969032c0de9eaa8220' Mar 17 21:48:52.989382 kernel: Key type .fscrypt registered Mar 17 21:48:52.989394 kernel: Key type fscrypt-provisioning registered Mar 17 21:48:52.989408 kernel: ima: No TPM chip found, activating TPM-bypass! Mar 17 21:48:52.989421 kernel: ima: Allocated hash algorithm: sha1 Mar 17 21:48:52.989434 kernel: ima: No architecture policies found Mar 17 21:48:52.989447 kernel: clk: Disabling unused clocks Mar 17 21:48:52.989460 kernel: Freeing unused kernel image (initmem) memory: 47472K Mar 17 21:48:52.989473 kernel: Write protecting the kernel read-only data: 28672k Mar 17 21:48:52.989490 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Mar 17 21:48:52.989504 kernel: Freeing unused kernel image (rodata/data gap) memory: 612K Mar 17 21:48:52.989516 kernel: Run /init as init process Mar 17 21:48:52.989529 kernel: with arguments: Mar 17 21:48:52.989542 kernel: /init Mar 17 21:48:52.989555 kernel: with environment: Mar 17 21:48:52.989567 kernel: HOME=/ Mar 17 21:48:52.989580 kernel: TERM=linux Mar 17 21:48:52.989592 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Mar 17 21:48:52.989617 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Mar 17 21:48:52.989640 systemd[1]: Detected virtualization kvm. Mar 17 21:48:52.989655 systemd[1]: Detected architecture x86-64. Mar 17 21:48:52.989668 systemd[1]: Running in initrd. Mar 17 21:48:52.989682 systemd[1]: No hostname configured, using default hostname. Mar 17 21:48:52.989695 systemd[1]: Hostname set to . Mar 17 21:48:52.989709 systemd[1]: Initializing machine ID from VM UUID. Mar 17 21:48:52.989727 systemd[1]: Queued start job for default target initrd.target. Mar 17 21:48:52.989742 systemd[1]: Started systemd-ask-password-console.path. Mar 17 21:48:52.989755 systemd[1]: Reached target cryptsetup.target. Mar 17 21:48:52.989769 systemd[1]: Reached target paths.target. Mar 17 21:48:52.989782 systemd[1]: Reached target slices.target. Mar 17 21:48:52.989800 systemd[1]: Reached target swap.target. Mar 17 21:48:52.989814 systemd[1]: Reached target timers.target. Mar 17 21:48:52.989829 systemd[1]: Listening on iscsid.socket. Mar 17 21:48:52.989847 systemd[1]: Listening on iscsiuio.socket. Mar 17 21:48:52.989860 systemd[1]: Listening on systemd-journald-audit.socket. Mar 17 21:48:52.989878 systemd[1]: Listening on systemd-journald-dev-log.socket. Mar 17 21:48:52.989892 systemd[1]: Listening on systemd-journald.socket. Mar 17 21:48:52.989906 systemd[1]: Listening on systemd-networkd.socket. 
Mar 17 21:48:52.989920 systemd[1]: Listening on systemd-udevd-control.socket. Mar 17 21:48:52.989934 systemd[1]: Listening on systemd-udevd-kernel.socket. Mar 17 21:48:52.989948 systemd[1]: Reached target sockets.target. Mar 17 21:48:52.989961 systemd[1]: Starting kmod-static-nodes.service... Mar 17 21:48:52.989979 systemd[1]: Finished network-cleanup.service. Mar 17 21:48:52.989993 systemd[1]: Starting systemd-fsck-usr.service... Mar 17 21:48:52.990007 systemd[1]: Starting systemd-journald.service... Mar 17 21:48:52.990021 systemd[1]: Starting systemd-modules-load.service... Mar 17 21:48:52.990035 systemd[1]: Starting systemd-resolved.service... Mar 17 21:48:52.990049 systemd[1]: Starting systemd-vconsole-setup.service... Mar 17 21:48:52.990063 systemd[1]: Finished kmod-static-nodes.service. Mar 17 21:48:52.990087 systemd-journald[202]: Journal started Mar 17 21:48:52.990177 systemd-journald[202]: Runtime Journal (/run/log/journal/c7b03a767cc64157b1b26e6efe6ff1c1) is 4.7M, max 38.1M, 33.3M free. Mar 17 21:48:52.938813 systemd-modules-load[203]: Inserted module 'overlay' Mar 17 21:48:53.036471 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 17 21:48:53.036516 kernel: Bridge firewalling registered Mar 17 21:48:53.036535 systemd[1]: Started systemd-resolved.service. Mar 17 21:48:53.036558 kernel: audit: type=1130 audit(1742248133.027:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:48:53.036577 kernel: SCSI subsystem initialized Mar 17 21:48:53.027000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:48:52.966541 systemd-resolved[204]: Positive Trust Anchors: Mar 17 21:48:53.043802 systemd[1]: Started systemd-journald.service. Mar 17 21:48:53.043841 kernel: audit: type=1130 audit(1742248133.036:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:48:53.036000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:48:52.966561 systemd-resolved[204]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 17 21:48:53.062286 kernel: audit: type=1130 audit(1742248133.044:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:48:53.062356 kernel: audit: type=1130 audit(1742248133.050:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:48:53.062377 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Mar 17 21:48:53.062396 kernel: device-mapper: uevent: version 1.0.3 Mar 17 21:48:53.062414 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Mar 17 21:48:53.044000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:48:53.050000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:48:52.966607 systemd-resolved[204]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Mar 17 21:48:52.976662 systemd-resolved[204]: Defaulting to hostname 'linux'. Mar 17 21:48:53.003968 systemd-modules-load[203]: Inserted module 'br_netfilter' Mar 17 21:48:53.044852 systemd[1]: Finished systemd-fsck-usr.service. Mar 17 21:48:53.069000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:48:53.050808 systemd[1]: Finished systemd-vconsole-setup.service. Mar 17 21:48:53.082178 kernel: audit: type=1130 audit(1742248133.069:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:48:53.082214 kernel: audit: type=1130 audit(1742248133.076:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:48:53.076000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:48:53.068424 systemd-modules-load[203]: Inserted module 'dm_multipath' Mar 17 21:48:53.070523 systemd[1]: Finished systemd-modules-load.service. Mar 17 21:48:53.076713 systemd[1]: Reached target nss-lookup.target. Mar 17 21:48:53.083939 systemd[1]: Starting dracut-cmdline-ask.service... Mar 17 21:48:53.086110 systemd[1]: Starting systemd-sysctl.service... Mar 17 21:48:53.089290 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Mar 17 21:48:53.102000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:48:53.101707 systemd[1]: Finished systemd-sysctl.service. Mar 17 21:48:53.108851 kernel: audit: type=1130 audit(1742248133.102:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:48:53.107739 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. 
Mar 17 21:48:53.109000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:48:53.115965 systemd[1]: Finished dracut-cmdline-ask.service. Mar 17 21:48:53.117408 kernel: audit: type=1130 audit(1742248133.109:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:48:53.118002 systemd[1]: Starting dracut-cmdline.service... Mar 17 21:48:53.138931 kernel: audit: type=1130 audit(1742248133.116:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:48:53.116000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:48:53.139102 dracut-cmdline[225]: dracut-dracut-053 Mar 17 21:48:53.139102 dracut-cmdline[225]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LA Mar 17 21:48:53.139102 dracut-cmdline[225]: BEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=249ccd113f901380672c0d31e18f792e8e0344094c0e39eedc449f039418b31a Mar 17 21:48:53.227405 kernel: Loading iSCSI transport class v2.0-870. Mar 17 21:48:53.249375 kernel: iscsi: registered transport (tcp) Mar 17 21:48:53.278961 kernel: iscsi: registered transport (qla4xxx) Mar 17 21:48:53.279005 kernel: QLogic iSCSI HBA Driver Mar 17 21:48:53.329000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:48:53.328792 systemd[1]: Finished dracut-cmdline.service. Mar 17 21:48:53.330704 systemd[1]: Starting dracut-pre-udev.service... Mar 17 21:48:53.391386 kernel: raid6: sse2x4 gen() 13158 MB/s Mar 17 21:48:53.409393 kernel: raid6: sse2x4 xor() 7703 MB/s Mar 17 21:48:53.427369 kernel: raid6: sse2x2 gen() 8746 MB/s Mar 17 21:48:53.445370 kernel: raid6: sse2x2 xor() 7773 MB/s Mar 17 21:48:53.463364 kernel: raid6: sse2x1 gen() 8746 MB/s Mar 17 21:48:53.482042 kernel: raid6: sse2x1 xor() 6933 MB/s Mar 17 21:48:53.482080 kernel: raid6: using algorithm sse2x4 gen() 13158 MB/s Mar 17 21:48:53.482099 kernel: raid6: .... xor() 7703 MB/s, rmw enabled Mar 17 21:48:53.483448 kernel: raid6: using ssse3x2 recovery algorithm Mar 17 21:48:53.501363 kernel: xor: automatically using best checksumming function avx Mar 17 21:48:53.620368 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Mar 17 21:48:53.634000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:48:53.635000 audit: BPF prog-id=7 op=LOAD Mar 17 21:48:53.635000 audit: BPF prog-id=8 op=LOAD Mar 17 21:48:53.634366 systemd[1]: Finished dracut-pre-udev.service. Mar 17 21:48:53.637581 systemd[1]: Starting systemd-udevd.service... 
Mar 17 21:48:53.653005 systemd-udevd[402]: Using default interface naming scheme 'v252'. Mar 17 21:48:53.661111 systemd[1]: Started systemd-udevd.service. Mar 17 21:48:53.665000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:48:53.666937 systemd[1]: Starting dracut-pre-trigger.service... Mar 17 21:48:53.685056 dracut-pre-trigger[416]: rd.md=0: removing MD RAID activation Mar 17 21:48:53.728000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:48:53.728120 systemd[1]: Finished dracut-pre-trigger.service. Mar 17 21:48:53.729988 systemd[1]: Starting systemd-udev-trigger.service... Mar 17 21:48:53.826615 systemd[1]: Finished systemd-udev-trigger.service. Mar 17 21:48:53.827000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:48:53.924608 kernel: ACPI: bus type USB registered Mar 17 21:48:53.924674 kernel: usbcore: registered new interface driver usbfs Mar 17 21:48:53.924695 kernel: usbcore: registered new interface driver hub Mar 17 21:48:53.928357 kernel: usbcore: registered new device driver usb Mar 17 21:48:53.934353 kernel: cryptd: max_cpu_qlen set to 1000 Mar 17 21:48:53.941404 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Mar 17 21:48:54.025643 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Mar 17 21:48:54.025890 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Mar 17 21:48:54.026088 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Mar 17 21:48:54.026282 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Mar 17 21:48:54.026487 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Mar 17 21:48:54.026670 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 17 21:48:54.026689 kernel: GPT:17805311 != 125829119 Mar 17 21:48:54.026712 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 17 21:48:54.026733 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Mar 17 21:48:54.026901 kernel: GPT:17805311 != 125829119 Mar 17 21:48:54.026919 kernel: hub 1-0:1.0: USB hub found Mar 17 21:48:54.027143 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 17 21:48:54.027169 kernel: hub 1-0:1.0: 4 ports detected Mar 17 21:48:54.027396 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 17 21:48:54.027423 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Mar 17 21:48:54.027700 kernel: hub 2-0:1.0: USB hub found Mar 17 21:48:54.027915 kernel: hub 2-0:1.0: 4 ports detected Mar 17 21:48:54.028101 kernel: AVX version of gcm_enc/dec engaged. Mar 17 21:48:54.028132 kernel: AES CTR mode by8 optimization enabled Mar 17 21:48:54.030372 kernel: libata version 3.00 loaded. Mar 17 21:48:54.058880 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. 
Mar 17 21:48:54.132013 kernel: ahci 0000:00:1f.2: version 3.0 Mar 17 21:48:54.132267 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Mar 17 21:48:54.132290 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Mar 17 21:48:54.132502 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Mar 17 21:48:54.132691 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (449) Mar 17 21:48:54.132720 kernel: scsi host0: ahci Mar 17 21:48:54.132935 kernel: scsi host1: ahci Mar 17 21:48:54.133123 kernel: scsi host2: ahci Mar 17 21:48:54.133346 kernel: scsi host3: ahci Mar 17 21:48:54.133540 kernel: scsi host4: ahci Mar 17 21:48:54.133730 kernel: scsi host5: ahci Mar 17 21:48:54.133924 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 41 Mar 17 21:48:54.133957 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 41 Mar 17 21:48:54.133974 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 41 Mar 17 21:48:54.133990 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 41 Mar 17 21:48:54.134005 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 41 Mar 17 21:48:54.134021 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 41 Mar 17 21:48:54.135973 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Mar 17 21:48:54.151465 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Mar 17 21:48:54.152285 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Mar 17 21:48:54.158673 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Mar 17 21:48:54.160621 systemd[1]: Starting disk-uuid.service... Mar 17 21:48:54.167258 disk-uuid[529]: Primary Header is updated. Mar 17 21:48:54.167258 disk-uuid[529]: Secondary Entries is updated. Mar 17 21:48:54.167258 disk-uuid[529]: Secondary Header is updated. Mar 17 21:48:54.170727 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 17 21:48:54.230638 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Mar 17 21:48:54.373395 kernel: hid: raw HID events driver (C) Jiri Kosina Mar 17 21:48:54.395384 kernel: ata1: SATA link down (SStatus 0 SControl 300) Mar 17 21:48:54.402383 kernel: ata3: SATA link down (SStatus 0 SControl 300) Mar 17 21:48:54.406035 kernel: ata2: SATA link down (SStatus 0 SControl 300) Mar 17 21:48:54.406081 kernel: ata4: SATA link down (SStatus 0 SControl 300) Mar 17 21:48:54.406116 kernel: ata6: SATA link down (SStatus 0 SControl 300) Mar 17 21:48:54.409313 kernel: ata5: SATA link down (SStatus 0 SControl 300) Mar 17 21:48:54.426012 kernel: usbcore: registered new interface driver usbhid Mar 17 21:48:54.426074 kernel: usbhid: USB HID core driver Mar 17 21:48:54.433349 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input3 Mar 17 21:48:54.437356 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Mar 17 21:48:55.191358 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 17 21:48:55.193020 disk-uuid[530]: The operation has completed successfully. Mar 17 21:48:55.243696 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 17 21:48:55.244943 systemd[1]: Finished disk-uuid.service. 
Mar 17 21:48:55.245000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:48:55.246000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:48:55.255799 systemd[1]: Starting verity-setup.service... Mar 17 21:48:55.277359 kernel: device-mapper: verity: sha256 using implementation "sha256-avx" Mar 17 21:48:55.331276 systemd[1]: Found device dev-mapper-usr.device. Mar 17 21:48:55.332987 systemd[1]: Mounting sysusr-usr.mount... Mar 17 21:48:55.335048 systemd[1]: Finished verity-setup.service. Mar 17 21:48:55.335000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:48:55.428358 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Mar 17 21:48:55.429606 systemd[1]: Mounted sysusr-usr.mount. Mar 17 21:48:55.430433 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Mar 17 21:48:55.431445 systemd[1]: Starting ignition-setup.service... Mar 17 21:48:55.434866 systemd[1]: Starting parse-ip-for-networkd.service... Mar 17 21:48:55.453100 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 17 21:48:55.453180 kernel: BTRFS info (device vda6): using free space tree Mar 17 21:48:55.453202 kernel: BTRFS info (device vda6): has skinny extents Mar 17 21:48:55.467661 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 17 21:48:55.474633 systemd[1]: Finished ignition-setup.service. Mar 17 21:48:55.474000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:48:55.476511 systemd[1]: Starting ignition-fetch-offline.service... Mar 17 21:48:55.572650 systemd[1]: Finished parse-ip-for-networkd.service. Mar 17 21:48:55.573000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:48:55.575000 audit: BPF prog-id=9 op=LOAD Mar 17 21:48:55.576875 systemd[1]: Starting systemd-networkd.service... Mar 17 21:48:55.616865 systemd-networkd[712]: lo: Link UP Mar 17 21:48:55.617910 systemd-networkd[712]: lo: Gained carrier Mar 17 21:48:55.620192 systemd-networkd[712]: Enumeration completed Mar 17 21:48:55.621000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:48:55.621019 systemd[1]: Started systemd-networkd.service. Mar 17 21:48:55.621804 systemd[1]: Reached target network.target. Mar 17 21:48:55.642000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:48:55.623475 systemd[1]: Starting iscsiuio.service... Mar 17 21:48:55.633862 systemd-networkd[712]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Mar 17 21:48:55.636847 systemd-networkd[712]: eth0: Link UP Mar 17 21:48:55.636853 systemd-networkd[712]: eth0: Gained carrier Mar 17 21:48:55.641596 systemd[1]: Started iscsiuio.service. Mar 17 21:48:55.643738 systemd[1]: Starting iscsid.service... Mar 17 21:48:55.650594 iscsid[717]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Mar 17 21:48:55.650594 iscsid[717]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Mar 17 21:48:55.650594 iscsid[717]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Mar 17 21:48:55.650594 iscsid[717]: If using hardware iscsi like qla4xxx this message can be ignored. Mar 17 21:48:55.650594 iscsid[717]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Mar 17 21:48:55.659000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:48:55.667603 iscsid[717]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Mar 17 21:48:55.653929 systemd[1]: Started iscsid.service. Mar 17 21:48:55.661430 systemd[1]: Starting dracut-initqueue.service... Mar 17 21:48:55.663505 systemd-networkd[712]: eth0: DHCPv4 address 10.230.29.198/30, gateway 10.230.29.197 acquired from 10.230.29.197 Mar 17 21:48:55.679000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:48:55.678721 systemd[1]: Finished dracut-initqueue.service. Mar 17 21:48:55.679585 systemd[1]: Reached target remote-fs-pre.target. Mar 17 21:48:55.680233 systemd[1]: Reached target remote-cryptsetup.target. Mar 17 21:48:55.680913 systemd[1]: Reached target remote-fs.target. Mar 17 21:48:55.682772 systemd[1]: Starting dracut-pre-mount.service... Mar 17 21:48:55.689265 ignition[631]: Ignition 2.14.0 Mar 17 21:48:55.693000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:48:55.698000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:48:55.689289 ignition[631]: Stage: fetch-offline Mar 17 21:48:55.693413 systemd[1]: Finished ignition-fetch-offline.service. Mar 17 21:48:55.689410 ignition[631]: reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 21:48:55.695268 systemd[1]: Starting ignition-fetch.service... Mar 17 21:48:55.689449 ignition[631]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Mar 17 21:48:55.697854 systemd[1]: Finished dracut-pre-mount.service.
Mar 17 21:48:55.690934 ignition[631]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Mar 17 21:48:55.691106 ignition[631]: parsed url from cmdline: "" Mar 17 21:48:55.691113 ignition[631]: no config URL provided Mar 17 21:48:55.691123 ignition[631]: reading system config file "/usr/lib/ignition/user.ign" Mar 17 21:48:55.691140 ignition[631]: no config at "/usr/lib/ignition/user.ign" Mar 17 21:48:55.691149 ignition[631]: failed to fetch config: resource requires networking Mar 17 21:48:55.691581 ignition[631]: Ignition finished successfully Mar 17 21:48:55.706537 ignition[731]: Ignition 2.14.0 Mar 17 21:48:55.706548 ignition[731]: Stage: fetch Mar 17 21:48:55.706697 ignition[731]: reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 21:48:55.706731 ignition[731]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Mar 17 21:48:55.707719 ignition[731]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Mar 17 21:48:55.707838 ignition[731]: parsed url from cmdline: "" Mar 17 21:48:55.707844 ignition[731]: no config URL provided Mar 17 21:48:55.707854 ignition[731]: reading system config file "/usr/lib/ignition/user.ign" Mar 17 21:48:55.707870 ignition[731]: no config at "/usr/lib/ignition/user.ign" Mar 17 21:48:55.711187 ignition[731]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Mar 17 21:48:55.711205 ignition[731]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Mar 17 21:48:55.713571 ignition[731]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Mar 17 21:48:55.733605 ignition[731]: GET result: OK Mar 17 21:48:55.733744 ignition[731]: parsing config with SHA512: 5bf740dc062333683bc4eb235dbb6e2cdb2764ea5b41721690181f790f39d76f8b8a235024f95303e9800f3b4919d68642613248dbe882737da1217f7965868d Mar 17 21:48:55.744918 unknown[731]: fetched base config from "system" Mar 17 21:48:55.745848 unknown[731]: fetched base config from "system" Mar 17 21:48:55.746689 unknown[731]: fetched user config from "openstack" Mar 17 21:48:55.747989 ignition[731]: fetch: fetch complete Mar 17 21:48:55.748028 ignition[731]: fetch: fetch passed Mar 17 21:48:55.748115 ignition[731]: Ignition finished successfully Mar 17 21:48:55.750966 systemd[1]: Finished ignition-fetch.service. Mar 17 21:48:55.751000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:48:55.753004 systemd[1]: Starting ignition-kargs.service... Mar 17 21:48:55.765745 ignition[737]: Ignition 2.14.0 Mar 17 21:48:55.765764 ignition[737]: Stage: kargs Mar 17 21:48:55.765932 ignition[737]: reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 21:48:55.765965 ignition[737]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Mar 17 21:48:55.767243 ignition[737]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Mar 17 21:48:55.770000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:48:55.770367 systemd[1]: Finished ignition-kargs.service. 
Mar 17 21:48:55.769250 ignition[737]: kargs: kargs passed Mar 17 21:48:55.769319 ignition[737]: Ignition finished successfully Mar 17 21:48:55.773697 systemd[1]: Starting ignition-disks.service... Mar 17 21:48:55.783367 ignition[742]: Ignition 2.14.0 Mar 17 21:48:55.783388 ignition[742]: Stage: disks Mar 17 21:48:55.783553 ignition[742]: reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 21:48:55.783588 ignition[742]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Mar 17 21:48:55.784839 ignition[742]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Mar 17 21:48:55.786396 ignition[742]: disks: disks passed Mar 17 21:48:55.788000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:48:55.787530 systemd[1]: Finished ignition-disks.service. Mar 17 21:48:55.786461 ignition[742]: Ignition finished successfully Mar 17 21:48:55.788570 systemd[1]: Reached target initrd-root-device.target. Mar 17 21:48:55.789647 systemd[1]: Reached target local-fs-pre.target. Mar 17 21:48:55.790933 systemd[1]: Reached target local-fs.target. Mar 17 21:48:55.792210 systemd[1]: Reached target sysinit.target. Mar 17 21:48:55.793467 systemd[1]: Reached target basic.target. Mar 17 21:48:55.795910 systemd[1]: Starting systemd-fsck-root.service... Mar 17 21:48:55.816305 systemd-fsck[749]: ROOT: clean, 623/1628000 files, 124059/1617920 blocks Mar 17 21:48:55.821633 systemd[1]: Finished systemd-fsck-root.service. Mar 17 21:48:55.822000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:48:55.823478 systemd[1]: Mounting sysroot.mount... Mar 17 21:48:55.834355 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Mar 17 21:48:55.835263 systemd[1]: Mounted sysroot.mount. Mar 17 21:48:55.836745 systemd[1]: Reached target initrd-root-fs.target. Mar 17 21:48:55.839562 systemd[1]: Mounting sysroot-usr.mount... Mar 17 21:48:55.841678 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Mar 17 21:48:55.843666 systemd[1]: Starting flatcar-openstack-hostname.service... Mar 17 21:48:55.845275 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 17 21:48:55.845974 systemd[1]: Reached target ignition-diskful.target. Mar 17 21:48:55.850325 systemd[1]: Mounted sysroot-usr.mount. Mar 17 21:48:55.853182 systemd[1]: Starting initrd-setup-root.service... Mar 17 21:48:55.866135 initrd-setup-root[760]: cut: /sysroot/etc/passwd: No such file or directory Mar 17 21:48:55.876845 initrd-setup-root[768]: cut: /sysroot/etc/group: No such file or directory Mar 17 21:48:55.883973 initrd-setup-root[776]: cut: /sysroot/etc/shadow: No such file or directory Mar 17 21:48:55.894725 initrd-setup-root[785]: cut: /sysroot/etc/gshadow: No such file or directory Mar 17 21:48:55.957531 systemd[1]: Finished initrd-setup-root.service. Mar 17 21:48:55.958000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 21:48:55.959548 systemd[1]: Starting ignition-mount.service... Mar 17 21:48:55.961166 systemd[1]: Starting sysroot-boot.service... Mar 17 21:48:55.972563 bash[803]: umount: /sysroot/usr/share/oem: not mounted. Mar 17 21:48:55.994966 coreos-metadata[755]: Mar 17 21:48:55.994 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Mar 17 21:48:55.999175 ignition[805]: INFO : Ignition 2.14.0 Mar 17 21:48:56.000123 ignition[805]: INFO : Stage: mount Mar 17 21:48:56.001025 ignition[805]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 21:48:56.002022 ignition[805]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Mar 17 21:48:56.004933 ignition[805]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Mar 17 21:48:56.006101 systemd[1]: Finished sysroot-boot.service. Mar 17 21:48:56.006000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:48:56.008477 ignition[805]: INFO : mount: mount passed Mar 17 21:48:56.009183 ignition[805]: INFO : Ignition finished successfully Mar 17 21:48:56.009441 systemd[1]: Finished ignition-mount.service. Mar 17 21:48:56.010000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:48:56.012874 coreos-metadata[755]: Mar 17 21:48:56.012 INFO Fetch successful Mar 17 21:48:56.013728 coreos-metadata[755]: Mar 17 21:48:56.013 INFO wrote hostname srv-87dtj.gb1.brightbox.com to /sysroot/etc/hostname Mar 17 21:48:56.015860 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Mar 17 21:48:56.016007 systemd[1]: Finished flatcar-openstack-hostname.service. Mar 17 21:48:56.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:48:56.031000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:48:56.355047 systemd[1]: Mounting sysroot-usr-share-oem.mount... Mar 17 21:48:56.366487 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (812) Mar 17 21:48:56.371014 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 17 21:48:56.371070 kernel: BTRFS info (device vda6): using free space tree Mar 17 21:48:56.371090 kernel: BTRFS info (device vda6): has skinny extents Mar 17 21:48:56.378458 systemd[1]: Mounted sysroot-usr-share-oem.mount. Mar 17 21:48:56.381191 systemd[1]: Starting ignition-files.service... 
Mar 17 21:48:56.402007 ignition[832]: INFO : Ignition 2.14.0 Mar 17 21:48:56.402007 ignition[832]: INFO : Stage: files Mar 17 21:48:56.403765 ignition[832]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 21:48:56.403765 ignition[832]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Mar 17 21:48:56.403765 ignition[832]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Mar 17 21:48:56.407057 ignition[832]: DEBUG : files: compiled without relabeling support, skipping Mar 17 21:48:56.407057 ignition[832]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 17 21:48:56.407057 ignition[832]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 17 21:48:56.410540 ignition[832]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 17 21:48:56.412645 ignition[832]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 17 21:48:56.415532 ignition[832]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 17 21:48:56.415532 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Mar 17 21:48:56.415532 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Mar 17 21:48:56.413640 unknown[832]: wrote ssh authorized keys file for user: core Mar 17 21:48:56.556384 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 17 21:48:56.771403 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Mar 17 21:48:56.772991 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Mar 17 21:48:56.772991 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Mar 17 21:48:57.343134 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Mar 17 21:48:57.618868 systemd-networkd[712]: eth0: Gained IPv6LL Mar 17 21:48:57.678106 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Mar 17 21:48:57.679871 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Mar 17 21:48:57.679871 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Mar 17 21:48:57.679871 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 17 21:48:57.679871 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 17 21:48:57.679871 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 17 21:48:57.679871 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 17 21:48:57.679871 ignition[832]: INFO : 
files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 17 21:48:57.679871 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 17 21:48:57.679871 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 17 21:48:57.679871 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 17 21:48:57.690667 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Mar 17 21:48:57.690667 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Mar 17 21:48:57.690667 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Mar 17 21:48:57.690667 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1 Mar 17 21:48:58.258733 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Mar 17 21:48:59.127054 systemd-networkd[712]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8771:24:19ff:fee6:1dc6/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8771:24:19ff:fee6:1dc6/64 assigned by NDisc. Mar 17 21:48:59.127072 systemd-networkd[712]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. 
Mar 17 21:49:00.466475 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Mar 17 21:49:00.468291 ignition[832]: INFO : files: op(c): [started] processing unit "coreos-metadata-sshkeys@.service" Mar 17 21:49:00.468291 ignition[832]: INFO : files: op(c): [finished] processing unit "coreos-metadata-sshkeys@.service" Mar 17 21:49:00.468291 ignition[832]: INFO : files: op(d): [started] processing unit "prepare-helm.service" Mar 17 21:49:00.468291 ignition[832]: INFO : files: op(d): op(e): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 17 21:49:00.472300 ignition[832]: INFO : files: op(d): op(e): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 17 21:49:00.472300 ignition[832]: INFO : files: op(d): [finished] processing unit "prepare-helm.service" Mar 17 21:49:00.472300 ignition[832]: INFO : files: op(f): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Mar 17 21:49:00.472300 ignition[832]: INFO : files: op(f): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Mar 17 21:49:00.472300 ignition[832]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Mar 17 21:49:00.472300 ignition[832]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Mar 17 21:49:00.478395 ignition[832]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 17 21:49:00.478395 ignition[832]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 17 21:49:00.478395 ignition[832]: INFO : files: files passed Mar 17 21:49:00.478395 ignition[832]: INFO : Ignition finished successfully Mar 17 21:49:00.491520 kernel: kauditd_printk_skb: 28 callbacks suppressed Mar 17 21:49:00.491553 kernel: audit: type=1130 audit(1742248140.481:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:00.481000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:00.479026 systemd[1]: Finished ignition-files.service. Mar 17 21:49:00.482963 systemd[1]: Starting initrd-setup-root-after-ignition.service... Mar 17 21:49:00.492869 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Mar 17 21:49:00.493783 systemd[1]: Starting ignition-quench.service... Mar 17 21:49:00.505406 kernel: audit: type=1130 audit(1742248140.499:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:00.499000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 21:49:00.505479 initrd-setup-root-after-ignition[857]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 17 21:49:00.516725 kernel: audit: type=1130 audit(1742248140.505:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:00.516767 kernel: audit: type=1131 audit(1742248140.505:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:00.505000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:00.505000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:00.498399 systemd[1]: Finished initrd-setup-root-after-ignition.service. Mar 17 21:49:00.499811 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 17 21:49:00.499954 systemd[1]: Finished ignition-quench.service. Mar 17 21:49:00.506246 systemd[1]: Reached target ignition-complete.target. Mar 17 21:49:00.518359 systemd[1]: Starting initrd-parse-etc.service... Mar 17 21:49:00.537497 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 17 21:49:00.538521 systemd[1]: Finished initrd-parse-etc.service. Mar 17 21:49:00.539000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:00.541587 systemd[1]: Reached target initrd-fs.target. Mar 17 21:49:00.552292 kernel: audit: type=1130 audit(1742248140.539:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:00.552326 kernel: audit: type=1131 audit(1742248140.541:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:00.541000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:00.551637 systemd[1]: Reached target initrd.target. Mar 17 21:49:00.553217 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Mar 17 21:49:00.555410 systemd[1]: Starting dracut-pre-pivot.service... Mar 17 21:49:00.572715 systemd[1]: Finished dracut-pre-pivot.service. Mar 17 21:49:00.574497 systemd[1]: Starting initrd-cleanup.service... Mar 17 21:49:00.595296 kernel: audit: type=1130 audit(1742248140.573:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:00.573000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:00.608163 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Mar 17 21:49:00.609277 systemd[1]: Finished initrd-cleanup.service. Mar 17 21:49:00.610000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:00.614464 systemd[1]: Stopped target nss-lookup.target. Mar 17 21:49:00.621240 kernel: audit: type=1130 audit(1742248140.610:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:00.621274 kernel: audit: type=1131 audit(1742248140.613:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:00.613000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:00.622060 systemd[1]: Stopped target remote-cryptsetup.target. Mar 17 21:49:00.622853 systemd[1]: Stopped target timers.target. Mar 17 21:49:00.624465 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 17 21:49:00.630966 kernel: audit: type=1131 audit(1742248140.625:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:00.625000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:00.624548 systemd[1]: Stopped dracut-pre-pivot.service. Mar 17 21:49:00.625557 systemd[1]: Stopped target initrd.target. Mar 17 21:49:00.631569 systemd[1]: Stopped target basic.target. Mar 17 21:49:00.632784 systemd[1]: Stopped target ignition-complete.target. Mar 17 21:49:00.634119 systemd[1]: Stopped target ignition-diskful.target. Mar 17 21:49:00.635405 systemd[1]: Stopped target initrd-root-device.target. Mar 17 21:49:00.636702 systemd[1]: Stopped target remote-fs.target. Mar 17 21:49:00.638002 systemd[1]: Stopped target remote-fs-pre.target. Mar 17 21:49:00.639399 systemd[1]: Stopped target sysinit.target. Mar 17 21:49:00.640631 systemd[1]: Stopped target local-fs.target. Mar 17 21:49:00.641852 systemd[1]: Stopped target local-fs-pre.target. Mar 17 21:49:00.643152 systemd[1]: Stopped target swap.target. Mar 17 21:49:00.645000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:00.644419 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 17 21:49:00.644511 systemd[1]: Stopped dracut-pre-mount.service. Mar 17 21:49:00.648000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:00.645791 systemd[1]: Stopped target cryptsetup.target. Mar 17 21:49:00.649000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:00.647048 systemd[1]: dracut-initqueue.service: Deactivated successfully. 
Mar 17 21:49:00.650000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:00.647132 systemd[1]: Stopped dracut-initqueue.service. Mar 17 21:49:00.648509 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 17 21:49:00.648581 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Mar 17 21:49:00.649792 systemd[1]: ignition-files.service: Deactivated successfully. Mar 17 21:49:00.649854 systemd[1]: Stopped ignition-files.service. Mar 17 21:49:00.652212 systemd[1]: Stopping ignition-mount.service... Mar 17 21:49:00.663000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:00.664000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:00.666000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:00.669061 iscsid[717]: iscsid shutting down. Mar 17 21:49:00.658604 systemd[1]: Stopping iscsid.service... Mar 17 21:49:00.671000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:00.660040 systemd[1]: Stopping sysroot-boot.service... Mar 17 21:49:00.663266 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 17 21:49:00.663379 systemd[1]: Stopped systemd-udev-trigger.service. Mar 17 21:49:00.674951 ignition[870]: INFO : Ignition 2.14.0 Mar 17 21:49:00.674951 ignition[870]: INFO : Stage: umount Mar 17 21:49:00.674951 ignition[870]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 21:49:00.674951 ignition[870]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Mar 17 21:49:00.664136 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 17 21:49:00.664196 systemd[1]: Stopped dracut-pre-trigger.service. Mar 17 21:49:00.665655 systemd[1]: iscsid.service: Deactivated successfully. Mar 17 21:49:00.665868 systemd[1]: Stopped iscsid.service. Mar 17 21:49:00.668046 systemd[1]: Stopping iscsiuio.service... Mar 17 21:49:00.670012 systemd[1]: iscsiuio.service: Deactivated successfully. Mar 17 21:49:00.670314 systemd[1]: Stopped iscsiuio.service. Mar 17 21:49:00.684814 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 17 21:49:00.686472 ignition[870]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Mar 17 21:49:00.686472 ignition[870]: INFO : umount: umount passed Mar 17 21:49:00.686472 ignition[870]: INFO : Ignition finished successfully Mar 17 21:49:00.688000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 21:49:00.688000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:00.689000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:00.691000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:00.687630 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 17 21:49:00.695000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:00.687756 systemd[1]: Stopped ignition-mount.service. Mar 17 21:49:00.688631 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 17 21:49:00.688696 systemd[1]: Stopped ignition-disks.service. Mar 17 21:49:00.689389 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 17 21:49:00.689447 systemd[1]: Stopped ignition-kargs.service. Mar 17 21:49:00.690092 systemd[1]: ignition-fetch.service: Deactivated successfully. Mar 17 21:49:00.690148 systemd[1]: Stopped ignition-fetch.service. Mar 17 21:49:00.692212 systemd[1]: Stopped target network.target. Mar 17 21:49:00.704000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:00.693958 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 17 21:49:00.694036 systemd[1]: Stopped ignition-fetch-offline.service. Mar 17 21:49:00.695520 systemd[1]: Stopped target paths.target. Mar 17 21:49:00.696747 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 17 21:49:00.712000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:00.698635 systemd[1]: Stopped systemd-ask-password-console.path. Mar 17 21:49:00.699668 systemd[1]: Stopped target slices.target. Mar 17 21:49:00.717000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:00.718000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:00.700947 systemd[1]: Stopped target sockets.target. Mar 17 21:49:00.719000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:00.701605 systemd[1]: iscsid.socket: Deactivated successfully. Mar 17 21:49:00.701657 systemd[1]: Closed iscsid.socket. Mar 17 21:49:00.702859 systemd[1]: iscsiuio.socket: Deactivated successfully. 
Mar 17 21:49:00.725000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:00.702912 systemd[1]: Closed iscsiuio.socket. Mar 17 21:49:00.704076 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 17 21:49:00.727000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:00.704136 systemd[1]: Stopped ignition-setup.service. Mar 17 21:49:00.729000 audit: BPF prog-id=6 op=UNLOAD Mar 17 21:49:00.705516 systemd[1]: Stopping systemd-networkd.service... Mar 17 21:49:00.707647 systemd[1]: Stopping systemd-resolved.service... Mar 17 21:49:00.709691 systemd-networkd[712]: eth0: DHCPv6 lease lost Mar 17 21:49:00.734000 audit: BPF prog-id=9 op=UNLOAD Mar 17 21:49:00.734000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:00.711699 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 17 21:49:00.736000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:00.711842 systemd[1]: Stopped systemd-networkd.service. Mar 17 21:49:00.737000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:00.712856 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 17 21:49:00.712909 systemd[1]: Closed systemd-networkd.socket. Mar 17 21:49:00.715236 systemd[1]: Stopping network-cleanup.service... Mar 17 21:49:00.717131 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 17 21:49:00.750000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:00.717218 systemd[1]: Stopped parse-ip-for-networkd.service. Mar 17 21:49:00.751000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:00.717975 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 17 21:49:00.752000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:00.718062 systemd[1]: Stopped systemd-sysctl.service. Mar 17 21:49:00.718897 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 17 21:49:00.718967 systemd[1]: Stopped systemd-modules-load.service. Mar 17 21:49:00.720509 systemd[1]: Stopping systemd-udevd.service... Mar 17 21:49:00.724094 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Mar 17 21:49:00.757000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 21:49:00.724845 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 17 21:49:00.758000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:00.758000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:00.725020 systemd[1]: Stopped systemd-resolved.service. Mar 17 21:49:00.726980 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 17 21:49:00.727196 systemd[1]: Stopped systemd-udevd.service. Mar 17 21:49:00.729716 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 17 21:49:00.729810 systemd[1]: Closed systemd-udevd-control.socket. Mar 17 21:49:00.732745 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 17 21:49:00.732797 systemd[1]: Closed systemd-udevd-kernel.socket. Mar 17 21:49:00.733959 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 17 21:49:00.734035 systemd[1]: Stopped dracut-pre-udev.service. Mar 17 21:49:00.735302 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 17 21:49:00.735410 systemd[1]: Stopped dracut-cmdline.service. Mar 17 21:49:00.736725 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 17 21:49:00.736783 systemd[1]: Stopped dracut-cmdline-ask.service. Mar 17 21:49:00.738818 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Mar 17 21:49:00.749177 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 17 21:49:00.749254 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Mar 17 21:49:00.751128 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 17 21:49:00.751192 systemd[1]: Stopped kmod-static-nodes.service. Mar 17 21:49:00.752050 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 17 21:49:00.752111 systemd[1]: Stopped systemd-vconsole-setup.service. Mar 17 21:49:00.755881 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Mar 17 21:49:00.756645 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 17 21:49:00.756790 systemd[1]: Stopped network-cleanup.service. Mar 17 21:49:00.757913 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 17 21:49:00.758054 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Mar 17 21:49:00.906111 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 17 21:49:00.906274 systemd[1]: Stopped sysroot-boot.service. Mar 17 21:49:00.907000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:00.907936 systemd[1]: Reached target initrd-switch-root.target. Mar 17 21:49:00.909032 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 17 21:49:00.909000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:00.909098 systemd[1]: Stopped initrd-setup-root.service. Mar 17 21:49:00.911426 systemd[1]: Starting initrd-switch-root.service... Mar 17 21:49:00.927324 systemd[1]: Switching root. 
Mar 17 21:49:00.946812 systemd-journald[202]: Journal stopped Mar 17 21:49:04.954533 systemd-journald[202]: Received SIGTERM from PID 1 (systemd). Mar 17 21:49:04.955189 kernel: SELinux: Class mctp_socket not defined in policy. Mar 17 21:49:04.955222 kernel: SELinux: Class anon_inode not defined in policy. Mar 17 21:49:04.955265 kernel: SELinux: the above unknown classes and permissions will be allowed Mar 17 21:49:04.955284 kernel: SELinux: policy capability network_peer_controls=1 Mar 17 21:49:04.955325 kernel: SELinux: policy capability open_perms=1 Mar 17 21:49:04.956416 kernel: SELinux: policy capability extended_socket_class=1 Mar 17 21:49:04.956467 kernel: SELinux: policy capability always_check_network=0 Mar 17 21:49:04.956513 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 17 21:49:04.956541 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 17 21:49:04.956575 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 17 21:49:04.956602 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 17 21:49:04.956624 systemd[1]: Successfully loaded SELinux policy in 62.213ms. Mar 17 21:49:04.956656 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 16.252ms. Mar 17 21:49:04.956679 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Mar 17 21:49:04.956700 systemd[1]: Detected virtualization kvm. Mar 17 21:49:04.956720 systemd[1]: Detected architecture x86-64. Mar 17 21:49:04.956753 systemd[1]: Detected first boot. Mar 17 21:49:04.956776 systemd[1]: Hostname set to <srv-87dtj.gb1.brightbox.com>. Mar 17 21:49:04.956797 systemd[1]: Initializing machine ID from VM UUID. Mar 17 21:49:04.956818 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Mar 17 21:49:04.956838 systemd[1]: Populated /etc with preset unit settings. Mar 17 21:49:04.956873 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Mar 17 21:49:04.956907 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Mar 17 21:49:04.956947 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 21:49:04.956971 systemd[1]: initrd-switch-root.service: Deactivated successfully. Mar 17 21:49:04.956992 systemd[1]: Stopped initrd-switch-root.service. Mar 17 21:49:04.957012 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Mar 17 21:49:04.958084 systemd[1]: Created slice system-addon\x2dconfig.slice. Mar 17 21:49:04.958111 systemd[1]: Created slice system-addon\x2drun.slice. Mar 17 21:49:04.958133 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Mar 17 21:49:04.958184 systemd[1]: Created slice system-getty.slice. Mar 17 21:49:04.958207 systemd[1]: Created slice system-modprobe.slice. Mar 17 21:49:04.958227 systemd[1]: Created slice system-serial\x2dgetty.slice. Mar 17 21:49:04.958255 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Mar 17 21:49:04.958274 systemd[1]: Created slice system-systemd\x2dfsck.slice. Mar 17 21:49:04.958293 systemd[1]: Created slice user.slice. Mar 17 21:49:04.958319 systemd[1]: Started systemd-ask-password-console.path. Mar 17 21:49:04.958338 systemd[1]: Started systemd-ask-password-wall.path. Mar 17 21:49:04.958375 systemd[1]: Set up automount boot.automount. Mar 17 21:49:04.958410 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Mar 17 21:49:04.958454 systemd[1]: Stopped target initrd-switch-root.target. Mar 17 21:49:04.958476 systemd[1]: Stopped target initrd-fs.target. Mar 17 21:49:04.958498 systemd[1]: Stopped target initrd-root-fs.target. Mar 17 21:49:04.958518 systemd[1]: Reached target integritysetup.target. Mar 17 21:49:04.958545 systemd[1]: Reached target remote-cryptsetup.target. Mar 17 21:49:04.958580 systemd[1]: Reached target remote-fs.target. Mar 17 21:49:04.958602 systemd[1]: Reached target slices.target. Mar 17 21:49:04.958624 systemd[1]: Reached target swap.target. Mar 17 21:49:04.958650 systemd[1]: Reached target torcx.target. Mar 17 21:49:04.958670 systemd[1]: Reached target veritysetup.target. Mar 17 21:49:04.958690 systemd[1]: Listening on systemd-coredump.socket. Mar 17 21:49:04.958711 systemd[1]: Listening on systemd-initctl.socket. Mar 17 21:49:04.958731 systemd[1]: Listening on systemd-networkd.socket. Mar 17 21:49:04.958758 systemd[1]: Listening on systemd-udevd-control.socket. Mar 17 21:49:04.958780 systemd[1]: Listening on systemd-udevd-kernel.socket. Mar 17 21:49:04.958812 systemd[1]: Listening on systemd-userdbd.socket. Mar 17 21:49:04.958840 systemd[1]: Mounting dev-hugepages.mount... Mar 17 21:49:04.958874 systemd[1]: Mounting dev-mqueue.mount... Mar 17 21:49:04.958896 systemd[1]: Mounting media.mount... Mar 17 21:49:04.958917 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 21:49:04.958938 systemd[1]: Mounting sys-kernel-debug.mount... Mar 17 21:49:04.958959 systemd[1]: Mounting sys-kernel-tracing.mount... Mar 17 21:49:04.958979 systemd[1]: Mounting tmp.mount... Mar 17 21:49:04.958999 systemd[1]: Starting flatcar-tmpfiles.service... Mar 17 21:49:04.959033 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 17 21:49:04.959055 systemd[1]: Starting kmod-static-nodes.service... Mar 17 21:49:04.959075 systemd[1]: Starting modprobe@configfs.service... Mar 17 21:49:04.959096 systemd[1]: Starting modprobe@dm_mod.service... Mar 17 21:49:04.959117 systemd[1]: Starting modprobe@drm.service... Mar 17 21:49:04.959142 systemd[1]: Starting modprobe@efi_pstore.service... Mar 17 21:49:04.959177 systemd[1]: Starting modprobe@fuse.service... Mar 17 21:49:04.959196 systemd[1]: Starting modprobe@loop.service... Mar 17 21:49:04.959216 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 17 21:49:04.959255 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 17 21:49:04.959276 systemd[1]: Stopped systemd-fsck-root.service. Mar 17 21:49:04.959302 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 17 21:49:04.959322 systemd[1]: Stopped systemd-fsck-usr.service. Mar 17 21:49:04.959355 systemd[1]: Stopped systemd-journald.service. Mar 17 21:49:04.959379 systemd[1]: Starting systemd-journald.service... Mar 17 21:49:04.959399 systemd[1]: Starting systemd-modules-load.service... 
Mar 17 21:49:04.959418 kernel: fuse: init (API version 7.34) Mar 17 21:49:04.959449 systemd[1]: Starting systemd-network-generator.service... Mar 17 21:49:04.959479 systemd[1]: Starting systemd-remount-fs.service... Mar 17 21:49:04.959514 systemd[1]: Starting systemd-udev-trigger.service... Mar 17 21:49:04.959540 kernel: loop: module loaded Mar 17 21:49:04.959571 systemd[1]: verity-setup.service: Deactivated successfully. Mar 17 21:49:04.959591 systemd[1]: Stopped verity-setup.service. Mar 17 21:49:04.959612 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 21:49:04.959633 systemd[1]: Mounted dev-hugepages.mount. Mar 17 21:49:04.960476 systemd[1]: Mounted dev-mqueue.mount. Mar 17 21:49:04.960505 systemd[1]: Mounted media.mount. Mar 17 21:49:04.960545 systemd[1]: Mounted sys-kernel-debug.mount. Mar 17 21:49:04.960568 systemd[1]: Mounted sys-kernel-tracing.mount. Mar 17 21:49:04.960588 systemd[1]: Mounted tmp.mount. Mar 17 21:49:04.960609 systemd[1]: Finished kmod-static-nodes.service. Mar 17 21:49:04.960630 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 17 21:49:04.960650 systemd[1]: Finished modprobe@configfs.service. Mar 17 21:49:04.960673 systemd-journald[979]: Journal started Mar 17 21:49:04.960740 systemd-journald[979]: Runtime Journal (/run/log/journal/c7b03a767cc64157b1b26e6efe6ff1c1) is 4.7M, max 38.1M, 33.3M free. Mar 17 21:49:04.960802 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 21:49:01.131000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 17 21:49:01.204000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Mar 17 21:49:01.204000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Mar 17 21:49:01.204000 audit: BPF prog-id=10 op=LOAD Mar 17 21:49:01.204000 audit: BPF prog-id=10 op=UNLOAD Mar 17 21:49:01.204000 audit: BPF prog-id=11 op=LOAD Mar 17 21:49:01.204000 audit: BPF prog-id=11 op=UNLOAD Mar 17 21:49:01.323000 audit[902]: AVC avc: denied { associate } for pid=902 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Mar 17 21:49:01.323000 audit[902]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001178cc a1=c00002ae40 a2=c000029100 a3=32 items=0 ppid=885 pid=902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 21:49:01.323000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Mar 17 21:49:01.326000 audit[902]: AVC avc: denied { associate } for pid=902 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Mar 17 21:49:01.326000 audit[902]: SYSCALL arch=c000003e syscall=258 success=yes 
exit=0 a0=ffffffffffffff9c a1=c0001179a5 a2=1ed a3=0 items=2 ppid=885 pid=902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 21:49:01.326000 audit: CWD cwd="/" Mar 17 21:49:01.326000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:01.326000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:01.326000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Mar 17 21:49:04.700000 audit: BPF prog-id=12 op=LOAD Mar 17 21:49:04.700000 audit: BPF prog-id=3 op=UNLOAD Mar 17 21:49:04.700000 audit: BPF prog-id=13 op=LOAD Mar 17 21:49:04.701000 audit: BPF prog-id=14 op=LOAD Mar 17 21:49:04.701000 audit: BPF prog-id=4 op=UNLOAD Mar 17 21:49:04.701000 audit: BPF prog-id=5 op=UNLOAD Mar 17 21:49:04.702000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:04.707000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:04.707000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:04.711000 audit: BPF prog-id=12 op=UNLOAD Mar 17 21:49:04.874000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:04.878000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:04.881000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:04.881000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 21:49:04.882000 audit: BPF prog-id=15 op=LOAD Mar 17 21:49:04.883000 audit: BPF prog-id=16 op=LOAD Mar 17 21:49:04.883000 audit: BPF prog-id=17 op=LOAD Mar 17 21:49:04.883000 audit: BPF prog-id=13 op=UNLOAD Mar 17 21:49:04.883000 audit: BPF prog-id=14 op=UNLOAD Mar 17 21:49:04.924000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:04.951000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Mar 17 21:49:04.951000 audit[979]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7fff696e69d0 a2=4000 a3=7fff696e6a6c items=0 ppid=1 pid=979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 21:49:04.951000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Mar 17 21:49:04.953000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:04.959000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:04.959000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:01.320772 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-03-17T21:49:01Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Mar 17 21:49:04.696368 systemd[1]: Queued start job for default target multi-user.target. Mar 17 21:49:01.321487 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-03-17T21:49:01Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Mar 17 21:49:04.696390 systemd[1]: Unnecessary job was removed for dev-vda6.device. Mar 17 21:49:01.321534 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-03-17T21:49:01Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Mar 17 21:49:04.702491 systemd[1]: systemd-journald.service: Deactivated successfully. 
Mar 17 21:49:01.321610 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-03-17T21:49:01Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Mar 17 21:49:01.321629 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-03-17T21:49:01Z" level=debug msg="skipped missing lower profile" missing profile=oem Mar 17 21:49:01.321689 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-03-17T21:49:01Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Mar 17 21:49:01.321711 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-03-17T21:49:01Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Mar 17 21:49:01.322097 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-03-17T21:49:01Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Mar 17 21:49:01.322163 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-03-17T21:49:01Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Mar 17 21:49:01.322187 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-03-17T21:49:01Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Mar 17 21:49:01.322989 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-03-17T21:49:01Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Mar 17 21:49:01.323050 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-03-17T21:49:01Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Mar 17 21:49:01.323082 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-03-17T21:49:01Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.7: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.7 Mar 17 21:49:01.323108 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-03-17T21:49:01Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Mar 17 21:49:01.323144 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-03-17T21:49:01Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.7: no such file or directory" path=/var/lib/torcx/store/3510.3.7 Mar 17 21:49:01.323170 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-03-17T21:49:01Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Mar 17 21:49:04.098103 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-03-17T21:49:04Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Mar 17 21:49:04.098512 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-03-17T21:49:04Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Mar 17 21:49:04.099164 
/usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-03-17T21:49:04Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Mar 17 21:49:04.967428 systemd[1]: Finished modprobe@dm_mod.service. Mar 17 21:49:04.099538 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-03-17T21:49:04Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Mar 17 21:49:04.099629 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-03-17T21:49:04Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Mar 17 21:49:04.099741 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-03-17T21:49:04Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Mar 17 21:49:04.967000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:04.967000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:04.974258 systemd[1]: Started systemd-journald.service. Mar 17 21:49:04.970000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:04.972000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:04.973000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:04.973000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:04.971890 systemd[1]: Finished flatcar-tmpfiles.service. Mar 17 21:49:04.972889 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 17 21:49:04.973079 systemd[1]: Finished modprobe@drm.service. Mar 17 21:49:04.974136 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 21:49:04.975000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 21:49:04.975000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:04.975037 systemd[1]: Finished modprobe@efi_pstore.service. Mar 17 21:49:04.976081 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 17 21:49:04.976000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:04.976000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:04.976613 systemd[1]: Finished modprobe@fuse.service. Mar 17 21:49:04.977752 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 21:49:04.978087 systemd[1]: Finished modprobe@loop.service. Mar 17 21:49:04.978000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:04.978000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:04.979000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:04.979413 systemd[1]: Finished systemd-modules-load.service. Mar 17 21:49:04.980603 systemd[1]: Finished systemd-network-generator.service. Mar 17 21:49:04.980000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:04.981728 systemd[1]: Finished systemd-remount-fs.service. Mar 17 21:49:04.982000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:04.983393 systemd[1]: Reached target network-pre.target. Mar 17 21:49:04.985911 systemd[1]: Mounting sys-fs-fuse-connections.mount... Mar 17 21:49:04.992065 systemd[1]: Mounting sys-kernel-config.mount... Mar 17 21:49:04.995325 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 17 21:49:04.998118 systemd[1]: Starting systemd-hwdb-update.service... Mar 17 21:49:05.005570 systemd[1]: Starting systemd-journal-flush.service... Mar 17 21:49:05.006415 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 21:49:05.008123 systemd[1]: Starting systemd-random-seed.service... Mar 17 21:49:05.008973 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Mar 17 21:49:05.011113 systemd[1]: Starting systemd-sysctl.service... Mar 17 21:49:05.013546 systemd[1]: Starting systemd-sysusers.service... 
Mar 17 21:49:05.016566 systemd[1]: Mounted sys-fs-fuse-connections.mount. Mar 17 21:49:05.020704 systemd-journald[979]: Time spent on flushing to /var/log/journal/c7b03a767cc64157b1b26e6efe6ff1c1 is 68.680ms for 1291 entries. Mar 17 21:49:05.020704 systemd-journald[979]: System Journal (/var/log/journal/c7b03a767cc64157b1b26e6efe6ff1c1) is 8.0M, max 584.8M, 576.8M free. Mar 17 21:49:05.124597 systemd-journald[979]: Received client request to flush runtime journal. Mar 17 21:49:05.035000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:05.053000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:05.069000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:05.021349 systemd[1]: Mounted sys-kernel-config.mount. Mar 17 21:49:05.035404 systemd[1]: Finished systemd-random-seed.service. Mar 17 21:49:05.036269 systemd[1]: Reached target first-boot-complete.target. Mar 17 21:49:05.053060 systemd[1]: Finished systemd-sysctl.service. Mar 17 21:49:05.069628 systemd[1]: Finished systemd-sysusers.service. Mar 17 21:49:05.073652 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Mar 17 21:49:05.126298 systemd[1]: Finished systemd-journal-flush.service. Mar 17 21:49:05.126000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:05.144276 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Mar 17 21:49:05.144000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:05.175781 systemd[1]: Finished systemd-udev-trigger.service. Mar 17 21:49:05.176000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:05.178290 systemd[1]: Starting systemd-udev-settle.service... Mar 17 21:49:05.189661 udevadm[1014]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Mar 17 21:49:05.651742 systemd[1]: Finished systemd-hwdb-update.service. Mar 17 21:49:05.653000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:05.661170 kernel: kauditd_printk_skb: 96 callbacks suppressed Mar 17 21:49:05.661310 kernel: audit: type=1130 audit(1742248145.653:136): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 21:49:05.661000 audit: BPF prog-id=18 op=LOAD Mar 17 21:49:05.663000 audit: BPF prog-id=19 op=LOAD Mar 17 21:49:05.665087 kernel: audit: type=1334 audit(1742248145.661:137): prog-id=18 op=LOAD Mar 17 21:49:05.665162 kernel: audit: type=1334 audit(1742248145.663:138): prog-id=19 op=LOAD Mar 17 21:49:05.665210 kernel: audit: type=1334 audit(1742248145.663:139): prog-id=7 op=UNLOAD Mar 17 21:49:05.663000 audit: BPF prog-id=7 op=UNLOAD Mar 17 21:49:05.667877 kernel: audit: type=1334 audit(1742248145.663:140): prog-id=8 op=UNLOAD Mar 17 21:49:05.663000 audit: BPF prog-id=8 op=UNLOAD Mar 17 21:49:05.665782 systemd[1]: Starting systemd-udevd.service... Mar 17 21:49:05.693133 systemd-udevd[1015]: Using default interface naming scheme 'v252'. Mar 17 21:49:05.724538 systemd[1]: Started systemd-udevd.service. Mar 17 21:49:05.729920 systemd[1]: Starting systemd-networkd.service... Mar 17 21:49:05.740629 kernel: audit: type=1130 audit(1742248145.724:141): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:05.740714 kernel: audit: type=1334 audit(1742248145.726:142): prog-id=20 op=LOAD Mar 17 21:49:05.724000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:05.726000 audit: BPF prog-id=20 op=LOAD Mar 17 21:49:05.752626 kernel: audit: type=1334 audit(1742248145.746:143): prog-id=21 op=LOAD Mar 17 21:49:05.746000 audit: BPF prog-id=21 op=LOAD Mar 17 21:49:05.748856 systemd[1]: Starting systemd-userdbd.service... Mar 17 21:49:05.755447 kernel: audit: type=1334 audit(1742248145.747:144): prog-id=22 op=LOAD Mar 17 21:49:05.747000 audit: BPF prog-id=22 op=LOAD Mar 17 21:49:05.759376 kernel: audit: type=1334 audit(1742248145.747:145): prog-id=23 op=LOAD Mar 17 21:49:05.747000 audit: BPF prog-id=23 op=LOAD Mar 17 21:49:05.817000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:05.816937 systemd[1]: Started systemd-userdbd.service. Mar 17 21:49:05.827687 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Mar 17 21:49:05.886730 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Mar 17 21:49:05.932512 systemd-networkd[1025]: lo: Link UP Mar 17 21:49:05.932525 systemd-networkd[1025]: lo: Gained carrier Mar 17 21:49:05.934000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:05.934011 systemd-networkd[1025]: Enumeration completed Mar 17 21:49:05.934151 systemd[1]: Started systemd-networkd.service. Mar 17 21:49:05.935727 systemd-networkd[1025]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Mar 17 21:49:05.937936 systemd-networkd[1025]: eth0: Link UP Mar 17 21:49:05.938078 systemd-networkd[1025]: eth0: Gained carrier Mar 17 21:49:05.940564 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Mar 17 21:49:05.946357 kernel: ACPI: button: Power Button [PWRF] Mar 17 21:49:05.950626 systemd-networkd[1025]: eth0: DHCPv4 address 10.230.29.198/30, gateway 10.230.29.197 acquired from 10.230.29.197 Mar 17 21:49:05.958370 kernel: mousedev: PS/2 mouse device common for all mice Mar 17 21:49:06.010000 audit[1017]: AVC avc: denied { confidentiality } for pid=1017 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Mar 17 21:49:06.010000 audit[1017]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=556fc998eb70 a1=338ac a2=7f7d13830bc5 a3=5 items=110 ppid=1015 pid=1017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 21:49:06.010000 audit: CWD cwd="/" Mar 17 21:49:06.010000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 audit: PATH item=1 name=(null) inode=14626 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 audit: PATH item=2 name=(null) inode=14626 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 audit: PATH item=3 name=(null) inode=14627 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 audit: PATH item=4 name=(null) inode=14626 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 audit: PATH item=5 name=(null) inode=14628 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 audit: PATH item=6 name=(null) inode=14626 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 audit: PATH item=7 name=(null) inode=14629 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 audit: PATH item=8 name=(null) inode=14629 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 audit: PATH item=9 name=(null) inode=14630 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 audit: PATH item=10 name=(null) inode=14629 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 
audit: PATH item=11 name=(null) inode=14631 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 audit: PATH item=12 name=(null) inode=14629 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 audit: PATH item=13 name=(null) inode=14632 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 audit: PATH item=14 name=(null) inode=14629 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 audit: PATH item=15 name=(null) inode=14633 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 audit: PATH item=16 name=(null) inode=14629 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 audit: PATH item=17 name=(null) inode=14634 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 audit: PATH item=18 name=(null) inode=14626 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 audit: PATH item=19 name=(null) inode=14635 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 audit: PATH item=20 name=(null) inode=14635 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 audit: PATH item=21 name=(null) inode=14636 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 audit: PATH item=22 name=(null) inode=14635 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 audit: PATH item=23 name=(null) inode=14637 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 audit: PATH item=24 name=(null) inode=14635 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 audit: PATH item=25 name=(null) inode=14638 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 audit: PATH item=26 name=(null) inode=14635 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 audit: PATH item=27 name=(null) inode=14639 dev=00:0b mode=0100640 ouid=0 
ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 audit: PATH item=28 name=(null) inode=14635 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 audit: PATH item=29 name=(null) inode=14640 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 audit: PATH item=30 name=(null) inode=14626 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 audit: PATH item=31 name=(null) inode=14641 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 audit: PATH item=32 name=(null) inode=14641 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 audit: PATH item=33 name=(null) inode=14642 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 audit: PATH item=34 name=(null) inode=14641 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 audit: PATH item=35 name=(null) inode=14643 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 audit: PATH item=36 name=(null) inode=14641 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 audit: PATH item=37 name=(null) inode=14644 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 audit: PATH item=38 name=(null) inode=14641 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 audit: PATH item=39 name=(null) inode=14645 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 audit: PATH item=40 name=(null) inode=14641 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 audit: PATH item=41 name=(null) inode=14646 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 audit: PATH item=42 name=(null) inode=14626 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 audit: PATH item=43 name=(null) inode=14647 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 audit: PATH item=44 name=(null) inode=14647 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 audit: PATH item=45 name=(null) inode=14648 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 audit: PATH item=46 name=(null) inode=14647 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 audit: PATH item=47 name=(null) inode=14649 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 audit: PATH item=48 name=(null) inode=14647 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 audit: PATH item=49 name=(null) inode=14650 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 audit: PATH item=50 name=(null) inode=14647 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 audit: PATH item=51 name=(null) inode=14651 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 audit: PATH item=52 name=(null) inode=14647 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 audit: PATH item=53 name=(null) inode=14652 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 audit: PATH item=55 name=(null) inode=14653 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 audit: PATH item=56 name=(null) inode=14653 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 audit: PATH item=57 name=(null) inode=14654 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 audit: PATH item=58 name=(null) inode=14653 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 audit: PATH item=59 name=(null) inode=14655 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 
audit: PATH item=60 name=(null) inode=14653 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 audit: PATH item=61 name=(null) inode=14656 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 audit: PATH item=62 name=(null) inode=14656 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 audit: PATH item=63 name=(null) inode=14657 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 audit: PATH item=64 name=(null) inode=14656 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 audit: PATH item=65 name=(null) inode=14658 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 audit: PATH item=66 name=(null) inode=14656 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 audit: PATH item=67 name=(null) inode=14659 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 audit: PATH item=68 name=(null) inode=14656 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 audit: PATH item=69 name=(null) inode=14660 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 audit: PATH item=70 name=(null) inode=14656 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 audit: PATH item=71 name=(null) inode=14661 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 audit: PATH item=72 name=(null) inode=14653 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 audit: PATH item=73 name=(null) inode=14662 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 audit: PATH item=74 name=(null) inode=14662 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 audit: PATH item=75 name=(null) inode=14663 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 audit: PATH item=76 name=(null) inode=14662 dev=00:0b mode=040750 ouid=0 
ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 audit: PATH item=77 name=(null) inode=14664 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 audit: PATH item=78 name=(null) inode=14662 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 audit: PATH item=79 name=(null) inode=14665 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 audit: PATH item=80 name=(null) inode=14662 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 audit: PATH item=81 name=(null) inode=14666 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 audit: PATH item=82 name=(null) inode=14662 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 audit: PATH item=83 name=(null) inode=14667 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 audit: PATH item=84 name=(null) inode=14653 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 audit: PATH item=85 name=(null) inode=14668 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 audit: PATH item=86 name=(null) inode=14668 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 audit: PATH item=87 name=(null) inode=14669 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 audit: PATH item=88 name=(null) inode=14668 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 audit: PATH item=89 name=(null) inode=14670 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 audit: PATH item=90 name=(null) inode=14668 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 audit: PATH item=91 name=(null) inode=14671 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 audit: PATH item=92 name=(null) inode=14668 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 audit: PATH item=93 name=(null) inode=14672 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 audit: PATH item=94 name=(null) inode=14668 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 audit: PATH item=95 name=(null) inode=14673 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 audit: PATH item=96 name=(null) inode=14653 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 audit: PATH item=97 name=(null) inode=14674 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 audit: PATH item=98 name=(null) inode=14674 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 audit: PATH item=99 name=(null) inode=14675 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 audit: PATH item=100 name=(null) inode=14674 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 audit: PATH item=101 name=(null) inode=14676 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 audit: PATH item=102 name=(null) inode=14674 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 audit: PATH item=103 name=(null) inode=14677 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 audit: PATH item=104 name=(null) inode=14674 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 audit: PATH item=105 name=(null) inode=14678 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 audit: PATH item=106 name=(null) inode=14674 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 audit: PATH item=107 name=(null) inode=14679 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 
21:49:06.010000 audit: PATH item=109 name=(null) inode=16249 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:49:06.010000 audit: PROCTITLE proctitle="(udev-worker)" Mar 17 21:49:06.076784 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input5 Mar 17 21:49:06.105371 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Mar 17 21:49:06.127704 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Mar 17 21:49:06.128012 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Mar 17 21:49:06.281058 systemd[1]: Finished systemd-udev-settle.service. Mar 17 21:49:06.281000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:06.283957 systemd[1]: Starting lvm2-activation-early.service... Mar 17 21:49:06.306874 lvm[1044]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 17 21:49:06.339612 systemd[1]: Finished lvm2-activation-early.service. Mar 17 21:49:06.340000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:06.340632 systemd[1]: Reached target cryptsetup.target. Mar 17 21:49:06.343119 systemd[1]: Starting lvm2-activation.service... Mar 17 21:49:06.348805 lvm[1045]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 17 21:49:06.374668 systemd[1]: Finished lvm2-activation.service. Mar 17 21:49:06.375648 systemd[1]: Reached target local-fs-pre.target. Mar 17 21:49:06.375000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:06.376317 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 17 21:49:06.376382 systemd[1]: Reached target local-fs.target. Mar 17 21:49:06.376998 systemd[1]: Reached target machines.target. Mar 17 21:49:06.379431 systemd[1]: Starting ldconfig.service... Mar 17 21:49:06.380870 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Mar 17 21:49:06.380934 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 21:49:06.384445 systemd[1]: Starting systemd-boot-update.service... Mar 17 21:49:06.386898 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Mar 17 21:49:06.390586 systemd[1]: Starting systemd-machine-id-commit.service... Mar 17 21:49:06.397611 systemd[1]: Starting systemd-sysext.service... Mar 17 21:49:06.405851 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1047 (bootctl) Mar 17 21:49:06.407598 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Mar 17 21:49:06.420323 systemd[1]: Unmounting usr-share-oem.mount... Mar 17 21:49:06.435183 systemd[1]: usr-share-oem.mount: Deactivated successfully. Mar 17 21:49:06.435495 systemd[1]: Unmounted usr-share-oem.mount. 
Mar 17 21:49:06.546061 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Mar 17 21:49:06.546413 kernel: loop0: detected capacity change from 0 to 218376 Mar 17 21:49:06.548000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:06.563875 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 17 21:49:06.565446 systemd[1]: Finished systemd-machine-id-commit.service. Mar 17 21:49:06.565000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:06.580375 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 17 21:49:06.602413 kernel: loop1: detected capacity change from 0 to 218376 Mar 17 21:49:06.617476 (sd-sysext)[1061]: Using extensions 'kubernetes'. Mar 17 21:49:06.619640 (sd-sysext)[1061]: Merged extensions into '/usr'. Mar 17 21:49:06.646918 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 21:49:06.649577 systemd[1]: Mounting usr-share-oem.mount... Mar 17 21:49:06.650587 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 17 21:49:06.657288 systemd[1]: Starting modprobe@dm_mod.service... Mar 17 21:49:06.660746 systemd[1]: Starting modprobe@efi_pstore.service... Mar 17 21:49:06.665054 systemd[1]: Starting modprobe@loop.service... Mar 17 21:49:06.665785 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Mar 17 21:49:06.666007 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 21:49:06.666209 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 21:49:06.667861 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 21:49:06.668074 systemd[1]: Finished modprobe@efi_pstore.service. Mar 17 21:49:06.671000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:06.671000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:06.672393 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 21:49:06.676000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:06.676168 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 21:49:06.676357 systemd[1]: Finished modprobe@dm_mod.service. 
Mar 17 21:49:06.677000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:06.679000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:06.678256 systemd[1]: Mounted usr-share-oem.mount. Mar 17 21:49:06.679445 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 21:49:06.679625 systemd[1]: Finished modprobe@loop.service. Mar 17 21:49:06.680000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:06.681000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:06.683018 systemd-fsck[1058]: fsck.fat 4.2 (2021-01-31) Mar 17 21:49:06.683018 systemd-fsck[1058]: /dev/vda1: 789 files, 119299/258078 clusters Mar 17 21:49:06.681442 systemd[1]: Finished systemd-sysext.service. Mar 17 21:49:06.688324 systemd[1]: Starting ensure-sysext.service... Mar 17 21:49:06.689405 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Mar 17 21:49:06.693198 systemd[1]: Starting systemd-tmpfiles-setup.service... Mar 17 21:49:06.700606 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Mar 17 21:49:06.702000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:06.707310 systemd[1]: Mounting boot.mount... Mar 17 21:49:06.715853 systemd[1]: Reloading. Mar 17 21:49:06.727357 systemd-tmpfiles[1068]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Mar 17 21:49:06.733324 systemd-tmpfiles[1068]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 17 21:49:06.747733 systemd-tmpfiles[1068]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 17 21:49:06.879842 /usr/lib/systemd/system-generators/torcx-generator[1089]: time="2025-03-17T21:49:06Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Mar 17 21:49:06.879900 /usr/lib/systemd/system-generators/torcx-generator[1089]: time="2025-03-17T21:49:06Z" level=info msg="torcx already run" Mar 17 21:49:06.931094 ldconfig[1046]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 17 21:49:07.003753 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Mar 17 21:49:07.003802 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
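The warnings about locksmithd.service are systemd flagging pre-cgroup-v2 resource directives. As a hedged sketch of the mapping (the limits shown are placeholders, not values read from the real unit), the modern equivalents would live in the unit or a drop-in:

    # /etc/systemd/system/locksmithd.service.d/10-cgroupv2.conf (illustrative drop-in)
    [Service]
    # CPUWeight= is the cgroup-v2 replacement for CPUShares=
    CPUWeight=100
    # MemoryMax= is the replacement for MemoryLimit=
    MemoryMax=512M

After adding a drop-in, systemctl daemon-reload picks it up; the warning itself only disappears once the CPUShares=/MemoryLimit= lines are gone from the shipped unit file.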
Mar 17 21:49:07.032222 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 21:49:07.111000 audit: BPF prog-id=24 op=LOAD Mar 17 21:49:07.111000 audit: BPF prog-id=25 op=LOAD Mar 17 21:49:07.112000 audit: BPF prog-id=18 op=UNLOAD Mar 17 21:49:07.112000 audit: BPF prog-id=19 op=UNLOAD Mar 17 21:49:07.113000 audit: BPF prog-id=26 op=LOAD Mar 17 21:49:07.113000 audit: BPF prog-id=20 op=UNLOAD Mar 17 21:49:07.117000 audit: BPF prog-id=27 op=LOAD Mar 17 21:49:07.117000 audit: BPF prog-id=21 op=UNLOAD Mar 17 21:49:07.117000 audit: BPF prog-id=28 op=LOAD Mar 17 21:49:07.117000 audit: BPF prog-id=29 op=LOAD Mar 17 21:49:07.117000 audit: BPF prog-id=22 op=UNLOAD Mar 17 21:49:07.117000 audit: BPF prog-id=23 op=UNLOAD Mar 17 21:49:07.119000 audit: BPF prog-id=30 op=LOAD Mar 17 21:49:07.119000 audit: BPF prog-id=15 op=UNLOAD Mar 17 21:49:07.119000 audit: BPF prog-id=31 op=LOAD Mar 17 21:49:07.119000 audit: BPF prog-id=32 op=LOAD Mar 17 21:49:07.119000 audit: BPF prog-id=16 op=UNLOAD Mar 17 21:49:07.119000 audit: BPF prog-id=17 op=UNLOAD Mar 17 21:49:07.126230 systemd[1]: Finished ldconfig.service. Mar 17 21:49:07.126000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:07.130890 systemd[1]: Mounted boot.mount. Mar 17 21:49:07.150560 systemd[1]: Finished ensure-sysext.service. Mar 17 21:49:07.150000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:07.152575 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 17 21:49:07.154270 systemd[1]: Starting modprobe@dm_mod.service... Mar 17 21:49:07.158479 systemd[1]: Starting modprobe@drm.service... Mar 17 21:49:07.160467 systemd[1]: Starting modprobe@efi_pstore.service... Mar 17 21:49:07.162570 systemd[1]: Starting modprobe@loop.service... Mar 17 21:49:07.163469 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Mar 17 21:49:07.163571 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 21:49:07.165152 systemd[1]: Starting systemd-networkd-wait-online.service... Mar 17 21:49:07.167520 systemd[1]: Finished systemd-boot-update.service. Mar 17 21:49:07.167000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:07.168955 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 21:49:07.169277 systemd[1]: Finished modprobe@dm_mod.service. Mar 17 21:49:07.169000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 21:49:07.169000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:07.170806 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 17 21:49:07.170986 systemd[1]: Finished modprobe@drm.service. Mar 17 21:49:07.171000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:07.171000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:07.172670 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 21:49:07.172868 systemd[1]: Finished modprobe@efi_pstore.service. Mar 17 21:49:07.173000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:07.173000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:07.174498 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 21:49:07.174688 systemd[1]: Finished modprobe@loop.service. Mar 17 21:49:07.175000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:07.175000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:07.176266 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 21:49:07.176322 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Mar 17 21:49:07.282972 systemd[1]: Finished systemd-tmpfiles-setup.service. Mar 17 21:49:07.283000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:07.285824 systemd[1]: Starting audit-rules.service... Mar 17 21:49:07.288181 systemd[1]: Starting clean-ca-certificates.service... Mar 17 21:49:07.291913 systemd[1]: Starting systemd-journal-catalog-update.service... Mar 17 21:49:07.298000 audit: BPF prog-id=33 op=LOAD Mar 17 21:49:07.301589 systemd[1]: Starting systemd-resolved.service... Mar 17 21:49:07.303000 audit: BPF prog-id=34 op=LOAD Mar 17 21:49:07.307207 systemd[1]: Starting systemd-timesyncd.service... Mar 17 21:49:07.310162 systemd[1]: Starting systemd-update-utmp.service... Mar 17 21:49:07.314120 systemd[1]: Finished clean-ca-certificates.service. 
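systemd-resolved and systemd-timesyncd are started here; further down the log timesyncd syncs against 0.flatcar.pool.ntp.org. A minimal sketch of pinning timesyncd to specific servers via a standard drop-in (the server names are placeholders):

    # /etc/systemd/timesyncd.conf.d/10-ntp.conf
    [Time]
    NTP=ntp1.example.com ntp2.example.com
    FallbackNTP=0.flatcar.pool.ntp.org

Afterwards restart the service (systemctl restart systemd-timesyncd) and check the result with timedatectl timesync-status.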
Mar 17 21:49:07.314000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:07.315654 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 17 21:49:07.321000 audit[1152]: SYSTEM_BOOT pid=1152 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Mar 17 21:49:07.326690 systemd[1]: Finished systemd-update-utmp.service. Mar 17 21:49:07.327000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:07.373241 systemd[1]: Finished systemd-journal-catalog-update.service. Mar 17 21:49:07.376000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:49:07.379901 systemd[1]: Starting systemd-update-done.service... Mar 17 21:49:07.382610 augenrules[1161]: No rules Mar 17 21:49:07.383953 systemd[1]: Finished audit-rules.service. Mar 17 21:49:07.382000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Mar 17 21:49:07.382000 audit[1161]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe1817e4f0 a2=420 a3=0 items=0 ppid=1141 pid=1161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 21:49:07.382000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Mar 17 21:49:07.393144 systemd[1]: Finished systemd-update-done.service. Mar 17 21:49:07.411559 systemd-resolved[1149]: Positive Trust Anchors: Mar 17 21:49:07.412002 systemd-resolved[1149]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 17 21:49:07.412171 systemd-resolved[1149]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Mar 17 21:49:07.420433 systemd-resolved[1149]: Using system hostname 'srv-87dtj.gb1.brightbox.com'. Mar 17 21:49:07.423492 systemd[1]: Started systemd-resolved.service. Mar 17 21:49:07.424301 systemd[1]: Reached target network.target. Mar 17 21:49:07.424963 systemd[1]: Reached target nss-lookup.target. Mar 17 21:49:07.433978 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 21:49:07.434023 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 21:49:07.434288 systemd[1]: Started systemd-timesyncd.service. 
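augenrules reports "No rules" and the hex proctitle above decodes to /sbin/auditctl -R /etc/audit/audit.rules, i.e. the compiled rule file is loaded but empty. A minimal sketch of adding rules the way augenrules expects, assuming the stock layout where /etc/audit/rules.d/*.rules is compiled into /etc/audit/audit.rules (the watch rules themselves are illustrative, not from this host):

    cat <<'EOF' >/etc/audit/rules.d/10-identity.rules
    # watch writes and attribute changes on the account databases
    -w /etc/passwd -p wa -k identity
    -w /etc/shadow -p wa -k identity
    EOF
    augenrules --load   # regenerate audit.rules and load it into the kernel
    auditctl -l         # list the rules now active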
Mar 17 21:49:07.435038 systemd[1]: Reached target sysinit.target. Mar 17 21:49:07.435878 systemd[1]: Started motdgen.path. Mar 17 21:49:07.436563 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Mar 17 21:49:07.437340 systemd[1]: Started systemd-tmpfiles-clean.timer. Mar 17 21:49:07.438002 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 17 21:49:07.438055 systemd[1]: Reached target paths.target. Mar 17 21:49:07.453028 systemd[1]: Reached target time-set.target. Mar 17 21:49:07.453934 systemd[1]: Started logrotate.timer. Mar 17 21:49:07.454825 systemd[1]: Started mdadm.timer. Mar 17 21:49:07.455428 systemd[1]: Reached target timers.target. Mar 17 21:49:07.456971 systemd[1]: Listening on dbus.socket. Mar 17 21:49:07.459277 systemd[1]: Starting docker.socket... Mar 17 21:49:07.463658 systemd[1]: Listening on sshd.socket. Mar 17 21:49:07.464529 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 21:49:07.465293 systemd[1]: Listening on docker.socket. Mar 17 21:49:07.466072 systemd[1]: Reached target sockets.target. Mar 17 21:49:07.466729 systemd[1]: Reached target basic.target. Mar 17 21:49:07.467472 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Mar 17 21:49:07.467524 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Mar 17 21:49:07.469222 systemd[1]: Starting containerd.service... Mar 17 21:49:07.472660 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Mar 17 21:49:07.475057 systemd[1]: Starting dbus.service... Mar 17 21:49:07.478208 systemd[1]: Starting enable-oem-cloudinit.service... Mar 17 21:49:07.483130 systemd[1]: Starting extend-filesystems.service... Mar 17 21:49:07.484829 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Mar 17 21:49:07.489462 systemd[1]: Starting motdgen.service... Mar 17 21:49:07.494410 systemd[1]: Starting prepare-helm.service... Mar 17 21:49:07.498807 systemd[1]: Starting ssh-key-proc-cmdline.service... Mar 17 21:49:07.503594 systemd[1]: Starting sshd-keygen.service... Mar 17 21:49:07.554857 jq[1175]: false Mar 17 21:49:07.510556 systemd[1]: Starting systemd-logind.service... Mar 17 21:49:07.512007 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 21:49:07.512140 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 17 21:49:07.513077 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 17 21:49:07.568262 jq[1186]: true Mar 17 21:49:07.515618 systemd[1]: Starting update-engine.service... Mar 17 21:49:07.519509 systemd[1]: Starting update-ssh-keys-after-ignition.service... Mar 17 21:49:07.577202 jq[1193]: true Mar 17 21:49:07.528935 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 17 21:49:07.581690 tar[1192]: linux-amd64/LICENSE Mar 17 21:49:07.581690 tar[1192]: linux-amd64/helm Mar 17 21:49:07.529323 systemd[1]: Finished ssh-key-proc-cmdline.service. 
Mar 17 21:49:07.553899 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 17 21:49:07.554154 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Mar 17 21:49:07.592826 dbus-daemon[1172]: [system] SELinux support is enabled Mar 17 21:49:07.593056 systemd[1]: Started dbus.service. Mar 17 21:49:07.594597 dbus-daemon[1172]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1025 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Mar 17 21:49:07.597112 systemd[1]: motdgen.service: Deactivated successfully. Mar 17 21:49:07.597408 systemd[1]: Finished motdgen.service. Mar 17 21:49:07.598277 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 17 21:49:07.598338 systemd[1]: Reached target system-config.target. Mar 17 21:49:07.599075 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 17 21:49:07.599122 systemd[1]: Reached target user-config.target. Mar 17 21:49:07.616239 extend-filesystems[1176]: Found loop1 Mar 17 21:49:07.617485 dbus-daemon[1172]: [system] Successfully activated service 'org.freedesktop.systemd1' Mar 17 21:49:07.622730 systemd[1]: Starting systemd-hostnamed.service... Mar 17 21:49:07.624518 extend-filesystems[1176]: Found vda Mar 17 21:49:07.624518 extend-filesystems[1176]: Found vda1 Mar 17 21:49:07.624518 extend-filesystems[1176]: Found vda2 Mar 17 21:49:07.624518 extend-filesystems[1176]: Found vda3 Mar 17 21:49:07.624518 extend-filesystems[1176]: Found usr Mar 17 21:49:07.624518 extend-filesystems[1176]: Found vda4 Mar 17 21:49:07.624518 extend-filesystems[1176]: Found vda6 Mar 17 21:49:07.624518 extend-filesystems[1176]: Found vda7 Mar 17 21:49:07.624518 extend-filesystems[1176]: Found vda9 Mar 17 21:49:07.647649 extend-filesystems[1176]: Checking size of /dev/vda9 Mar 17 21:49:07.703224 bash[1223]: Updated "/home/core/.ssh/authorized_keys" Mar 17 21:49:07.708000 update_engine[1184]: I0317 21:49:07.707422 1184 main.cc:92] Flatcar Update Engine starting Mar 17 21:49:07.709748 systemd[1]: Finished update-ssh-keys-after-ignition.service. Mar 17 21:49:07.715182 systemd[1]: Started update-engine.service. Mar 17 21:49:07.720211 env[1194]: time="2025-03-17T21:49:07.717018201Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Mar 17 21:49:07.718506 systemd[1]: Started locksmithd.service. Mar 17 21:49:07.721497 update_engine[1184]: I0317 21:49:07.721467 1184 update_check_scheduler.cc:74] Next update check in 11m9s Mar 17 21:49:07.722061 extend-filesystems[1176]: Resized partition /dev/vda9 Mar 17 21:49:07.724815 extend-filesystems[1229]: resize2fs 1.46.5 (30-Dec-2021) Mar 17 21:49:07.733373 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks Mar 17 21:49:07.827710 env[1194]: time="2025-03-17T21:49:07.827645644Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 17 21:49:07.828366 env[1194]: time="2025-03-17T21:49:07.828314684Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Mar 17 21:49:07.832281 systemd-logind[1183]: Watching system buttons on /dev/input/event2 (Power Button) Mar 17 21:49:07.832857 systemd-logind[1183]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 17 21:49:07.835698 systemd-logind[1183]: New seat seat0. Mar 17 21:49:07.837159 env[1194]: time="2025-03-17T21:49:07.837056478Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.179-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 17 21:49:07.837159 env[1194]: time="2025-03-17T21:49:07.837099874Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 17 21:49:07.838383 env[1194]: time="2025-03-17T21:49:07.837564285Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 21:49:07.838383 env[1194]: time="2025-03-17T21:49:07.837597757Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 17 21:49:07.838383 env[1194]: time="2025-03-17T21:49:07.837619407Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Mar 17 21:49:07.838383 env[1194]: time="2025-03-17T21:49:07.837636679Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 17 21:49:07.838383 env[1194]: time="2025-03-17T21:49:07.837803217Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 17 21:49:07.838383 env[1194]: time="2025-03-17T21:49:07.838227501Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 17 21:49:07.842124 env[1194]: time="2025-03-17T21:49:07.841780768Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 21:49:07.842124 env[1194]: time="2025-03-17T21:49:07.841818904Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 17 21:49:07.842124 env[1194]: time="2025-03-17T21:49:07.841904258Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Mar 17 21:49:07.842124 env[1194]: time="2025-03-17T21:49:07.841927726Z" level=info msg="metadata content store policy set" policy=shared Mar 17 21:49:07.850725 systemd[1]: Started systemd-logind.service. Mar 17 21:49:07.861278 systemd-networkd[1025]: eth0: Gained IPv6LL Mar 17 21:49:07.864953 systemd[1]: Finished systemd-networkd-wait-online.service. Mar 17 21:49:07.866239 systemd[1]: Reached target network-online.target. Mar 17 21:49:07.870284 systemd[1]: Starting kubelet.service... Mar 17 21:49:07.875483 env[1194]: time="2025-03-17T21:49:07.874820489Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Mar 17 21:49:07.875483 env[1194]: time="2025-03-17T21:49:07.874922949Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 17 21:49:07.875483 env[1194]: time="2025-03-17T21:49:07.874947984Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 17 21:49:07.875483 env[1194]: time="2025-03-17T21:49:07.875027300Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 17 21:49:07.875483 env[1194]: time="2025-03-17T21:49:07.875055564Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 17 21:49:07.875483 env[1194]: time="2025-03-17T21:49:07.875079671Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 17 21:49:07.875483 env[1194]: time="2025-03-17T21:49:07.875101207Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 17 21:49:07.875483 env[1194]: time="2025-03-17T21:49:07.875122146Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 17 21:49:07.875483 env[1194]: time="2025-03-17T21:49:07.875146940Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Mar 17 21:49:07.875483 env[1194]: time="2025-03-17T21:49:07.875168738Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 17 21:49:07.875483 env[1194]: time="2025-03-17T21:49:07.875188404Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 17 21:49:07.875483 env[1194]: time="2025-03-17T21:49:07.875218139Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 17 21:49:07.879470 env[1194]: time="2025-03-17T21:49:07.878428502Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 17 21:49:07.879470 env[1194]: time="2025-03-17T21:49:07.878640382Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 17 21:49:07.879470 env[1194]: time="2025-03-17T21:49:07.878993706Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 17 21:49:07.879470 env[1194]: time="2025-03-17T21:49:07.879047654Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 17 21:49:07.879470 env[1194]: time="2025-03-17T21:49:07.879074636Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 17 21:49:07.879470 env[1194]: time="2025-03-17T21:49:07.879203313Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 17 21:49:07.879470 env[1194]: time="2025-03-17T21:49:07.879229846Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 17 21:49:07.879470 env[1194]: time="2025-03-17T21:49:07.879255233Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 17 21:49:07.879470 env[1194]: time="2025-03-17T21:49:07.879282910Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Mar 17 21:49:07.879470 env[1194]: time="2025-03-17T21:49:07.879304569Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 17 21:49:07.883381 env[1194]: time="2025-03-17T21:49:07.879323964Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 17 21:49:07.883381 env[1194]: time="2025-03-17T21:49:07.882391309Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 17 21:49:07.883381 env[1194]: time="2025-03-17T21:49:07.882419504Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 17 21:49:07.883381 env[1194]: time="2025-03-17T21:49:07.882445139Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 17 21:49:07.883381 env[1194]: time="2025-03-17T21:49:07.882689705Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 17 21:49:07.883381 env[1194]: time="2025-03-17T21:49:07.882730363Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 17 21:49:07.883381 env[1194]: time="2025-03-17T21:49:07.882750514Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 17 21:49:07.883381 env[1194]: time="2025-03-17T21:49:07.882796596Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 17 21:49:07.883381 env[1194]: time="2025-03-17T21:49:07.882834586Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Mar 17 21:49:07.883381 env[1194]: time="2025-03-17T21:49:07.882854153Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 17 21:49:07.883381 env[1194]: time="2025-03-17T21:49:07.882902796Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Mar 17 21:49:07.883381 env[1194]: time="2025-03-17T21:49:07.882983683Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Mar 17 21:49:07.886391 env[1194]: time="2025-03-17T21:49:07.883307784Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 17 21:49:07.886391 env[1194]: time="2025-03-17T21:49:07.885846810Z" level=info msg="Connect containerd service" Mar 17 21:49:07.886391 env[1194]: time="2025-03-17T21:49:07.885935604Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 17 21:49:07.888968 env[1194]: time="2025-03-17T21:49:07.888931223Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 17 21:49:07.890278 env[1194]: time="2025-03-17T21:49:07.890247832Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 17 21:49:07.905370 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Mar 17 21:49:07.892098 systemd[1]: Started containerd.service. Mar 17 21:49:07.905651 env[1194]: time="2025-03-17T21:49:07.891863501Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Mar 17 21:49:07.905651 env[1194]: time="2025-03-17T21:49:07.893389621Z" level=info msg="containerd successfully booted in 0.177423s" Mar 17 21:49:07.905651 env[1194]: time="2025-03-17T21:49:07.893878713Z" level=info msg="Start subscribing containerd event" Mar 17 21:49:07.905651 env[1194]: time="2025-03-17T21:49:07.893952832Z" level=info msg="Start recovering state" Mar 17 21:49:07.905651 env[1194]: time="2025-03-17T21:49:07.894084575Z" level=info msg="Start event monitor" Mar 17 21:49:07.905651 env[1194]: time="2025-03-17T21:49:07.894121213Z" level=info msg="Start snapshots syncer" Mar 17 21:49:07.905651 env[1194]: time="2025-03-17T21:49:07.894138977Z" level=info msg="Start cni network conf syncer for default" Mar 17 21:49:07.905651 env[1194]: time="2025-03-17T21:49:07.894153500Z" level=info msg="Start streaming server" Mar 17 21:49:07.911637 extend-filesystems[1229]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Mar 17 21:49:07.911637 extend-filesystems[1229]: old_desc_blocks = 1, new_desc_blocks = 8 Mar 17 21:49:07.911637 extend-filesystems[1229]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Mar 17 21:49:07.909932 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 17 21:49:07.921860 extend-filesystems[1176]: Resized filesystem in /dev/vda9 Mar 17 21:49:07.910197 systemd[1]: Finished extend-filesystems.service. Mar 17 21:49:07.950397 dbus-daemon[1172]: [system] Successfully activated service 'org.freedesktop.hostname1' Mar 17 21:49:07.950595 systemd[1]: Started systemd-hostnamed.service. Mar 17 21:49:07.952507 dbus-daemon[1172]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1214 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Mar 17 21:49:07.956601 systemd[1]: Starting polkit.service... Mar 17 21:49:07.980473 polkitd[1236]: Started polkitd version 121 Mar 17 21:49:08.001855 polkitd[1236]: Loading rules from directory /etc/polkit-1/rules.d Mar 17 21:49:08.003782 polkitd[1236]: Loading rules from directory /usr/share/polkit-1/rules.d Mar 17 21:49:08.009862 polkitd[1236]: Finished loading, compiling and executing 2 rules Mar 17 21:49:08.011538 dbus-daemon[1172]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Mar 17 21:49:08.011820 systemd[1]: Started polkit.service. Mar 17 21:49:08.013187 polkitd[1236]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Mar 17 21:49:08.038937 systemd-hostnamed[1214]: Hostname set to (static) Mar 17 21:49:08.457161 systemd-timesyncd[1151]: Contacted time server 178.62.250.107:123 (0.flatcar.pool.ntp.org). Mar 17 21:49:08.457262 systemd-timesyncd[1151]: Initial clock synchronization to Mon 2025-03-17 21:49:08.758122 UTC. Mar 17 21:49:08.715411 tar[1192]: linux-amd64/README.md Mar 17 21:49:08.725004 systemd[1]: Finished prepare-helm.service. Mar 17 21:49:08.939306 locksmithd[1228]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 17 21:49:09.272086 sshd_keygen[1201]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 17 21:49:09.288286 systemd[1]: Started kubelet.service. Mar 17 21:49:09.309328 systemd[1]: Finished sshd-keygen.service. Mar 17 21:49:09.312705 systemd[1]: Starting issuegen.service... Mar 17 21:49:09.321846 systemd[1]: issuegen.service: Deactivated successfully. Mar 17 21:49:09.322099 systemd[1]: Finished issuegen.service. 
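The long "Start cri plugin with config {...}" record above is containerd 1.6 dumping its effective CRI configuration; note SystemdCgroup:true in the runc runtime options and SandboxImage:registry.k8s.io/pause:3.6. A hedged sketch of the TOML that produces those values, assuming the conventional /etc/containerd/config.toml location (Flatcar ships its default config under /usr, so the exact file on this host may differ):

    # /etc/containerd/config.toml (excerpt)
    version = 2
    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.6"
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true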
Mar 17 21:49:09.325190 systemd[1]: Starting systemd-user-sessions.service... Mar 17 21:49:09.336783 systemd[1]: Finished systemd-user-sessions.service. Mar 17 21:49:09.340092 systemd[1]: Started getty@tty1.service. Mar 17 21:49:09.343061 systemd[1]: Started serial-getty@ttyS0.service. Mar 17 21:49:09.344262 systemd[1]: Reached target getty.target. Mar 17 21:49:09.369902 systemd-networkd[1025]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8771:24:19ff:fee6:1dc6/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8771:24:19ff:fee6:1dc6/64 assigned by NDisc. Mar 17 21:49:09.369920 systemd-networkd[1025]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Mar 17 21:49:09.995770 kubelet[1257]: E0317 21:49:09.995699 1257 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 21:49:09.998282 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 21:49:09.998562 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 21:49:09.999058 systemd[1]: kubelet.service: Consumed 1.103s CPU time. Mar 17 21:49:14.609957 coreos-metadata[1171]: Mar 17 21:49:14.609 WARN failed to locate config-drive, using the metadata service API instead Mar 17 21:49:14.691525 coreos-metadata[1171]: Mar 17 21:49:14.691 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Mar 17 21:49:14.717890 coreos-metadata[1171]: Mar 17 21:49:14.717 INFO Fetch successful Mar 17 21:49:14.718264 coreos-metadata[1171]: Mar 17 21:49:14.718 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Mar 17 21:49:14.746050 coreos-metadata[1171]: Mar 17 21:49:14.745 INFO Fetch successful Mar 17 21:49:14.748034 unknown[1171]: wrote ssh authorized keys file for user: core Mar 17 21:49:14.759863 update-ssh-keys[1275]: Updated "/home/core/.ssh/authorized_keys" Mar 17 21:49:14.760847 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Mar 17 21:49:14.761360 systemd[1]: Reached target multi-user.target. Mar 17 21:49:14.763768 systemd[1]: Starting systemd-update-utmp-runlevel.service... Mar 17 21:49:14.774490 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Mar 17 21:49:14.774785 systemd[1]: Finished systemd-update-utmp-runlevel.service. Mar 17 21:49:14.779572 systemd[1]: Startup finished in 1.171s (kernel) + 8.368s (initrd) + 13.736s (userspace) = 23.276s. Mar 17 21:49:17.566741 systemd[1]: Created slice system-sshd.slice. Mar 17 21:49:17.568772 systemd[1]: Started sshd@0-10.230.29.198:22-139.178.89.65:50166.service. Mar 17 21:49:18.482484 sshd[1278]: Accepted publickey for core from 139.178.89.65 port 50166 ssh2: RSA SHA256:zyhiPLENj58svNToN4BOPPS+na2TgK0IE73Z79n4eiY Mar 17 21:49:18.485313 sshd[1278]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 21:49:18.501862 systemd[1]: Created slice user-500.slice. Mar 17 21:49:18.503870 systemd[1]: Starting user-runtime-dir@500.service... Mar 17 21:49:18.508538 systemd-logind[1183]: New session 1 of user core. Mar 17 21:49:18.518665 systemd[1]: Finished user-runtime-dir@500.service. Mar 17 21:49:18.521931 systemd[1]: Starting user@500.service... 
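The coreos-metadata records above show the agent falling back from config-drive to the OpenStack/EC2-style metadata API to fetch the core user's SSH key. A rough manual equivalent, using exactly the path the agent logs (appending straight into authorized_keys here is only an illustration of what the coreos-metadata-sshkeys@core unit automates):

    curl -s http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key \
        >>/home/core/.ssh/authorized_keys
    chown core:core /home/core/.ssh/authorized_keys
    chmod 600 /home/core/.ssh/authorized_keys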
Mar 17 21:49:18.527840 (systemd)[1281]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 17 21:49:18.635111 systemd[1281]: Queued start job for default target default.target. Mar 17 21:49:18.637235 systemd[1281]: Reached target paths.target. Mar 17 21:49:18.637494 systemd[1281]: Reached target sockets.target. Mar 17 21:49:18.637692 systemd[1281]: Reached target timers.target. Mar 17 21:49:18.637921 systemd[1281]: Reached target basic.target. Mar 17 21:49:18.638174 systemd[1281]: Reached target default.target. Mar 17 21:49:18.638310 systemd[1]: Started user@500.service. Mar 17 21:49:18.639084 systemd[1281]: Startup finished in 101ms. Mar 17 21:49:18.640101 systemd[1]: Started session-1.scope. Mar 17 21:49:19.276795 systemd[1]: Started sshd@1-10.230.29.198:22-139.178.89.65:50182.service. Mar 17 21:49:20.021645 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 17 21:49:20.022076 systemd[1]: Stopped kubelet.service. Mar 17 21:49:20.022151 systemd[1]: kubelet.service: Consumed 1.103s CPU time. Mar 17 21:49:20.025114 systemd[1]: Starting kubelet.service... Mar 17 21:49:20.175739 sshd[1290]: Accepted publickey for core from 139.178.89.65 port 50182 ssh2: RSA SHA256:zyhiPLENj58svNToN4BOPPS+na2TgK0IE73Z79n4eiY Mar 17 21:49:20.179105 sshd[1290]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 21:49:20.191458 systemd[1]: Started session-2.scope. Mar 17 21:49:20.192087 systemd-logind[1183]: New session 2 of user core. Mar 17 21:49:20.221697 systemd[1]: Started kubelet.service. Mar 17 21:49:20.308674 kubelet[1297]: E0317 21:49:20.308434 1297 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 21:49:20.313635 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 21:49:20.313924 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 21:49:20.806138 sshd[1290]: pam_unix(sshd:session): session closed for user core Mar 17 21:49:20.810678 systemd[1]: sshd@1-10.230.29.198:22-139.178.89.65:50182.service: Deactivated successfully. Mar 17 21:49:20.811949 systemd[1]: session-2.scope: Deactivated successfully. Mar 17 21:49:20.812857 systemd-logind[1183]: Session 2 logged out. Waiting for processes to exit. Mar 17 21:49:20.815053 systemd-logind[1183]: Removed session 2. Mar 17 21:49:20.957224 systemd[1]: Started sshd@2-10.230.29.198:22-139.178.89.65:50196.service. Mar 17 21:49:21.852147 sshd[1305]: Accepted publickey for core from 139.178.89.65 port 50196 ssh2: RSA SHA256:zyhiPLENj58svNToN4BOPPS+na2TgK0IE73Z79n4eiY Mar 17 21:49:21.855213 sshd[1305]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 21:49:21.863197 systemd-logind[1183]: New session 3 of user core. Mar 17 21:49:21.863326 systemd[1]: Started session-3.scope. Mar 17 21:49:22.472231 sshd[1305]: pam_unix(sshd:session): session closed for user core Mar 17 21:49:22.476390 systemd[1]: sshd@2-10.230.29.198:22-139.178.89.65:50196.service: Deactivated successfully. Mar 17 21:49:22.477409 systemd[1]: session-3.scope: Deactivated successfully. Mar 17 21:49:22.478221 systemd-logind[1183]: Session 3 logged out. Waiting for processes to exit. Mar 17 21:49:22.479795 systemd-logind[1183]: Removed session 3. 
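The kubelet failures above (and the earlier one at 21:49:09) are all the same condition: /var/lib/kubelet/config.yaml does not exist yet, so the unit exits with status 1 and systemd keeps scheduling restarts. On kubeadm-based nodes that file is written during kubeadm init/join; a minimal hand-written sketch of the expected format, with every value an illustrative assumption rather than something recovered from this host:

    # /var/lib/kubelet/config.yaml (minimal illustration)
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    authentication:
      anonymous:
        enabled: false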
Mar 17 21:49:22.621800 systemd[1]: Started sshd@3-10.230.29.198:22-139.178.89.65:35534.service. Mar 17 21:49:23.516391 sshd[1311]: Accepted publickey for core from 139.178.89.65 port 35534 ssh2: RSA SHA256:zyhiPLENj58svNToN4BOPPS+na2TgK0IE73Z79n4eiY Mar 17 21:49:23.518660 sshd[1311]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 21:49:23.526450 systemd-logind[1183]: New session 4 of user core. Mar 17 21:49:23.527471 systemd[1]: Started session-4.scope. Mar 17 21:49:24.141956 sshd[1311]: pam_unix(sshd:session): session closed for user core Mar 17 21:49:24.146765 systemd-logind[1183]: Session 4 logged out. Waiting for processes to exit. Mar 17 21:49:24.147650 systemd[1]: sshd@3-10.230.29.198:22-139.178.89.65:35534.service: Deactivated successfully. Mar 17 21:49:24.149546 systemd[1]: session-4.scope: Deactivated successfully. Mar 17 21:49:24.150823 systemd-logind[1183]: Removed session 4. Mar 17 21:49:24.289992 systemd[1]: Started sshd@4-10.230.29.198:22-139.178.89.65:35542.service. Mar 17 21:49:25.177746 sshd[1317]: Accepted publickey for core from 139.178.89.65 port 35542 ssh2: RSA SHA256:zyhiPLENj58svNToN4BOPPS+na2TgK0IE73Z79n4eiY Mar 17 21:49:25.180081 sshd[1317]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 21:49:25.187834 systemd-logind[1183]: New session 5 of user core. Mar 17 21:49:25.188948 systemd[1]: Started session-5.scope. Mar 17 21:49:25.679244 sudo[1320]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 17 21:49:25.680231 sudo[1320]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Mar 17 21:49:25.726111 systemd[1]: Starting docker.service... Mar 17 21:49:25.781400 env[1330]: time="2025-03-17T21:49:25.781237651Z" level=info msg="Starting up" Mar 17 21:49:25.785248 env[1330]: time="2025-03-17T21:49:25.785168347Z" level=info msg="parsed scheme: \"unix\"" module=grpc Mar 17 21:49:25.785428 env[1330]: time="2025-03-17T21:49:25.785390032Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Mar 17 21:49:25.785592 env[1330]: time="2025-03-17T21:49:25.785558819Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Mar 17 21:49:25.785734 env[1330]: time="2025-03-17T21:49:25.785705214Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Mar 17 21:49:25.790114 env[1330]: time="2025-03-17T21:49:25.790071599Z" level=info msg="parsed scheme: \"unix\"" module=grpc Mar 17 21:49:25.790114 env[1330]: time="2025-03-17T21:49:25.790102635Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Mar 17 21:49:25.790281 env[1330]: time="2025-03-17T21:49:25.790123847Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Mar 17 21:49:25.790281 env[1330]: time="2025-03-17T21:49:25.790138757Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Mar 17 21:49:25.828200 env[1330]: time="2025-03-17T21:49:25.827120975Z" level=info msg="Loading containers: start." Mar 17 21:49:25.996898 kernel: Initializing XFRM netlink socket Mar 17 21:49:26.043999 env[1330]: time="2025-03-17T21:49:26.043938801Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. 
Daemon option --bip can be used to set a preferred IP address" Mar 17 21:49:26.149892 systemd-networkd[1025]: docker0: Link UP Mar 17 21:49:26.168488 env[1330]: time="2025-03-17T21:49:26.168423837Z" level=info msg="Loading containers: done." Mar 17 21:49:26.189217 env[1330]: time="2025-03-17T21:49:26.189172026Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 17 21:49:26.189569 env[1330]: time="2025-03-17T21:49:26.189539478Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Mar 17 21:49:26.189763 env[1330]: time="2025-03-17T21:49:26.189738119Z" level=info msg="Daemon has completed initialization" Mar 17 21:49:26.207418 systemd[1]: Started docker.service. Mar 17 21:49:26.217963 env[1330]: time="2025-03-17T21:49:26.217714833Z" level=info msg="API listen on /run/docker.sock" Mar 17 21:49:27.179212 env[1194]: time="2025-03-17T21:49:27.179087223Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.3\"" Mar 17 21:49:27.911428 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount250125331.mount: Deactivated successfully. Mar 17 21:49:30.427875 env[1194]: time="2025-03-17T21:49:30.427804581Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.32.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 21:49:30.431713 env[1194]: time="2025-03-17T21:49:30.431659242Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:f8bdc4cfa0651e2d7edb4678d2b90129aef82a19249b37dc8d4705e8bd604295,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 21:49:30.435266 env[1194]: time="2025-03-17T21:49:30.435206499Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.32.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 21:49:30.438696 env[1194]: time="2025-03-17T21:49:30.438646783Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:279e45cf07e4f56925c3c5237179eb63616788426a96e94df5fedf728b18926e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 21:49:30.440075 env[1194]: time="2025-03-17T21:49:30.440010122Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.3\" returns image reference \"sha256:f8bdc4cfa0651e2d7edb4678d2b90129aef82a19249b37dc8d4705e8bd604295\"" Mar 17 21:49:30.442567 env[1194]: time="2025-03-17T21:49:30.442529615Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.3\"" Mar 17 21:49:30.565672 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 17 21:49:30.566162 systemd[1]: Stopped kubelet.service. Mar 17 21:49:30.569369 systemd[1]: Starting kubelet.service... Mar 17 21:49:30.746190 systemd[1]: Started kubelet.service. 
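dockerd's startup notes above point out that the default docker0 bridge claimed 172.17.0.0/16 and that --bip can pick a different range. A minimal sketch of the daemon.json equivalent (the subnet is a placeholder chosen to avoid overlapping an existing network, not a value from this host):

    # /etc/docker/daemon.json
    {
      "bip": "10.100.0.1/24"
    }

followed by systemctl restart docker so the bridge is re-created with the new address.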
Mar 17 21:49:30.824630 kubelet[1462]: E0317 21:49:30.824558 1462 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 21:49:30.827071 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 21:49:30.827320 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 21:49:32.787386 env[1194]: time="2025-03-17T21:49:32.787246546Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.32.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 21:49:32.789702 env[1194]: time="2025-03-17T21:49:32.789663022Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:085818208a5213f37ef6d103caaf8e1e243816a614eb5b87a98bfffe79c687b5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 21:49:32.792485 env[1194]: time="2025-03-17T21:49:32.792448906Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.32.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 21:49:32.796510 env[1194]: time="2025-03-17T21:49:32.796466448Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:54456a96a1bbdc35dcc2e70fcc1355bf655af67694e40b650ac12e83521f6411,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 21:49:32.797120 env[1194]: time="2025-03-17T21:49:32.797084104Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.3\" returns image reference \"sha256:085818208a5213f37ef6d103caaf8e1e243816a614eb5b87a98bfffe79c687b5\"" Mar 17 21:49:32.798059 env[1194]: time="2025-03-17T21:49:32.798024208Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.3\"" Mar 17 21:49:34.834208 env[1194]: time="2025-03-17T21:49:34.833933483Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.32.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 21:49:34.837817 env[1194]: time="2025-03-17T21:49:34.837773729Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b4260bf5078ab1b01dd05fb05015fc436b7100b7b9b5ea738e247a86008b16b8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 21:49:34.841934 env[1194]: time="2025-03-17T21:49:34.841890797Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.32.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 21:49:34.845461 env[1194]: time="2025-03-17T21:49:34.845424249Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:aafae2e3a8d65bc6dc3a0c6095c24bc72b1ff608e1417f0f5e860ce4a61c27df,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 21:49:34.847213 env[1194]: time="2025-03-17T21:49:34.847129478Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.3\" returns image reference \"sha256:b4260bf5078ab1b01dd05fb05015fc436b7100b7b9b5ea738e247a86008b16b8\"" Mar 17 21:49:34.849600 env[1194]: time="2025-03-17T21:49:34.849553804Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.3\"" Mar 17 
21:49:36.506796 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3698779805.mount: Deactivated successfully. Mar 17 21:49:37.612557 env[1194]: time="2025-03-17T21:49:37.612422587Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.32.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 21:49:37.615288 env[1194]: time="2025-03-17T21:49:37.615233070Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a1ae78fd2f9d8fc345928378dc947c7f1e95f01c1a552781827071867a95d09c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 21:49:37.618377 env[1194]: time="2025-03-17T21:49:37.618308660Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.32.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 21:49:37.621436 env[1194]: time="2025-03-17T21:49:37.621402236Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:5015269547a0b7dd2c062758e9a64467b58978ff2502cad4c3f5cdf4aa554ad3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 21:49:37.622678 env[1194]: time="2025-03-17T21:49:37.622619719Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.3\" returns image reference \"sha256:a1ae78fd2f9d8fc345928378dc947c7f1e95f01c1a552781827071867a95d09c\"" Mar 17 21:49:37.625381 env[1194]: time="2025-03-17T21:49:37.625299653Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Mar 17 21:49:38.253915 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1635228556.mount: Deactivated successfully. Mar 17 21:49:39.414319 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Mar 17 21:49:39.852002 env[1194]: time="2025-03-17T21:49:39.851618223Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 21:49:39.854596 env[1194]: time="2025-03-17T21:49:39.854552850Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 21:49:39.857816 env[1194]: time="2025-03-17T21:49:39.857780376Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 21:49:39.861006 env[1194]: time="2025-03-17T21:49:39.860964271Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 21:49:39.862706 env[1194]: time="2025-03-17T21:49:39.862613836Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Mar 17 21:49:39.864158 env[1194]: time="2025-03-17T21:49:39.864110957Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Mar 17 21:49:40.452257 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2747313768.mount: Deactivated successfully. 
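The PullImage / ImageCreate records here are containerd's CRI plugin fetching the v1.32.3 control-plane images plus coredns and pause. A hedged sketch of inspecting the result out-of-band, assuming the containerd socket path seen earlier in this log (crictl may not be present on a stock Flatcar image):

    # list images in containerd's Kubernetes namespace
    ctr -n k8s.io images ls
    # or through the CRI API
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock images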
Mar 17 21:49:40.457989 env[1194]: time="2025-03-17T21:49:40.457904798Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 21:49:40.460934 env[1194]: time="2025-03-17T21:49:40.460889639Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 21:49:40.462282 env[1194]: time="2025-03-17T21:49:40.462244386Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 21:49:40.464053 env[1194]: time="2025-03-17T21:49:40.464011121Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 21:49:40.465084 env[1194]: time="2025-03-17T21:49:40.465037477Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Mar 17 21:49:40.466295 env[1194]: time="2025-03-17T21:49:40.466240748Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Mar 17 21:49:40.931028 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Mar 17 21:49:40.931406 systemd[1]: Stopped kubelet.service. Mar 17 21:49:40.935081 systemd[1]: Starting kubelet.service... Mar 17 21:49:41.150463 systemd[1]: Started kubelet.service. Mar 17 21:49:41.223886 kubelet[1474]: E0317 21:49:41.223420 1474 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 21:49:41.227985 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 21:49:41.228234 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 21:49:41.316522 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1565660914.mount: Deactivated successfully. 
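While the kubelet crash-loops on the missing /var/lib/kubelet/config.yaml (that file is typically written during kubeadm bootstrap, which matches the static control-plane pods that appear later in this log), containerd keeps pulling the control-plane images, and every pull above ends with a 'PullImage "<ref>" returns image reference "sha256:..."' record. A sketch that extracts the tag-to-digest mapping from journal text in the format shown here (the escaped quotes are part of the raw records):

    import re

    # Matches the "pull finished" records above, e.g.
    #   PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed751...\"
    PULL_RE = re.compile(
        r'PullImage \\"(?P<ref>[^\\"]+)\\" returns image reference \\"(?P<digest>sha256:[0-9a-f]+)\\"'
    )

    def image_digests(journal_text: str) -> dict:
        """Map each pulled image reference to the digest containerd resolved."""
        return {m["ref"]: m["digest"] for m in PULL_RE.finditer(journal_text)}

    sample = ('level=info msg="PullImage \\"registry.k8s.io/pause:3.10\\" returns image '
              'reference \\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\\""')
    print(image_digests(sample))
    # {'registry.k8s.io/pause:3.10': 'sha256:873ed751...'}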
Mar 17 21:49:45.830696 env[1194]: time="2025-03-17T21:49:45.830469442Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 21:49:45.834988 env[1194]: time="2025-03-17T21:49:45.834936150Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 21:49:45.837727 env[1194]: time="2025-03-17T21:49:45.837677465Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 21:49:45.839782 env[1194]: time="2025-03-17T21:49:45.839714357Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Mar 17 21:49:45.844804 env[1194]: time="2025-03-17T21:49:45.844740103Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 21:49:49.317416 systemd[1]: Stopped kubelet.service. Mar 17 21:49:49.322724 systemd[1]: Starting kubelet.service... Mar 17 21:49:49.358477 systemd[1]: Reloading. Mar 17 21:49:49.508118 /usr/lib/systemd/system-generators/torcx-generator[1525]: time="2025-03-17T21:49:49Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Mar 17 21:49:49.509045 /usr/lib/systemd/system-generators/torcx-generator[1525]: time="2025-03-17T21:49:49Z" level=info msg="torcx already run" Mar 17 21:49:49.651950 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Mar 17 21:49:49.652441 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Mar 17 21:49:49.683000 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 21:49:49.819693 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 17 21:49:49.820127 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 17 21:49:49.820685 systemd[1]: Stopped kubelet.service. Mar 17 21:49:49.824077 systemd[1]: Starting kubelet.service... Mar 17 21:49:49.970079 systemd[1]: Started kubelet.service. Mar 17 21:49:50.076124 kubelet[1577]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 21:49:50.076124 kubelet[1577]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Mar 17 21:49:50.076124 kubelet[1577]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 21:49:50.076124 kubelet[1577]: I0317 21:49:50.075779 1577 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 17 21:49:50.607482 kubelet[1577]: I0317 21:49:50.607425 1577 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Mar 17 21:49:50.607738 kubelet[1577]: I0317 21:49:50.607713 1577 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 17 21:49:50.608281 kubelet[1577]: I0317 21:49:50.608242 1577 server.go:954] "Client rotation is on, will bootstrap in background" Mar 17 21:49:50.642742 kubelet[1577]: E0317 21:49:50.642479 1577 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.230.29.198:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.29.198:6443: connect: connection refused" logger="UnhandledError" Mar 17 21:49:50.648895 kubelet[1577]: I0317 21:49:50.648841 1577 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 21:49:50.683876 kubelet[1577]: E0317 21:49:50.683799 1577 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 17 21:49:50.684189 kubelet[1577]: I0317 21:49:50.684161 1577 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 17 21:49:50.691683 kubelet[1577]: I0317 21:49:50.691618 1577 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 17 21:49:50.693848 kubelet[1577]: I0317 21:49:50.693742 1577 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 21:49:50.694097 kubelet[1577]: I0317 21:49:50.693811 1577 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-87dtj.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 17 21:49:50.694394 kubelet[1577]: I0317 21:49:50.694119 1577 topology_manager.go:138] "Creating topology manager with none policy" Mar 17 21:49:50.694394 kubelet[1577]: I0317 21:49:50.694138 1577 container_manager_linux.go:304] "Creating device plugin manager" Mar 17 21:49:50.694532 kubelet[1577]: I0317 21:49:50.694417 1577 state_mem.go:36] "Initialized new in-memory state store" Mar 17 21:49:50.698874 kubelet[1577]: I0317 21:49:50.698798 1577 kubelet.go:446] "Attempting to sync node with API server" Mar 17 21:49:50.698874 kubelet[1577]: I0317 21:49:50.698868 1577 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 21:49:50.699162 kubelet[1577]: I0317 21:49:50.698915 1577 kubelet.go:352] "Adding apiserver pod source" Mar 17 21:49:50.699162 kubelet[1577]: I0317 21:49:50.698943 1577 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 17 21:49:50.709237 kubelet[1577]: W0317 21:49:50.709117 1577 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.230.29.198:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-87dtj.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.29.198:6443: connect: connection refused Mar 17 21:49:50.709237 kubelet[1577]: E0317 21:49:50.709256 1577 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.230.29.198:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-87dtj.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.29.198:6443: connect: connection refused" logger="UnhandledError" Mar 17 
21:49:50.709590 kubelet[1577]: W0317 21:49:50.709394 1577 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.230.29.198:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.230.29.198:6443: connect: connection refused Mar 17 21:49:50.709590 kubelet[1577]: E0317 21:49:50.709451 1577 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.230.29.198:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.29.198:6443: connect: connection refused" logger="UnhandledError" Mar 17 21:49:50.709728 kubelet[1577]: I0317 21:49:50.709612 1577 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Mar 17 21:49:50.710282 kubelet[1577]: I0317 21:49:50.710251 1577 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 21:49:50.711201 kubelet[1577]: W0317 21:49:50.711169 1577 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 17 21:49:50.721816 kubelet[1577]: I0317 21:49:50.721731 1577 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 17 21:49:50.722089 kubelet[1577]: I0317 21:49:50.721857 1577 server.go:1287] "Started kubelet" Mar 17 21:49:50.736696 kubelet[1577]: E0317 21:49:50.732254 1577 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.29.198:6443/api/v1/namespaces/default/events\": dial tcp 10.230.29.198:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-87dtj.gb1.brightbox.com.182db57fa68a1c75 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-87dtj.gb1.brightbox.com,UID:srv-87dtj.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-87dtj.gb1.brightbox.com,},FirstTimestamp:2025-03-17 21:49:50.721793141 +0000 UTC m=+0.746407632,LastTimestamp:2025-03-17 21:49:50.721793141 +0000 UTC m=+0.746407632,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-87dtj.gb1.brightbox.com,}" Mar 17 21:49:50.737238 kubelet[1577]: I0317 21:49:50.737185 1577 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 21:49:50.737943 kubelet[1577]: I0317 21:49:50.737821 1577 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 21:49:50.738761 kubelet[1577]: I0317 21:49:50.738730 1577 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 21:49:50.740689 kubelet[1577]: I0317 21:49:50.740660 1577 server.go:490] "Adding debug handlers to kubelet server" Mar 17 21:49:50.746012 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Mar 17 21:49:50.746458 kubelet[1577]: I0317 21:49:50.746411 1577 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 21:49:50.749793 kubelet[1577]: E0317 21:49:50.749748 1577 kubelet.go:1561] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 17 21:49:50.750448 kubelet[1577]: I0317 21:49:50.750418 1577 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 17 21:49:50.755421 kubelet[1577]: I0317 21:49:50.755376 1577 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 17 21:49:50.756274 kubelet[1577]: E0317 21:49:50.756081 1577 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"srv-87dtj.gb1.brightbox.com\" not found" Mar 17 21:49:50.756761 kubelet[1577]: I0317 21:49:50.756725 1577 factory.go:221] Registration of the systemd container factory successfully Mar 17 21:49:50.757100 kubelet[1577]: I0317 21:49:50.757061 1577 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 21:49:50.759212 kubelet[1577]: E0317 21:49:50.759167 1577 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.29.198:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-87dtj.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.29.198:6443: connect: connection refused" interval="200ms" Mar 17 21:49:50.760218 kubelet[1577]: I0317 21:49:50.760189 1577 factory.go:221] Registration of the containerd container factory successfully Mar 17 21:49:50.760649 kubelet[1577]: I0317 21:49:50.760617 1577 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 17 21:49:50.760851 kubelet[1577]: I0317 21:49:50.760727 1577 reconciler.go:26] "Reconciler: start to sync state" Mar 17 21:49:50.767142 kubelet[1577]: W0317 21:49:50.767053 1577 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.230.29.198:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.29.198:6443: connect: connection refused Mar 17 21:49:50.767442 kubelet[1577]: E0317 21:49:50.767405 1577 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.230.29.198:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.29.198:6443: connect: connection refused" logger="UnhandledError" Mar 17 21:49:50.790908 kubelet[1577]: I0317 21:49:50.790782 1577 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 17 21:49:50.797179 kubelet[1577]: I0317 21:49:50.797122 1577 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 17 21:49:50.797418 kubelet[1577]: I0317 21:49:50.797191 1577 status_manager.go:227] "Starting to sync pod status with apiserver" Mar 17 21:49:50.797418 kubelet[1577]: I0317 21:49:50.797239 1577 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
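The "Setting rate limiting for endpoint" record above (service="podresources", qps=100, burstTokens=10) describes the limiter the kubelet places in front of the podresources API it serves at unix:/var/lib/kubelet/pod-resources/kubelet.sock: tokens refill at 100 per second and at most 10 requests are admitted back to back. A toy token bucket illustrating those two parameters, not the kubelet's actual implementation:

    import time

    class TokenBucket:
        """Toy limiter: refill `qps` tokens per second, hold at most `burst` tokens."""

        def __init__(self, qps: float = 100.0, burst: int = 10):
            self.qps, self.burst = qps, burst
            self.tokens = float(burst)
            self.last = time.monotonic()

        def allow(self) -> bool:
            now = time.monotonic()
            self.tokens = min(self.burst, self.tokens + (now - self.last) * self.qps)
            self.last = now
            if self.tokens >= 1.0:
                self.tokens -= 1.0
                return True
            return False

    bucket = TokenBucket()
    granted = sum(bucket.allow() for _ in range(50))
    print(granted)  # roughly 10: only the burst passes in a tight loop; denied calls must retry after refill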
Mar 17 21:49:50.797418 kubelet[1577]: I0317 21:49:50.797260 1577 kubelet.go:2388] "Starting kubelet main sync loop" Mar 17 21:49:50.797635 kubelet[1577]: E0317 21:49:50.797408 1577 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 17 21:49:50.805991 kubelet[1577]: I0317 21:49:50.805951 1577 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 17 21:49:50.806227 kubelet[1577]: I0317 21:49:50.806202 1577 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 17 21:49:50.806412 kubelet[1577]: I0317 21:49:50.806388 1577 state_mem.go:36] "Initialized new in-memory state store" Mar 17 21:49:50.809284 kubelet[1577]: I0317 21:49:50.809247 1577 policy_none.go:49] "None policy: Start" Mar 17 21:49:50.809561 kubelet[1577]: I0317 21:49:50.809533 1577 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 17 21:49:50.809899 kubelet[1577]: I0317 21:49:50.809854 1577 state_mem.go:35] "Initializing new in-memory state store" Mar 17 21:49:50.810293 kubelet[1577]: W0317 21:49:50.809522 1577 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.230.29.198:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.29.198:6443: connect: connection refused Mar 17 21:49:50.810408 kubelet[1577]: E0317 21:49:50.810316 1577 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.230.29.198:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.29.198:6443: connect: connection refused" logger="UnhandledError" Mar 17 21:49:50.820175 systemd[1]: Created slice kubepods.slice. Mar 17 21:49:50.827600 systemd[1]: Created slice kubepods-burstable.slice. Mar 17 21:49:50.832135 systemd[1]: Created slice kubepods-besteffort.slice. Mar 17 21:49:50.843090 kubelet[1577]: I0317 21:49:50.843032 1577 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 21:49:50.843415 kubelet[1577]: I0317 21:49:50.843386 1577 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 17 21:49:50.843532 kubelet[1577]: I0317 21:49:50.843424 1577 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 21:49:50.850260 kubelet[1577]: E0317 21:49:50.850217 1577 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 17 21:49:50.850571 kubelet[1577]: E0317 21:49:50.850320 1577 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-87dtj.gb1.brightbox.com\" not found" Mar 17 21:49:50.850988 kubelet[1577]: I0317 21:49:50.850745 1577 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 21:49:50.912868 systemd[1]: Created slice kubepods-burstable-pod650ee8a86b37a22242e677b683cda7e1.slice. Mar 17 21:49:50.922083 kubelet[1577]: E0317 21:49:50.922001 1577 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-87dtj.gb1.brightbox.com\" not found" node="srv-87dtj.gb1.brightbox.com" Mar 17 21:49:50.926580 systemd[1]: Created slice kubepods-burstable-pode290ecd985e9dbcc26fd466698bc031f.slice. 
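The "Creating Container Manager object based on Node Config" record at 21:49:50.693811 above dumps the effective node configuration as JSON, including the hard eviction thresholds the eviction manager that just started will enforce: memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%. A sketch that pulls that JSON fragment out of such a record and summarizes the thresholds; it only assumes the literal "nodeConfig=" marker seen above:

    import json

    def eviction_thresholds(record: str) -> list[str]:
        """Summarize HardEvictionThresholds from a 'Creating Container Manager' record."""
        start = record.index("nodeConfig=") + len("nodeConfig=")
        # raw_decode stops at the end of the JSON object and ignores any trailing text
        cfg, _ = json.JSONDecoder().raw_decode(record[start:])
        summary = []
        for t in cfg["HardEvictionThresholds"]:
            value = t["Value"]
            limit = value["Quantity"] if value["Quantity"] else f'{value["Percentage"]:.0%}'
            summary.append(f'{t["Signal"]} {t["Operator"]} {limit}')
        return summary

    # e.g. ['memory.available LessThan 100Mi', 'nodefs.available LessThan 10%',
    #       'nodefs.inodesFree LessThan 5%', 'imagefs.available LessThan 15%',
    #       'imagefs.inodesFree LessThan 5%']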
Mar 17 21:49:50.930523 kubelet[1577]: E0317 21:49:50.930411 1577 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-87dtj.gb1.brightbox.com\" not found" node="srv-87dtj.gb1.brightbox.com" Mar 17 21:49:50.933689 systemd[1]: Created slice kubepods-burstable-pod3a498653509a23e741a6015ca35dac0f.slice. Mar 17 21:49:50.936009 kubelet[1577]: E0317 21:49:50.935982 1577 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-87dtj.gb1.brightbox.com\" not found" node="srv-87dtj.gb1.brightbox.com" Mar 17 21:49:50.946783 kubelet[1577]: I0317 21:49:50.946758 1577 kubelet_node_status.go:76] "Attempting to register node" node="srv-87dtj.gb1.brightbox.com" Mar 17 21:49:50.947438 kubelet[1577]: E0317 21:49:50.947403 1577 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.230.29.198:6443/api/v1/nodes\": dial tcp 10.230.29.198:6443: connect: connection refused" node="srv-87dtj.gb1.brightbox.com" Mar 17 21:49:50.960233 kubelet[1577]: E0317 21:49:50.960185 1577 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.29.198:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-87dtj.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.29.198:6443: connect: connection refused" interval="400ms" Mar 17 21:49:51.062855 kubelet[1577]: I0317 21:49:51.062783 1577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e290ecd985e9dbcc26fd466698bc031f-ca-certs\") pod \"kube-apiserver-srv-87dtj.gb1.brightbox.com\" (UID: \"e290ecd985e9dbcc26fd466698bc031f\") " pod="kube-system/kube-apiserver-srv-87dtj.gb1.brightbox.com" Mar 17 21:49:51.062855 kubelet[1577]: I0317 21:49:51.062852 1577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e290ecd985e9dbcc26fd466698bc031f-k8s-certs\") pod \"kube-apiserver-srv-87dtj.gb1.brightbox.com\" (UID: \"e290ecd985e9dbcc26fd466698bc031f\") " pod="kube-system/kube-apiserver-srv-87dtj.gb1.brightbox.com" Mar 17 21:49:51.063174 kubelet[1577]: I0317 21:49:51.062895 1577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e290ecd985e9dbcc26fd466698bc031f-usr-share-ca-certificates\") pod \"kube-apiserver-srv-87dtj.gb1.brightbox.com\" (UID: \"e290ecd985e9dbcc26fd466698bc031f\") " pod="kube-system/kube-apiserver-srv-87dtj.gb1.brightbox.com" Mar 17 21:49:51.063174 kubelet[1577]: I0317 21:49:51.062928 1577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/650ee8a86b37a22242e677b683cda7e1-kubeconfig\") pod \"kube-controller-manager-srv-87dtj.gb1.brightbox.com\" (UID: \"650ee8a86b37a22242e677b683cda7e1\") " pod="kube-system/kube-controller-manager-srv-87dtj.gb1.brightbox.com" Mar 17 21:49:51.063174 kubelet[1577]: I0317 21:49:51.062981 1577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/650ee8a86b37a22242e677b683cda7e1-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-87dtj.gb1.brightbox.com\" (UID: \"650ee8a86b37a22242e677b683cda7e1\") " 
pod="kube-system/kube-controller-manager-srv-87dtj.gb1.brightbox.com" Mar 17 21:49:51.063174 kubelet[1577]: I0317 21:49:51.063017 1577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3a498653509a23e741a6015ca35dac0f-kubeconfig\") pod \"kube-scheduler-srv-87dtj.gb1.brightbox.com\" (UID: \"3a498653509a23e741a6015ca35dac0f\") " pod="kube-system/kube-scheduler-srv-87dtj.gb1.brightbox.com" Mar 17 21:49:51.063174 kubelet[1577]: I0317 21:49:51.063042 1577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/650ee8a86b37a22242e677b683cda7e1-ca-certs\") pod \"kube-controller-manager-srv-87dtj.gb1.brightbox.com\" (UID: \"650ee8a86b37a22242e677b683cda7e1\") " pod="kube-system/kube-controller-manager-srv-87dtj.gb1.brightbox.com" Mar 17 21:49:51.063542 kubelet[1577]: I0317 21:49:51.063068 1577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/650ee8a86b37a22242e677b683cda7e1-flexvolume-dir\") pod \"kube-controller-manager-srv-87dtj.gb1.brightbox.com\" (UID: \"650ee8a86b37a22242e677b683cda7e1\") " pod="kube-system/kube-controller-manager-srv-87dtj.gb1.brightbox.com" Mar 17 21:49:51.063542 kubelet[1577]: I0317 21:49:51.063094 1577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/650ee8a86b37a22242e677b683cda7e1-k8s-certs\") pod \"kube-controller-manager-srv-87dtj.gb1.brightbox.com\" (UID: \"650ee8a86b37a22242e677b683cda7e1\") " pod="kube-system/kube-controller-manager-srv-87dtj.gb1.brightbox.com" Mar 17 21:49:51.151134 kubelet[1577]: I0317 21:49:51.151097 1577 kubelet_node_status.go:76] "Attempting to register node" node="srv-87dtj.gb1.brightbox.com" Mar 17 21:49:51.152340 kubelet[1577]: E0317 21:49:51.152290 1577 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.230.29.198:6443/api/v1/nodes\": dial tcp 10.230.29.198:6443: connect: connection refused" node="srv-87dtj.gb1.brightbox.com" Mar 17 21:49:51.225226 env[1194]: time="2025-03-17T21:49:51.224635224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-87dtj.gb1.brightbox.com,Uid:650ee8a86b37a22242e677b683cda7e1,Namespace:kube-system,Attempt:0,}" Mar 17 21:49:51.231988 env[1194]: time="2025-03-17T21:49:51.231949168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-87dtj.gb1.brightbox.com,Uid:e290ecd985e9dbcc26fd466698bc031f,Namespace:kube-system,Attempt:0,}" Mar 17 21:49:51.237929 env[1194]: time="2025-03-17T21:49:51.237893354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-87dtj.gb1.brightbox.com,Uid:3a498653509a23e741a6015ca35dac0f,Namespace:kube-system,Attempt:0,}" Mar 17 21:49:51.361679 kubelet[1577]: E0317 21:49:51.361612 1577 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.29.198:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-87dtj.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.29.198:6443: connect: connection refused" interval="800ms" Mar 17 21:49:51.555667 kubelet[1577]: I0317 21:49:51.555536 1577 kubelet_node_status.go:76] "Attempting to register node" node="srv-87dtj.gb1.brightbox.com" Mar 17 21:49:51.556545 kubelet[1577]: E0317 21:49:51.556496 
1577 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.230.29.198:6443/api/v1/nodes\": dial tcp 10.230.29.198:6443: connect: connection refused" node="srv-87dtj.gb1.brightbox.com" Mar 17 21:49:51.620926 kubelet[1577]: W0317 21:49:51.620808 1577 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.230.29.198:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.230.29.198:6443: connect: connection refused Mar 17 21:49:51.620926 kubelet[1577]: E0317 21:49:51.620887 1577 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.230.29.198:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.29.198:6443: connect: connection refused" logger="UnhandledError" Mar 17 21:49:51.623938 kubelet[1577]: W0317 21:49:51.623904 1577 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.230.29.198:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.29.198:6443: connect: connection refused Mar 17 21:49:51.624055 kubelet[1577]: E0317 21:49:51.623949 1577 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.230.29.198:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.29.198:6443: connect: connection refused" logger="UnhandledError" Mar 17 21:49:51.822901 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2616357410.mount: Deactivated successfully. 
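The repeated reflector failures above and below are the kubelet's informers trying to list Nodes, Services, CSIDrivers and RuntimeClasses from https://10.230.29.198:6443 before the kube-apiserver static pod is running, so every attempt ends in "connection refused". The field selectors in those URLs are percent-encoded ("%3D" is "=", "%21" is "!"). A short standard-library sketch that decodes the query of one of the logged URLs:

    from urllib.parse import urlsplit, parse_qsl

    url = ("https://10.230.29.198:6443/api/v1/services"
           "?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0")

    for key, value in parse_qsl(urlsplit(url).query):
        print(f"{key} = {value}")
    # fieldSelector = spec.clusterIP!=None   (only Services with a cluster IP, i.e. not headless)
    # limit = 500
    # resourceVersion = 0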
Mar 17 21:49:51.835612 env[1194]: time="2025-03-17T21:49:51.835545315Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 21:49:51.838104 env[1194]: time="2025-03-17T21:49:51.838069054Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 21:49:51.839200 env[1194]: time="2025-03-17T21:49:51.839165671Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 21:49:51.842384 env[1194]: time="2025-03-17T21:49:51.842325987Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 21:49:51.843491 env[1194]: time="2025-03-17T21:49:51.843456062Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 21:49:51.844656 env[1194]: time="2025-03-17T21:49:51.844616235Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 21:49:51.849104 env[1194]: time="2025-03-17T21:49:51.849066179Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 21:49:51.853717 env[1194]: time="2025-03-17T21:49:51.853678041Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 21:49:51.854704 env[1194]: time="2025-03-17T21:49:51.854666985Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 21:49:51.855671 env[1194]: time="2025-03-17T21:49:51.855634432Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 21:49:51.856661 env[1194]: time="2025-03-17T21:49:51.856622969Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 21:49:51.859274 env[1194]: time="2025-03-17T21:49:51.859234801Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 21:49:51.888463 env[1194]: time="2025-03-17T21:49:51.888036799Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 21:49:51.888463 env[1194]: time="2025-03-17T21:49:51.888129280Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 21:49:51.888463 env[1194]: time="2025-03-17T21:49:51.888146720Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 21:49:51.894016 env[1194]: time="2025-03-17T21:49:51.888570997Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/58eefa238b2da46b8a0d7f29b7b7e2a1c85facdb0c91f7f27d4a59721b07b8d2 pid=1619 runtime=io.containerd.runc.v2 Mar 17 21:49:51.931301 env[1194]: time="2025-03-17T21:49:51.930174990Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 21:49:51.934309 systemd[1]: Started cri-containerd-58eefa238b2da46b8a0d7f29b7b7e2a1c85facdb0c91f7f27d4a59721b07b8d2.scope. Mar 17 21:49:51.944629 env[1194]: time="2025-03-17T21:49:51.938986947Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 21:49:51.944629 env[1194]: time="2025-03-17T21:49:51.939072994Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 21:49:51.944629 env[1194]: time="2025-03-17T21:49:51.939455989Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2e7683e50cabb8d24de36468d4e9c358eb2b25c3b8a2c7b9d7da08b2efcef4f3 pid=1643 runtime=io.containerd.runc.v2 Mar 17 21:49:51.944629 env[1194]: time="2025-03-17T21:49:51.943630855Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 21:49:51.944629 env[1194]: time="2025-03-17T21:49:51.943725970Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 21:49:51.944629 env[1194]: time="2025-03-17T21:49:51.943799714Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 21:49:51.944629 env[1194]: time="2025-03-17T21:49:51.944067213Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6fd87e2d6bd2abfa025c380233116318ff96f7740f6f103c6de13f3057c43fde pid=1650 runtime=io.containerd.runc.v2 Mar 17 21:49:51.982396 systemd[1]: Started cri-containerd-2e7683e50cabb8d24de36468d4e9c358eb2b25c3b8a2c7b9d7da08b2efcef4f3.scope. Mar 17 21:49:51.994771 systemd[1]: Started cri-containerd-6fd87e2d6bd2abfa025c380233116318ff96f7740f6f103c6de13f3057c43fde.scope. 
Mar 17 21:49:52.046446 kubelet[1577]: W0317 21:49:52.046396 1577 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.230.29.198:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.29.198:6443: connect: connection refused Mar 17 21:49:52.046671 kubelet[1577]: E0317 21:49:52.046458 1577 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.230.29.198:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.29.198:6443: connect: connection refused" logger="UnhandledError" Mar 17 21:49:52.055239 kubelet[1577]: W0317 21:49:52.055170 1577 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.230.29.198:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-87dtj.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.29.198:6443: connect: connection refused Mar 17 21:49:52.055386 kubelet[1577]: E0317 21:49:52.055252 1577 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.230.29.198:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-87dtj.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.29.198:6443: connect: connection refused" logger="UnhandledError" Mar 17 21:49:52.060458 env[1194]: time="2025-03-17T21:49:52.060403114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-87dtj.gb1.brightbox.com,Uid:e290ecd985e9dbcc26fd466698bc031f,Namespace:kube-system,Attempt:0,} returns sandbox id \"58eefa238b2da46b8a0d7f29b7b7e2a1c85facdb0c91f7f27d4a59721b07b8d2\"" Mar 17 21:49:52.064514 env[1194]: time="2025-03-17T21:49:52.064474189Z" level=info msg="CreateContainer within sandbox \"58eefa238b2da46b8a0d7f29b7b7e2a1c85facdb0c91f7f27d4a59721b07b8d2\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 17 21:49:52.101302 env[1194]: time="2025-03-17T21:49:52.099231089Z" level=info msg="CreateContainer within sandbox \"58eefa238b2da46b8a0d7f29b7b7e2a1c85facdb0c91f7f27d4a59721b07b8d2\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f395c9ba58a64fce8959a3fccc57761e499c0c03cd234bd82798a306c340d5ab\"" Mar 17 21:49:52.102741 env[1194]: time="2025-03-17T21:49:52.102702315Z" level=info msg="StartContainer for \"f395c9ba58a64fce8959a3fccc57761e499c0c03cd234bd82798a306c340d5ab\"" Mar 17 21:49:52.109957 env[1194]: time="2025-03-17T21:49:52.109906810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-87dtj.gb1.brightbox.com,Uid:650ee8a86b37a22242e677b683cda7e1,Namespace:kube-system,Attempt:0,} returns sandbox id \"2e7683e50cabb8d24de36468d4e9c358eb2b25c3b8a2c7b9d7da08b2efcef4f3\"" Mar 17 21:49:52.116825 env[1194]: time="2025-03-17T21:49:52.116726758Z" level=info msg="CreateContainer within sandbox \"2e7683e50cabb8d24de36468d4e9c358eb2b25c3b8a2c7b9d7da08b2efcef4f3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 17 21:49:52.129562 env[1194]: time="2025-03-17T21:49:52.129500729Z" level=info msg="CreateContainer within sandbox \"2e7683e50cabb8d24de36468d4e9c358eb2b25c3b8a2c7b9d7da08b2efcef4f3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7e620a4cd05c23884aa67fc90ba92d88f8abcefd8ba89b5e22ad3606bce32003\"" Mar 17 21:49:52.130508 env[1194]: 
time="2025-03-17T21:49:52.130465898Z" level=info msg="StartContainer for \"7e620a4cd05c23884aa67fc90ba92d88f8abcefd8ba89b5e22ad3606bce32003\"" Mar 17 21:49:52.136576 env[1194]: time="2025-03-17T21:49:52.136533412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-87dtj.gb1.brightbox.com,Uid:3a498653509a23e741a6015ca35dac0f,Namespace:kube-system,Attempt:0,} returns sandbox id \"6fd87e2d6bd2abfa025c380233116318ff96f7740f6f103c6de13f3057c43fde\"" Mar 17 21:49:52.139434 env[1194]: time="2025-03-17T21:49:52.139318015Z" level=info msg="CreateContainer within sandbox \"6fd87e2d6bd2abfa025c380233116318ff96f7740f6f103c6de13f3057c43fde\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 17 21:49:52.153801 systemd[1]: Started cri-containerd-f395c9ba58a64fce8959a3fccc57761e499c0c03cd234bd82798a306c340d5ab.scope. Mar 17 21:49:52.163376 kubelet[1577]: E0317 21:49:52.162467 1577 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.29.198:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-87dtj.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.29.198:6443: connect: connection refused" interval="1.6s" Mar 17 21:49:52.165742 env[1194]: time="2025-03-17T21:49:52.165686039Z" level=info msg="CreateContainer within sandbox \"6fd87e2d6bd2abfa025c380233116318ff96f7740f6f103c6de13f3057c43fde\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"688bb5aa07ead9dc5a0ee112d755f87cf0471d67713081cb3a1aa5a37ca66be9\"" Mar 17 21:49:52.166380 env[1194]: time="2025-03-17T21:49:52.166314015Z" level=info msg="StartContainer for \"688bb5aa07ead9dc5a0ee112d755f87cf0471d67713081cb3a1aa5a37ca66be9\"" Mar 17 21:49:52.200806 systemd[1]: Started cri-containerd-7e620a4cd05c23884aa67fc90ba92d88f8abcefd8ba89b5e22ad3606bce32003.scope. Mar 17 21:49:52.220607 systemd[1]: Started cri-containerd-688bb5aa07ead9dc5a0ee112d755f87cf0471d67713081cb3a1aa5a37ca66be9.scope. Mar 17 21:49:52.274023 env[1194]: time="2025-03-17T21:49:52.273286321Z" level=info msg="StartContainer for \"f395c9ba58a64fce8959a3fccc57761e499c0c03cd234bd82798a306c340d5ab\" returns successfully" Mar 17 21:49:52.326990 env[1194]: time="2025-03-17T21:49:52.326881421Z" level=info msg="StartContainer for \"688bb5aa07ead9dc5a0ee112d755f87cf0471d67713081cb3a1aa5a37ca66be9\" returns successfully" Mar 17 21:49:52.327658 env[1194]: time="2025-03-17T21:49:52.327622183Z" level=info msg="StartContainer for \"7e620a4cd05c23884aa67fc90ba92d88f8abcefd8ba89b5e22ad3606bce32003\" returns successfully" Mar 17 21:49:52.360606 kubelet[1577]: I0317 21:49:52.360033 1577 kubelet_node_status.go:76] "Attempting to register node" node="srv-87dtj.gb1.brightbox.com" Mar 17 21:49:52.360606 kubelet[1577]: E0317 21:49:52.360590 1577 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.230.29.198:6443/api/v1/nodes\": dial tcp 10.230.29.198:6443: connect: connection refused" node="srv-87dtj.gb1.brightbox.com" Mar 17 21:49:52.505790 update_engine[1184]: I0317 21:49:52.504493 1184 update_attempter.cc:509] Updating boot flags... 
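Each "RunPodSandbox ... returns sandbox id" record above pairs with a "Started cri-containerd-<id>.scope" unit and a containerd shim whose task path ends in the same 64-character id, which is how the three static pods can be tied back to their cgroups in this log. A sketch of that correlation, assuming one journal record per line as journalctl normally emits them (the escaped quotes are part of the raw records):

    import re

    SANDBOX_RE = re.compile(
        r'RunPodSandbox for &PodSandboxMetadata\{Name:(?P<pod>[^,]+),.*?'
        r'returns sandbox id \\"(?P<sid>[0-9a-f]{64})\\"'
    )

    def sandbox_scopes(journal_text: str) -> dict:
        """Map pod names to the systemd scope unit that runs their sandbox."""
        scopes = {}
        for line in journal_text.splitlines():
            for m in SANDBOX_RE.finditer(line):
                scopes[m["pod"]] = f'cri-containerd-{m["sid"]}.scope'
        return scopes

    # e.g. {'kube-apiserver-srv-87dtj.gb1.brightbox.com':
    #        'cri-containerd-58eefa238b2da46b8a0d7f29b7b7e2a1c85facdb0c91f7f27d4a59721b07b8d2.scope', ...}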
Mar 17 21:49:52.624387 kubelet[1577]: E0317 21:49:52.623552 1577 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.29.198:6443/api/v1/namespaces/default/events\": dial tcp 10.230.29.198:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-87dtj.gb1.brightbox.com.182db57fa68a1c75 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-87dtj.gb1.brightbox.com,UID:srv-87dtj.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-87dtj.gb1.brightbox.com,},FirstTimestamp:2025-03-17 21:49:50.721793141 +0000 UTC m=+0.746407632,LastTimestamp:2025-03-17 21:49:50.721793141 +0000 UTC m=+0.746407632,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-87dtj.gb1.brightbox.com,}" Mar 17 21:49:52.754823 kubelet[1577]: E0317 21:49:52.754769 1577 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.230.29.198:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.29.198:6443: connect: connection refused" logger="UnhandledError" Mar 17 21:49:52.828817 kubelet[1577]: E0317 21:49:52.828724 1577 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-87dtj.gb1.brightbox.com\" not found" node="srv-87dtj.gb1.brightbox.com" Mar 17 21:49:52.840360 kubelet[1577]: E0317 21:49:52.839518 1577 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-87dtj.gb1.brightbox.com\" not found" node="srv-87dtj.gb1.brightbox.com" Mar 17 21:49:52.848722 kubelet[1577]: E0317 21:49:52.848682 1577 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-87dtj.gb1.brightbox.com\" not found" node="srv-87dtj.gb1.brightbox.com" Mar 17 21:49:53.847942 kubelet[1577]: E0317 21:49:53.847897 1577 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-87dtj.gb1.brightbox.com\" not found" node="srv-87dtj.gb1.brightbox.com" Mar 17 21:49:53.849282 kubelet[1577]: E0317 21:49:53.848992 1577 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-87dtj.gb1.brightbox.com\" not found" node="srv-87dtj.gb1.brightbox.com" Mar 17 21:49:53.964272 kubelet[1577]: I0317 21:49:53.964234 1577 kubelet_node_status.go:76] "Attempting to register node" node="srv-87dtj.gb1.brightbox.com" Mar 17 21:49:54.620230 kubelet[1577]: E0317 21:49:54.620190 1577 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-87dtj.gb1.brightbox.com\" not found" node="srv-87dtj.gb1.brightbox.com" Mar 17 21:49:55.511690 kubelet[1577]: E0317 21:49:55.511579 1577 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-87dtj.gb1.brightbox.com\" not found" node="srv-87dtj.gb1.brightbox.com" Mar 17 21:49:55.516784 kubelet[1577]: I0317 21:49:55.516744 1577 kubelet_node_status.go:79] "Successfully registered node" node="srv-87dtj.gb1.brightbox.com" Mar 17 21:49:55.516932 kubelet[1577]: E0317 21:49:55.516816 1577 
kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"srv-87dtj.gb1.brightbox.com\": node \"srv-87dtj.gb1.brightbox.com\" not found" Mar 17 21:49:55.557424 kubelet[1577]: I0317 21:49:55.557388 1577 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-87dtj.gb1.brightbox.com" Mar 17 21:49:55.585954 kubelet[1577]: E0317 21:49:55.585897 1577 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-87dtj.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-87dtj.gb1.brightbox.com" Mar 17 21:49:55.586121 kubelet[1577]: I0317 21:49:55.585979 1577 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-87dtj.gb1.brightbox.com" Mar 17 21:49:55.588314 kubelet[1577]: E0317 21:49:55.588276 1577 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-87dtj.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-srv-87dtj.gb1.brightbox.com" Mar 17 21:49:55.588487 kubelet[1577]: I0317 21:49:55.588460 1577 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-87dtj.gb1.brightbox.com" Mar 17 21:49:55.591499 kubelet[1577]: E0317 21:49:55.591466 1577 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-87dtj.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-87dtj.gb1.brightbox.com" Mar 17 21:49:55.707421 kubelet[1577]: I0317 21:49:55.707265 1577 apiserver.go:52] "Watching apiserver" Mar 17 21:49:55.761941 kubelet[1577]: I0317 21:49:55.761739 1577 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 17 21:49:58.054448 systemd[1]: Reloading. Mar 17 21:49:58.214173 /usr/lib/systemd/system-generators/torcx-generator[1886]: time="2025-03-17T21:49:58Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Mar 17 21:49:58.214233 /usr/lib/systemd/system-generators/torcx-generator[1886]: time="2025-03-17T21:49:58Z" level=info msg="torcx already run" Mar 17 21:49:58.316281 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Mar 17 21:49:58.317134 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Mar 17 21:49:58.347844 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 21:49:58.554899 systemd[1]: Stopping kubelet.service... Mar 17 21:49:58.571815 systemd[1]: kubelet.service: Deactivated successfully. Mar 17 21:49:58.572862 systemd[1]: Stopped kubelet.service. Mar 17 21:49:58.573020 systemd[1]: kubelet.service: Consumed 1.250s CPU time. Mar 17 21:49:58.576367 systemd[1]: Starting kubelet.service... Mar 17 21:49:59.953649 systemd[1]: Started kubelet.service. 
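The "Failed to ensure lease exists, will retry" errors scattered above back off geometrically, 200ms, 400ms, 800ms and finally 1.6s, until the kube-apiserver static pod this kubelet is starting comes up on 10.230.29.198:6443. A tiny sketch reproducing just the doubling schedule observed in those records; the real controller eventually caps the interval, and only the four observed values are shown here:

    from itertools import islice

    def lease_retry_intervals(base_ms: int = 200):
        """Yield the doubling retry intervals reported by the lease controller above."""
        interval = base_ms
        while True:
            yield interval
            interval *= 2

    print([f"{ms}ms" if ms < 1000 else f"{ms / 1000:g}s"
           for ms in islice(lease_retry_intervals(), 4)])
    # ['200ms', '400ms', '800ms', '1.6s']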
Mar 17 21:50:00.081768 kubelet[1934]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 21:50:00.082570 kubelet[1934]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 17 21:50:00.082691 kubelet[1934]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 21:50:00.082989 kubelet[1934]: I0317 21:50:00.082920 1934 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 17 21:50:00.105208 kubelet[1934]: I0317 21:50:00.105165 1934 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Mar 17 21:50:00.105426 kubelet[1934]: I0317 21:50:00.105401 1934 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 17 21:50:00.105886 kubelet[1934]: I0317 21:50:00.105861 1934 server.go:954] "Client rotation is on, will bootstrap in background" Mar 17 21:50:00.111824 kubelet[1934]: I0317 21:50:00.111797 1934 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Mar 17 21:50:00.120676 sudo[1944]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Mar 17 21:50:00.121166 sudo[1944]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Mar 17 21:50:00.126998 kubelet[1934]: I0317 21:50:00.126968 1934 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 21:50:00.142991 kubelet[1934]: E0317 21:50:00.134631 1934 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 17 21:50:00.142991 kubelet[1934]: I0317 21:50:00.134695 1934 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 17 21:50:00.142991 kubelet[1934]: I0317 21:50:00.141428 1934 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 17 21:50:00.142991 kubelet[1934]: I0317 21:50:00.141803 1934 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 21:50:00.143279 kubelet[1934]: I0317 21:50:00.141847 1934 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-87dtj.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 17 21:50:00.143279 kubelet[1934]: I0317 21:50:00.142186 1934 topology_manager.go:138] "Creating topology manager with none policy" Mar 17 21:50:00.143279 kubelet[1934]: I0317 21:50:00.142204 1934 container_manager_linux.go:304] "Creating device plugin manager" Mar 17 21:50:00.143279 kubelet[1934]: I0317 21:50:00.142299 1934 state_mem.go:36] "Initialized new in-memory state store" Mar 17 21:50:00.145647 kubelet[1934]: I0317 21:50:00.145620 1934 kubelet.go:446] "Attempting to sync node with API server" Mar 17 21:50:00.145846 kubelet[1934]: I0317 21:50:00.145820 1934 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 21:50:00.146028 kubelet[1934]: I0317 21:50:00.146003 1934 kubelet.go:352] "Adding apiserver pod source" Mar 17 21:50:00.146205 kubelet[1934]: I0317 21:50:00.146179 1934 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 17 21:50:00.157309 kubelet[1934]: I0317 21:50:00.154449 1934 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Mar 17 21:50:00.161022 kubelet[1934]: I0317 21:50:00.157198 1934 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 21:50:00.162297 kubelet[1934]: I0317 21:50:00.162267 1934 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 17 21:50:00.162408 kubelet[1934]: I0317 21:50:00.162325 1934 server.go:1287] "Started kubelet" Mar 17 21:50:00.175364 kubelet[1934]: I0317 21:50:00.173665 1934 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 21:50:00.188353 kubelet[1934]: I0317 21:50:00.188291 1934 server.go:169] 
"Starting to listen" address="0.0.0.0" port=10250 Mar 17 21:50:00.190880 kubelet[1934]: I0317 21:50:00.190851 1934 server.go:490] "Adding debug handlers to kubelet server" Mar 17 21:50:00.193959 kubelet[1934]: I0317 21:50:00.193867 1934 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 21:50:00.196353 kubelet[1934]: I0317 21:50:00.194882 1934 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 17 21:50:00.198307 kubelet[1934]: I0317 21:50:00.197477 1934 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 21:50:00.198307 kubelet[1934]: I0317 21:50:00.197686 1934 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 17 21:50:00.198307 kubelet[1934]: E0317 21:50:00.198029 1934 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"srv-87dtj.gb1.brightbox.com\" not found" Mar 17 21:50:00.199317 kubelet[1934]: I0317 21:50:00.199288 1934 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 17 21:50:00.199665 kubelet[1934]: I0317 21:50:00.199641 1934 reconciler.go:26] "Reconciler: start to sync state" Mar 17 21:50:00.219682 kubelet[1934]: I0317 21:50:00.219544 1934 factory.go:221] Registration of the systemd container factory successfully Mar 17 21:50:00.219791 kubelet[1934]: I0317 21:50:00.219726 1934 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 21:50:00.230641 kubelet[1934]: I0317 21:50:00.230608 1934 factory.go:221] Registration of the containerd container factory successfully Mar 17 21:50:00.289948 kubelet[1934]: I0317 21:50:00.289877 1934 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 17 21:50:00.293172 kubelet[1934]: I0317 21:50:00.292808 1934 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 17 21:50:00.293172 kubelet[1934]: I0317 21:50:00.292882 1934 status_manager.go:227] "Starting to sync pod status with apiserver" Mar 17 21:50:00.293172 kubelet[1934]: I0317 21:50:00.292917 1934 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 17 21:50:00.293172 kubelet[1934]: I0317 21:50:00.292937 1934 kubelet.go:2388] "Starting kubelet main sync loop" Mar 17 21:50:00.293172 kubelet[1934]: E0317 21:50:00.293073 1934 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 17 21:50:00.356733 kubelet[1934]: I0317 21:50:00.356653 1934 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 17 21:50:00.356733 kubelet[1934]: I0317 21:50:00.356695 1934 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 17 21:50:00.357075 kubelet[1934]: I0317 21:50:00.356755 1934 state_mem.go:36] "Initialized new in-memory state store" Mar 17 21:50:00.357150 kubelet[1934]: I0317 21:50:00.357108 1934 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 17 21:50:00.357221 kubelet[1934]: I0317 21:50:00.357134 1934 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 17 21:50:00.357221 kubelet[1934]: I0317 21:50:00.357183 1934 policy_none.go:49] "None policy: Start" Mar 17 21:50:00.357221 kubelet[1934]: I0317 21:50:00.357214 1934 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 17 21:50:00.357482 kubelet[1934]: I0317 21:50:00.357245 1934 state_mem.go:35] "Initializing new in-memory state store" Mar 17 21:50:00.357482 kubelet[1934]: I0317 21:50:00.357452 1934 state_mem.go:75] "Updated machine memory state" Mar 17 21:50:00.365425 kubelet[1934]: I0317 21:50:00.365381 1934 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 21:50:00.365790 kubelet[1934]: I0317 21:50:00.365745 1934 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 17 21:50:00.365894 kubelet[1934]: I0317 21:50:00.365803 1934 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 21:50:00.370180 kubelet[1934]: I0317 21:50:00.369114 1934 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 21:50:00.370571 kubelet[1934]: E0317 21:50:00.370539 1934 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Mar 17 21:50:00.404472 kubelet[1934]: I0317 21:50:00.404417 1934 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-87dtj.gb1.brightbox.com" Mar 17 21:50:00.414319 kubelet[1934]: I0317 21:50:00.414279 1934 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-87dtj.gb1.brightbox.com" Mar 17 21:50:00.421653 kubelet[1934]: W0317 21:50:00.421594 1934 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 17 21:50:00.422352 kubelet[1934]: W0317 21:50:00.422313 1934 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 17 21:50:00.423506 kubelet[1934]: I0317 21:50:00.423438 1934 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-87dtj.gb1.brightbox.com" Mar 17 21:50:00.435629 kubelet[1934]: W0317 21:50:00.435560 1934 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 17 21:50:00.505322 kubelet[1934]: I0317 21:50:00.505131 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e290ecd985e9dbcc26fd466698bc031f-k8s-certs\") pod \"kube-apiserver-srv-87dtj.gb1.brightbox.com\" (UID: \"e290ecd985e9dbcc26fd466698bc031f\") " pod="kube-system/kube-apiserver-srv-87dtj.gb1.brightbox.com" Mar 17 21:50:00.505322 kubelet[1934]: I0317 21:50:00.505244 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/650ee8a86b37a22242e677b683cda7e1-ca-certs\") pod \"kube-controller-manager-srv-87dtj.gb1.brightbox.com\" (UID: \"650ee8a86b37a22242e677b683cda7e1\") " pod="kube-system/kube-controller-manager-srv-87dtj.gb1.brightbox.com" Mar 17 21:50:00.507255 kubelet[1934]: I0317 21:50:00.507220 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/650ee8a86b37a22242e677b683cda7e1-k8s-certs\") pod \"kube-controller-manager-srv-87dtj.gb1.brightbox.com\" (UID: \"650ee8a86b37a22242e677b683cda7e1\") " pod="kube-system/kube-controller-manager-srv-87dtj.gb1.brightbox.com" Mar 17 21:50:00.507379 kubelet[1934]: I0317 21:50:00.507287 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/650ee8a86b37a22242e677b683cda7e1-kubeconfig\") pod \"kube-controller-manager-srv-87dtj.gb1.brightbox.com\" (UID: \"650ee8a86b37a22242e677b683cda7e1\") " pod="kube-system/kube-controller-manager-srv-87dtj.gb1.brightbox.com" Mar 17 21:50:00.507379 kubelet[1934]: I0317 21:50:00.507367 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e290ecd985e9dbcc26fd466698bc031f-ca-certs\") pod \"kube-apiserver-srv-87dtj.gb1.brightbox.com\" (UID: \"e290ecd985e9dbcc26fd466698bc031f\") " pod="kube-system/kube-apiserver-srv-87dtj.gb1.brightbox.com" Mar 17 21:50:00.507522 kubelet[1934]: I0317 21:50:00.507402 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e290ecd985e9dbcc26fd466698bc031f-usr-share-ca-certificates\") pod \"kube-apiserver-srv-87dtj.gb1.brightbox.com\" (UID: \"e290ecd985e9dbcc26fd466698bc031f\") " pod="kube-system/kube-apiserver-srv-87dtj.gb1.brightbox.com" Mar 17 21:50:00.507522 kubelet[1934]: I0317 21:50:00.507455 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/650ee8a86b37a22242e677b683cda7e1-flexvolume-dir\") pod \"kube-controller-manager-srv-87dtj.gb1.brightbox.com\" (UID: \"650ee8a86b37a22242e677b683cda7e1\") " pod="kube-system/kube-controller-manager-srv-87dtj.gb1.brightbox.com" Mar 17 21:50:00.507522 kubelet[1934]: I0317 21:50:00.507485 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/650ee8a86b37a22242e677b683cda7e1-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-87dtj.gb1.brightbox.com\" (UID: \"650ee8a86b37a22242e677b683cda7e1\") " pod="kube-system/kube-controller-manager-srv-87dtj.gb1.brightbox.com" Mar 17 21:50:00.507730 kubelet[1934]: I0317 21:50:00.507599 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3a498653509a23e741a6015ca35dac0f-kubeconfig\") pod \"kube-scheduler-srv-87dtj.gb1.brightbox.com\" (UID: \"3a498653509a23e741a6015ca35dac0f\") " pod="kube-system/kube-scheduler-srv-87dtj.gb1.brightbox.com" Mar 17 21:50:00.517033 kubelet[1934]: I0317 21:50:00.516988 1934 kubelet_node_status.go:76] "Attempting to register node" node="srv-87dtj.gb1.brightbox.com" Mar 17 21:50:00.530664 kubelet[1934]: I0317 21:50:00.530612 1934 kubelet_node_status.go:125] "Node was previously registered" node="srv-87dtj.gb1.brightbox.com" Mar 17 21:50:00.530862 kubelet[1934]: I0317 21:50:00.530781 1934 kubelet_node_status.go:79] "Successfully registered node" node="srv-87dtj.gb1.brightbox.com" Mar 17 21:50:01.052281 sudo[1944]: pam_unix(sudo:session): session closed for user root Mar 17 21:50:01.162208 kubelet[1934]: I0317 21:50:01.162135 1934 apiserver.go:52] "Watching apiserver" Mar 17 21:50:01.200561 kubelet[1934]: I0317 21:50:01.200506 1934 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 17 21:50:01.394587 kubelet[1934]: I0317 21:50:01.394356 1934 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-87dtj.gb1.brightbox.com" podStartSLOduration=1.394301251 podStartE2EDuration="1.394301251s" podCreationTimestamp="2025-03-17 21:50:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 21:50:01.391945102 +0000 UTC m=+1.415758771" watchObservedRunningTime="2025-03-17 21:50:01.394301251 +0000 UTC m=+1.418114911" Mar 17 21:50:01.423718 kubelet[1934]: I0317 21:50:01.423657 1934 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-87dtj.gb1.brightbox.com" podStartSLOduration=1.423614141 podStartE2EDuration="1.423614141s" podCreationTimestamp="2025-03-17 21:50:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 21:50:01.409505044 +0000 UTC m=+1.433318712" 
watchObservedRunningTime="2025-03-17 21:50:01.423614141 +0000 UTC m=+1.447427796" Mar 17 21:50:01.448944 kubelet[1934]: I0317 21:50:01.448850 1934 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-87dtj.gb1.brightbox.com" podStartSLOduration=1.448801191 podStartE2EDuration="1.448801191s" podCreationTimestamp="2025-03-17 21:50:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 21:50:01.425924537 +0000 UTC m=+1.449738204" watchObservedRunningTime="2025-03-17 21:50:01.448801191 +0000 UTC m=+1.472614852" Mar 17 21:50:01.905884 kubelet[1934]: I0317 21:50:01.905803 1934 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 17 21:50:01.907505 env[1194]: time="2025-03-17T21:50:01.907374136Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 17 21:50:01.908886 kubelet[1934]: I0317 21:50:01.908842 1934 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 17 21:50:02.464792 systemd[1]: Created slice kubepods-besteffort-pod249e7ff1_aeb2_4453_bbde_8016ec7ea0b8.slice. Mar 17 21:50:02.484154 systemd[1]: Created slice kubepods-burstable-podd78bf6f6_f58d_46ce_bfbf_940eba5f5d4c.slice. Mar 17 21:50:02.495256 kubelet[1934]: W0317 21:50:02.494673 1934 reflector.go:569] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:srv-87dtj.gb1.brightbox.com" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'srv-87dtj.gb1.brightbox.com' and this object Mar 17 21:50:02.495256 kubelet[1934]: E0317 21:50:02.494773 1934 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:srv-87dtj.gb1.brightbox.com\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'srv-87dtj.gb1.brightbox.com' and this object" logger="UnhandledError" Mar 17 21:50:02.496081 kubelet[1934]: W0317 21:50:02.496048 1934 reflector.go:569] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:srv-87dtj.gb1.brightbox.com" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'srv-87dtj.gb1.brightbox.com' and this object Mar 17 21:50:02.496192 kubelet[1934]: E0317 21:50:02.496087 1934 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:srv-87dtj.gb1.brightbox.com\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'srv-87dtj.gb1.brightbox.com' and this object" logger="UnhandledError" Mar 17 21:50:02.496388 kubelet[1934]: W0317 21:50:02.496321 1934 reflector.go:569] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:srv-87dtj.gb1.brightbox.com" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 
'srv-87dtj.gb1.brightbox.com' and this object Mar 17 21:50:02.496610 kubelet[1934]: E0317 21:50:02.496575 1934 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:srv-87dtj.gb1.brightbox.com\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'srv-87dtj.gb1.brightbox.com' and this object" logger="UnhandledError" Mar 17 21:50:02.525439 kubelet[1934]: I0317 21:50:02.525318 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c-lib-modules\") pod \"cilium-hlgmc\" (UID: \"d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c\") " pod="kube-system/cilium-hlgmc" Mar 17 21:50:02.525829 kubelet[1934]: I0317 21:50:02.525790 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/249e7ff1-aeb2-4453-bbde-8016ec7ea0b8-xtables-lock\") pod \"kube-proxy-5jrqn\" (UID: \"249e7ff1-aeb2-4453-bbde-8016ec7ea0b8\") " pod="kube-system/kube-proxy-5jrqn" Mar 17 21:50:02.526017 kubelet[1934]: I0317 21:50:02.525988 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c-cilium-run\") pod \"cilium-hlgmc\" (UID: \"d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c\") " pod="kube-system/cilium-hlgmc" Mar 17 21:50:02.526194 kubelet[1934]: I0317 21:50:02.526165 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c-host-proc-sys-net\") pod \"cilium-hlgmc\" (UID: \"d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c\") " pod="kube-system/cilium-hlgmc" Mar 17 21:50:02.526401 kubelet[1934]: I0317 21:50:02.526374 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/249e7ff1-aeb2-4453-bbde-8016ec7ea0b8-lib-modules\") pod \"kube-proxy-5jrqn\" (UID: \"249e7ff1-aeb2-4453-bbde-8016ec7ea0b8\") " pod="kube-system/kube-proxy-5jrqn" Mar 17 21:50:02.526605 kubelet[1934]: I0317 21:50:02.526576 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c-cilium-config-path\") pod \"cilium-hlgmc\" (UID: \"d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c\") " pod="kube-system/cilium-hlgmc" Mar 17 21:50:02.526773 kubelet[1934]: I0317 21:50:02.526742 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p56t5\" (UniqueName: \"kubernetes.io/projected/d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c-kube-api-access-p56t5\") pod \"cilium-hlgmc\" (UID: \"d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c\") " pod="kube-system/cilium-hlgmc" Mar 17 21:50:02.526946 kubelet[1934]: I0317 21:50:02.526911 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6xvt\" (UniqueName: \"kubernetes.io/projected/249e7ff1-aeb2-4453-bbde-8016ec7ea0b8-kube-api-access-d6xvt\") pod \"kube-proxy-5jrqn\" (UID: \"249e7ff1-aeb2-4453-bbde-8016ec7ea0b8\") " 
pod="kube-system/kube-proxy-5jrqn" Mar 17 21:50:02.527145 kubelet[1934]: I0317 21:50:02.527116 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c-bpf-maps\") pod \"cilium-hlgmc\" (UID: \"d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c\") " pod="kube-system/cilium-hlgmc" Mar 17 21:50:02.527347 kubelet[1934]: I0317 21:50:02.527293 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c-etc-cni-netd\") pod \"cilium-hlgmc\" (UID: \"d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c\") " pod="kube-system/cilium-hlgmc" Mar 17 21:50:02.527522 kubelet[1934]: I0317 21:50:02.527476 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c-xtables-lock\") pod \"cilium-hlgmc\" (UID: \"d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c\") " pod="kube-system/cilium-hlgmc" Mar 17 21:50:02.527690 kubelet[1934]: I0317 21:50:02.527663 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c-cilium-cgroup\") pod \"cilium-hlgmc\" (UID: \"d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c\") " pod="kube-system/cilium-hlgmc" Mar 17 21:50:02.527845 kubelet[1934]: I0317 21:50:02.527817 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/249e7ff1-aeb2-4453-bbde-8016ec7ea0b8-kube-proxy\") pod \"kube-proxy-5jrqn\" (UID: \"249e7ff1-aeb2-4453-bbde-8016ec7ea0b8\") " pod="kube-system/kube-proxy-5jrqn" Mar 17 21:50:02.528015 kubelet[1934]: I0317 21:50:02.527987 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c-hostproc\") pod \"cilium-hlgmc\" (UID: \"d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c\") " pod="kube-system/cilium-hlgmc" Mar 17 21:50:02.528209 kubelet[1934]: I0317 21:50:02.528181 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c-cni-path\") pod \"cilium-hlgmc\" (UID: \"d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c\") " pod="kube-system/cilium-hlgmc" Mar 17 21:50:02.528420 kubelet[1934]: I0317 21:50:02.528378 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c-host-proc-sys-kernel\") pod \"cilium-hlgmc\" (UID: \"d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c\") " pod="kube-system/cilium-hlgmc" Mar 17 21:50:02.528640 kubelet[1934]: I0317 21:50:02.528612 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c-hubble-tls\") pod \"cilium-hlgmc\" (UID: \"d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c\") " pod="kube-system/cilium-hlgmc" Mar 17 21:50:02.528806 kubelet[1934]: I0317 21:50:02.528777 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" 
(UniqueName: \"kubernetes.io/secret/d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c-clustermesh-secrets\") pod \"cilium-hlgmc\" (UID: \"d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c\") " pod="kube-system/cilium-hlgmc" Mar 17 21:50:02.652016 kubelet[1934]: I0317 21:50:02.651962 1934 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Mar 17 21:50:02.974602 systemd[1]: Created slice kubepods-besteffort-podc356167f_8513_4fd0_a2e1_4754e4d0ef94.slice. Mar 17 21:50:03.033185 kubelet[1934]: I0317 21:50:03.033121 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c356167f-8513-4fd0-a2e1-4754e4d0ef94-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-m4wj8\" (UID: \"c356167f-8513-4fd0-a2e1-4754e4d0ef94\") " pod="kube-system/cilium-operator-6c4d7847fc-m4wj8" Mar 17 21:50:03.033536 kubelet[1934]: I0317 21:50:03.033504 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8r6n\" (UniqueName: \"kubernetes.io/projected/c356167f-8513-4fd0-a2e1-4754e4d0ef94-kube-api-access-l8r6n\") pod \"cilium-operator-6c4d7847fc-m4wj8\" (UID: \"c356167f-8513-4fd0-a2e1-4754e4d0ef94\") " pod="kube-system/cilium-operator-6c4d7847fc-m4wj8" Mar 17 21:50:03.076720 env[1194]: time="2025-03-17T21:50:03.076635424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5jrqn,Uid:249e7ff1-aeb2-4453-bbde-8016ec7ea0b8,Namespace:kube-system,Attempt:0,}" Mar 17 21:50:03.109401 env[1194]: time="2025-03-17T21:50:03.108669885Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 21:50:03.109401 env[1194]: time="2025-03-17T21:50:03.108759828Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 21:50:03.109401 env[1194]: time="2025-03-17T21:50:03.108778634Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 21:50:03.109827 env[1194]: time="2025-03-17T21:50:03.109461926Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/de147dde0e34287a381d633e2ab6451a07d6e812b3022d92545e74f54cb52a89 pid=1990 runtime=io.containerd.runc.v2 Mar 17 21:50:03.195400 systemd[1]: Started cri-containerd-de147dde0e34287a381d633e2ab6451a07d6e812b3022d92545e74f54cb52a89.scope. 
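The RunPodSandbox entry above returns sandbox id de147dde... for kube-proxy-5jrqn, and the surrounding lines show the two host-side names derived from that id: the runc v2 shim's task directory under /run/containerd and the transient cri-containerd-<id>.scope unit that systemd reports as started. A minimal sketch of that naming, using the id from the log; the function name and dict keys are illustrative only, not part of any containerd or kubelet API.

def sandbox_artifacts(sandbox_id: str, namespace: str = "k8s.io") -> dict:
    # Mirrors the two names visible in the log for this sandbox:
    # the shim's task directory and the transient systemd scope.
    return {
        "task_dir": f"/run/containerd/io.containerd.runtime.v2.task/{namespace}/{sandbox_id}",
        "systemd_scope": f"cri-containerd-{sandbox_id}.scope",
    }

sid = "de147dde0e34287a381d633e2ab6451a07d6e812b3022d92545e74f54cb52a89"
for key, value in sandbox_artifacts(sid).items():
    print(key, value)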
Mar 17 21:50:03.286667 env[1194]: time="2025-03-17T21:50:03.286475441Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5jrqn,Uid:249e7ff1-aeb2-4453-bbde-8016ec7ea0b8,Namespace:kube-system,Attempt:0,} returns sandbox id \"de147dde0e34287a381d633e2ab6451a07d6e812b3022d92545e74f54cb52a89\"" Mar 17 21:50:03.297954 env[1194]: time="2025-03-17T21:50:03.296215559Z" level=info msg="CreateContainer within sandbox \"de147dde0e34287a381d633e2ab6451a07d6e812b3022d92545e74f54cb52a89\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 17 21:50:03.301692 kubelet[1934]: E0317 21:50:03.301589 1934 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[cilium-config-path clustermesh-secrets hubble-tls], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-hlgmc" podUID="d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c" Mar 17 21:50:03.320364 env[1194]: time="2025-03-17T21:50:03.320284932Z" level=info msg="CreateContainer within sandbox \"de147dde0e34287a381d633e2ab6451a07d6e812b3022d92545e74f54cb52a89\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1f59c9325324b3e6e0861034f28ddd24e86183a4e217f377895694cdbb0507dd\"" Mar 17 21:50:03.321363 env[1194]: time="2025-03-17T21:50:03.321297071Z" level=info msg="StartContainer for \"1f59c9325324b3e6e0861034f28ddd24e86183a4e217f377895694cdbb0507dd\"" Mar 17 21:50:03.379385 systemd[1]: Started cri-containerd-1f59c9325324b3e6e0861034f28ddd24e86183a4e217f377895694cdbb0507dd.scope. Mar 17 21:50:03.439251 kubelet[1934]: I0317 21:50:03.437777 1934 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c-hostproc\") pod \"d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c\" (UID: \"d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c\") " Mar 17 21:50:03.439251 kubelet[1934]: I0317 21:50:03.437873 1934 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c-cilium-cgroup\") pod \"d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c\" (UID: \"d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c\") " Mar 17 21:50:03.439251 kubelet[1934]: I0317 21:50:03.437916 1934 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c-lib-modules\") pod \"d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c\" (UID: \"d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c\") " Mar 17 21:50:03.439251 kubelet[1934]: I0317 21:50:03.437977 1934 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p56t5\" (UniqueName: \"kubernetes.io/projected/d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c-kube-api-access-p56t5\") pod \"d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c\" (UID: \"d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c\") " Mar 17 21:50:03.439251 kubelet[1934]: I0317 21:50:03.438034 1934 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c-cilium-run\") pod \"d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c\" (UID: \"d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c\") " Mar 17 21:50:03.439251 kubelet[1934]: I0317 21:50:03.438063 1934 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c-bpf-maps\") pod 
\"d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c\" (UID: \"d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c\") " Mar 17 21:50:03.439794 kubelet[1934]: I0317 21:50:03.438119 1934 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c-host-proc-sys-kernel\") pod \"d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c\" (UID: \"d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c\") " Mar 17 21:50:03.439794 kubelet[1934]: I0317 21:50:03.438151 1934 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c-cni-path\") pod \"d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c\" (UID: \"d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c\") " Mar 17 21:50:03.439794 kubelet[1934]: I0317 21:50:03.438211 1934 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c-cilium-config-path\") pod \"d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c\" (UID: \"d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c\") " Mar 17 21:50:03.439794 kubelet[1934]: I0317 21:50:03.438271 1934 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c-xtables-lock\") pod \"d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c\" (UID: \"d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c\") " Mar 17 21:50:03.439794 kubelet[1934]: I0317 21:50:03.438299 1934 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c-host-proc-sys-net\") pod \"d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c\" (UID: \"d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c\") " Mar 17 21:50:03.439794 kubelet[1934]: I0317 21:50:03.438384 1934 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c-etc-cni-netd\") pod \"d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c\" (UID: \"d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c\") " Mar 17 21:50:03.440154 kubelet[1934]: I0317 21:50:03.438550 1934 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c" (UID: "d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 21:50:03.440154 kubelet[1934]: I0317 21:50:03.438608 1934 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c" (UID: "d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 21:50:03.440154 kubelet[1934]: I0317 21:50:03.438650 1934 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c-hostproc" (OuterVolumeSpecName: "hostproc") pod "d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c" (UID: "d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 21:50:03.440154 kubelet[1934]: I0317 21:50:03.438683 1934 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c" (UID: "d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 21:50:03.440154 kubelet[1934]: I0317 21:50:03.438681 1934 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c" (UID: "d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 21:50:03.440482 kubelet[1934]: I0317 21:50:03.438746 1934 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c" (UID: "d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 21:50:03.440482 kubelet[1934]: I0317 21:50:03.438758 1934 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c-cni-path" (OuterVolumeSpecName: "cni-path") pod "d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c" (UID: "d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 21:50:03.447899 kubelet[1934]: I0317 21:50:03.447802 1934 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c" (UID: "d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 17 21:50:03.448018 kubelet[1934]: I0317 21:50:03.447957 1934 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c" (UID: "d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 21:50:03.448018 kubelet[1934]: I0317 21:50:03.448003 1934 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c" (UID: "d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 21:50:03.448180 kubelet[1934]: I0317 21:50:03.448049 1934 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c" (UID: "d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 21:50:03.452015 kubelet[1934]: I0317 21:50:03.451379 1934 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c-kube-api-access-p56t5" (OuterVolumeSpecName: "kube-api-access-p56t5") pod "d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c" (UID: "d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c"). InnerVolumeSpecName "kube-api-access-p56t5". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 17 21:50:03.478555 env[1194]: time="2025-03-17T21:50:03.478055383Z" level=info msg="StartContainer for \"1f59c9325324b3e6e0861034f28ddd24e86183a4e217f377895694cdbb0507dd\" returns successfully" Mar 17 21:50:03.539114 kubelet[1934]: I0317 21:50:03.538914 1934 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c-lib-modules\") on node \"srv-87dtj.gb1.brightbox.com\" DevicePath \"\"" Mar 17 21:50:03.539114 kubelet[1934]: I0317 21:50:03.538978 1934 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-p56t5\" (UniqueName: \"kubernetes.io/projected/d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c-kube-api-access-p56t5\") on node \"srv-87dtj.gb1.brightbox.com\" DevicePath \"\"" Mar 17 21:50:03.539114 kubelet[1934]: I0317 21:50:03.539002 1934 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c-cilium-run\") on node \"srv-87dtj.gb1.brightbox.com\" DevicePath \"\"" Mar 17 21:50:03.539114 kubelet[1934]: I0317 21:50:03.539020 1934 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c-bpf-maps\") on node \"srv-87dtj.gb1.brightbox.com\" DevicePath \"\"" Mar 17 21:50:03.539114 kubelet[1934]: I0317 21:50:03.539050 1934 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c-host-proc-sys-kernel\") on node \"srv-87dtj.gb1.brightbox.com\" DevicePath \"\"" Mar 17 21:50:03.539114 kubelet[1934]: I0317 21:50:03.539070 1934 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c-cni-path\") on node \"srv-87dtj.gb1.brightbox.com\" DevicePath \"\"" Mar 17 21:50:03.539114 kubelet[1934]: I0317 21:50:03.539094 1934 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c-cilium-config-path\") on node \"srv-87dtj.gb1.brightbox.com\" DevicePath \"\"" Mar 17 21:50:03.539114 kubelet[1934]: I0317 21:50:03.539111 1934 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c-xtables-lock\") on node \"srv-87dtj.gb1.brightbox.com\" DevicePath \"\"" Mar 17 21:50:03.540389 kubelet[1934]: I0317 21:50:03.539128 1934 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c-host-proc-sys-net\") on node \"srv-87dtj.gb1.brightbox.com\" DevicePath \"\"" Mar 17 21:50:03.540389 kubelet[1934]: I0317 21:50:03.539143 1934 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c-etc-cni-netd\") on node 
\"srv-87dtj.gb1.brightbox.com\" DevicePath \"\"" Mar 17 21:50:03.540389 kubelet[1934]: I0317 21:50:03.539159 1934 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c-hostproc\") on node \"srv-87dtj.gb1.brightbox.com\" DevicePath \"\"" Mar 17 21:50:03.540389 kubelet[1934]: I0317 21:50:03.539175 1934 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c-cilium-cgroup\") on node \"srv-87dtj.gb1.brightbox.com\" DevicePath \"\"" Mar 17 21:50:03.583590 env[1194]: time="2025-03-17T21:50:03.583511442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-m4wj8,Uid:c356167f-8513-4fd0-a2e1-4754e4d0ef94,Namespace:kube-system,Attempt:0,}" Mar 17 21:50:03.608850 env[1194]: time="2025-03-17T21:50:03.608739736Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 21:50:03.609220 env[1194]: time="2025-03-17T21:50:03.609165186Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 21:50:03.609435 env[1194]: time="2025-03-17T21:50:03.609379607Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 21:50:03.609992 env[1194]: time="2025-03-17T21:50:03.609890962Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7f9a034bbf88de7dd88a1d025d423c763c9597a37c8774ad59a88d8879c45ea0 pid=2078 runtime=io.containerd.runc.v2 Mar 17 21:50:03.628566 systemd[1]: Started cri-containerd-7f9a034bbf88de7dd88a1d025d423c763c9597a37c8774ad59a88d8879c45ea0.scope. Mar 17 21:50:03.633387 kubelet[1934]: E0317 21:50:03.632719 1934 projected.go:263] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition Mar 17 21:50:03.633387 kubelet[1934]: E0317 21:50:03.632786 1934 projected.go:194] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-hlgmc: failed to sync secret cache: timed out waiting for the condition Mar 17 21:50:03.633387 kubelet[1934]: E0317 21:50:03.632910 1934 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c-hubble-tls podName:d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c nodeName:}" failed. No retries permitted until 2025-03-17 21:50:04.13287198 +0000 UTC m=+4.156685644 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c-hubble-tls") pod "cilium-hlgmc" (UID: "d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c") : failed to sync secret cache: timed out waiting for the condition Mar 17 21:50:03.633387 kubelet[1934]: E0317 21:50:03.633259 1934 secret.go:189] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition Mar 17 21:50:03.633387 kubelet[1934]: E0317 21:50:03.633311 1934 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c-clustermesh-secrets podName:d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c nodeName:}" failed. No retries permitted until 2025-03-17 21:50:04.133297606 +0000 UTC m=+4.157111266 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c-clustermesh-secrets") pod "cilium-hlgmc" (UID: "d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c") : failed to sync secret cache: timed out waiting for the condition Mar 17 21:50:03.714263 env[1194]: time="2025-03-17T21:50:03.714202947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-m4wj8,Uid:c356167f-8513-4fd0-a2e1-4754e4d0ef94,Namespace:kube-system,Attempt:0,} returns sandbox id \"7f9a034bbf88de7dd88a1d025d423c763c9597a37c8774ad59a88d8879c45ea0\"" Mar 17 21:50:03.717622 env[1194]: time="2025-03-17T21:50:03.717517406Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 17 21:50:03.787392 systemd[1]: var-lib-kubelet-pods-d78bf6f6\x2df58d\x2d46ce\x2dbfbf\x2d940eba5f5d4c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dp56t5.mount: Deactivated successfully. Mar 17 21:50:04.243967 kubelet[1934]: I0317 21:50:04.243908 1934 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c-hubble-tls\") pod \"d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c\" (UID: \"d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c\") " Mar 17 21:50:04.244822 kubelet[1934]: I0317 21:50:04.244795 1934 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c-clustermesh-secrets\") pod \"d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c\" (UID: \"d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c\") " Mar 17 21:50:04.250645 systemd[1]: var-lib-kubelet-pods-d78bf6f6\x2df58d\x2d46ce\x2dbfbf\x2d940eba5f5d4c-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 17 21:50:04.252567 kubelet[1934]: I0317 21:50:04.251169 1934 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c" (UID: "d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 17 21:50:04.254848 systemd[1]: var-lib-kubelet-pods-d78bf6f6\x2df58d\x2d46ce\x2dbfbf\x2d940eba5f5d4c-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 17 21:50:04.256817 kubelet[1934]: I0317 21:50:04.256762 1934 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c" (UID: "d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 17 21:50:04.303428 systemd[1]: Removed slice kubepods-burstable-podd78bf6f6_f58d_46ce_bfbf_940eba5f5d4c.slice. 
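The two MountVolume.SetUp failures above (hubble-tls and clustermesh-secrets) each state durationBeforeRetry 500ms alongside a "No retries permitted until" deadline, and the gap between the failure and that deadline is indeed about half a second. A small check, with the timestamps copied from the hubble-tls entry (nanoseconds trimmed to microseconds):

from datetime import datetime, timedelta, timezone

fail_logged = datetime(2025, 3, 17, 21, 50, 3, 632910, tzinfo=timezone.utc)   # E0317 21:50:03.632910
retry_after = datetime(2025, 3, 17, 21, 50, 4, 132872, tzinfo=timezone.utc)   # "No retries permitted until ... 21:50:04.13287198"

backoff = retry_after - fail_logged
print(backoff)                                  # 0:00:00.499962
print(backoff >= timedelta(milliseconds=499))   # True, i.e. roughly the stated 500ms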
Mar 17 21:50:04.346265 kubelet[1934]: I0317 21:50:04.346223 1934 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c-hubble-tls\") on node \"srv-87dtj.gb1.brightbox.com\" DevicePath \"\"" Mar 17 21:50:04.346632 kubelet[1934]: I0317 21:50:04.346580 1934 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c-clustermesh-secrets\") on node \"srv-87dtj.gb1.brightbox.com\" DevicePath \"\"" Mar 17 21:50:04.462969 kubelet[1934]: I0317 21:50:04.462872 1934 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5jrqn" podStartSLOduration=2.462808797 podStartE2EDuration="2.462808797s" podCreationTimestamp="2025-03-17 21:50:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 21:50:04.437447384 +0000 UTC m=+4.461261052" watchObservedRunningTime="2025-03-17 21:50:04.462808797 +0000 UTC m=+4.486622458" Mar 17 21:50:04.496791 systemd[1]: Created slice kubepods-burstable-pod79a4216a_bb7c_4f9a_be38_fd37aeb60bfb.slice. Mar 17 21:50:04.548606 kubelet[1934]: I0317 21:50:04.548537 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/79a4216a-bb7c-4f9a-be38-fd37aeb60bfb-lib-modules\") pod \"cilium-xdx7z\" (UID: \"79a4216a-bb7c-4f9a-be38-fd37aeb60bfb\") " pod="kube-system/cilium-xdx7z" Mar 17 21:50:04.548606 kubelet[1934]: I0317 21:50:04.548611 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/79a4216a-bb7c-4f9a-be38-fd37aeb60bfb-xtables-lock\") pod \"cilium-xdx7z\" (UID: \"79a4216a-bb7c-4f9a-be38-fd37aeb60bfb\") " pod="kube-system/cilium-xdx7z" Mar 17 21:50:04.549423 kubelet[1934]: I0317 21:50:04.548646 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/79a4216a-bb7c-4f9a-be38-fd37aeb60bfb-hubble-tls\") pod \"cilium-xdx7z\" (UID: \"79a4216a-bb7c-4f9a-be38-fd37aeb60bfb\") " pod="kube-system/cilium-xdx7z" Mar 17 21:50:04.549423 kubelet[1934]: I0317 21:50:04.548679 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/79a4216a-bb7c-4f9a-be38-fd37aeb60bfb-host-proc-sys-kernel\") pod \"cilium-xdx7z\" (UID: \"79a4216a-bb7c-4f9a-be38-fd37aeb60bfb\") " pod="kube-system/cilium-xdx7z" Mar 17 21:50:04.549423 kubelet[1934]: I0317 21:50:04.548726 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/79a4216a-bb7c-4f9a-be38-fd37aeb60bfb-etc-cni-netd\") pod \"cilium-xdx7z\" (UID: \"79a4216a-bb7c-4f9a-be38-fd37aeb60bfb\") " pod="kube-system/cilium-xdx7z" Mar 17 21:50:04.549423 kubelet[1934]: I0317 21:50:04.548767 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/79a4216a-bb7c-4f9a-be38-fd37aeb60bfb-cilium-config-path\") pod \"cilium-xdx7z\" (UID: \"79a4216a-bb7c-4f9a-be38-fd37aeb60bfb\") " pod="kube-system/cilium-xdx7z" Mar 17 21:50:04.549423 kubelet[1934]: I0317 21:50:04.548795 
1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/79a4216a-bb7c-4f9a-be38-fd37aeb60bfb-cilium-cgroup\") pod \"cilium-xdx7z\" (UID: \"79a4216a-bb7c-4f9a-be38-fd37aeb60bfb\") " pod="kube-system/cilium-xdx7z" Mar 17 21:50:04.549423 kubelet[1934]: I0317 21:50:04.548820 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/79a4216a-bb7c-4f9a-be38-fd37aeb60bfb-cni-path\") pod \"cilium-xdx7z\" (UID: \"79a4216a-bb7c-4f9a-be38-fd37aeb60bfb\") " pod="kube-system/cilium-xdx7z" Mar 17 21:50:04.549743 kubelet[1934]: I0317 21:50:04.548867 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/79a4216a-bb7c-4f9a-be38-fd37aeb60bfb-bpf-maps\") pod \"cilium-xdx7z\" (UID: \"79a4216a-bb7c-4f9a-be38-fd37aeb60bfb\") " pod="kube-system/cilium-xdx7z" Mar 17 21:50:04.549743 kubelet[1934]: I0317 21:50:04.548914 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/79a4216a-bb7c-4f9a-be38-fd37aeb60bfb-hostproc\") pod \"cilium-xdx7z\" (UID: \"79a4216a-bb7c-4f9a-be38-fd37aeb60bfb\") " pod="kube-system/cilium-xdx7z" Mar 17 21:50:04.549743 kubelet[1934]: I0317 21:50:04.548971 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/79a4216a-bb7c-4f9a-be38-fd37aeb60bfb-clustermesh-secrets\") pod \"cilium-xdx7z\" (UID: \"79a4216a-bb7c-4f9a-be38-fd37aeb60bfb\") " pod="kube-system/cilium-xdx7z" Mar 17 21:50:04.549743 kubelet[1934]: I0317 21:50:04.548999 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhps6\" (UniqueName: \"kubernetes.io/projected/79a4216a-bb7c-4f9a-be38-fd37aeb60bfb-kube-api-access-vhps6\") pod \"cilium-xdx7z\" (UID: \"79a4216a-bb7c-4f9a-be38-fd37aeb60bfb\") " pod="kube-system/cilium-xdx7z" Mar 17 21:50:04.549743 kubelet[1934]: I0317 21:50:04.549033 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/79a4216a-bb7c-4f9a-be38-fd37aeb60bfb-cilium-run\") pod \"cilium-xdx7z\" (UID: \"79a4216a-bb7c-4f9a-be38-fd37aeb60bfb\") " pod="kube-system/cilium-xdx7z" Mar 17 21:50:04.549743 kubelet[1934]: I0317 21:50:04.549065 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/79a4216a-bb7c-4f9a-be38-fd37aeb60bfb-host-proc-sys-net\") pod \"cilium-xdx7z\" (UID: \"79a4216a-bb7c-4f9a-be38-fd37aeb60bfb\") " pod="kube-system/cilium-xdx7z" Mar 17 21:50:04.804285 env[1194]: time="2025-03-17T21:50:04.803486667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xdx7z,Uid:79a4216a-bb7c-4f9a-be38-fd37aeb60bfb,Namespace:kube-system,Attempt:0,}" Mar 17 21:50:04.826888 env[1194]: time="2025-03-17T21:50:04.826648503Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 21:50:04.826888 env[1194]: time="2025-03-17T21:50:04.826703207Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 21:50:04.826888 env[1194]: time="2025-03-17T21:50:04.826720797Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 21:50:04.827217 env[1194]: time="2025-03-17T21:50:04.826934845Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/30a72eeeb5b8d1b9d2c04876a5442bda9c71b26cc619ffd614d289d63f5803dd pid=2256 runtime=io.containerd.runc.v2 Mar 17 21:50:04.857632 systemd[1]: Started cri-containerd-30a72eeeb5b8d1b9d2c04876a5442bda9c71b26cc619ffd614d289d63f5803dd.scope. Mar 17 21:50:04.910735 env[1194]: time="2025-03-17T21:50:04.910673243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xdx7z,Uid:79a4216a-bb7c-4f9a-be38-fd37aeb60bfb,Namespace:kube-system,Attempt:0,} returns sandbox id \"30a72eeeb5b8d1b9d2c04876a5442bda9c71b26cc619ffd614d289d63f5803dd\"" Mar 17 21:50:06.298929 kubelet[1934]: I0317 21:50:06.298865 1934 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c" path="/var/lib/kubelet/pods/d78bf6f6-f58d-46ce-bfbf-940eba5f5d4c/volumes" Mar 17 21:50:06.853251 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1734107761.mount: Deactivated successfully. Mar 17 21:50:08.426729 env[1194]: time="2025-03-17T21:50:08.426551439Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 21:50:08.429112 env[1194]: time="2025-03-17T21:50:08.429071241Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 21:50:08.431070 env[1194]: time="2025-03-17T21:50:08.431022959Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 21:50:08.431943 env[1194]: time="2025-03-17T21:50:08.431901808Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Mar 17 21:50:08.438105 env[1194]: time="2025-03-17T21:50:08.438062688Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 17 21:50:08.439212 env[1194]: time="2025-03-17T21:50:08.439168830Z" level=info msg="CreateContainer within sandbox \"7f9a034bbf88de7dd88a1d025d423c763c9597a37c8774ad59a88d8879c45ea0\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Mar 17 21:50:08.458852 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1161983185.mount: Deactivated successfully. Mar 17 21:50:08.470300 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2735036396.mount: Deactivated successfully. 
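The PullImage entry above pulls the operator image by digest and returns the locally resolved reference sha256:ed355de.... A quick sketch splitting the pinned reference from that entry into repository, tag and digest (the digest, not the tag, is what actually selects the image); the variable names are illustrative.

ref = ("quay.io/cilium/operator-generic:v1.12.5"
       "@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e")

name, _, digest = ref.partition("@")   # digest part after '@'
repo, _, tag = name.rpartition(":")    # tag after the last ':'
print(repo)    # quay.io/cilium/operator-generic
print(tag)     # v1.12.5
print(digest)  # sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e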
Mar 17 21:50:08.473583 env[1194]: time="2025-03-17T21:50:08.473524046Z" level=info msg="CreateContainer within sandbox \"7f9a034bbf88de7dd88a1d025d423c763c9597a37c8774ad59a88d8879c45ea0\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"64505c26da43f3c8848ffd6635f6d3981df7285968f000e0278a2460f762ab98\"" Mar 17 21:50:08.474257 env[1194]: time="2025-03-17T21:50:08.474219710Z" level=info msg="StartContainer for \"64505c26da43f3c8848ffd6635f6d3981df7285968f000e0278a2460f762ab98\"" Mar 17 21:50:08.514164 systemd[1]: Started cri-containerd-64505c26da43f3c8848ffd6635f6d3981df7285968f000e0278a2460f762ab98.scope. Mar 17 21:50:08.565915 env[1194]: time="2025-03-17T21:50:08.565858940Z" level=info msg="StartContainer for \"64505c26da43f3c8848ffd6635f6d3981df7285968f000e0278a2460f762ab98\" returns successfully" Mar 17 21:50:09.491649 kubelet[1934]: I0317 21:50:09.491577 1934 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-m4wj8" podStartSLOduration=2.773798811 podStartE2EDuration="7.49155445s" podCreationTimestamp="2025-03-17 21:50:02 +0000 UTC" firstStartedPulling="2025-03-17 21:50:03.716733555 +0000 UTC m=+3.740547211" lastFinishedPulling="2025-03-17 21:50:08.434489187 +0000 UTC m=+8.458302850" observedRunningTime="2025-03-17 21:50:09.485530611 +0000 UTC m=+9.509344280" watchObservedRunningTime="2025-03-17 21:50:09.49155445 +0000 UTC m=+9.515368109" Mar 17 21:50:16.631748 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3982752104.mount: Deactivated successfully. Mar 17 21:50:21.960693 env[1194]: time="2025-03-17T21:50:21.960560875Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 21:50:21.964964 env[1194]: time="2025-03-17T21:50:21.964928322Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 21:50:21.967079 env[1194]: time="2025-03-17T21:50:21.967030971Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 21:50:21.968128 env[1194]: time="2025-03-17T21:50:21.968071837Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Mar 17 21:50:21.974126 env[1194]: time="2025-03-17T21:50:21.973581995Z" level=info msg="CreateContainer within sandbox \"30a72eeeb5b8d1b9d2c04876a5442bda9c71b26cc619ffd614d289d63f5803dd\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 17 21:50:22.154261 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3071723671.mount: Deactivated successfully. Mar 17 21:50:22.164769 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3871148633.mount: Deactivated successfully. 
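The pod_startup_latency_tracker entry for cilium-operator-6c4d7847fc-m4wj8 above reports podStartE2EDuration=7.49155445s but podStartSLOduration=2.773798811s; the tracker excludes the image-pull window, and here the difference matches the firstStartedPulling-to-lastFinishedPulling interval. A short check with the values copied from that entry (nanoseconds trimmed to microseconds for strptime):

from datetime import datetime

fmt = "%Y-%m-%d %H:%M:%S.%f %z"
first_pull = datetime.strptime("2025-03-17 21:50:03.716733 +0000", fmt)
last_pull  = datetime.strptime("2025-03-17 21:50:08.434489 +0000", fmt)

e2e  = 7.49155445                                  # podStartE2EDuration, seconds
pull = (last_pull - first_pull).total_seconds()    # ~4.717756 s spent pulling the operator image
print(round(e2e - pull, 6))                        # 2.773798 -> podStartSLOduration, up to the trimmed nanoseconds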
Mar 17 21:50:22.166981 env[1194]: time="2025-03-17T21:50:22.166921936Z" level=info msg="CreateContainer within sandbox \"30a72eeeb5b8d1b9d2c04876a5442bda9c71b26cc619ffd614d289d63f5803dd\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0a49f190f561dacdd3a4324a067210883097595ef7b4f8fa5f862ee53ecefea5\"" Mar 17 21:50:22.168079 env[1194]: time="2025-03-17T21:50:22.168034420Z" level=info msg="StartContainer for \"0a49f190f561dacdd3a4324a067210883097595ef7b4f8fa5f862ee53ecefea5\"" Mar 17 21:50:22.211144 systemd[1]: Started cri-containerd-0a49f190f561dacdd3a4324a067210883097595ef7b4f8fa5f862ee53ecefea5.scope. Mar 17 21:50:22.277029 env[1194]: time="2025-03-17T21:50:22.276970565Z" level=info msg="StartContainer for \"0a49f190f561dacdd3a4324a067210883097595ef7b4f8fa5f862ee53ecefea5\" returns successfully" Mar 17 21:50:22.296590 systemd[1]: cri-containerd-0a49f190f561dacdd3a4324a067210883097595ef7b4f8fa5f862ee53ecefea5.scope: Deactivated successfully. Mar 17 21:50:22.408202 env[1194]: time="2025-03-17T21:50:22.404271457Z" level=info msg="shim disconnected" id=0a49f190f561dacdd3a4324a067210883097595ef7b4f8fa5f862ee53ecefea5 Mar 17 21:50:22.408202 env[1194]: time="2025-03-17T21:50:22.404481238Z" level=warning msg="cleaning up after shim disconnected" id=0a49f190f561dacdd3a4324a067210883097595ef7b4f8fa5f862ee53ecefea5 namespace=k8s.io Mar 17 21:50:22.408202 env[1194]: time="2025-03-17T21:50:22.404504821Z" level=info msg="cleaning up dead shim" Mar 17 21:50:22.417439 env[1194]: time="2025-03-17T21:50:22.417354131Z" level=warning msg="cleanup warnings time=\"2025-03-17T21:50:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2384 runtime=io.containerd.runc.v2\n" Mar 17 21:50:22.499214 env[1194]: time="2025-03-17T21:50:22.498467131Z" level=info msg="CreateContainer within sandbox \"30a72eeeb5b8d1b9d2c04876a5442bda9c71b26cc619ffd614d289d63f5803dd\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 17 21:50:22.515238 env[1194]: time="2025-03-17T21:50:22.514791010Z" level=info msg="CreateContainer within sandbox \"30a72eeeb5b8d1b9d2c04876a5442bda9c71b26cc619ffd614d289d63f5803dd\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"cf7b5b7a6cf465bedf5e092a40790af22b2ec0251d3d361f98e36a8a12f3c198\"" Mar 17 21:50:22.520390 env[1194]: time="2025-03-17T21:50:22.519264999Z" level=info msg="StartContainer for \"cf7b5b7a6cf465bedf5e092a40790af22b2ec0251d3d361f98e36a8a12f3c198\"" Mar 17 21:50:22.560105 systemd[1]: Started cri-containerd-cf7b5b7a6cf465bedf5e092a40790af22b2ec0251d3d361f98e36a8a12f3c198.scope. Mar 17 21:50:22.619705 env[1194]: time="2025-03-17T21:50:22.618627232Z" level=info msg="StartContainer for \"cf7b5b7a6cf465bedf5e092a40790af22b2ec0251d3d361f98e36a8a12f3c198\" returns successfully" Mar 17 21:50:22.646266 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 17 21:50:22.646697 systemd[1]: Stopped systemd-sysctl.service. Mar 17 21:50:22.647575 systemd[1]: Stopping systemd-sysctl.service... Mar 17 21:50:22.652487 systemd[1]: Starting systemd-sysctl.service... Mar 17 21:50:22.661845 systemd[1]: cri-containerd-cf7b5b7a6cf465bedf5e092a40790af22b2ec0251d3d361f98e36a8a12f3c198.scope: Deactivated successfully. Mar 17 21:50:22.696228 systemd[1]: Finished systemd-sysctl.service. 
Mar 17 21:50:22.703421 env[1194]: time="2025-03-17T21:50:22.703352963Z" level=info msg="shim disconnected" id=cf7b5b7a6cf465bedf5e092a40790af22b2ec0251d3d361f98e36a8a12f3c198 Mar 17 21:50:22.703928 env[1194]: time="2025-03-17T21:50:22.703895317Z" level=warning msg="cleaning up after shim disconnected" id=cf7b5b7a6cf465bedf5e092a40790af22b2ec0251d3d361f98e36a8a12f3c198 namespace=k8s.io Mar 17 21:50:22.704098 env[1194]: time="2025-03-17T21:50:22.704067836Z" level=info msg="cleaning up dead shim" Mar 17 21:50:22.717955 env[1194]: time="2025-03-17T21:50:22.717887851Z" level=warning msg="cleanup warnings time=\"2025-03-17T21:50:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2449 runtime=io.containerd.runc.v2\n" Mar 17 21:50:23.149413 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0a49f190f561dacdd3a4324a067210883097595ef7b4f8fa5f862ee53ecefea5-rootfs.mount: Deactivated successfully. Mar 17 21:50:23.501892 env[1194]: time="2025-03-17T21:50:23.501825201Z" level=info msg="CreateContainer within sandbox \"30a72eeeb5b8d1b9d2c04876a5442bda9c71b26cc619ffd614d289d63f5803dd\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 17 21:50:23.521770 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3422484129.mount: Deactivated successfully. Mar 17 21:50:23.532046 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1641108273.mount: Deactivated successfully. Mar 17 21:50:23.541908 env[1194]: time="2025-03-17T21:50:23.541805424Z" level=info msg="CreateContainer within sandbox \"30a72eeeb5b8d1b9d2c04876a5442bda9c71b26cc619ffd614d289d63f5803dd\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8ad88bae0ab8833e8b88220607ff7307fb0e48a52d56ae7ca3cfaaa6a23cb8d6\"" Mar 17 21:50:23.542958 env[1194]: time="2025-03-17T21:50:23.542910255Z" level=info msg="StartContainer for \"8ad88bae0ab8833e8b88220607ff7307fb0e48a52d56ae7ca3cfaaa6a23cb8d6\"" Mar 17 21:50:23.571970 systemd[1]: Started cri-containerd-8ad88bae0ab8833e8b88220607ff7307fb0e48a52d56ae7ca3cfaaa6a23cb8d6.scope. Mar 17 21:50:23.624160 env[1194]: time="2025-03-17T21:50:23.624086707Z" level=info msg="StartContainer for \"8ad88bae0ab8833e8b88220607ff7307fb0e48a52d56ae7ca3cfaaa6a23cb8d6\" returns successfully" Mar 17 21:50:23.633586 systemd[1]: cri-containerd-8ad88bae0ab8833e8b88220607ff7307fb0e48a52d56ae7ca3cfaaa6a23cb8d6.scope: Deactivated successfully. Mar 17 21:50:23.664044 env[1194]: time="2025-03-17T21:50:23.663981616Z" level=info msg="shim disconnected" id=8ad88bae0ab8833e8b88220607ff7307fb0e48a52d56ae7ca3cfaaa6a23cb8d6 Mar 17 21:50:23.664462 env[1194]: time="2025-03-17T21:50:23.664428135Z" level=warning msg="cleaning up after shim disconnected" id=8ad88bae0ab8833e8b88220607ff7307fb0e48a52d56ae7ca3cfaaa6a23cb8d6 namespace=k8s.io Mar 17 21:50:23.664686 env[1194]: time="2025-03-17T21:50:23.664656557Z" level=info msg="cleaning up dead shim" Mar 17 21:50:23.675695 env[1194]: time="2025-03-17T21:50:23.675652306Z" level=warning msg="cleanup warnings time=\"2025-03-17T21:50:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2510 runtime=io.containerd.runc.v2\n" Mar 17 21:50:24.509255 env[1194]: time="2025-03-17T21:50:24.509082321Z" level=info msg="CreateContainer within sandbox \"30a72eeeb5b8d1b9d2c04876a5442bda9c71b26cc619ffd614d289d63f5803dd\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 17 21:50:24.526852 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2591574239.mount: Deactivated successfully. 
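Each short-lived Cilium init container above ends with the same containerd pattern: its scope is deactivated, the runtime emits "shim disconnected" and "cleaning up after shim disconnected" for the container id, and a cleanup warning from the dead shim follows. A small sketch that groups those messages by container id when journal text is piped in on stdin; the regex and the stdin input are assumptions of this aside, not anything the node runs:

import re
import sys
from collections import defaultdict

# containerd logs shim teardown for each exited container as, e.g.:
#   msg="shim disconnected" id=<64-hex container id>
#   msg="cleaning up after shim disconnected" id=<...> namespace=k8s.io
SHIM_RE = re.compile(
    r'msg="(shim disconnected|cleaning up after shim disconnected)" id=([0-9a-f]{64})'
)

def shim_events(journal_text: str) -> dict:
    """Group shim lifecycle messages by container id, in log order."""
    events = defaultdict(list)
    for msg, cid in SHIM_RE.findall(journal_text):
        events[cid].append(msg)
    return events

if __name__ == "__main__":
    for cid, msgs in shim_events(sys.stdin.read()).items():
        print(cid[:12], "->", "; ".join(msgs))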
Mar 17 21:50:24.536152 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount252216920.mount: Deactivated successfully. Mar 17 21:50:24.550360 env[1194]: time="2025-03-17T21:50:24.550288357Z" level=info msg="CreateContainer within sandbox \"30a72eeeb5b8d1b9d2c04876a5442bda9c71b26cc619ffd614d289d63f5803dd\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0af2c9e2dddda8a64f02456161a9230c0027b60957b58cd31d5fcfb4cb6d3f7a\"" Mar 17 21:50:24.554129 env[1194]: time="2025-03-17T21:50:24.554077686Z" level=info msg="StartContainer for \"0af2c9e2dddda8a64f02456161a9230c0027b60957b58cd31d5fcfb4cb6d3f7a\"" Mar 17 21:50:24.596262 systemd[1]: Started cri-containerd-0af2c9e2dddda8a64f02456161a9230c0027b60957b58cd31d5fcfb4cb6d3f7a.scope. Mar 17 21:50:24.645683 systemd[1]: cri-containerd-0af2c9e2dddda8a64f02456161a9230c0027b60957b58cd31d5fcfb4cb6d3f7a.scope: Deactivated successfully. Mar 17 21:50:24.649108 env[1194]: time="2025-03-17T21:50:24.648061921Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod79a4216a_bb7c_4f9a_be38_fd37aeb60bfb.slice/cri-containerd-0af2c9e2dddda8a64f02456161a9230c0027b60957b58cd31d5fcfb4cb6d3f7a.scope/memory.events\": no such file or directory" Mar 17 21:50:24.649629 env[1194]: time="2025-03-17T21:50:24.649074929Z" level=info msg="StartContainer for \"0af2c9e2dddda8a64f02456161a9230c0027b60957b58cd31d5fcfb4cb6d3f7a\" returns successfully" Mar 17 21:50:24.682815 env[1194]: time="2025-03-17T21:50:24.682751767Z" level=info msg="shim disconnected" id=0af2c9e2dddda8a64f02456161a9230c0027b60957b58cd31d5fcfb4cb6d3f7a Mar 17 21:50:24.682815 env[1194]: time="2025-03-17T21:50:24.682813728Z" level=warning msg="cleaning up after shim disconnected" id=0af2c9e2dddda8a64f02456161a9230c0027b60957b58cd31d5fcfb4cb6d3f7a namespace=k8s.io Mar 17 21:50:24.683114 env[1194]: time="2025-03-17T21:50:24.682830343Z" level=info msg="cleaning up dead shim" Mar 17 21:50:24.693092 env[1194]: time="2025-03-17T21:50:24.693042229Z" level=warning msg="cleanup warnings time=\"2025-03-17T21:50:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2566 runtime=io.containerd.runc.v2\n" Mar 17 21:50:25.513506 env[1194]: time="2025-03-17T21:50:25.513402612Z" level=info msg="CreateContainer within sandbox \"30a72eeeb5b8d1b9d2c04876a5442bda9c71b26cc619ffd614d289d63f5803dd\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 17 21:50:25.551204 env[1194]: time="2025-03-17T21:50:25.551142930Z" level=info msg="CreateContainer within sandbox \"30a72eeeb5b8d1b9d2c04876a5442bda9c71b26cc619ffd614d289d63f5803dd\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c4c5aa4cb2af42bb7ca6f73e085691026cc7ede985ace2df11913684282e85cb\"" Mar 17 21:50:25.552110 env[1194]: time="2025-03-17T21:50:25.552066037Z" level=info msg="StartContainer for \"c4c5aa4cb2af42bb7ca6f73e085691026cc7ede985ace2df11913684282e85cb\"" Mar 17 21:50:25.583115 systemd[1]: Started cri-containerd-c4c5aa4cb2af42bb7ca6f73e085691026cc7ede985ace2df11913684282e85cb.scope. 
Mar 17 21:50:25.654143 env[1194]: time="2025-03-17T21:50:25.653954781Z" level=info msg="StartContainer for \"c4c5aa4cb2af42bb7ca6f73e085691026cc7ede985ace2df11913684282e85cb\" returns successfully" Mar 17 21:50:25.913033 kubelet[1934]: I0317 21:50:25.912194 1934 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Mar 17 21:50:26.006102 systemd[1]: Created slice kubepods-burstable-pod16ff2ef9_290e_4cb0_b2dc_70dfdabceefe.slice. Mar 17 21:50:26.011817 systemd[1]: Created slice kubepods-burstable-pod726f0478_ad58_40c7_9dd8_66627ce80e31.slice. Mar 17 21:50:26.014676 kubelet[1934]: I0317 21:50:26.014620 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfwjb\" (UniqueName: \"kubernetes.io/projected/16ff2ef9-290e-4cb0-b2dc-70dfdabceefe-kube-api-access-gfwjb\") pod \"coredns-668d6bf9bc-l26ts\" (UID: \"16ff2ef9-290e-4cb0-b2dc-70dfdabceefe\") " pod="kube-system/coredns-668d6bf9bc-l26ts" Mar 17 21:50:26.014892 kubelet[1934]: I0317 21:50:26.014862 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/16ff2ef9-290e-4cb0-b2dc-70dfdabceefe-config-volume\") pod \"coredns-668d6bf9bc-l26ts\" (UID: \"16ff2ef9-290e-4cb0-b2dc-70dfdabceefe\") " pod="kube-system/coredns-668d6bf9bc-l26ts" Mar 17 21:50:26.115695 kubelet[1934]: I0317 21:50:26.115637 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/726f0478-ad58-40c7-9dd8-66627ce80e31-config-volume\") pod \"coredns-668d6bf9bc-8bdsj\" (UID: \"726f0478-ad58-40c7-9dd8-66627ce80e31\") " pod="kube-system/coredns-668d6bf9bc-8bdsj" Mar 17 21:50:26.115935 kubelet[1934]: I0317 21:50:26.115745 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qrjr\" (UniqueName: \"kubernetes.io/projected/726f0478-ad58-40c7-9dd8-66627ce80e31-kube-api-access-6qrjr\") pod \"coredns-668d6bf9bc-8bdsj\" (UID: \"726f0478-ad58-40c7-9dd8-66627ce80e31\") " pod="kube-system/coredns-668d6bf9bc-8bdsj" Mar 17 21:50:26.151141 systemd[1]: run-containerd-runc-k8s.io-c4c5aa4cb2af42bb7ca6f73e085691026cc7ede985ace2df11913684282e85cb-runc.yCP13u.mount: Deactivated successfully. 
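The sandbox 30a72eeeb5b8d1b9... above is populated in a fixed order: mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state and finally cilium-agent, each step announced by a "CreateContainer within sandbox ... for container &ContainerMetadata{Name:...}" entry. A sketch, under the same assumptions as the previous aside (journal text on stdin, illustrative regex), that recovers the per-sandbox container order:

import re
import sys
from collections import defaultdict

# Matches request entries such as:
#   CreateContainer within sandbox \"<64-hex sandbox id>\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}
# (the "returns container id" variant is intentionally not matched, to avoid double counting)
CREATE_RE = re.compile(
    r'CreateContainer within sandbox \\?"([0-9a-f]{64})\\?" '
    r'for container &ContainerMetadata\{Name:([^,]+),'
)

def containers_per_sandbox(journal_text: str) -> dict:
    """Return container names created in each sandbox, in log order."""
    order = defaultdict(list)
    for sandbox, name in CREATE_RE.findall(journal_text):
        order[sandbox].append(name)
    return order

if __name__ == "__main__":
    for sandbox, names in containers_per_sandbox(sys.stdin.read()).items():
        print(sandbox[:12], "->", " -> ".join(names))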
Mar 17 21:50:26.314370 env[1194]: time="2025-03-17T21:50:26.314275896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-l26ts,Uid:16ff2ef9-290e-4cb0-b2dc-70dfdabceefe,Namespace:kube-system,Attempt:0,}" Mar 17 21:50:26.319961 env[1194]: time="2025-03-17T21:50:26.319507659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8bdsj,Uid:726f0478-ad58-40c7-9dd8-66627ce80e31,Namespace:kube-system,Attempt:0,}" Mar 17 21:50:26.561231 kubelet[1934]: I0317 21:50:26.558575 1934 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-xdx7z" podStartSLOduration=5.500686288 podStartE2EDuration="22.558520504s" podCreationTimestamp="2025-03-17 21:50:04 +0000 UTC" firstStartedPulling="2025-03-17 21:50:04.912566224 +0000 UTC m=+4.936379880" lastFinishedPulling="2025-03-17 21:50:21.97040044 +0000 UTC m=+21.994214096" observedRunningTime="2025-03-17 21:50:26.557074101 +0000 UTC m=+26.580887768" watchObservedRunningTime="2025-03-17 21:50:26.558520504 +0000 UTC m=+26.582334167" Mar 17 21:50:28.174126 systemd[1]: run-containerd-runc-k8s.io-c4c5aa4cb2af42bb7ca6f73e085691026cc7ede985ace2df11913684282e85cb-runc.7h44mO.mount: Deactivated successfully. Mar 17 21:50:28.570519 systemd-networkd[1025]: cilium_host: Link UP Mar 17 21:50:28.576766 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Mar 17 21:50:28.579804 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Mar 17 21:50:28.574148 systemd-networkd[1025]: cilium_net: Link UP Mar 17 21:50:28.575066 systemd-networkd[1025]: cilium_net: Gained carrier Mar 17 21:50:28.577151 systemd-networkd[1025]: cilium_host: Gained carrier Mar 17 21:50:28.677295 systemd-networkd[1025]: cilium_net: Gained IPv6LL Mar 17 21:50:28.679667 systemd-networkd[1025]: cilium_host: Gained IPv6LL Mar 17 21:50:28.796551 systemd-networkd[1025]: cilium_vxlan: Link UP Mar 17 21:50:28.796566 systemd-networkd[1025]: cilium_vxlan: Gained carrier Mar 17 21:50:29.339400 kernel: NET: Registered PF_ALG protocol family Mar 17 21:50:30.164749 systemd-networkd[1025]: cilium_vxlan: Gained IPv6LL Mar 17 21:50:30.375147 systemd[1]: run-containerd-runc-k8s.io-c4c5aa4cb2af42bb7ca6f73e085691026cc7ede985ace2df11913684282e85cb-runc.eXJ2dd.mount: Deactivated successfully. 
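The systemd-networkd entries above show the Cilium datapath devices (cilium_host, cilium_net, cilium_vxlan, then lxc_health and the per-pod lxc* interfaces) gaining carrier and IPv6 link-local addresses. A hedged way to spot-check the same links on a node, assuming iproute2 with JSON output is available; the field names follow ip's JSON schema as I understand it and should be verified locally:

import json
import subprocess

def cilium_links():
    """Yield (name, operstate) for cilium_* and lxc* interfaces via `ip -json link show`."""
    out = subprocess.run(
        ["ip", "-json", "link", "show"],
        capture_output=True, check=True, text=True,
    ).stdout
    for link in json.loads(out):
        name = link.get("ifname", "")
        if name.startswith(("cilium_", "lxc")):
            yield name, link.get("operstate", "UNKNOWN")

if __name__ == "__main__":
    for name, state in cilium_links():
        print(f"{name:<20} {state}")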
Mar 17 21:50:30.565594 systemd-networkd[1025]: lxc_health: Link UP Mar 17 21:50:30.577611 systemd-networkd[1025]: lxc_health: Gained carrier Mar 17 21:50:30.578593 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Mar 17 21:50:30.938152 systemd-networkd[1025]: lxc4d5ccb7eb68f: Link UP Mar 17 21:50:30.946722 systemd-networkd[1025]: lxc31ddcbe46514: Link UP Mar 17 21:50:30.955379 kernel: eth0: renamed from tmp93623 Mar 17 21:50:30.965428 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc4d5ccb7eb68f: link becomes ready Mar 17 21:50:30.967417 kernel: eth0: renamed from tmpe57dd Mar 17 21:50:30.973881 systemd-networkd[1025]: lxc4d5ccb7eb68f: Gained carrier Mar 17 21:50:30.984982 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc31ddcbe46514: link becomes ready Mar 17 21:50:30.984477 systemd-networkd[1025]: lxc31ddcbe46514: Gained carrier Mar 17 21:50:31.902517 systemd-networkd[1025]: lxc_health: Gained IPv6LL Mar 17 21:50:32.466591 systemd-networkd[1025]: lxc4d5ccb7eb68f: Gained IPv6LL Mar 17 21:50:32.467083 systemd-networkd[1025]: lxc31ddcbe46514: Gained IPv6LL Mar 17 21:50:32.615432 systemd[1]: run-containerd-runc-k8s.io-c4c5aa4cb2af42bb7ca6f73e085691026cc7ede985ace2df11913684282e85cb-runc.QWbbew.mount: Deactivated successfully. Mar 17 21:50:32.716513 kubelet[1934]: E0317 21:50:32.714183 1934 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 127.0.0.1:37700->127.0.0.1:39749: read tcp 127.0.0.1:37700->127.0.0.1:39749: read: connection reset by peer Mar 17 21:50:34.819822 systemd[1]: run-containerd-runc-k8s.io-c4c5aa4cb2af42bb7ca6f73e085691026cc7ede985ace2df11913684282e85cb-runc.APTMvO.mount: Deactivated successfully. Mar 17 21:50:37.017820 env[1194]: time="2025-03-17T21:50:37.015713209Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 21:50:37.017820 env[1194]: time="2025-03-17T21:50:37.015789019Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 21:50:37.017820 env[1194]: time="2025-03-17T21:50:37.015806933Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 21:50:37.017820 env[1194]: time="2025-03-17T21:50:37.016037100Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9362304c43131f922163a0be1f7597f9d2cc26946047210635867cf0da24b738 pid=3217 runtime=io.containerd.runc.v2 Mar 17 21:50:37.028205 env[1194]: time="2025-03-17T21:50:37.028112654Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 21:50:37.028477 env[1194]: time="2025-03-17T21:50:37.028415745Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 21:50:37.028704 env[1194]: time="2025-03-17T21:50:37.028636295Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 21:50:37.029188 env[1194]: time="2025-03-17T21:50:37.029128875Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e57dde0d5ed13a940803122d1ed25fa79d85410892dec43f0edf48b41c46d979 pid=3215 runtime=io.containerd.runc.v2 Mar 17 21:50:37.091617 systemd[1]: Started cri-containerd-e57dde0d5ed13a940803122d1ed25fa79d85410892dec43f0edf48b41c46d979.scope. Mar 17 21:50:37.099771 systemd[1]: run-containerd-runc-k8s.io-e57dde0d5ed13a940803122d1ed25fa79d85410892dec43f0edf48b41c46d979-runc.oVIqUw.mount: Deactivated successfully. Mar 17 21:50:37.125248 systemd[1]: Started cri-containerd-9362304c43131f922163a0be1f7597f9d2cc26946047210635867cf0da24b738.scope. Mar 17 21:50:37.252969 env[1194]: time="2025-03-17T21:50:37.252884196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8bdsj,Uid:726f0478-ad58-40c7-9dd8-66627ce80e31,Namespace:kube-system,Attempt:0,} returns sandbox id \"e57dde0d5ed13a940803122d1ed25fa79d85410892dec43f0edf48b41c46d979\"" Mar 17 21:50:37.263944 env[1194]: time="2025-03-17T21:50:37.263887620Z" level=info msg="CreateContainer within sandbox \"e57dde0d5ed13a940803122d1ed25fa79d85410892dec43f0edf48b41c46d979\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 17 21:50:37.284847 env[1194]: time="2025-03-17T21:50:37.284692532Z" level=info msg="CreateContainer within sandbox \"e57dde0d5ed13a940803122d1ed25fa79d85410892dec43f0edf48b41c46d979\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a55ba98724462a17e8e62c4a5f55f274fc310db9b988bdbb516b7bbb20082adc\"" Mar 17 21:50:37.285994 env[1194]: time="2025-03-17T21:50:37.285912690Z" level=info msg="StartContainer for \"a55ba98724462a17e8e62c4a5f55f274fc310db9b988bdbb516b7bbb20082adc\"" Mar 17 21:50:37.338304 env[1194]: time="2025-03-17T21:50:37.338193559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-l26ts,Uid:16ff2ef9-290e-4cb0-b2dc-70dfdabceefe,Namespace:kube-system,Attempt:0,} returns sandbox id \"9362304c43131f922163a0be1f7597f9d2cc26946047210635867cf0da24b738\"" Mar 17 21:50:37.346525 systemd[1]: Started cri-containerd-a55ba98724462a17e8e62c4a5f55f274fc310db9b988bdbb516b7bbb20082adc.scope. Mar 17 21:50:37.347704 env[1194]: time="2025-03-17T21:50:37.347648610Z" level=info msg="CreateContainer within sandbox \"9362304c43131f922163a0be1f7597f9d2cc26946047210635867cf0da24b738\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 17 21:50:37.369246 env[1194]: time="2025-03-17T21:50:37.369180810Z" level=info msg="CreateContainer within sandbox \"9362304c43131f922163a0be1f7597f9d2cc26946047210635867cf0da24b738\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"579aad69290359f130eea1149977f1aeb17f3a438d7b492e5f554745bed3594f\"" Mar 17 21:50:37.370482 env[1194]: time="2025-03-17T21:50:37.370434062Z" level=info msg="StartContainer for \"579aad69290359f130eea1149977f1aeb17f3a438d7b492e5f554745bed3594f\"" Mar 17 21:50:37.406003 systemd[1]: Started cri-containerd-579aad69290359f130eea1149977f1aeb17f3a438d7b492e5f554745bed3594f.scope. 
Mar 17 21:50:37.463842 env[1194]: time="2025-03-17T21:50:37.463786673Z" level=info msg="StartContainer for \"a55ba98724462a17e8e62c4a5f55f274fc310db9b988bdbb516b7bbb20082adc\" returns successfully" Mar 17 21:50:37.487704 env[1194]: time="2025-03-17T21:50:37.487635622Z" level=info msg="StartContainer for \"579aad69290359f130eea1149977f1aeb17f3a438d7b492e5f554745bed3594f\" returns successfully" Mar 17 21:50:37.510407 kubelet[1934]: E0317 21:50:37.507253 1934 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 127.0.0.1:46942->127.0.0.1:39749: read tcp 127.0.0.1:46942->127.0.0.1:39749: read: connection reset by peer Mar 17 21:50:37.606231 kubelet[1934]: I0317 21:50:37.606060 1934 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-8bdsj" podStartSLOduration=35.605998096 podStartE2EDuration="35.605998096s" podCreationTimestamp="2025-03-17 21:50:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 21:50:37.592388951 +0000 UTC m=+37.616202617" watchObservedRunningTime="2025-03-17 21:50:37.605998096 +0000 UTC m=+37.629811758" Mar 17 21:50:37.618233 kubelet[1934]: I0317 21:50:37.617491 1934 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-l26ts" podStartSLOduration=35.617471604 podStartE2EDuration="35.617471604s" podCreationTimestamp="2025-03-17 21:50:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 21:50:37.615607423 +0000 UTC m=+37.639421091" watchObservedRunningTime="2025-03-17 21:50:37.617471604 +0000 UTC m=+37.641285267" Mar 17 21:50:38.028614 systemd[1]: run-containerd-runc-k8s.io-9362304c43131f922163a0be1f7597f9d2cc26946047210635867cf0da24b738-runc.zFN2Fd.mount: Deactivated successfully. Mar 17 21:50:38.708291 sudo[1320]: pam_unix(sudo:session): session closed for user root Mar 17 21:50:38.860006 sshd[1317]: pam_unix(sshd:session): session closed for user core Mar 17 21:50:38.871669 systemd-logind[1183]: Session 5 logged out. Waiting for processes to exit. Mar 17 21:50:38.875744 systemd[1]: sshd@4-10.230.29.198:22-139.178.89.65:35542.service: Deactivated successfully. Mar 17 21:50:38.877077 systemd[1]: session-5.scope: Deactivated successfully. Mar 17 21:50:38.877363 systemd[1]: session-5.scope: Consumed 7.103s CPU time. Mar 17 21:50:38.879076 systemd-logind[1183]: Removed session 5. Mar 17 21:51:46.058206 systemd[1]: Started sshd@5-10.230.29.198:22-139.178.89.65:50630.service. Mar 17 21:51:46.960535 sshd[3409]: Accepted publickey for core from 139.178.89.65 port 50630 ssh2: RSA SHA256:zyhiPLENj58svNToN4BOPPS+na2TgK0IE73Z79n4eiY Mar 17 21:51:46.962918 sshd[3409]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 21:51:46.972505 systemd-logind[1183]: New session 6 of user core. Mar 17 21:51:46.973668 systemd[1]: Started session-6.scope. Mar 17 21:51:47.859424 sshd[3409]: pam_unix(sshd:session): session closed for user core Mar 17 21:51:47.864036 systemd[1]: sshd@5-10.230.29.198:22-139.178.89.65:50630.service: Deactivated successfully. Mar 17 21:51:47.865918 systemd[1]: session-6.scope: Deactivated successfully. Mar 17 21:51:47.867004 systemd-logind[1183]: Session 6 logged out. Waiting for processes to exit. Mar 17 21:51:47.868933 systemd-logind[1183]: Removed session 6. 
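The "Consumed ... CPU time" line above comes from systemd's CPU accounting when a scope is cleaned up (7.103s for the SSH session scope here, and further down 11.067s for the cilium-agent container scope). A sketch that tallies those summaries from journal text on stdin; it only handles the plain "<seconds>s" form shown in this journal, which is an assumption:

import re
import sys

# systemd CPUAccounting summaries, e.g.:
#   systemd[1]: session-5.scope: Consumed 7.103s CPU time.
CPU_RE = re.compile(r'systemd\[1\]: (\S+\.scope): Consumed ([0-9.]+)s CPU time')

if __name__ == "__main__":
    for scope, seconds in CPU_RE.findall(sys.stdin.read()):
        print(f"{scope:<70} {float(seconds):8.3f}s")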
Mar 17 21:51:53.010547 systemd[1]: Started sshd@6-10.230.29.198:22-139.178.89.65:43286.service. Mar 17 21:51:53.902775 sshd[3423]: Accepted publickey for core from 139.178.89.65 port 43286 ssh2: RSA SHA256:zyhiPLENj58svNToN4BOPPS+na2TgK0IE73Z79n4eiY Mar 17 21:51:53.905741 sshd[3423]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 21:51:53.913121 systemd-logind[1183]: New session 7 of user core. Mar 17 21:51:53.914498 systemd[1]: Started session-7.scope. Mar 17 21:51:54.630224 sshd[3423]: pam_unix(sshd:session): session closed for user core Mar 17 21:51:54.635542 systemd[1]: sshd@6-10.230.29.198:22-139.178.89.65:43286.service: Deactivated successfully. Mar 17 21:51:54.635617 systemd-logind[1183]: Session 7 logged out. Waiting for processes to exit. Mar 17 21:51:54.636912 systemd[1]: session-7.scope: Deactivated successfully. Mar 17 21:51:54.638537 systemd-logind[1183]: Removed session 7. Mar 17 21:51:59.778469 systemd[1]: Started sshd@7-10.230.29.198:22-139.178.89.65:43302.service. Mar 17 21:52:00.667292 sshd[3436]: Accepted publickey for core from 139.178.89.65 port 43302 ssh2: RSA SHA256:zyhiPLENj58svNToN4BOPPS+na2TgK0IE73Z79n4eiY Mar 17 21:52:00.669305 sshd[3436]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 21:52:00.677973 systemd[1]: Started session-8.scope. Mar 17 21:52:00.679569 systemd-logind[1183]: New session 8 of user core. Mar 17 21:52:01.378847 sshd[3436]: pam_unix(sshd:session): session closed for user core Mar 17 21:52:01.385385 systemd-logind[1183]: Session 8 logged out. Waiting for processes to exit. Mar 17 21:52:01.385880 systemd[1]: sshd@7-10.230.29.198:22-139.178.89.65:43302.service: Deactivated successfully. Mar 17 21:52:01.387044 systemd[1]: session-8.scope: Deactivated successfully. Mar 17 21:52:01.388744 systemd-logind[1183]: Removed session 8. Mar 17 21:52:06.527772 systemd[1]: Started sshd@8-10.230.29.198:22-139.178.89.65:36080.service. Mar 17 21:52:07.422057 sshd[3453]: Accepted publickey for core from 139.178.89.65 port 36080 ssh2: RSA SHA256:zyhiPLENj58svNToN4BOPPS+na2TgK0IE73Z79n4eiY Mar 17 21:52:07.423629 sshd[3453]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 21:52:07.433216 systemd[1]: Started session-9.scope. Mar 17 21:52:07.433399 systemd-logind[1183]: New session 9 of user core. Mar 17 21:52:08.135940 sshd[3453]: pam_unix(sshd:session): session closed for user core Mar 17 21:52:08.139553 systemd[1]: sshd@8-10.230.29.198:22-139.178.89.65:36080.service: Deactivated successfully. Mar 17 21:52:08.140740 systemd[1]: session-9.scope: Deactivated successfully. Mar 17 21:52:08.141643 systemd-logind[1183]: Session 9 logged out. Waiting for processes to exit. Mar 17 21:52:08.142879 systemd-logind[1183]: Removed session 9. Mar 17 21:52:08.285251 systemd[1]: Started sshd@9-10.230.29.198:22-139.178.89.65:36090.service. Mar 17 21:52:09.182081 sshd[3466]: Accepted publickey for core from 139.178.89.65 port 36090 ssh2: RSA SHA256:zyhiPLENj58svNToN4BOPPS+na2TgK0IE73Z79n4eiY Mar 17 21:52:09.184941 sshd[3466]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 21:52:09.193055 systemd-logind[1183]: New session 10 of user core. Mar 17 21:52:09.194515 systemd[1]: Started session-10.scope. Mar 17 21:52:09.970271 sshd[3466]: pam_unix(sshd:session): session closed for user core Mar 17 21:52:09.975020 systemd[1]: sshd@9-10.230.29.198:22-139.178.89.65:36090.service: Deactivated successfully. 
Mar 17 21:52:09.976078 systemd[1]: session-10.scope: Deactivated successfully. Mar 17 21:52:09.976822 systemd-logind[1183]: Session 10 logged out. Waiting for processes to exit. Mar 17 21:52:09.978209 systemd-logind[1183]: Removed session 10. Mar 17 21:52:10.145949 systemd[1]: Started sshd@10-10.230.29.198:22-139.178.89.65:36106.service. Mar 17 21:52:11.040018 sshd[3475]: Accepted publickey for core from 139.178.89.65 port 36106 ssh2: RSA SHA256:zyhiPLENj58svNToN4BOPPS+na2TgK0IE73Z79n4eiY Mar 17 21:52:11.042417 sshd[3475]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 21:52:11.050420 systemd-logind[1183]: New session 11 of user core. Mar 17 21:52:11.050728 systemd[1]: Started session-11.scope. Mar 17 21:52:11.762605 sshd[3475]: pam_unix(sshd:session): session closed for user core Mar 17 21:52:11.766811 systemd-logind[1183]: Session 11 logged out. Waiting for processes to exit. Mar 17 21:52:11.767223 systemd[1]: sshd@10-10.230.29.198:22-139.178.89.65:36106.service: Deactivated successfully. Mar 17 21:52:11.768297 systemd[1]: session-11.scope: Deactivated successfully. Mar 17 21:52:11.769426 systemd-logind[1183]: Removed session 11. Mar 17 21:52:16.910383 systemd[1]: Started sshd@11-10.230.29.198:22-139.178.89.65:54652.service. Mar 17 21:52:17.794822 sshd[3487]: Accepted publickey for core from 139.178.89.65 port 54652 ssh2: RSA SHA256:zyhiPLENj58svNToN4BOPPS+na2TgK0IE73Z79n4eiY Mar 17 21:52:17.796880 sshd[3487]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 21:52:17.804354 systemd-logind[1183]: New session 12 of user core. Mar 17 21:52:17.805962 systemd[1]: Started session-12.scope. Mar 17 21:52:18.507557 sshd[3487]: pam_unix(sshd:session): session closed for user core Mar 17 21:52:18.511630 systemd[1]: sshd@11-10.230.29.198:22-139.178.89.65:54652.service: Deactivated successfully. Mar 17 21:52:18.512668 systemd[1]: session-12.scope: Deactivated successfully. Mar 17 21:52:18.513489 systemd-logind[1183]: Session 12 logged out. Waiting for processes to exit. Mar 17 21:52:18.514553 systemd-logind[1183]: Removed session 12. Mar 17 21:52:23.658899 systemd[1]: Started sshd@12-10.230.29.198:22-139.178.89.65:41758.service. Mar 17 21:52:24.548710 sshd[3499]: Accepted publickey for core from 139.178.89.65 port 41758 ssh2: RSA SHA256:zyhiPLENj58svNToN4BOPPS+na2TgK0IE73Z79n4eiY Mar 17 21:52:24.551782 sshd[3499]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 21:52:24.559470 systemd-logind[1183]: New session 13 of user core. Mar 17 21:52:24.560794 systemd[1]: Started session-13.scope. Mar 17 21:52:25.289197 sshd[3499]: pam_unix(sshd:session): session closed for user core Mar 17 21:52:25.294546 systemd[1]: sshd@12-10.230.29.198:22-139.178.89.65:41758.service: Deactivated successfully. Mar 17 21:52:25.295882 systemd[1]: session-13.scope: Deactivated successfully. Mar 17 21:52:25.296964 systemd-logind[1183]: Session 13 logged out. Waiting for processes to exit. Mar 17 21:52:25.299162 systemd-logind[1183]: Removed session 13. Mar 17 21:52:25.449721 systemd[1]: Started sshd@13-10.230.29.198:22-139.178.89.65:41760.service. Mar 17 21:52:26.348370 sshd[3510]: Accepted publickey for core from 139.178.89.65 port 41760 ssh2: RSA SHA256:zyhiPLENj58svNToN4BOPPS+na2TgK0IE73Z79n4eiY Mar 17 21:52:26.351154 sshd[3510]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 21:52:26.359193 systemd-logind[1183]: New session 14 of user core. 
Mar 17 21:52:26.361365 systemd[1]: Started session-14.scope. Mar 17 21:52:27.376714 sshd[3510]: pam_unix(sshd:session): session closed for user core Mar 17 21:52:27.381994 systemd[1]: sshd@13-10.230.29.198:22-139.178.89.65:41760.service: Deactivated successfully. Mar 17 21:52:27.383119 systemd-logind[1183]: Session 14 logged out. Waiting for processes to exit. Mar 17 21:52:27.383121 systemd[1]: session-14.scope: Deactivated successfully. Mar 17 21:52:27.384969 systemd-logind[1183]: Removed session 14. Mar 17 21:52:27.524086 systemd[1]: Started sshd@14-10.230.29.198:22-139.178.89.65:41774.service. Mar 17 21:52:28.429047 sshd[3520]: Accepted publickey for core from 139.178.89.65 port 41774 ssh2: RSA SHA256:zyhiPLENj58svNToN4BOPPS+na2TgK0IE73Z79n4eiY Mar 17 21:52:28.431244 sshd[3520]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 21:52:28.439881 systemd-logind[1183]: New session 15 of user core. Mar 17 21:52:28.441176 systemd[1]: Started session-15.scope. Mar 17 21:52:30.167909 sshd[3520]: pam_unix(sshd:session): session closed for user core Mar 17 21:52:30.171840 systemd[1]: sshd@14-10.230.29.198:22-139.178.89.65:41774.service: Deactivated successfully. Mar 17 21:52:30.173077 systemd[1]: session-15.scope: Deactivated successfully. Mar 17 21:52:30.173883 systemd-logind[1183]: Session 15 logged out. Waiting for processes to exit. Mar 17 21:52:30.175019 systemd-logind[1183]: Removed session 15. Mar 17 21:52:30.315869 systemd[1]: Started sshd@15-10.230.29.198:22-139.178.89.65:41776.service. Mar 17 21:52:31.207376 sshd[3537]: Accepted publickey for core from 139.178.89.65 port 41776 ssh2: RSA SHA256:zyhiPLENj58svNToN4BOPPS+na2TgK0IE73Z79n4eiY Mar 17 21:52:31.209631 sshd[3537]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 21:52:31.218376 systemd-logind[1183]: New session 16 of user core. Mar 17 21:52:31.219426 systemd[1]: Started session-16.scope. Mar 17 21:52:32.200669 sshd[3537]: pam_unix(sshd:session): session closed for user core Mar 17 21:52:32.204678 systemd[1]: sshd@15-10.230.29.198:22-139.178.89.65:41776.service: Deactivated successfully. Mar 17 21:52:32.206048 systemd[1]: session-16.scope: Deactivated successfully. Mar 17 21:52:32.207025 systemd-logind[1183]: Session 16 logged out. Waiting for processes to exit. Mar 17 21:52:32.208266 systemd-logind[1183]: Removed session 16. Mar 17 21:52:32.358256 systemd[1]: Started sshd@16-10.230.29.198:22-139.178.89.65:44320.service. Mar 17 21:52:33.256664 sshd[3547]: Accepted publickey for core from 139.178.89.65 port 44320 ssh2: RSA SHA256:zyhiPLENj58svNToN4BOPPS+na2TgK0IE73Z79n4eiY Mar 17 21:52:33.258824 sshd[3547]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 21:52:33.266926 systemd-logind[1183]: New session 17 of user core. Mar 17 21:52:33.267726 systemd[1]: Started session-17.scope. Mar 17 21:52:33.966965 sshd[3547]: pam_unix(sshd:session): session closed for user core Mar 17 21:52:33.971499 systemd-logind[1183]: Session 17 logged out. Waiting for processes to exit. Mar 17 21:52:33.972051 systemd[1]: sshd@16-10.230.29.198:22-139.178.89.65:44320.service: Deactivated successfully. Mar 17 21:52:33.973249 systemd[1]: session-17.scope: Deactivated successfully. Mar 17 21:52:33.974635 systemd-logind[1183]: Removed session 17. Mar 17 21:52:39.117159 systemd[1]: Started sshd@17-10.230.29.198:22-139.178.89.65:44324.service. 
Mar 17 21:52:40.007730 sshd[3563]: Accepted publickey for core from 139.178.89.65 port 44324 ssh2: RSA SHA256:zyhiPLENj58svNToN4BOPPS+na2TgK0IE73Z79n4eiY Mar 17 21:52:40.010045 sshd[3563]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 21:52:40.017590 systemd-logind[1183]: New session 18 of user core. Mar 17 21:52:40.018433 systemd[1]: Started session-18.scope. Mar 17 21:52:40.728469 sshd[3563]: pam_unix(sshd:session): session closed for user core Mar 17 21:52:40.732598 systemd[1]: sshd@17-10.230.29.198:22-139.178.89.65:44324.service: Deactivated successfully. Mar 17 21:52:40.733711 systemd[1]: session-18.scope: Deactivated successfully. Mar 17 21:52:40.734467 systemd-logind[1183]: Session 18 logged out. Waiting for processes to exit. Mar 17 21:52:40.735708 systemd-logind[1183]: Removed session 18. Mar 17 21:52:45.879801 systemd[1]: Started sshd@18-10.230.29.198:22-139.178.89.65:33448.service. Mar 17 21:52:46.768572 sshd[3575]: Accepted publickey for core from 139.178.89.65 port 33448 ssh2: RSA SHA256:zyhiPLENj58svNToN4BOPPS+na2TgK0IE73Z79n4eiY Mar 17 21:52:46.770939 sshd[3575]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 21:52:46.778538 systemd-logind[1183]: New session 19 of user core. Mar 17 21:52:46.779643 systemd[1]: Started session-19.scope. Mar 17 21:52:47.478959 sshd[3575]: pam_unix(sshd:session): session closed for user core Mar 17 21:52:47.483393 systemd[1]: sshd@18-10.230.29.198:22-139.178.89.65:33448.service: Deactivated successfully. Mar 17 21:52:47.484530 systemd[1]: session-19.scope: Deactivated successfully. Mar 17 21:52:47.485285 systemd-logind[1183]: Session 19 logged out. Waiting for processes to exit. Mar 17 21:52:47.486493 systemd-logind[1183]: Removed session 19. Mar 17 21:52:52.627813 systemd[1]: Started sshd@19-10.230.29.198:22-139.178.89.65:51998.service. Mar 17 21:52:53.518342 sshd[3587]: Accepted publickey for core from 139.178.89.65 port 51998 ssh2: RSA SHA256:zyhiPLENj58svNToN4BOPPS+na2TgK0IE73Z79n4eiY Mar 17 21:52:53.519927 sshd[3587]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 21:52:53.528302 systemd[1]: Started session-20.scope. Mar 17 21:52:53.529052 systemd-logind[1183]: New session 20 of user core. Mar 17 21:52:54.230399 sshd[3587]: pam_unix(sshd:session): session closed for user core Mar 17 21:52:54.236880 systemd[1]: sshd@19-10.230.29.198:22-139.178.89.65:51998.service: Deactivated successfully. Mar 17 21:52:54.239016 systemd[1]: session-20.scope: Deactivated successfully. Mar 17 21:52:54.240287 systemd-logind[1183]: Session 20 logged out. Waiting for processes to exit. Mar 17 21:52:54.241838 systemd-logind[1183]: Removed session 20. Mar 17 21:52:54.377651 systemd[1]: Started sshd@20-10.230.29.198:22-139.178.89.65:52006.service. Mar 17 21:52:55.262203 sshd[3599]: Accepted publickey for core from 139.178.89.65 port 52006 ssh2: RSA SHA256:zyhiPLENj58svNToN4BOPPS+na2TgK0IE73Z79n4eiY Mar 17 21:52:55.264198 sshd[3599]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 21:52:55.271113 systemd-logind[1183]: New session 21 of user core. Mar 17 21:52:55.272400 systemd[1]: Started session-21.scope. 
Mar 17 21:52:57.361028 env[1194]: time="2025-03-17T21:52:57.360912201Z" level=info msg="StopContainer for \"64505c26da43f3c8848ffd6635f6d3981df7285968f000e0278a2460f762ab98\" with timeout 30 (s)" Mar 17 21:52:57.369357 env[1194]: time="2025-03-17T21:52:57.369279046Z" level=info msg="Stop container \"64505c26da43f3c8848ffd6635f6d3981df7285968f000e0278a2460f762ab98\" with signal terminated" Mar 17 21:52:57.411133 systemd[1]: run-containerd-runc-k8s.io-c4c5aa4cb2af42bb7ca6f73e085691026cc7ede985ace2df11913684282e85cb-runc.XYmyoL.mount: Deactivated successfully. Mar 17 21:52:57.432525 systemd[1]: cri-containerd-64505c26da43f3c8848ffd6635f6d3981df7285968f000e0278a2460f762ab98.scope: Deactivated successfully. Mar 17 21:52:57.476513 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-64505c26da43f3c8848ffd6635f6d3981df7285968f000e0278a2460f762ab98-rootfs.mount: Deactivated successfully. Mar 17 21:52:57.478790 env[1194]: time="2025-03-17T21:52:57.478440763Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 17 21:52:57.484911 env[1194]: time="2025-03-17T21:52:57.484862972Z" level=info msg="shim disconnected" id=64505c26da43f3c8848ffd6635f6d3981df7285968f000e0278a2460f762ab98 Mar 17 21:52:57.485128 env[1194]: time="2025-03-17T21:52:57.484921358Z" level=warning msg="cleaning up after shim disconnected" id=64505c26da43f3c8848ffd6635f6d3981df7285968f000e0278a2460f762ab98 namespace=k8s.io Mar 17 21:52:57.485128 env[1194]: time="2025-03-17T21:52:57.484947197Z" level=info msg="cleaning up dead shim" Mar 17 21:52:57.488540 env[1194]: time="2025-03-17T21:52:57.488500828Z" level=info msg="StopContainer for \"c4c5aa4cb2af42bb7ca6f73e085691026cc7ede985ace2df11913684282e85cb\" with timeout 2 (s)" Mar 17 21:52:57.488977 env[1194]: time="2025-03-17T21:52:57.488936089Z" level=info msg="Stop container \"c4c5aa4cb2af42bb7ca6f73e085691026cc7ede985ace2df11913684282e85cb\" with signal terminated" Mar 17 21:52:57.502881 systemd-networkd[1025]: lxc_health: Link DOWN Mar 17 21:52:57.502907 systemd-networkd[1025]: lxc_health: Lost carrier Mar 17 21:52:57.509094 env[1194]: time="2025-03-17T21:52:57.509040319Z" level=warning msg="cleanup warnings time=\"2025-03-17T21:52:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3645 runtime=io.containerd.runc.v2\n" Mar 17 21:52:57.532685 env[1194]: time="2025-03-17T21:52:57.519558193Z" level=info msg="StopContainer for \"64505c26da43f3c8848ffd6635f6d3981df7285968f000e0278a2460f762ab98\" returns successfully" Mar 17 21:52:57.532685 env[1194]: time="2025-03-17T21:52:57.529121485Z" level=info msg="StopPodSandbox for \"7f9a034bbf88de7dd88a1d025d423c763c9597a37c8774ad59a88d8879c45ea0\"" Mar 17 21:52:57.532685 env[1194]: time="2025-03-17T21:52:57.529387007Z" level=info msg="Container to stop \"64505c26da43f3c8848ffd6635f6d3981df7285968f000e0278a2460f762ab98\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 21:52:57.535035 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7f9a034bbf88de7dd88a1d025d423c763c9597a37c8774ad59a88d8879c45ea0-shm.mount: Deactivated successfully. Mar 17 21:52:57.556006 systemd[1]: cri-containerd-c4c5aa4cb2af42bb7ca6f73e085691026cc7ede985ace2df11913684282e85cb.scope: Deactivated successfully. 
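The teardown above begins with CRI StopContainer calls carrying different timeouts: 30 s for the cilium-operator container and 2 s for the cilium-agent container, after which each container is signalled to terminate. A sketch that lists those (container id, timeout) pairs from journal text on stdin; the regex is an assumption of this aside:

import re
import sys

# containerd CRI entries like:
#   msg="StopContainer for \"64505c26...\" with timeout 30 (s)"
STOP_RE = re.compile(
    r'StopContainer for \\?"([0-9a-f]{64})\\?" with timeout (\d+) \(s\)'
)

if __name__ == "__main__":
    for cid, timeout in STOP_RE.findall(sys.stdin.read()):
        # timeout is the grace period the kubelet grants after the stop signal
        print(f"{cid[:12]}  stop timeout {timeout}s")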
Mar 17 21:52:57.556479 systemd[1]: cri-containerd-c4c5aa4cb2af42bb7ca6f73e085691026cc7ede985ace2df11913684282e85cb.scope: Consumed 11.067s CPU time. Mar 17 21:52:57.569988 systemd[1]: cri-containerd-7f9a034bbf88de7dd88a1d025d423c763c9597a37c8774ad59a88d8879c45ea0.scope: Deactivated successfully. Mar 17 21:52:57.615854 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7f9a034bbf88de7dd88a1d025d423c763c9597a37c8774ad59a88d8879c45ea0-rootfs.mount: Deactivated successfully. Mar 17 21:52:57.623381 env[1194]: time="2025-03-17T21:52:57.623294844Z" level=info msg="shim disconnected" id=7f9a034bbf88de7dd88a1d025d423c763c9597a37c8774ad59a88d8879c45ea0 Mar 17 21:52:57.623381 env[1194]: time="2025-03-17T21:52:57.623383041Z" level=warning msg="cleaning up after shim disconnected" id=7f9a034bbf88de7dd88a1d025d423c763c9597a37c8774ad59a88d8879c45ea0 namespace=k8s.io Mar 17 21:52:57.623679 env[1194]: time="2025-03-17T21:52:57.623413446Z" level=info msg="cleaning up dead shim" Mar 17 21:52:57.624373 env[1194]: time="2025-03-17T21:52:57.624294221Z" level=info msg="shim disconnected" id=c4c5aa4cb2af42bb7ca6f73e085691026cc7ede985ace2df11913684282e85cb Mar 17 21:52:57.624373 env[1194]: time="2025-03-17T21:52:57.624357235Z" level=warning msg="cleaning up after shim disconnected" id=c4c5aa4cb2af42bb7ca6f73e085691026cc7ede985ace2df11913684282e85cb namespace=k8s.io Mar 17 21:52:57.624542 env[1194]: time="2025-03-17T21:52:57.624375202Z" level=info msg="cleaning up dead shim" Mar 17 21:52:57.638025 env[1194]: time="2025-03-17T21:52:57.637944150Z" level=warning msg="cleanup warnings time=\"2025-03-17T21:52:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3702 runtime=io.containerd.runc.v2\n" Mar 17 21:52:57.638729 env[1194]: time="2025-03-17T21:52:57.638666434Z" level=warning msg="cleanup warnings time=\"2025-03-17T21:52:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3701 runtime=io.containerd.runc.v2\n" Mar 17 21:52:57.640108 env[1194]: time="2025-03-17T21:52:57.640060814Z" level=info msg="TearDown network for sandbox \"7f9a034bbf88de7dd88a1d025d423c763c9597a37c8774ad59a88d8879c45ea0\" successfully" Mar 17 21:52:57.640210 env[1194]: time="2025-03-17T21:52:57.640123987Z" level=info msg="StopPodSandbox for \"7f9a034bbf88de7dd88a1d025d423c763c9597a37c8774ad59a88d8879c45ea0\" returns successfully" Mar 17 21:52:57.641008 env[1194]: time="2025-03-17T21:52:57.640688178Z" level=info msg="StopContainer for \"c4c5aa4cb2af42bb7ca6f73e085691026cc7ede985ace2df11913684282e85cb\" returns successfully" Mar 17 21:52:57.641775 env[1194]: time="2025-03-17T21:52:57.641318614Z" level=info msg="StopPodSandbox for \"30a72eeeb5b8d1b9d2c04876a5442bda9c71b26cc619ffd614d289d63f5803dd\"" Mar 17 21:52:57.641775 env[1194]: time="2025-03-17T21:52:57.641541195Z" level=info msg="Container to stop \"8ad88bae0ab8833e8b88220607ff7307fb0e48a52d56ae7ca3cfaaa6a23cb8d6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 21:52:57.641775 env[1194]: time="2025-03-17T21:52:57.641601297Z" level=info msg="Container to stop \"cf7b5b7a6cf465bedf5e092a40790af22b2ec0251d3d361f98e36a8a12f3c198\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 21:52:57.641775 env[1194]: time="2025-03-17T21:52:57.641657599Z" level=info msg="Container to stop \"0af2c9e2dddda8a64f02456161a9230c0027b60957b58cd31d5fcfb4cb6d3f7a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 21:52:57.641775 env[1194]: time="2025-03-17T21:52:57.641682200Z" 
level=info msg="Container to stop \"c4c5aa4cb2af42bb7ca6f73e085691026cc7ede985ace2df11913684282e85cb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 21:52:57.641775 env[1194]: time="2025-03-17T21:52:57.641702613Z" level=info msg="Container to stop \"0a49f190f561dacdd3a4324a067210883097595ef7b4f8fa5f862ee53ecefea5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 21:52:57.661982 systemd[1]: cri-containerd-30a72eeeb5b8d1b9d2c04876a5442bda9c71b26cc619ffd614d289d63f5803dd.scope: Deactivated successfully. Mar 17 21:52:57.693028 env[1194]: time="2025-03-17T21:52:57.692950192Z" level=info msg="shim disconnected" id=30a72eeeb5b8d1b9d2c04876a5442bda9c71b26cc619ffd614d289d63f5803dd Mar 17 21:52:57.693028 env[1194]: time="2025-03-17T21:52:57.693020592Z" level=warning msg="cleaning up after shim disconnected" id=30a72eeeb5b8d1b9d2c04876a5442bda9c71b26cc619ffd614d289d63f5803dd namespace=k8s.io Mar 17 21:52:57.693374 env[1194]: time="2025-03-17T21:52:57.693038921Z" level=info msg="cleaning up dead shim" Mar 17 21:52:57.703981 env[1194]: time="2025-03-17T21:52:57.703921242Z" level=warning msg="cleanup warnings time=\"2025-03-17T21:52:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3746 runtime=io.containerd.runc.v2\n" Mar 17 21:52:57.705005 env[1194]: time="2025-03-17T21:52:57.704963625Z" level=info msg="TearDown network for sandbox \"30a72eeeb5b8d1b9d2c04876a5442bda9c71b26cc619ffd614d289d63f5803dd\" successfully" Mar 17 21:52:57.705161 env[1194]: time="2025-03-17T21:52:57.705126243Z" level=info msg="StopPodSandbox for \"30a72eeeb5b8d1b9d2c04876a5442bda9c71b26cc619ffd614d289d63f5803dd\" returns successfully" Mar 17 21:52:57.742713 kubelet[1934]: I0317 21:52:57.742608 1934 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/79a4216a-bb7c-4f9a-be38-fd37aeb60bfb-hubble-tls\") pod \"79a4216a-bb7c-4f9a-be38-fd37aeb60bfb\" (UID: \"79a4216a-bb7c-4f9a-be38-fd37aeb60bfb\") " Mar 17 21:52:57.744999 kubelet[1934]: I0317 21:52:57.744939 1934 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/79a4216a-bb7c-4f9a-be38-fd37aeb60bfb-host-proc-sys-kernel\") pod \"79a4216a-bb7c-4f9a-be38-fd37aeb60bfb\" (UID: \"79a4216a-bb7c-4f9a-be38-fd37aeb60bfb\") " Mar 17 21:52:57.745109 kubelet[1934]: I0317 21:52:57.745028 1934 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/79a4216a-bb7c-4f9a-be38-fd37aeb60bfb-clustermesh-secrets\") pod \"79a4216a-bb7c-4f9a-be38-fd37aeb60bfb\" (UID: \"79a4216a-bb7c-4f9a-be38-fd37aeb60bfb\") " Mar 17 21:52:57.745511 kubelet[1934]: I0317 21:52:57.745344 1934 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/79a4216a-bb7c-4f9a-be38-fd37aeb60bfb-cilium-config-path\") pod \"79a4216a-bb7c-4f9a-be38-fd37aeb60bfb\" (UID: \"79a4216a-bb7c-4f9a-be38-fd37aeb60bfb\") " Mar 17 21:52:57.745511 kubelet[1934]: I0317 21:52:57.745435 1934 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vhps6\" (UniqueName: \"kubernetes.io/projected/79a4216a-bb7c-4f9a-be38-fd37aeb60bfb-kube-api-access-vhps6\") pod \"79a4216a-bb7c-4f9a-be38-fd37aeb60bfb\" (UID: \"79a4216a-bb7c-4f9a-be38-fd37aeb60bfb\") " Mar 17 21:52:57.745511 kubelet[1934]: I0317 21:52:57.745472 
1934 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/79a4216a-bb7c-4f9a-be38-fd37aeb60bfb-bpf-maps\") pod \"79a4216a-bb7c-4f9a-be38-fd37aeb60bfb\" (UID: \"79a4216a-bb7c-4f9a-be38-fd37aeb60bfb\") " Mar 17 21:52:57.745783 kubelet[1934]: I0317 21:52:57.745537 1934 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/79a4216a-bb7c-4f9a-be38-fd37aeb60bfb-host-proc-sys-net\") pod \"79a4216a-bb7c-4f9a-be38-fd37aeb60bfb\" (UID: \"79a4216a-bb7c-4f9a-be38-fd37aeb60bfb\") " Mar 17 21:52:57.745783 kubelet[1934]: I0317 21:52:57.745597 1934 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/79a4216a-bb7c-4f9a-be38-fd37aeb60bfb-cilium-cgroup\") pod \"79a4216a-bb7c-4f9a-be38-fd37aeb60bfb\" (UID: \"79a4216a-bb7c-4f9a-be38-fd37aeb60bfb\") " Mar 17 21:52:57.745783 kubelet[1934]: I0317 21:52:57.745637 1934 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/79a4216a-bb7c-4f9a-be38-fd37aeb60bfb-cni-path\") pod \"79a4216a-bb7c-4f9a-be38-fd37aeb60bfb\" (UID: \"79a4216a-bb7c-4f9a-be38-fd37aeb60bfb\") " Mar 17 21:52:57.746298 kubelet[1934]: I0317 21:52:57.746195 1934 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/79a4216a-bb7c-4f9a-be38-fd37aeb60bfb-hostproc\") pod \"79a4216a-bb7c-4f9a-be38-fd37aeb60bfb\" (UID: \"79a4216a-bb7c-4f9a-be38-fd37aeb60bfb\") " Mar 17 21:52:57.746298 kubelet[1934]: I0317 21:52:57.746260 1934 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/79a4216a-bb7c-4f9a-be38-fd37aeb60bfb-xtables-lock\") pod \"79a4216a-bb7c-4f9a-be38-fd37aeb60bfb\" (UID: \"79a4216a-bb7c-4f9a-be38-fd37aeb60bfb\") " Mar 17 21:52:57.746640 kubelet[1934]: I0317 21:52:57.746315 1934 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8r6n\" (UniqueName: \"kubernetes.io/projected/c356167f-8513-4fd0-a2e1-4754e4d0ef94-kube-api-access-l8r6n\") pod \"c356167f-8513-4fd0-a2e1-4754e4d0ef94\" (UID: \"c356167f-8513-4fd0-a2e1-4754e4d0ef94\") " Mar 17 21:52:57.746640 kubelet[1934]: I0317 21:52:57.746411 1934 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/79a4216a-bb7c-4f9a-be38-fd37aeb60bfb-lib-modules\") pod \"79a4216a-bb7c-4f9a-be38-fd37aeb60bfb\" (UID: \"79a4216a-bb7c-4f9a-be38-fd37aeb60bfb\") " Mar 17 21:52:57.746640 kubelet[1934]: I0317 21:52:57.746476 1934 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c356167f-8513-4fd0-a2e1-4754e4d0ef94-cilium-config-path\") pod \"c356167f-8513-4fd0-a2e1-4754e4d0ef94\" (UID: \"c356167f-8513-4fd0-a2e1-4754e4d0ef94\") " Mar 17 21:52:57.747431 kubelet[1934]: I0317 21:52:57.746517 1934 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/79a4216a-bb7c-4f9a-be38-fd37aeb60bfb-etc-cni-netd\") pod \"79a4216a-bb7c-4f9a-be38-fd37aeb60bfb\" (UID: \"79a4216a-bb7c-4f9a-be38-fd37aeb60bfb\") " Mar 17 21:52:57.747881 kubelet[1934]: I0317 21:52:57.747838 1934 reconciler_common.go:162] "operationExecutor.UnmountVolume 
started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/79a4216a-bb7c-4f9a-be38-fd37aeb60bfb-cilium-run\") pod \"79a4216a-bb7c-4f9a-be38-fd37aeb60bfb\" (UID: \"79a4216a-bb7c-4f9a-be38-fd37aeb60bfb\") " Mar 17 21:52:57.754178 kubelet[1934]: I0317 21:52:57.747655 1934 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/79a4216a-bb7c-4f9a-be38-fd37aeb60bfb-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "79a4216a-bb7c-4f9a-be38-fd37aeb60bfb" (UID: "79a4216a-bb7c-4f9a-be38-fd37aeb60bfb"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 21:52:57.754761 kubelet[1934]: I0317 21:52:57.754718 1934 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/79a4216a-bb7c-4f9a-be38-fd37aeb60bfb-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "79a4216a-bb7c-4f9a-be38-fd37aeb60bfb" (UID: "79a4216a-bb7c-4f9a-be38-fd37aeb60bfb"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 21:52:57.754883 kubelet[1934]: I0317 21:52:57.754791 1934 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/79a4216a-bb7c-4f9a-be38-fd37aeb60bfb-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "79a4216a-bb7c-4f9a-be38-fd37aeb60bfb" (UID: "79a4216a-bb7c-4f9a-be38-fd37aeb60bfb"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 21:52:57.755022 kubelet[1934]: I0317 21:52:57.754989 1934 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/79a4216a-bb7c-4f9a-be38-fd37aeb60bfb-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "79a4216a-bb7c-4f9a-be38-fd37aeb60bfb" (UID: "79a4216a-bb7c-4f9a-be38-fd37aeb60bfb"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 21:52:57.755113 kubelet[1934]: I0317 21:52:57.755043 1934 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/79a4216a-bb7c-4f9a-be38-fd37aeb60bfb-cni-path" (OuterVolumeSpecName: "cni-path") pod "79a4216a-bb7c-4f9a-be38-fd37aeb60bfb" (UID: "79a4216a-bb7c-4f9a-be38-fd37aeb60bfb"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 21:52:57.757793 kubelet[1934]: I0317 21:52:57.746098 1934 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/79a4216a-bb7c-4f9a-be38-fd37aeb60bfb-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "79a4216a-bb7c-4f9a-be38-fd37aeb60bfb" (UID: "79a4216a-bb7c-4f9a-be38-fd37aeb60bfb"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 21:52:57.757898 kubelet[1934]: I0317 21:52:57.752867 1934 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/79a4216a-bb7c-4f9a-be38-fd37aeb60bfb-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "79a4216a-bb7c-4f9a-be38-fd37aeb60bfb" (UID: "79a4216a-bb7c-4f9a-be38-fd37aeb60bfb"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 21:52:57.757898 kubelet[1934]: I0317 21:52:57.753072 1934 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/79a4216a-bb7c-4f9a-be38-fd37aeb60bfb-hostproc" (OuterVolumeSpecName: "hostproc") pod "79a4216a-bb7c-4f9a-be38-fd37aeb60bfb" (UID: "79a4216a-bb7c-4f9a-be38-fd37aeb60bfb"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 21:52:57.757898 kubelet[1934]: I0317 21:52:57.753494 1934 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/79a4216a-bb7c-4f9a-be38-fd37aeb60bfb-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "79a4216a-bb7c-4f9a-be38-fd37aeb60bfb" (UID: "79a4216a-bb7c-4f9a-be38-fd37aeb60bfb"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 21:52:57.757898 kubelet[1934]: I0317 21:52:57.753908 1934 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/79a4216a-bb7c-4f9a-be38-fd37aeb60bfb-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "79a4216a-bb7c-4f9a-be38-fd37aeb60bfb" (UID: "79a4216a-bb7c-4f9a-be38-fd37aeb60bfb"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 21:52:57.768891 kubelet[1934]: I0317 21:52:57.768834 1934 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c356167f-8513-4fd0-a2e1-4754e4d0ef94-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c356167f-8513-4fd0-a2e1-4754e4d0ef94" (UID: "c356167f-8513-4fd0-a2e1-4754e4d0ef94"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 17 21:52:57.769055 kubelet[1934]: I0317 21:52:57.768925 1934 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79a4216a-bb7c-4f9a-be38-fd37aeb60bfb-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "79a4216a-bb7c-4f9a-be38-fd37aeb60bfb" (UID: "79a4216a-bb7c-4f9a-be38-fd37aeb60bfb"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 17 21:52:57.769696 kubelet[1934]: I0317 21:52:57.769636 1934 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79a4216a-bb7c-4f9a-be38-fd37aeb60bfb-kube-api-access-vhps6" (OuterVolumeSpecName: "kube-api-access-vhps6") pod "79a4216a-bb7c-4f9a-be38-fd37aeb60bfb" (UID: "79a4216a-bb7c-4f9a-be38-fd37aeb60bfb"). InnerVolumeSpecName "kube-api-access-vhps6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 17 21:52:57.769836 kubelet[1934]: I0317 21:52:57.769802 1934 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79a4216a-bb7c-4f9a-be38-fd37aeb60bfb-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "79a4216a-bb7c-4f9a-be38-fd37aeb60bfb" (UID: "79a4216a-bb7c-4f9a-be38-fd37aeb60bfb"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 17 21:52:57.770291 kubelet[1934]: I0317 21:52:57.770209 1934 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79a4216a-bb7c-4f9a-be38-fd37aeb60bfb-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "79a4216a-bb7c-4f9a-be38-fd37aeb60bfb" (UID: "79a4216a-bb7c-4f9a-be38-fd37aeb60bfb"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 17 21:52:57.773014 kubelet[1934]: I0317 21:52:57.772972 1934 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c356167f-8513-4fd0-a2e1-4754e4d0ef94-kube-api-access-l8r6n" (OuterVolumeSpecName: "kube-api-access-l8r6n") pod "c356167f-8513-4fd0-a2e1-4754e4d0ef94" (UID: "c356167f-8513-4fd0-a2e1-4754e4d0ef94"). InnerVolumeSpecName "kube-api-access-l8r6n". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 17 21:52:57.854242 kubelet[1934]: I0317 21:52:57.854168 1934 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/79a4216a-bb7c-4f9a-be38-fd37aeb60bfb-bpf-maps\") on node \"srv-87dtj.gb1.brightbox.com\" DevicePath \"\"" Mar 17 21:52:57.854242 kubelet[1934]: I0317 21:52:57.854230 1934 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/79a4216a-bb7c-4f9a-be38-fd37aeb60bfb-host-proc-sys-net\") on node \"srv-87dtj.gb1.brightbox.com\" DevicePath \"\"" Mar 17 21:52:57.854242 kubelet[1934]: I0317 21:52:57.854251 1934 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/79a4216a-bb7c-4f9a-be38-fd37aeb60bfb-cilium-cgroup\") on node \"srv-87dtj.gb1.brightbox.com\" DevicePath \"\"" Mar 17 21:52:57.854648 kubelet[1934]: I0317 21:52:57.854280 1934 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/79a4216a-bb7c-4f9a-be38-fd37aeb60bfb-cni-path\") on node \"srv-87dtj.gb1.brightbox.com\" DevicePath \"\"" Mar 17 21:52:57.854648 kubelet[1934]: I0317 21:52:57.854298 1934 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/79a4216a-bb7c-4f9a-be38-fd37aeb60bfb-hostproc\") on node \"srv-87dtj.gb1.brightbox.com\" DevicePath \"\"" Mar 17 21:52:57.854648 kubelet[1934]: I0317 21:52:57.854315 1934 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/79a4216a-bb7c-4f9a-be38-fd37aeb60bfb-xtables-lock\") on node \"srv-87dtj.gb1.brightbox.com\" DevicePath \"\"" Mar 17 21:52:57.854648 kubelet[1934]: I0317 21:52:57.854352 1934 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l8r6n\" (UniqueName: \"kubernetes.io/projected/c356167f-8513-4fd0-a2e1-4754e4d0ef94-kube-api-access-l8r6n\") on node \"srv-87dtj.gb1.brightbox.com\" DevicePath \"\"" Mar 17 21:52:57.854648 kubelet[1934]: I0317 21:52:57.854398 1934 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/79a4216a-bb7c-4f9a-be38-fd37aeb60bfb-lib-modules\") on node \"srv-87dtj.gb1.brightbox.com\" DevicePath \"\"" Mar 17 21:52:57.854648 kubelet[1934]: I0317 21:52:57.854423 1934 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c356167f-8513-4fd0-a2e1-4754e4d0ef94-cilium-config-path\") on node \"srv-87dtj.gb1.brightbox.com\" DevicePath \"\"" Mar 17 21:52:57.854648 kubelet[1934]: I0317 21:52:57.854441 1934 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/79a4216a-bb7c-4f9a-be38-fd37aeb60bfb-etc-cni-netd\") on node \"srv-87dtj.gb1.brightbox.com\" DevicePath \"\"" Mar 17 21:52:57.854648 kubelet[1934]: I0317 21:52:57.854490 1934 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/79a4216a-bb7c-4f9a-be38-fd37aeb60bfb-cilium-run\") on node \"srv-87dtj.gb1.brightbox.com\" DevicePath \"\"" Mar 17 21:52:57.855204 kubelet[1934]: I0317 21:52:57.854508 1934 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/79a4216a-bb7c-4f9a-be38-fd37aeb60bfb-hubble-tls\") on node \"srv-87dtj.gb1.brightbox.com\" DevicePath \"\"" Mar 17 21:52:57.855204 kubelet[1934]: I0317 21:52:57.854529 1934 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/79a4216a-bb7c-4f9a-be38-fd37aeb60bfb-host-proc-sys-kernel\") on node \"srv-87dtj.gb1.brightbox.com\" DevicePath \"\"" Mar 17 21:52:57.855204 kubelet[1934]: I0317 21:52:57.854548 1934 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/79a4216a-bb7c-4f9a-be38-fd37aeb60bfb-clustermesh-secrets\") on node \"srv-87dtj.gb1.brightbox.com\" DevicePath \"\"" Mar 17 21:52:57.855204 kubelet[1934]: I0317 21:52:57.854567 1934 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/79a4216a-bb7c-4f9a-be38-fd37aeb60bfb-cilium-config-path\") on node \"srv-87dtj.gb1.brightbox.com\" DevicePath \"\"" Mar 17 21:52:57.855204 kubelet[1934]: I0317 21:52:57.854585 1934 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vhps6\" (UniqueName: \"kubernetes.io/projected/79a4216a-bb7c-4f9a-be38-fd37aeb60bfb-kube-api-access-vhps6\") on node \"srv-87dtj.gb1.brightbox.com\" DevicePath \"\"" Mar 17 21:52:57.955004 kubelet[1934]: I0317 21:52:57.954933 1934 scope.go:117] "RemoveContainer" containerID="64505c26da43f3c8848ffd6635f6d3981df7285968f000e0278a2460f762ab98" Mar 17 21:52:57.959732 env[1194]: time="2025-03-17T21:52:57.959659249Z" level=info msg="RemoveContainer for \"64505c26da43f3c8848ffd6635f6d3981df7285968f000e0278a2460f762ab98\"" Mar 17 21:52:57.961990 systemd[1]: Removed slice kubepods-besteffort-podc356167f_8513_4fd0_a2e1_4754e4d0ef94.slice. 
Mar 17 21:52:57.966450 env[1194]: time="2025-03-17T21:52:57.966395768Z" level=info msg="RemoveContainer for \"64505c26da43f3c8848ffd6635f6d3981df7285968f000e0278a2460f762ab98\" returns successfully" Mar 17 21:52:57.967193 kubelet[1934]: I0317 21:52:57.967148 1934 scope.go:117] "RemoveContainer" containerID="64505c26da43f3c8848ffd6635f6d3981df7285968f000e0278a2460f762ab98" Mar 17 21:52:57.970005 env[1194]: time="2025-03-17T21:52:57.969793590Z" level=error msg="ContainerStatus for \"64505c26da43f3c8848ffd6635f6d3981df7285968f000e0278a2460f762ab98\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"64505c26da43f3c8848ffd6635f6d3981df7285968f000e0278a2460f762ab98\": not found" Mar 17 21:52:57.972301 kubelet[1934]: E0317 21:52:57.972224 1934 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"64505c26da43f3c8848ffd6635f6d3981df7285968f000e0278a2460f762ab98\": not found" containerID="64505c26da43f3c8848ffd6635f6d3981df7285968f000e0278a2460f762ab98" Mar 17 21:52:57.996530 kubelet[1934]: I0317 21:52:57.977323 1934 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"64505c26da43f3c8848ffd6635f6d3981df7285968f000e0278a2460f762ab98"} err="failed to get container status \"64505c26da43f3c8848ffd6635f6d3981df7285968f000e0278a2460f762ab98\": rpc error: code = NotFound desc = an error occurred when try to find container \"64505c26da43f3c8848ffd6635f6d3981df7285968f000e0278a2460f762ab98\": not found" Mar 17 21:52:57.996530 kubelet[1934]: I0317 21:52:57.996510 1934 scope.go:117] "RemoveContainer" containerID="c4c5aa4cb2af42bb7ca6f73e085691026cc7ede985ace2df11913684282e85cb" Mar 17 21:52:57.999274 env[1194]: time="2025-03-17T21:52:57.999201833Z" level=info msg="RemoveContainer for \"c4c5aa4cb2af42bb7ca6f73e085691026cc7ede985ace2df11913684282e85cb\"" Mar 17 21:52:58.003126 systemd[1]: Removed slice kubepods-burstable-pod79a4216a_bb7c_4f9a_be38_fd37aeb60bfb.slice. Mar 17 21:52:58.003276 systemd[1]: kubepods-burstable-pod79a4216a_bb7c_4f9a_be38_fd37aeb60bfb.slice: Consumed 11.247s CPU time. 
Mar 17 21:52:58.005872 env[1194]: time="2025-03-17T21:52:58.005809799Z" level=info msg="RemoveContainer for \"c4c5aa4cb2af42bb7ca6f73e085691026cc7ede985ace2df11913684282e85cb\" returns successfully" Mar 17 21:52:58.011615 kubelet[1934]: I0317 21:52:58.011557 1934 scope.go:117] "RemoveContainer" containerID="0af2c9e2dddda8a64f02456161a9230c0027b60957b58cd31d5fcfb4cb6d3f7a" Mar 17 21:52:58.013817 env[1194]: time="2025-03-17T21:52:58.013676664Z" level=info msg="RemoveContainer for \"0af2c9e2dddda8a64f02456161a9230c0027b60957b58cd31d5fcfb4cb6d3f7a\"" Mar 17 21:52:58.025174 env[1194]: time="2025-03-17T21:52:58.025094754Z" level=info msg="RemoveContainer for \"0af2c9e2dddda8a64f02456161a9230c0027b60957b58cd31d5fcfb4cb6d3f7a\" returns successfully" Mar 17 21:52:58.026247 kubelet[1934]: I0317 21:52:58.026206 1934 scope.go:117] "RemoveContainer" containerID="8ad88bae0ab8833e8b88220607ff7307fb0e48a52d56ae7ca3cfaaa6a23cb8d6" Mar 17 21:52:58.032109 env[1194]: time="2025-03-17T21:52:58.031559875Z" level=info msg="RemoveContainer for \"8ad88bae0ab8833e8b88220607ff7307fb0e48a52d56ae7ca3cfaaa6a23cb8d6\"" Mar 17 21:52:58.035669 env[1194]: time="2025-03-17T21:52:58.035460458Z" level=info msg="RemoveContainer for \"8ad88bae0ab8833e8b88220607ff7307fb0e48a52d56ae7ca3cfaaa6a23cb8d6\" returns successfully" Mar 17 21:52:58.036078 kubelet[1934]: I0317 21:52:58.036049 1934 scope.go:117] "RemoveContainer" containerID="cf7b5b7a6cf465bedf5e092a40790af22b2ec0251d3d361f98e36a8a12f3c198" Mar 17 21:52:58.039164 env[1194]: time="2025-03-17T21:52:58.039109007Z" level=info msg="RemoveContainer for \"cf7b5b7a6cf465bedf5e092a40790af22b2ec0251d3d361f98e36a8a12f3c198\"" Mar 17 21:52:58.043650 env[1194]: time="2025-03-17T21:52:58.042273524Z" level=info msg="RemoveContainer for \"cf7b5b7a6cf465bedf5e092a40790af22b2ec0251d3d361f98e36a8a12f3c198\" returns successfully" Mar 17 21:52:58.043807 kubelet[1934]: I0317 21:52:58.042635 1934 scope.go:117] "RemoveContainer" containerID="0a49f190f561dacdd3a4324a067210883097595ef7b4f8fa5f862ee53ecefea5" Mar 17 21:52:58.047762 env[1194]: time="2025-03-17T21:52:58.046623363Z" level=info msg="RemoveContainer for \"0a49f190f561dacdd3a4324a067210883097595ef7b4f8fa5f862ee53ecefea5\"" Mar 17 21:52:58.050787 env[1194]: time="2025-03-17T21:52:58.050705696Z" level=info msg="RemoveContainer for \"0a49f190f561dacdd3a4324a067210883097595ef7b4f8fa5f862ee53ecefea5\" returns successfully" Mar 17 21:52:58.051002 kubelet[1934]: I0317 21:52:58.050961 1934 scope.go:117] "RemoveContainer" containerID="c4c5aa4cb2af42bb7ca6f73e085691026cc7ede985ace2df11913684282e85cb" Mar 17 21:52:58.053371 env[1194]: time="2025-03-17T21:52:58.051398278Z" level=error msg="ContainerStatus for \"c4c5aa4cb2af42bb7ca6f73e085691026cc7ede985ace2df11913684282e85cb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c4c5aa4cb2af42bb7ca6f73e085691026cc7ede985ace2df11913684282e85cb\": not found" Mar 17 21:52:58.053523 kubelet[1934]: E0317 21:52:58.051682 1934 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c4c5aa4cb2af42bb7ca6f73e085691026cc7ede985ace2df11913684282e85cb\": not found" containerID="c4c5aa4cb2af42bb7ca6f73e085691026cc7ede985ace2df11913684282e85cb" Mar 17 21:52:58.053523 kubelet[1934]: I0317 21:52:58.051739 1934 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c4c5aa4cb2af42bb7ca6f73e085691026cc7ede985ace2df11913684282e85cb"} err="failed to 
get container status \"c4c5aa4cb2af42bb7ca6f73e085691026cc7ede985ace2df11913684282e85cb\": rpc error: code = NotFound desc = an error occurred when try to find container \"c4c5aa4cb2af42bb7ca6f73e085691026cc7ede985ace2df11913684282e85cb\": not found" Mar 17 21:52:58.053523 kubelet[1934]: I0317 21:52:58.051787 1934 scope.go:117] "RemoveContainer" containerID="0af2c9e2dddda8a64f02456161a9230c0027b60957b58cd31d5fcfb4cb6d3f7a" Mar 17 21:52:58.054173 env[1194]: time="2025-03-17T21:52:58.054064102Z" level=error msg="ContainerStatus for \"0af2c9e2dddda8a64f02456161a9230c0027b60957b58cd31d5fcfb4cb6d3f7a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0af2c9e2dddda8a64f02456161a9230c0027b60957b58cd31d5fcfb4cb6d3f7a\": not found" Mar 17 21:52:58.056247 kubelet[1934]: E0317 21:52:58.054906 1934 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0af2c9e2dddda8a64f02456161a9230c0027b60957b58cd31d5fcfb4cb6d3f7a\": not found" containerID="0af2c9e2dddda8a64f02456161a9230c0027b60957b58cd31d5fcfb4cb6d3f7a" Mar 17 21:52:58.056247 kubelet[1934]: I0317 21:52:58.054969 1934 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0af2c9e2dddda8a64f02456161a9230c0027b60957b58cd31d5fcfb4cb6d3f7a"} err="failed to get container status \"0af2c9e2dddda8a64f02456161a9230c0027b60957b58cd31d5fcfb4cb6d3f7a\": rpc error: code = NotFound desc = an error occurred when try to find container \"0af2c9e2dddda8a64f02456161a9230c0027b60957b58cd31d5fcfb4cb6d3f7a\": not found" Mar 17 21:52:58.056247 kubelet[1934]: I0317 21:52:58.055023 1934 scope.go:117] "RemoveContainer" containerID="8ad88bae0ab8833e8b88220607ff7307fb0e48a52d56ae7ca3cfaaa6a23cb8d6" Mar 17 21:52:58.056905 env[1194]: time="2025-03-17T21:52:58.056795727Z" level=error msg="ContainerStatus for \"8ad88bae0ab8833e8b88220607ff7307fb0e48a52d56ae7ca3cfaaa6a23cb8d6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8ad88bae0ab8833e8b88220607ff7307fb0e48a52d56ae7ca3cfaaa6a23cb8d6\": not found" Mar 17 21:52:58.059243 kubelet[1934]: E0317 21:52:58.057293 1934 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8ad88bae0ab8833e8b88220607ff7307fb0e48a52d56ae7ca3cfaaa6a23cb8d6\": not found" containerID="8ad88bae0ab8833e8b88220607ff7307fb0e48a52d56ae7ca3cfaaa6a23cb8d6" Mar 17 21:52:58.059243 kubelet[1934]: I0317 21:52:58.057418 1934 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8ad88bae0ab8833e8b88220607ff7307fb0e48a52d56ae7ca3cfaaa6a23cb8d6"} err="failed to get container status \"8ad88bae0ab8833e8b88220607ff7307fb0e48a52d56ae7ca3cfaaa6a23cb8d6\": rpc error: code = NotFound desc = an error occurred when try to find container \"8ad88bae0ab8833e8b88220607ff7307fb0e48a52d56ae7ca3cfaaa6a23cb8d6\": not found" Mar 17 21:52:58.059243 kubelet[1934]: I0317 21:52:58.057445 1934 scope.go:117] "RemoveContainer" containerID="cf7b5b7a6cf465bedf5e092a40790af22b2ec0251d3d361f98e36a8a12f3c198" Mar 17 21:52:58.060184 env[1194]: time="2025-03-17T21:52:58.060050760Z" level=error msg="ContainerStatus for \"cf7b5b7a6cf465bedf5e092a40790af22b2ec0251d3d361f98e36a8a12f3c198\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"cf7b5b7a6cf465bedf5e092a40790af22b2ec0251d3d361f98e36a8a12f3c198\": not found" Mar 17 21:52:58.060474 kubelet[1934]: E0317 21:52:58.060409 1934 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cf7b5b7a6cf465bedf5e092a40790af22b2ec0251d3d361f98e36a8a12f3c198\": not found" containerID="cf7b5b7a6cf465bedf5e092a40790af22b2ec0251d3d361f98e36a8a12f3c198" Mar 17 21:52:58.060573 kubelet[1934]: I0317 21:52:58.060479 1934 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cf7b5b7a6cf465bedf5e092a40790af22b2ec0251d3d361f98e36a8a12f3c198"} err="failed to get container status \"cf7b5b7a6cf465bedf5e092a40790af22b2ec0251d3d361f98e36a8a12f3c198\": rpc error: code = NotFound desc = an error occurred when try to find container \"cf7b5b7a6cf465bedf5e092a40790af22b2ec0251d3d361f98e36a8a12f3c198\": not found" Mar 17 21:52:58.060573 kubelet[1934]: I0317 21:52:58.060504 1934 scope.go:117] "RemoveContainer" containerID="0a49f190f561dacdd3a4324a067210883097595ef7b4f8fa5f862ee53ecefea5" Mar 17 21:52:58.060954 env[1194]: time="2025-03-17T21:52:58.060889429Z" level=error msg="ContainerStatus for \"0a49f190f561dacdd3a4324a067210883097595ef7b4f8fa5f862ee53ecefea5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0a49f190f561dacdd3a4324a067210883097595ef7b4f8fa5f862ee53ecefea5\": not found" Mar 17 21:52:58.061311 kubelet[1934]: E0317 21:52:58.061272 1934 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0a49f190f561dacdd3a4324a067210883097595ef7b4f8fa5f862ee53ecefea5\": not found" containerID="0a49f190f561dacdd3a4324a067210883097595ef7b4f8fa5f862ee53ecefea5" Mar 17 21:52:58.061523 kubelet[1934]: I0317 21:52:58.061482 1934 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0a49f190f561dacdd3a4324a067210883097595ef7b4f8fa5f862ee53ecefea5"} err="failed to get container status \"0a49f190f561dacdd3a4324a067210883097595ef7b4f8fa5f862ee53ecefea5\": rpc error: code = NotFound desc = an error occurred when try to find container \"0a49f190f561dacdd3a4324a067210883097595ef7b4f8fa5f862ee53ecefea5\": not found" Mar 17 21:52:58.298195 kubelet[1934]: I0317 21:52:58.298008 1934 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79a4216a-bb7c-4f9a-be38-fd37aeb60bfb" path="/var/lib/kubelet/pods/79a4216a-bb7c-4f9a-be38-fd37aeb60bfb/volumes" Mar 17 21:52:58.300775 kubelet[1934]: I0317 21:52:58.300744 1934 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c356167f-8513-4fd0-a2e1-4754e4d0ef94" path="/var/lib/kubelet/pods/c356167f-8513-4fd0-a2e1-4754e4d0ef94/volumes" Mar 17 21:52:58.399925 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c4c5aa4cb2af42bb7ca6f73e085691026cc7ede985ace2df11913684282e85cb-rootfs.mount: Deactivated successfully. Mar 17 21:52:58.400447 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-30a72eeeb5b8d1b9d2c04876a5442bda9c71b26cc619ffd614d289d63f5803dd-rootfs.mount: Deactivated successfully. Mar 17 21:52:58.400744 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-30a72eeeb5b8d1b9d2c04876a5442bda9c71b26cc619ffd614d289d63f5803dd-shm.mount: Deactivated successfully. 
Mar 17 21:52:58.401838 systemd[1]: var-lib-kubelet-pods-79a4216a\x2dbb7c\x2d4f9a\x2dbe38\x2dfd37aeb60bfb-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvhps6.mount: Deactivated successfully. Mar 17 21:52:58.401975 systemd[1]: var-lib-kubelet-pods-79a4216a\x2dbb7c\x2d4f9a\x2dbe38\x2dfd37aeb60bfb-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 17 21:52:58.402077 systemd[1]: var-lib-kubelet-pods-79a4216a\x2dbb7c\x2d4f9a\x2dbe38\x2dfd37aeb60bfb-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 17 21:52:58.402198 systemd[1]: var-lib-kubelet-pods-c356167f\x2d8513\x2d4fd0\x2da2e1\x2d4754e4d0ef94-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dl8r6n.mount: Deactivated successfully. Mar 17 21:52:59.396062 sshd[3599]: pam_unix(sshd:session): session closed for user core Mar 17 21:52:59.400554 systemd-logind[1183]: Session 21 logged out. Waiting for processes to exit. Mar 17 21:52:59.401718 systemd[1]: sshd@20-10.230.29.198:22-139.178.89.65:52006.service: Deactivated successfully. Mar 17 21:52:59.402836 systemd[1]: session-21.scope: Deactivated successfully. Mar 17 21:52:59.404702 systemd-logind[1183]: Removed session 21. Mar 17 21:52:59.546145 systemd[1]: Started sshd@21-10.230.29.198:22-139.178.89.65:52010.service. Mar 17 21:53:00.245117 env[1194]: time="2025-03-17T21:53:00.244659504Z" level=info msg="StopPodSandbox for \"30a72eeeb5b8d1b9d2c04876a5442bda9c71b26cc619ffd614d289d63f5803dd\"" Mar 17 21:53:00.245117 env[1194]: time="2025-03-17T21:53:00.244919252Z" level=info msg="TearDown network for sandbox \"30a72eeeb5b8d1b9d2c04876a5442bda9c71b26cc619ffd614d289d63f5803dd\" successfully" Mar 17 21:53:00.245117 env[1194]: time="2025-03-17T21:53:00.244988222Z" level=info msg="StopPodSandbox for \"30a72eeeb5b8d1b9d2c04876a5442bda9c71b26cc619ffd614d289d63f5803dd\" returns successfully" Mar 17 21:53:00.247782 env[1194]: time="2025-03-17T21:53:00.246863392Z" level=info msg="RemovePodSandbox for \"30a72eeeb5b8d1b9d2c04876a5442bda9c71b26cc619ffd614d289d63f5803dd\"" Mar 17 21:53:00.247782 env[1194]: time="2025-03-17T21:53:00.246915626Z" level=info msg="Forcibly stopping sandbox \"30a72eeeb5b8d1b9d2c04876a5442bda9c71b26cc619ffd614d289d63f5803dd\"" Mar 17 21:53:00.247782 env[1194]: time="2025-03-17T21:53:00.247103294Z" level=info msg="TearDown network for sandbox \"30a72eeeb5b8d1b9d2c04876a5442bda9c71b26cc619ffd614d289d63f5803dd\" successfully" Mar 17 21:53:00.251700 env[1194]: time="2025-03-17T21:53:00.251660256Z" level=info msg="RemovePodSandbox \"30a72eeeb5b8d1b9d2c04876a5442bda9c71b26cc619ffd614d289d63f5803dd\" returns successfully" Mar 17 21:53:00.252382 env[1194]: time="2025-03-17T21:53:00.252312760Z" level=info msg="StopPodSandbox for \"7f9a034bbf88de7dd88a1d025d423c763c9597a37c8774ad59a88d8879c45ea0\"" Mar 17 21:53:00.252496 env[1194]: time="2025-03-17T21:53:00.252443645Z" level=info msg="TearDown network for sandbox \"7f9a034bbf88de7dd88a1d025d423c763c9597a37c8774ad59a88d8879c45ea0\" successfully" Mar 17 21:53:00.252572 env[1194]: time="2025-03-17T21:53:00.252495616Z" level=info msg="StopPodSandbox for \"7f9a034bbf88de7dd88a1d025d423c763c9597a37c8774ad59a88d8879c45ea0\" returns successfully" Mar 17 21:53:00.253025 env[1194]: time="2025-03-17T21:53:00.252989057Z" level=info msg="RemovePodSandbox for \"7f9a034bbf88de7dd88a1d025d423c763c9597a37c8774ad59a88d8879c45ea0\"" Mar 17 21:53:00.253117 env[1194]: time="2025-03-17T21:53:00.253036695Z" level=info msg="Forcibly stopping sandbox 
\"7f9a034bbf88de7dd88a1d025d423c763c9597a37c8774ad59a88d8879c45ea0\"" Mar 17 21:53:00.253183 env[1194]: time="2025-03-17T21:53:00.253133387Z" level=info msg="TearDown network for sandbox \"7f9a034bbf88de7dd88a1d025d423c763c9597a37c8774ad59a88d8879c45ea0\" successfully" Mar 17 21:53:00.256306 env[1194]: time="2025-03-17T21:53:00.256265566Z" level=info msg="RemovePodSandbox \"7f9a034bbf88de7dd88a1d025d423c763c9597a37c8774ad59a88d8879c45ea0\" returns successfully" Mar 17 21:53:00.450502 kubelet[1934]: E0317 21:53:00.449268 1934 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 21:53:00.452714 sshd[3766]: Accepted publickey for core from 139.178.89.65 port 52010 ssh2: RSA SHA256:zyhiPLENj58svNToN4BOPPS+na2TgK0IE73Z79n4eiY Mar 17 21:53:00.455087 sshd[3766]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 21:53:00.463236 systemd-logind[1183]: New session 22 of user core. Mar 17 21:53:00.464110 systemd[1]: Started session-22.scope. Mar 17 21:53:02.078326 kubelet[1934]: I0317 21:53:02.078225 1934 memory_manager.go:355] "RemoveStaleState removing state" podUID="c356167f-8513-4fd0-a2e1-4754e4d0ef94" containerName="cilium-operator" Mar 17 21:53:02.078326 kubelet[1934]: I0317 21:53:02.078297 1934 memory_manager.go:355] "RemoveStaleState removing state" podUID="79a4216a-bb7c-4f9a-be38-fd37aeb60bfb" containerName="cilium-agent" Mar 17 21:53:02.100011 systemd[1]: Created slice kubepods-burstable-pod45ef0493_ef09_4f07_90c8_c41918bfcc7f.slice. Mar 17 21:53:02.194549 kubelet[1934]: I0317 21:53:02.194497 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/45ef0493-ef09-4f07-90c8-c41918bfcc7f-bpf-maps\") pod \"cilium-hgx6w\" (UID: \"45ef0493-ef09-4f07-90c8-c41918bfcc7f\") " pod="kube-system/cilium-hgx6w" Mar 17 21:53:02.194947 kubelet[1934]: I0317 21:53:02.194893 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/45ef0493-ef09-4f07-90c8-c41918bfcc7f-etc-cni-netd\") pod \"cilium-hgx6w\" (UID: \"45ef0493-ef09-4f07-90c8-c41918bfcc7f\") " pod="kube-system/cilium-hgx6w" Mar 17 21:53:02.195189 kubelet[1934]: I0317 21:53:02.195148 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ww2ww\" (UniqueName: \"kubernetes.io/projected/45ef0493-ef09-4f07-90c8-c41918bfcc7f-kube-api-access-ww2ww\") pod \"cilium-hgx6w\" (UID: \"45ef0493-ef09-4f07-90c8-c41918bfcc7f\") " pod="kube-system/cilium-hgx6w" Mar 17 21:53:02.195453 kubelet[1934]: I0317 21:53:02.195424 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/45ef0493-ef09-4f07-90c8-c41918bfcc7f-cilium-run\") pod \"cilium-hgx6w\" (UID: \"45ef0493-ef09-4f07-90c8-c41918bfcc7f\") " pod="kube-system/cilium-hgx6w" Mar 17 21:53:02.195649 kubelet[1934]: I0317 21:53:02.195602 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/45ef0493-ef09-4f07-90c8-c41918bfcc7f-xtables-lock\") pod \"cilium-hgx6w\" (UID: \"45ef0493-ef09-4f07-90c8-c41918bfcc7f\") " pod="kube-system/cilium-hgx6w" Mar 17 21:53:02.195872 kubelet[1934]: I0317 21:53:02.195846 1934 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/45ef0493-ef09-4f07-90c8-c41918bfcc7f-cni-path\") pod \"cilium-hgx6w\" (UID: \"45ef0493-ef09-4f07-90c8-c41918bfcc7f\") " pod="kube-system/cilium-hgx6w" Mar 17 21:53:02.196074 kubelet[1934]: I0317 21:53:02.196036 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/45ef0493-ef09-4f07-90c8-c41918bfcc7f-lib-modules\") pod \"cilium-hgx6w\" (UID: \"45ef0493-ef09-4f07-90c8-c41918bfcc7f\") " pod="kube-system/cilium-hgx6w" Mar 17 21:53:02.196252 kubelet[1934]: I0317 21:53:02.196226 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/45ef0493-ef09-4f07-90c8-c41918bfcc7f-clustermesh-secrets\") pod \"cilium-hgx6w\" (UID: \"45ef0493-ef09-4f07-90c8-c41918bfcc7f\") " pod="kube-system/cilium-hgx6w" Mar 17 21:53:02.196467 kubelet[1934]: I0317 21:53:02.196440 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/45ef0493-ef09-4f07-90c8-c41918bfcc7f-hostproc\") pod \"cilium-hgx6w\" (UID: \"45ef0493-ef09-4f07-90c8-c41918bfcc7f\") " pod="kube-system/cilium-hgx6w" Mar 17 21:53:02.196664 kubelet[1934]: I0317 21:53:02.196630 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/45ef0493-ef09-4f07-90c8-c41918bfcc7f-cilium-cgroup\") pod \"cilium-hgx6w\" (UID: \"45ef0493-ef09-4f07-90c8-c41918bfcc7f\") " pod="kube-system/cilium-hgx6w" Mar 17 21:53:02.196869 kubelet[1934]: I0317 21:53:02.196841 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/45ef0493-ef09-4f07-90c8-c41918bfcc7f-cilium-ipsec-secrets\") pod \"cilium-hgx6w\" (UID: \"45ef0493-ef09-4f07-90c8-c41918bfcc7f\") " pod="kube-system/cilium-hgx6w" Mar 17 21:53:02.197095 kubelet[1934]: I0317 21:53:02.197050 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/45ef0493-ef09-4f07-90c8-c41918bfcc7f-host-proc-sys-net\") pod \"cilium-hgx6w\" (UID: \"45ef0493-ef09-4f07-90c8-c41918bfcc7f\") " pod="kube-system/cilium-hgx6w" Mar 17 21:53:02.197318 kubelet[1934]: I0317 21:53:02.197267 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/45ef0493-ef09-4f07-90c8-c41918bfcc7f-host-proc-sys-kernel\") pod \"cilium-hgx6w\" (UID: \"45ef0493-ef09-4f07-90c8-c41918bfcc7f\") " pod="kube-system/cilium-hgx6w" Mar 17 21:53:02.197557 kubelet[1934]: I0317 21:53:02.197508 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/45ef0493-ef09-4f07-90c8-c41918bfcc7f-hubble-tls\") pod \"cilium-hgx6w\" (UID: \"45ef0493-ef09-4f07-90c8-c41918bfcc7f\") " pod="kube-system/cilium-hgx6w" Mar 17 21:53:02.197819 kubelet[1934]: I0317 21:53:02.197792 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/45ef0493-ef09-4f07-90c8-c41918bfcc7f-cilium-config-path\") pod \"cilium-hgx6w\" (UID: \"45ef0493-ef09-4f07-90c8-c41918bfcc7f\") " pod="kube-system/cilium-hgx6w" Mar 17 21:53:02.199254 sshd[3766]: pam_unix(sshd:session): session closed for user core Mar 17 21:53:02.204832 systemd[1]: sshd@21-10.230.29.198:22-139.178.89.65:52010.service: Deactivated successfully. Mar 17 21:53:02.206135 systemd[1]: session-22.scope: Deactivated successfully. Mar 17 21:53:02.206753 systemd[1]: session-22.scope: Consumed 1.001s CPU time. Mar 17 21:53:02.207516 systemd-logind[1183]: Session 22 logged out. Waiting for processes to exit. Mar 17 21:53:02.209101 systemd-logind[1183]: Removed session 22. Mar 17 21:53:02.346204 systemd[1]: Started sshd@22-10.230.29.198:22-139.178.89.65:48726.service. Mar 17 21:53:02.408618 env[1194]: time="2025-03-17T21:53:02.408474005Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hgx6w,Uid:45ef0493-ef09-4f07-90c8-c41918bfcc7f,Namespace:kube-system,Attempt:0,}" Mar 17 21:53:02.432742 env[1194]: time="2025-03-17T21:53:02.432529604Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 21:53:02.433076 env[1194]: time="2025-03-17T21:53:02.432986316Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 21:53:02.433305 env[1194]: time="2025-03-17T21:53:02.433220584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 21:53:02.434088 env[1194]: time="2025-03-17T21:53:02.433925531Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7a62dd6a6f8e08fcd5ea0dd43251e5f35ec2a479931a19540e743fcf476d3703 pid=3793 runtime=io.containerd.runc.v2 Mar 17 21:53:02.463888 systemd[1]: Started cri-containerd-7a62dd6a6f8e08fcd5ea0dd43251e5f35ec2a479931a19540e743fcf476d3703.scope. Mar 17 21:53:02.510985 env[1194]: time="2025-03-17T21:53:02.510921813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hgx6w,Uid:45ef0493-ef09-4f07-90c8-c41918bfcc7f,Namespace:kube-system,Attempt:0,} returns sandbox id \"7a62dd6a6f8e08fcd5ea0dd43251e5f35ec2a479931a19540e743fcf476d3703\"" Mar 17 21:53:02.519150 env[1194]: time="2025-03-17T21:53:02.519106188Z" level=info msg="CreateContainer within sandbox \"7a62dd6a6f8e08fcd5ea0dd43251e5f35ec2a479931a19540e743fcf476d3703\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 17 21:53:02.534273 env[1194]: time="2025-03-17T21:53:02.534161335Z" level=info msg="CreateContainer within sandbox \"7a62dd6a6f8e08fcd5ea0dd43251e5f35ec2a479931a19540e743fcf476d3703\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7c1d07d70e85e3419fa7f567dbc19acc2ab177ae28259212ce6393722a5b6384\"" Mar 17 21:53:02.536779 env[1194]: time="2025-03-17T21:53:02.536643650Z" level=info msg="StartContainer for \"7c1d07d70e85e3419fa7f567dbc19acc2ab177ae28259212ce6393722a5b6384\"" Mar 17 21:53:02.558725 systemd[1]: Started cri-containerd-7c1d07d70e85e3419fa7f567dbc19acc2ab177ae28259212ce6393722a5b6384.scope. Mar 17 21:53:02.581195 systemd[1]: cri-containerd-7c1d07d70e85e3419fa7f567dbc19acc2ab177ae28259212ce6393722a5b6384.scope: Deactivated successfully. 
Mar 17 21:53:02.603229 env[1194]: time="2025-03-17T21:53:02.603065767Z" level=info msg="shim disconnected" id=7c1d07d70e85e3419fa7f567dbc19acc2ab177ae28259212ce6393722a5b6384 Mar 17 21:53:02.603229 env[1194]: time="2025-03-17T21:53:02.603166901Z" level=warning msg="cleaning up after shim disconnected" id=7c1d07d70e85e3419fa7f567dbc19acc2ab177ae28259212ce6393722a5b6384 namespace=k8s.io Mar 17 21:53:02.603229 env[1194]: time="2025-03-17T21:53:02.603184531Z" level=info msg="cleaning up dead shim" Mar 17 21:53:02.615611 env[1194]: time="2025-03-17T21:53:02.615553828Z" level=warning msg="cleanup warnings time=\"2025-03-17T21:53:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3849 runtime=io.containerd.runc.v2\ntime=\"2025-03-17T21:53:02Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/7c1d07d70e85e3419fa7f567dbc19acc2ab177ae28259212ce6393722a5b6384/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Mar 17 21:53:02.616295 env[1194]: time="2025-03-17T21:53:02.616102298Z" level=error msg="copy shim log" error="read /proc/self/fd/43: file already closed" Mar 17 21:53:02.619468 env[1194]: time="2025-03-17T21:53:02.619404202Z" level=error msg="Failed to pipe stdout of container \"7c1d07d70e85e3419fa7f567dbc19acc2ab177ae28259212ce6393722a5b6384\"" error="reading from a closed fifo" Mar 17 21:53:02.619689 env[1194]: time="2025-03-17T21:53:02.619621820Z" level=error msg="Failed to pipe stderr of container \"7c1d07d70e85e3419fa7f567dbc19acc2ab177ae28259212ce6393722a5b6384\"" error="reading from a closed fifo" Mar 17 21:53:02.621426 env[1194]: time="2025-03-17T21:53:02.621235514Z" level=error msg="StartContainer for \"7c1d07d70e85e3419fa7f567dbc19acc2ab177ae28259212ce6393722a5b6384\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Mar 17 21:53:02.621996 kubelet[1934]: E0317 21:53:02.621755 1934 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="7c1d07d70e85e3419fa7f567dbc19acc2ab177ae28259212ce6393722a5b6384" Mar 17 21:53:02.626833 kubelet[1934]: E0317 21:53:02.626783 1934 kuberuntime_manager.go:1341] "Unhandled Error" err=< Mar 17 21:53:02.626833 kubelet[1934]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Mar 17 21:53:02.626833 kubelet[1934]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Mar 17 21:53:02.626833 kubelet[1934]: rm /hostbin/cilium-mount Mar 17 21:53:02.627181 kubelet[1934]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ww2ww,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-hgx6w_kube-system(45ef0493-ef09-4f07-90c8-c41918bfcc7f): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Mar 17 21:53:02.627181 kubelet[1934]: > logger="UnhandledError" Mar 17 21:53:02.628632 kubelet[1934]: E0317 21:53:02.628589 1934 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-hgx6w" podUID="45ef0493-ef09-4f07-90c8-c41918bfcc7f" Mar 17 21:53:03.000832 env[1194]: time="2025-03-17T21:53:03.000685406Z" level=info msg="CreateContainer within sandbox \"7a62dd6a6f8e08fcd5ea0dd43251e5f35ec2a479931a19540e743fcf476d3703\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Mar 17 21:53:03.021554 env[1194]: time="2025-03-17T21:53:03.021477522Z" level=info msg="CreateContainer within sandbox \"7a62dd6a6f8e08fcd5ea0dd43251e5f35ec2a479931a19540e743fcf476d3703\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"1c09032dcde2b14fd6d02ec6662a8b897c6a2119bf049ca2b287fa7ce971845e\"" Mar 17 21:53:03.022861 env[1194]: time="2025-03-17T21:53:03.022803465Z" level=info msg="StartContainer for \"1c09032dcde2b14fd6d02ec6662a8b897c6a2119bf049ca2b287fa7ce971845e\"" Mar 17 21:53:03.051667 systemd[1]: Started cri-containerd-1c09032dcde2b14fd6d02ec6662a8b897c6a2119bf049ca2b287fa7ce971845e.scope. Mar 17 21:53:03.071479 systemd[1]: cri-containerd-1c09032dcde2b14fd6d02ec6662a8b897c6a2119bf049ca2b287fa7ce971845e.scope: Deactivated successfully. 
Mar 17 21:53:03.082718 env[1194]: time="2025-03-17T21:53:03.082635528Z" level=info msg="shim disconnected" id=1c09032dcde2b14fd6d02ec6662a8b897c6a2119bf049ca2b287fa7ce971845e Mar 17 21:53:03.082718 env[1194]: time="2025-03-17T21:53:03.082712558Z" level=warning msg="cleaning up after shim disconnected" id=1c09032dcde2b14fd6d02ec6662a8b897c6a2119bf049ca2b287fa7ce971845e namespace=k8s.io Mar 17 21:53:03.083077 env[1194]: time="2025-03-17T21:53:03.082730828Z" level=info msg="cleaning up dead shim" Mar 17 21:53:03.096246 env[1194]: time="2025-03-17T21:53:03.096149954Z" level=warning msg="cleanup warnings time=\"2025-03-17T21:53:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3886 runtime=io.containerd.runc.v2\ntime=\"2025-03-17T21:53:03Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/1c09032dcde2b14fd6d02ec6662a8b897c6a2119bf049ca2b287fa7ce971845e/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Mar 17 21:53:03.096774 env[1194]: time="2025-03-17T21:53:03.096664144Z" level=error msg="copy shim log" error="read /proc/self/fd/43: file already closed" Mar 17 21:53:03.097647 env[1194]: time="2025-03-17T21:53:03.097592370Z" level=error msg="Failed to pipe stderr of container \"1c09032dcde2b14fd6d02ec6662a8b897c6a2119bf049ca2b287fa7ce971845e\"" error="reading from a closed fifo" Mar 17 21:53:03.097884 env[1194]: time="2025-03-17T21:53:03.097840307Z" level=error msg="Failed to pipe stdout of container \"1c09032dcde2b14fd6d02ec6662a8b897c6a2119bf049ca2b287fa7ce971845e\"" error="reading from a closed fifo" Mar 17 21:53:03.099552 env[1194]: time="2025-03-17T21:53:03.099467135Z" level=error msg="StartContainer for \"1c09032dcde2b14fd6d02ec6662a8b897c6a2119bf049ca2b287fa7ce971845e\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Mar 17 21:53:03.099936 kubelet[1934]: E0317 21:53:03.099851 1934 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="1c09032dcde2b14fd6d02ec6662a8b897c6a2119bf049ca2b287fa7ce971845e" Mar 17 21:53:03.100521 kubelet[1934]: E0317 21:53:03.100100 1934 kuberuntime_manager.go:1341] "Unhandled Error" err=< Mar 17 21:53:03.100521 kubelet[1934]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Mar 17 21:53:03.100521 kubelet[1934]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Mar 17 21:53:03.100521 kubelet[1934]: rm /hostbin/cilium-mount Mar 17 21:53:03.100521 kubelet[1934]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ww2ww,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-hgx6w_kube-system(45ef0493-ef09-4f07-90c8-c41918bfcc7f): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Mar 17 21:53:03.100521 kubelet[1934]: > logger="UnhandledError" Mar 17 21:53:03.102273 kubelet[1934]: E0317 21:53:03.101833 1934 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-hgx6w" podUID="45ef0493-ef09-4f07-90c8-c41918bfcc7f" Mar 17 21:53:03.242538 sshd[3783]: Accepted publickey for core from 139.178.89.65 port 48726 ssh2: RSA SHA256:zyhiPLENj58svNToN4BOPPS+na2TgK0IE73Z79n4eiY Mar 17 21:53:03.244948 sshd[3783]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 21:53:03.252925 systemd[1]: Started session-23.scope. Mar 17 21:53:03.253713 systemd-logind[1183]: New session 23 of user core. 
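[Editor's aid, not part of the journal: the init-container spec that the kubelet dumps above as a flattened Go struct is hard to read inline. The sketch below re-expresses only the fields visible in that dump (name, image, command, env, volume mounts, security context) as a corev1.Container literal; the package/main wiring and helper name mountCgroupInitContainer are added solely so it compiles, and the logged AppArmorProfile (Unconfined) field is noted in a comment rather than set, since its Go field is version-dependent. This is a reconstruction from the log, not the Cilium chart's authoritative manifest.]

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// mountCgroupInitContainer rebuilds the "mount-cgroup" init container exactly as
// the kubelet logged it for pod cilium-hgx6w. Fields that were nil/empty in the
// log are omitted. The AppArmorProfile (Unconfined) shown in the log is left out
// here; see the note above.
func mountCgroupInitContainer() corev1.Container {
	return corev1.Container{
		Name:  "mount-cgroup",
		Image: "quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5",
		Command: []string{"sh", "-ec",
			`cp /usr/bin/cilium-mount /hostbin/cilium-mount;
nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
rm /hostbin/cilium-mount`,
		},
		Env: []corev1.EnvVar{
			{Name: "CGROUP_ROOT", Value: "/run/cilium/cgroupv2"},
			{Name: "BIN_PATH", Value: "/opt/cni/bin"},
		},
		VolumeMounts: []corev1.VolumeMount{
			{Name: "hostproc", MountPath: "/hostproc"},
			{Name: "cni-path", MountPath: "/hostbin"},
			{Name: "kube-api-access-ww2ww", ReadOnly: true, MountPath: "/var/run/secrets/kubernetes.io/serviceaccount"},
		},
		TerminationMessagePath:   "/dev/termination-log",
		TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
		ImagePullPolicy:          corev1.PullIfNotPresent,
		SecurityContext: &corev1.SecurityContext{
			Capabilities: &corev1.Capabilities{
				Add:  []corev1.Capability{"SYS_ADMIN", "SYS_CHROOT", "SYS_PTRACE"},
				Drop: []corev1.Capability{"ALL"},
			},
			// The repeated "write /proc/self/attr/keycreate: invalid argument"
			// failure above occurs while runc applies this SELinux label at
			// container start.
			SELinuxOptions: &corev1.SELinuxOptions{Type: "spc_t", Level: "s0"},
		},
	}
}

func main() {
	// Print the reconstructed spec so it can be compared against the log line.
	fmt.Printf("%+v\n", mountCgroupInitContainer())
}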
Mar 17 21:53:04.002398 kubelet[1934]: I0317 21:53:04.002322 1934 scope.go:117] "RemoveContainer" containerID="7c1d07d70e85e3419fa7f567dbc19acc2ab177ae28259212ce6393722a5b6384" Mar 17 21:53:04.006223 env[1194]: time="2025-03-17T21:53:04.003084573Z" level=info msg="StopPodSandbox for \"7a62dd6a6f8e08fcd5ea0dd43251e5f35ec2a479931a19540e743fcf476d3703\"" Mar 17 21:53:04.006223 env[1194]: time="2025-03-17T21:53:04.003171286Z" level=info msg="Container to stop \"1c09032dcde2b14fd6d02ec6662a8b897c6a2119bf049ca2b287fa7ce971845e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 21:53:04.006223 env[1194]: time="2025-03-17T21:53:04.003209746Z" level=info msg="Container to stop \"7c1d07d70e85e3419fa7f567dbc19acc2ab177ae28259212ce6393722a5b6384\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 21:53:04.006248 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7a62dd6a6f8e08fcd5ea0dd43251e5f35ec2a479931a19540e743fcf476d3703-shm.mount: Deactivated successfully. Mar 17 21:53:04.009882 env[1194]: time="2025-03-17T21:53:04.009410727Z" level=info msg="RemoveContainer for \"7c1d07d70e85e3419fa7f567dbc19acc2ab177ae28259212ce6393722a5b6384\"" Mar 17 21:53:04.014611 env[1194]: time="2025-03-17T21:53:04.014561822Z" level=info msg="RemoveContainer for \"7c1d07d70e85e3419fa7f567dbc19acc2ab177ae28259212ce6393722a5b6384\" returns successfully" Mar 17 21:53:04.019553 systemd[1]: cri-containerd-7a62dd6a6f8e08fcd5ea0dd43251e5f35ec2a479931a19540e743fcf476d3703.scope: Deactivated successfully. Mar 17 21:53:04.025326 sshd[3783]: pam_unix(sshd:session): session closed for user core Mar 17 21:53:04.029560 systemd[1]: sshd@22-10.230.29.198:22-139.178.89.65:48726.service: Deactivated successfully. Mar 17 21:53:04.030514 systemd[1]: session-23.scope: Deactivated successfully. Mar 17 21:53:04.032566 systemd-logind[1183]: Session 23 logged out. Waiting for processes to exit. Mar 17 21:53:04.037466 systemd-logind[1183]: Removed session 23. Mar 17 21:53:04.068402 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7a62dd6a6f8e08fcd5ea0dd43251e5f35ec2a479931a19540e743fcf476d3703-rootfs.mount: Deactivated successfully. Mar 17 21:53:04.076287 env[1194]: time="2025-03-17T21:53:04.076206116Z" level=info msg="shim disconnected" id=7a62dd6a6f8e08fcd5ea0dd43251e5f35ec2a479931a19540e743fcf476d3703 Mar 17 21:53:04.077050 env[1194]: time="2025-03-17T21:53:04.077017686Z" level=warning msg="cleaning up after shim disconnected" id=7a62dd6a6f8e08fcd5ea0dd43251e5f35ec2a479931a19540e743fcf476d3703 namespace=k8s.io Mar 17 21:53:04.077179 env[1194]: time="2025-03-17T21:53:04.077150088Z" level=info msg="cleaning up dead shim" Mar 17 21:53:04.088872 env[1194]: time="2025-03-17T21:53:04.088812921Z" level=warning msg="cleanup warnings time=\"2025-03-17T21:53:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3927 runtime=io.containerd.runc.v2\n" Mar 17 21:53:04.089504 env[1194]: time="2025-03-17T21:53:04.089463462Z" level=info msg="TearDown network for sandbox \"7a62dd6a6f8e08fcd5ea0dd43251e5f35ec2a479931a19540e743fcf476d3703\" successfully" Mar 17 21:53:04.089659 env[1194]: time="2025-03-17T21:53:04.089623627Z" level=info msg="StopPodSandbox for \"7a62dd6a6f8e08fcd5ea0dd43251e5f35ec2a479931a19540e743fcf476d3703\" returns successfully" Mar 17 21:53:04.172815 systemd[1]: Started sshd@23-10.230.29.198:22-139.178.89.65:48740.service. 
Mar 17 21:53:04.222981 kubelet[1934]: I0317 21:53:04.222921 1934 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/45ef0493-ef09-4f07-90c8-c41918bfcc7f-hostproc\") pod \"45ef0493-ef09-4f07-90c8-c41918bfcc7f\" (UID: \"45ef0493-ef09-4f07-90c8-c41918bfcc7f\") " Mar 17 21:53:04.226605 kubelet[1934]: I0317 21:53:04.222996 1934 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/45ef0493-ef09-4f07-90c8-c41918bfcc7f-host-proc-sys-net\") pod \"45ef0493-ef09-4f07-90c8-c41918bfcc7f\" (UID: \"45ef0493-ef09-4f07-90c8-c41918bfcc7f\") " Mar 17 21:53:04.226605 kubelet[1934]: I0317 21:53:04.223039 1934 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/45ef0493-ef09-4f07-90c8-c41918bfcc7f-etc-cni-netd\") pod \"45ef0493-ef09-4f07-90c8-c41918bfcc7f\" (UID: \"45ef0493-ef09-4f07-90c8-c41918bfcc7f\") " Mar 17 21:53:04.226605 kubelet[1934]: I0317 21:53:04.223094 1934 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/45ef0493-ef09-4f07-90c8-c41918bfcc7f-host-proc-sys-kernel\") pod \"45ef0493-ef09-4f07-90c8-c41918bfcc7f\" (UID: \"45ef0493-ef09-4f07-90c8-c41918bfcc7f\") " Mar 17 21:53:04.226605 kubelet[1934]: I0317 21:53:04.223134 1934 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/45ef0493-ef09-4f07-90c8-c41918bfcc7f-cilium-ipsec-secrets\") pod \"45ef0493-ef09-4f07-90c8-c41918bfcc7f\" (UID: \"45ef0493-ef09-4f07-90c8-c41918bfcc7f\") " Mar 17 21:53:04.226605 kubelet[1934]: I0317 21:53:04.223196 1934 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/45ef0493-ef09-4f07-90c8-c41918bfcc7f-cilium-run\") pod \"45ef0493-ef09-4f07-90c8-c41918bfcc7f\" (UID: \"45ef0493-ef09-4f07-90c8-c41918bfcc7f\") " Mar 17 21:53:04.226605 kubelet[1934]: I0317 21:53:04.223264 1934 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/45ef0493-ef09-4f07-90c8-c41918bfcc7f-cilium-config-path\") pod \"45ef0493-ef09-4f07-90c8-c41918bfcc7f\" (UID: \"45ef0493-ef09-4f07-90c8-c41918bfcc7f\") " Mar 17 21:53:04.226605 kubelet[1934]: I0317 21:53:04.223306 1934 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/45ef0493-ef09-4f07-90c8-c41918bfcc7f-bpf-maps\") pod \"45ef0493-ef09-4f07-90c8-c41918bfcc7f\" (UID: \"45ef0493-ef09-4f07-90c8-c41918bfcc7f\") " Mar 17 21:53:04.226605 kubelet[1934]: I0317 21:53:04.223376 1934 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/45ef0493-ef09-4f07-90c8-c41918bfcc7f-cni-path\") pod \"45ef0493-ef09-4f07-90c8-c41918bfcc7f\" (UID: \"45ef0493-ef09-4f07-90c8-c41918bfcc7f\") " Mar 17 21:53:04.226605 kubelet[1934]: I0317 21:53:04.223428 1934 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/45ef0493-ef09-4f07-90c8-c41918bfcc7f-clustermesh-secrets\") pod \"45ef0493-ef09-4f07-90c8-c41918bfcc7f\" (UID: \"45ef0493-ef09-4f07-90c8-c41918bfcc7f\") " Mar 17 21:53:04.226605 kubelet[1934]: I0317 
21:53:04.223441 1934 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45ef0493-ef09-4f07-90c8-c41918bfcc7f-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "45ef0493-ef09-4f07-90c8-c41918bfcc7f" (UID: "45ef0493-ef09-4f07-90c8-c41918bfcc7f"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 21:53:04.226605 kubelet[1934]: I0317 21:53:04.223460 1934 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ww2ww\" (UniqueName: \"kubernetes.io/projected/45ef0493-ef09-4f07-90c8-c41918bfcc7f-kube-api-access-ww2ww\") pod \"45ef0493-ef09-4f07-90c8-c41918bfcc7f\" (UID: \"45ef0493-ef09-4f07-90c8-c41918bfcc7f\") " Mar 17 21:53:04.226605 kubelet[1934]: I0317 21:53:04.223521 1934 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/45ef0493-ef09-4f07-90c8-c41918bfcc7f-xtables-lock\") pod \"45ef0493-ef09-4f07-90c8-c41918bfcc7f\" (UID: \"45ef0493-ef09-4f07-90c8-c41918bfcc7f\") " Mar 17 21:53:04.226605 kubelet[1934]: I0317 21:53:04.223554 1934 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/45ef0493-ef09-4f07-90c8-c41918bfcc7f-lib-modules\") pod \"45ef0493-ef09-4f07-90c8-c41918bfcc7f\" (UID: \"45ef0493-ef09-4f07-90c8-c41918bfcc7f\") " Mar 17 21:53:04.226605 kubelet[1934]: I0317 21:53:04.223595 1934 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/45ef0493-ef09-4f07-90c8-c41918bfcc7f-cilium-cgroup\") pod \"45ef0493-ef09-4f07-90c8-c41918bfcc7f\" (UID: \"45ef0493-ef09-4f07-90c8-c41918bfcc7f\") " Mar 17 21:53:04.226605 kubelet[1934]: I0317 21:53:04.223651 1934 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/45ef0493-ef09-4f07-90c8-c41918bfcc7f-hubble-tls\") pod \"45ef0493-ef09-4f07-90c8-c41918bfcc7f\" (UID: \"45ef0493-ef09-4f07-90c8-c41918bfcc7f\") " Mar 17 21:53:04.226605 kubelet[1934]: I0317 21:53:04.223703 1934 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/45ef0493-ef09-4f07-90c8-c41918bfcc7f-host-proc-sys-kernel\") on node \"srv-87dtj.gb1.brightbox.com\" DevicePath \"\"" Mar 17 21:53:04.228080 kubelet[1934]: I0317 21:53:04.224308 1934 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45ef0493-ef09-4f07-90c8-c41918bfcc7f-hostproc" (OuterVolumeSpecName: "hostproc") pod "45ef0493-ef09-4f07-90c8-c41918bfcc7f" (UID: "45ef0493-ef09-4f07-90c8-c41918bfcc7f"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 21:53:04.228080 kubelet[1934]: I0317 21:53:04.224370 1934 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45ef0493-ef09-4f07-90c8-c41918bfcc7f-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "45ef0493-ef09-4f07-90c8-c41918bfcc7f" (UID: "45ef0493-ef09-4f07-90c8-c41918bfcc7f"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 21:53:04.228080 kubelet[1934]: I0317 21:53:04.224415 1934 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45ef0493-ef09-4f07-90c8-c41918bfcc7f-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "45ef0493-ef09-4f07-90c8-c41918bfcc7f" (UID: "45ef0493-ef09-4f07-90c8-c41918bfcc7f"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 21:53:04.228080 kubelet[1934]: I0317 21:53:04.224453 1934 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45ef0493-ef09-4f07-90c8-c41918bfcc7f-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "45ef0493-ef09-4f07-90c8-c41918bfcc7f" (UID: "45ef0493-ef09-4f07-90c8-c41918bfcc7f"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 21:53:04.228080 kubelet[1934]: I0317 21:53:04.224483 1934 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45ef0493-ef09-4f07-90c8-c41918bfcc7f-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "45ef0493-ef09-4f07-90c8-c41918bfcc7f" (UID: "45ef0493-ef09-4f07-90c8-c41918bfcc7f"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 21:53:04.228080 kubelet[1934]: I0317 21:53:04.224520 1934 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45ef0493-ef09-4f07-90c8-c41918bfcc7f-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "45ef0493-ef09-4f07-90c8-c41918bfcc7f" (UID: "45ef0493-ef09-4f07-90c8-c41918bfcc7f"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 21:53:04.228080 kubelet[1934]: I0317 21:53:04.227938 1934 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45ef0493-ef09-4f07-90c8-c41918bfcc7f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "45ef0493-ef09-4f07-90c8-c41918bfcc7f" (UID: "45ef0493-ef09-4f07-90c8-c41918bfcc7f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 17 21:53:04.228891 kubelet[1934]: I0317 21:53:04.228558 1934 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45ef0493-ef09-4f07-90c8-c41918bfcc7f-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "45ef0493-ef09-4f07-90c8-c41918bfcc7f" (UID: "45ef0493-ef09-4f07-90c8-c41918bfcc7f"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 21:53:04.228891 kubelet[1934]: I0317 21:53:04.228601 1934 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45ef0493-ef09-4f07-90c8-c41918bfcc7f-cni-path" (OuterVolumeSpecName: "cni-path") pod "45ef0493-ef09-4f07-90c8-c41918bfcc7f" (UID: "45ef0493-ef09-4f07-90c8-c41918bfcc7f"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 21:53:04.232448 systemd[1]: var-lib-kubelet-pods-45ef0493\x2def09\x2d4f07\x2d90c8\x2dc41918bfcc7f-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Mar 17 21:53:04.234371 kubelet[1934]: I0317 21:53:04.234321 1934 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45ef0493-ef09-4f07-90c8-c41918bfcc7f-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "45ef0493-ef09-4f07-90c8-c41918bfcc7f" (UID: "45ef0493-ef09-4f07-90c8-c41918bfcc7f"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 21:53:04.238068 systemd[1]: var-lib-kubelet-pods-45ef0493\x2def09\x2d4f07\x2d90c8\x2dc41918bfcc7f-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 17 21:53:04.239648 kubelet[1934]: I0317 21:53:04.239506 1934 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45ef0493-ef09-4f07-90c8-c41918bfcc7f-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "45ef0493-ef09-4f07-90c8-c41918bfcc7f" (UID: "45ef0493-ef09-4f07-90c8-c41918bfcc7f"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 17 21:53:04.240282 kubelet[1934]: I0317 21:53:04.240248 1934 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45ef0493-ef09-4f07-90c8-c41918bfcc7f-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "45ef0493-ef09-4f07-90c8-c41918bfcc7f" (UID: "45ef0493-ef09-4f07-90c8-c41918bfcc7f"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 17 21:53:04.240566 kubelet[1934]: I0317 21:53:04.240534 1934 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45ef0493-ef09-4f07-90c8-c41918bfcc7f-kube-api-access-ww2ww" (OuterVolumeSpecName: "kube-api-access-ww2ww") pod "45ef0493-ef09-4f07-90c8-c41918bfcc7f" (UID: "45ef0493-ef09-4f07-90c8-c41918bfcc7f"). InnerVolumeSpecName "kube-api-access-ww2ww". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 17 21:53:04.243620 kubelet[1934]: I0317 21:53:04.243575 1934 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45ef0493-ef09-4f07-90c8-c41918bfcc7f-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "45ef0493-ef09-4f07-90c8-c41918bfcc7f" (UID: "45ef0493-ef09-4f07-90c8-c41918bfcc7f"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 17 21:53:04.303311 systemd[1]: Removed slice kubepods-burstable-pod45ef0493_ef09_4f07_90c8_c41918bfcc7f.slice. Mar 17 21:53:04.308862 systemd[1]: var-lib-kubelet-pods-45ef0493\x2def09\x2d4f07\x2d90c8\x2dc41918bfcc7f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dww2ww.mount: Deactivated successfully. Mar 17 21:53:04.308997 systemd[1]: var-lib-kubelet-pods-45ef0493\x2def09\x2d4f07\x2d90c8\x2dc41918bfcc7f-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. 
Mar 17 21:53:04.323927 kubelet[1934]: I0317 21:53:04.323890 1934 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/45ef0493-ef09-4f07-90c8-c41918bfcc7f-cilium-ipsec-secrets\") on node \"srv-87dtj.gb1.brightbox.com\" DevicePath \"\"" Mar 17 21:53:04.324153 kubelet[1934]: I0317 21:53:04.324124 1934 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/45ef0493-ef09-4f07-90c8-c41918bfcc7f-cilium-run\") on node \"srv-87dtj.gb1.brightbox.com\" DevicePath \"\"" Mar 17 21:53:04.324419 kubelet[1934]: I0317 21:53:04.324384 1934 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/45ef0493-ef09-4f07-90c8-c41918bfcc7f-cilium-config-path\") on node \"srv-87dtj.gb1.brightbox.com\" DevicePath \"\"" Mar 17 21:53:04.324694 kubelet[1934]: I0317 21:53:04.324668 1934 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/45ef0493-ef09-4f07-90c8-c41918bfcc7f-bpf-maps\") on node \"srv-87dtj.gb1.brightbox.com\" DevicePath \"\"" Mar 17 21:53:04.324825 kubelet[1934]: I0317 21:53:04.324800 1934 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/45ef0493-ef09-4f07-90c8-c41918bfcc7f-cni-path\") on node \"srv-87dtj.gb1.brightbox.com\" DevicePath \"\"" Mar 17 21:53:04.324967 kubelet[1934]: I0317 21:53:04.324942 1934 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ww2ww\" (UniqueName: \"kubernetes.io/projected/45ef0493-ef09-4f07-90c8-c41918bfcc7f-kube-api-access-ww2ww\") on node \"srv-87dtj.gb1.brightbox.com\" DevicePath \"\"" Mar 17 21:53:04.325112 kubelet[1934]: I0317 21:53:04.325082 1934 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/45ef0493-ef09-4f07-90c8-c41918bfcc7f-xtables-lock\") on node \"srv-87dtj.gb1.brightbox.com\" DevicePath \"\"" Mar 17 21:53:04.325257 kubelet[1934]: I0317 21:53:04.325218 1934 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/45ef0493-ef09-4f07-90c8-c41918bfcc7f-clustermesh-secrets\") on node \"srv-87dtj.gb1.brightbox.com\" DevicePath \"\"" Mar 17 21:53:04.325422 kubelet[1934]: I0317 21:53:04.325388 1934 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/45ef0493-ef09-4f07-90c8-c41918bfcc7f-cilium-cgroup\") on node \"srv-87dtj.gb1.brightbox.com\" DevicePath \"\"" Mar 17 21:53:04.325572 kubelet[1934]: I0317 21:53:04.325547 1934 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/45ef0493-ef09-4f07-90c8-c41918bfcc7f-hubble-tls\") on node \"srv-87dtj.gb1.brightbox.com\" DevicePath \"\"" Mar 17 21:53:04.325700 kubelet[1934]: I0317 21:53:04.325673 1934 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/45ef0493-ef09-4f07-90c8-c41918bfcc7f-lib-modules\") on node \"srv-87dtj.gb1.brightbox.com\" DevicePath \"\"" Mar 17 21:53:04.325836 kubelet[1934]: I0317 21:53:04.325812 1934 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/45ef0493-ef09-4f07-90c8-c41918bfcc7f-hostproc\") on node \"srv-87dtj.gb1.brightbox.com\" DevicePath \"\"" Mar 17 21:53:04.325963 kubelet[1934]: I0317 21:53:04.325938 1934 reconciler_common.go:299] "Volume detached for 
volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/45ef0493-ef09-4f07-90c8-c41918bfcc7f-host-proc-sys-net\") on node \"srv-87dtj.gb1.brightbox.com\" DevicePath \"\"" Mar 17 21:53:04.326091 kubelet[1934]: I0317 21:53:04.326063 1934 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/45ef0493-ef09-4f07-90c8-c41918bfcc7f-etc-cni-netd\") on node \"srv-87dtj.gb1.brightbox.com\" DevicePath \"\"" Mar 17 21:53:04.455166 kubelet[1934]: I0317 21:53:04.454800 1934 setters.go:602] "Node became not ready" node="srv-87dtj.gb1.brightbox.com" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-03-17T21:53:04Z","lastTransitionTime":"2025-03-17T21:53:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Mar 17 21:53:05.006945 kubelet[1934]: I0317 21:53:05.006890 1934 scope.go:117] "RemoveContainer" containerID="1c09032dcde2b14fd6d02ec6662a8b897c6a2119bf049ca2b287fa7ce971845e" Mar 17 21:53:05.011057 env[1194]: time="2025-03-17T21:53:05.010594547Z" level=info msg="RemoveContainer for \"1c09032dcde2b14fd6d02ec6662a8b897c6a2119bf049ca2b287fa7ce971845e\"" Mar 17 21:53:05.014983 env[1194]: time="2025-03-17T21:53:05.014914662Z" level=info msg="RemoveContainer for \"1c09032dcde2b14fd6d02ec6662a8b897c6a2119bf049ca2b287fa7ce971845e\" returns successfully" Mar 17 21:53:05.065217 sshd[3941]: Accepted publickey for core from 139.178.89.65 port 48740 ssh2: RSA SHA256:zyhiPLENj58svNToN4BOPPS+na2TgK0IE73Z79n4eiY Mar 17 21:53:05.067094 sshd[3941]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 21:53:05.077990 systemd[1]: Started session-24.scope. Mar 17 21:53:05.080135 systemd-logind[1183]: New session 24 of user core. Mar 17 21:53:05.099402 kubelet[1934]: I0317 21:53:05.099363 1934 memory_manager.go:355] "RemoveStaleState removing state" podUID="45ef0493-ef09-4f07-90c8-c41918bfcc7f" containerName="mount-cgroup" Mar 17 21:53:05.099642 kubelet[1934]: I0317 21:53:05.099618 1934 memory_manager.go:355] "RemoveStaleState removing state" podUID="45ef0493-ef09-4f07-90c8-c41918bfcc7f" containerName="mount-cgroup" Mar 17 21:53:05.110234 systemd[1]: Created slice kubepods-burstable-pod38802fba_5dbd_497b_8cf9_dae0edeaeb88.slice. 
Mar 17 21:53:05.130456 kubelet[1934]: I0317 21:53:05.130399 1934 status_manager.go:890] "Failed to get status for pod" podUID="38802fba-5dbd-497b-8cf9-dae0edeaeb88" pod="kube-system/cilium-9fjpp" err="pods \"cilium-9fjpp\" is forbidden: User \"system:node:srv-87dtj.gb1.brightbox.com\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'srv-87dtj.gb1.brightbox.com' and this object" Mar 17 21:53:05.231728 kubelet[1934]: I0317 21:53:05.231658 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/38802fba-5dbd-497b-8cf9-dae0edeaeb88-lib-modules\") pod \"cilium-9fjpp\" (UID: \"38802fba-5dbd-497b-8cf9-dae0edeaeb88\") " pod="kube-system/cilium-9fjpp" Mar 17 21:53:05.232593 kubelet[1934]: I0317 21:53:05.232560 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/38802fba-5dbd-497b-8cf9-dae0edeaeb88-xtables-lock\") pod \"cilium-9fjpp\" (UID: \"38802fba-5dbd-497b-8cf9-dae0edeaeb88\") " pod="kube-system/cilium-9fjpp" Mar 17 21:53:05.232871 kubelet[1934]: I0317 21:53:05.232826 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/38802fba-5dbd-497b-8cf9-dae0edeaeb88-cilium-run\") pod \"cilium-9fjpp\" (UID: \"38802fba-5dbd-497b-8cf9-dae0edeaeb88\") " pod="kube-system/cilium-9fjpp" Mar 17 21:53:05.233073 kubelet[1934]: I0317 21:53:05.233043 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/38802fba-5dbd-497b-8cf9-dae0edeaeb88-cilium-config-path\") pod \"cilium-9fjpp\" (UID: \"38802fba-5dbd-497b-8cf9-dae0edeaeb88\") " pod="kube-system/cilium-9fjpp" Mar 17 21:53:05.233324 kubelet[1934]: I0317 21:53:05.233295 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/38802fba-5dbd-497b-8cf9-dae0edeaeb88-cilium-ipsec-secrets\") pod \"cilium-9fjpp\" (UID: \"38802fba-5dbd-497b-8cf9-dae0edeaeb88\") " pod="kube-system/cilium-9fjpp" Mar 17 21:53:05.233539 kubelet[1934]: I0317 21:53:05.233483 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m84v7\" (UniqueName: \"kubernetes.io/projected/38802fba-5dbd-497b-8cf9-dae0edeaeb88-kube-api-access-m84v7\") pod \"cilium-9fjpp\" (UID: \"38802fba-5dbd-497b-8cf9-dae0edeaeb88\") " pod="kube-system/cilium-9fjpp" Mar 17 21:53:05.233720 kubelet[1934]: I0317 21:53:05.233688 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/38802fba-5dbd-497b-8cf9-dae0edeaeb88-bpf-maps\") pod \"cilium-9fjpp\" (UID: \"38802fba-5dbd-497b-8cf9-dae0edeaeb88\") " pod="kube-system/cilium-9fjpp" Mar 17 21:53:05.233895 kubelet[1934]: I0317 21:53:05.233868 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/38802fba-5dbd-497b-8cf9-dae0edeaeb88-cni-path\") pod \"cilium-9fjpp\" (UID: \"38802fba-5dbd-497b-8cf9-dae0edeaeb88\") " pod="kube-system/cilium-9fjpp" Mar 17 21:53:05.234071 kubelet[1934]: I0317 21:53:05.234043 1934 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/38802fba-5dbd-497b-8cf9-dae0edeaeb88-clustermesh-secrets\") pod \"cilium-9fjpp\" (UID: \"38802fba-5dbd-497b-8cf9-dae0edeaeb88\") " pod="kube-system/cilium-9fjpp" Mar 17 21:53:05.234258 kubelet[1934]: I0317 21:53:05.234219 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/38802fba-5dbd-497b-8cf9-dae0edeaeb88-host-proc-sys-net\") pod \"cilium-9fjpp\" (UID: \"38802fba-5dbd-497b-8cf9-dae0edeaeb88\") " pod="kube-system/cilium-9fjpp" Mar 17 21:53:05.234477 kubelet[1934]: I0317 21:53:05.234428 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/38802fba-5dbd-497b-8cf9-dae0edeaeb88-etc-cni-netd\") pod \"cilium-9fjpp\" (UID: \"38802fba-5dbd-497b-8cf9-dae0edeaeb88\") " pod="kube-system/cilium-9fjpp" Mar 17 21:53:05.234653 kubelet[1934]: I0317 21:53:05.234627 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/38802fba-5dbd-497b-8cf9-dae0edeaeb88-host-proc-sys-kernel\") pod \"cilium-9fjpp\" (UID: \"38802fba-5dbd-497b-8cf9-dae0edeaeb88\") " pod="kube-system/cilium-9fjpp" Mar 17 21:53:05.234844 kubelet[1934]: I0317 21:53:05.234818 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/38802fba-5dbd-497b-8cf9-dae0edeaeb88-cilium-cgroup\") pod \"cilium-9fjpp\" (UID: \"38802fba-5dbd-497b-8cf9-dae0edeaeb88\") " pod="kube-system/cilium-9fjpp" Mar 17 21:53:05.235019 kubelet[1934]: I0317 21:53:05.234980 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/38802fba-5dbd-497b-8cf9-dae0edeaeb88-hostproc\") pod \"cilium-9fjpp\" (UID: \"38802fba-5dbd-497b-8cf9-dae0edeaeb88\") " pod="kube-system/cilium-9fjpp" Mar 17 21:53:05.235176 kubelet[1934]: I0317 21:53:05.235150 1934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/38802fba-5dbd-497b-8cf9-dae0edeaeb88-hubble-tls\") pod \"cilium-9fjpp\" (UID: \"38802fba-5dbd-497b-8cf9-dae0edeaeb88\") " pod="kube-system/cilium-9fjpp" Mar 17 21:53:05.416041 env[1194]: time="2025-03-17T21:53:05.414349405Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9fjpp,Uid:38802fba-5dbd-497b-8cf9-dae0edeaeb88,Namespace:kube-system,Attempt:0,}" Mar 17 21:53:05.454232 kubelet[1934]: E0317 21:53:05.454110 1934 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 21:53:05.454639 env[1194]: time="2025-03-17T21:53:05.454464610Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 21:53:05.454812 env[1194]: time="2025-03-17T21:53:05.454570365Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 21:53:05.454812 env[1194]: time="2025-03-17T21:53:05.454683669Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 21:53:05.455029 env[1194]: time="2025-03-17T21:53:05.454987641Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b7ebd4c717f80f8979e1d7bd95db50a8f6e9459118cbf3c7fc0386a0b4f7279c pid=3960 runtime=io.containerd.runc.v2 Mar 17 21:53:05.473075 systemd[1]: Started cri-containerd-b7ebd4c717f80f8979e1d7bd95db50a8f6e9459118cbf3c7fc0386a0b4f7279c.scope. Mar 17 21:53:05.513742 env[1194]: time="2025-03-17T21:53:05.513662274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9fjpp,Uid:38802fba-5dbd-497b-8cf9-dae0edeaeb88,Namespace:kube-system,Attempt:0,} returns sandbox id \"b7ebd4c717f80f8979e1d7bd95db50a8f6e9459118cbf3c7fc0386a0b4f7279c\"" Mar 17 21:53:05.519604 env[1194]: time="2025-03-17T21:53:05.519554921Z" level=info msg="CreateContainer within sandbox \"b7ebd4c717f80f8979e1d7bd95db50a8f6e9459118cbf3c7fc0386a0b4f7279c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 17 21:53:05.531047 env[1194]: time="2025-03-17T21:53:05.530986530Z" level=info msg="CreateContainer within sandbox \"b7ebd4c717f80f8979e1d7bd95db50a8f6e9459118cbf3c7fc0386a0b4f7279c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9ad26e12d32684aabe43f453c6083572bff9b5d07f11af26406bbf04d97ec504\"" Mar 17 21:53:05.533742 env[1194]: time="2025-03-17T21:53:05.533695974Z" level=info msg="StartContainer for \"9ad26e12d32684aabe43f453c6083572bff9b5d07f11af26406bbf04d97ec504\"" Mar 17 21:53:05.568122 systemd[1]: Started cri-containerd-9ad26e12d32684aabe43f453c6083572bff9b5d07f11af26406bbf04d97ec504.scope. Mar 17 21:53:05.636688 env[1194]: time="2025-03-17T21:53:05.636628692Z" level=info msg="StartContainer for \"9ad26e12d32684aabe43f453c6083572bff9b5d07f11af26406bbf04d97ec504\" returns successfully" Mar 17 21:53:05.667856 systemd[1]: cri-containerd-9ad26e12d32684aabe43f453c6083572bff9b5d07f11af26406bbf04d97ec504.scope: Deactivated successfully. 
Mar 17 21:53:05.707053 env[1194]: time="2025-03-17T21:53:05.706926692Z" level=info msg="shim disconnected" id=9ad26e12d32684aabe43f453c6083572bff9b5d07f11af26406bbf04d97ec504 Mar 17 21:53:05.707053 env[1194]: time="2025-03-17T21:53:05.707041772Z" level=warning msg="cleaning up after shim disconnected" id=9ad26e12d32684aabe43f453c6083572bff9b5d07f11af26406bbf04d97ec504 namespace=k8s.io Mar 17 21:53:05.707053 env[1194]: time="2025-03-17T21:53:05.707075194Z" level=info msg="cleaning up dead shim" Mar 17 21:53:05.717652 kubelet[1934]: W0317 21:53:05.717455 1934 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod45ef0493_ef09_4f07_90c8_c41918bfcc7f.slice/cri-containerd-7c1d07d70e85e3419fa7f567dbc19acc2ab177ae28259212ce6393722a5b6384.scope WatchSource:0}: container "7c1d07d70e85e3419fa7f567dbc19acc2ab177ae28259212ce6393722a5b6384" in namespace "k8s.io": not found Mar 17 21:53:05.726983 env[1194]: time="2025-03-17T21:53:05.726541549Z" level=warning msg="cleanup warnings time=\"2025-03-17T21:53:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4052 runtime=io.containerd.runc.v2\n" Mar 17 21:53:06.017135 env[1194]: time="2025-03-17T21:53:06.017066115Z" level=info msg="CreateContainer within sandbox \"b7ebd4c717f80f8979e1d7bd95db50a8f6e9459118cbf3c7fc0386a0b4f7279c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 17 21:53:06.032103 env[1194]: time="2025-03-17T21:53:06.032041044Z" level=info msg="CreateContainer within sandbox \"b7ebd4c717f80f8979e1d7bd95db50a8f6e9459118cbf3c7fc0386a0b4f7279c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"bac2147272abb299e766ae8ecf3d6cfbf032461f778c8c2d2a065ebdd8df0c13\"" Mar 17 21:53:06.034457 env[1194]: time="2025-03-17T21:53:06.034419286Z" level=info msg="StartContainer for \"bac2147272abb299e766ae8ecf3d6cfbf032461f778c8c2d2a065ebdd8df0c13\"" Mar 17 21:53:06.067774 systemd[1]: Started cri-containerd-bac2147272abb299e766ae8ecf3d6cfbf032461f778c8c2d2a065ebdd8df0c13.scope. Mar 17 21:53:06.122274 env[1194]: time="2025-03-17T21:53:06.122177145Z" level=info msg="StartContainer for \"bac2147272abb299e766ae8ecf3d6cfbf032461f778c8c2d2a065ebdd8df0c13\" returns successfully" Mar 17 21:53:06.141059 systemd[1]: cri-containerd-bac2147272abb299e766ae8ecf3d6cfbf032461f778c8c2d2a065ebdd8df0c13.scope: Deactivated successfully. 
Mar 17 21:53:06.177653 env[1194]: time="2025-03-17T21:53:06.177551948Z" level=info msg="shim disconnected" id=bac2147272abb299e766ae8ecf3d6cfbf032461f778c8c2d2a065ebdd8df0c13 Mar 17 21:53:06.178010 env[1194]: time="2025-03-17T21:53:06.177977167Z" level=warning msg="cleaning up after shim disconnected" id=bac2147272abb299e766ae8ecf3d6cfbf032461f778c8c2d2a065ebdd8df0c13 namespace=k8s.io Mar 17 21:53:06.178224 env[1194]: time="2025-03-17T21:53:06.178161476Z" level=info msg="cleaning up dead shim" Mar 17 21:53:06.189940 env[1194]: time="2025-03-17T21:53:06.189877645Z" level=warning msg="cleanup warnings time=\"2025-03-17T21:53:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4117 runtime=io.containerd.runc.v2\n" Mar 17 21:53:06.304424 kubelet[1934]: I0317 21:53:06.304264 1934 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45ef0493-ef09-4f07-90c8-c41918bfcc7f" path="/var/lib/kubelet/pods/45ef0493-ef09-4f07-90c8-c41918bfcc7f/volumes" Mar 17 21:53:07.024003 env[1194]: time="2025-03-17T21:53:07.023943019Z" level=info msg="CreateContainer within sandbox \"b7ebd4c717f80f8979e1d7bd95db50a8f6e9459118cbf3c7fc0386a0b4f7279c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 17 21:53:07.046936 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount721470569.mount: Deactivated successfully. Mar 17 21:53:07.056449 env[1194]: time="2025-03-17T21:53:07.056382993Z" level=info msg="CreateContainer within sandbox \"b7ebd4c717f80f8979e1d7bd95db50a8f6e9459118cbf3c7fc0386a0b4f7279c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"904988622a1d911f7490b6801c85ce3b8d2a9c7c21aade81c1ded658eaf5b22f\"" Mar 17 21:53:07.057765 env[1194]: time="2025-03-17T21:53:07.057722636Z" level=info msg="StartContainer for \"904988622a1d911f7490b6801c85ce3b8d2a9c7c21aade81c1ded658eaf5b22f\"" Mar 17 21:53:07.093053 systemd[1]: Started cri-containerd-904988622a1d911f7490b6801c85ce3b8d2a9c7c21aade81c1ded658eaf5b22f.scope. Mar 17 21:53:07.142141 env[1194]: time="2025-03-17T21:53:07.142063704Z" level=info msg="StartContainer for \"904988622a1d911f7490b6801c85ce3b8d2a9c7c21aade81c1ded658eaf5b22f\" returns successfully" Mar 17 21:53:07.150040 systemd[1]: cri-containerd-904988622a1d911f7490b6801c85ce3b8d2a9c7c21aade81c1ded658eaf5b22f.scope: Deactivated successfully. 
Mar 17 21:53:07.185523 env[1194]: time="2025-03-17T21:53:07.185411516Z" level=info msg="shim disconnected" id=904988622a1d911f7490b6801c85ce3b8d2a9c7c21aade81c1ded658eaf5b22f Mar 17 21:53:07.185523 env[1194]: time="2025-03-17T21:53:07.185522162Z" level=warning msg="cleaning up after shim disconnected" id=904988622a1d911f7490b6801c85ce3b8d2a9c7c21aade81c1ded658eaf5b22f namespace=k8s.io Mar 17 21:53:07.186129 env[1194]: time="2025-03-17T21:53:07.185540313Z" level=info msg="cleaning up dead shim" Mar 17 21:53:07.198435 env[1194]: time="2025-03-17T21:53:07.198376858Z" level=warning msg="cleanup warnings time=\"2025-03-17T21:53:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4178 runtime=io.containerd.runc.v2\n" Mar 17 21:53:07.294369 kubelet[1934]: E0317 21:53:07.293665 1934 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-8bdsj" podUID="726f0478-ad58-40c7-9dd8-66627ce80e31" Mar 17 21:53:07.347985 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-904988622a1d911f7490b6801c85ce3b8d2a9c7c21aade81c1ded658eaf5b22f-rootfs.mount: Deactivated successfully. Mar 17 21:53:08.034008 env[1194]: time="2025-03-17T21:53:08.033891829Z" level=info msg="CreateContainer within sandbox \"b7ebd4c717f80f8979e1d7bd95db50a8f6e9459118cbf3c7fc0386a0b4f7279c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 17 21:53:08.054768 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3987446598.mount: Deactivated successfully. Mar 17 21:53:08.067216 env[1194]: time="2025-03-17T21:53:08.067078963Z" level=info msg="CreateContainer within sandbox \"b7ebd4c717f80f8979e1d7bd95db50a8f6e9459118cbf3c7fc0386a0b4f7279c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2695e7b2dfff52389ccfbcbec3e05dc8dbd54742e70315e66c89d3deab4e2874\"" Mar 17 21:53:08.069596 env[1194]: time="2025-03-17T21:53:08.069513348Z" level=info msg="StartContainer for \"2695e7b2dfff52389ccfbcbec3e05dc8dbd54742e70315e66c89d3deab4e2874\"" Mar 17 21:53:08.101568 systemd[1]: Started cri-containerd-2695e7b2dfff52389ccfbcbec3e05dc8dbd54742e70315e66c89d3deab4e2874.scope. Mar 17 21:53:08.151126 systemd[1]: cri-containerd-2695e7b2dfff52389ccfbcbec3e05dc8dbd54742e70315e66c89d3deab4e2874.scope: Deactivated successfully. 
Mar 17 21:53:08.152681 env[1194]: time="2025-03-17T21:53:08.152616348Z" level=info msg="StartContainer for \"2695e7b2dfff52389ccfbcbec3e05dc8dbd54742e70315e66c89d3deab4e2874\" returns successfully" Mar 17 21:53:08.185901 env[1194]: time="2025-03-17T21:53:08.185831339Z" level=info msg="shim disconnected" id=2695e7b2dfff52389ccfbcbec3e05dc8dbd54742e70315e66c89d3deab4e2874 Mar 17 21:53:08.185901 env[1194]: time="2025-03-17T21:53:08.185901376Z" level=warning msg="cleaning up after shim disconnected" id=2695e7b2dfff52389ccfbcbec3e05dc8dbd54742e70315e66c89d3deab4e2874 namespace=k8s.io Mar 17 21:53:08.185901 env[1194]: time="2025-03-17T21:53:08.185919083Z" level=info msg="cleaning up dead shim" Mar 17 21:53:08.196915 env[1194]: time="2025-03-17T21:53:08.196852538Z" level=warning msg="cleanup warnings time=\"2025-03-17T21:53:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4234 runtime=io.containerd.runc.v2\n" Mar 17 21:53:08.347814 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2695e7b2dfff52389ccfbcbec3e05dc8dbd54742e70315e66c89d3deab4e2874-rootfs.mount: Deactivated successfully. Mar 17 21:53:08.836325 kubelet[1934]: W0317 21:53:08.836231 1934 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod38802fba_5dbd_497b_8cf9_dae0edeaeb88.slice/cri-containerd-9ad26e12d32684aabe43f453c6083572bff9b5d07f11af26406bbf04d97ec504.scope WatchSource:0}: task 9ad26e12d32684aabe43f453c6083572bff9b5d07f11af26406bbf04d97ec504 not found: not found Mar 17 21:53:09.037720 env[1194]: time="2025-03-17T21:53:09.037647853Z" level=info msg="CreateContainer within sandbox \"b7ebd4c717f80f8979e1d7bd95db50a8f6e9459118cbf3c7fc0386a0b4f7279c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 17 21:53:09.064534 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1866269701.mount: Deactivated successfully. Mar 17 21:53:09.078321 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount243939841.mount: Deactivated successfully. Mar 17 21:53:09.083892 env[1194]: time="2025-03-17T21:53:09.083770582Z" level=info msg="CreateContainer within sandbox \"b7ebd4c717f80f8979e1d7bd95db50a8f6e9459118cbf3c7fc0386a0b4f7279c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"cc745bff82170c14be8e3366172e46fb27842570854e21f8ef98cfeaaa09ed30\"" Mar 17 21:53:09.087910 env[1194]: time="2025-03-17T21:53:09.085003495Z" level=info msg="StartContainer for \"cc745bff82170c14be8e3366172e46fb27842570854e21f8ef98cfeaaa09ed30\"" Mar 17 21:53:09.112003 systemd[1]: Started cri-containerd-cc745bff82170c14be8e3366172e46fb27842570854e21f8ef98cfeaaa09ed30.scope. 
Mar 17 21:53:09.198842 env[1194]: time="2025-03-17T21:53:09.198655260Z" level=info msg="StartContainer for \"cc745bff82170c14be8e3366172e46fb27842570854e21f8ef98cfeaaa09ed30\" returns successfully" Mar 17 21:53:09.294108 kubelet[1934]: E0317 21:53:09.294030 1934 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-8bdsj" podUID="726f0478-ad58-40c7-9dd8-66627ce80e31" Mar 17 21:53:10.066880 kubelet[1934]: I0317 21:53:10.066717 1934 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-9fjpp" podStartSLOduration=5.066671916 podStartE2EDuration="5.066671916s" podCreationTimestamp="2025-03-17 21:53:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 21:53:10.063929757 +0000 UTC m=+190.087743424" watchObservedRunningTime="2025-03-17 21:53:10.066671916 +0000 UTC m=+190.090485576" Mar 17 21:53:10.140381 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Mar 17 21:53:11.953359 kubelet[1934]: W0317 21:53:11.953237 1934 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod38802fba_5dbd_497b_8cf9_dae0edeaeb88.slice/cri-containerd-bac2147272abb299e766ae8ecf3d6cfbf032461f778c8c2d2a065ebdd8df0c13.scope WatchSource:0}: task bac2147272abb299e766ae8ecf3d6cfbf032461f778c8c2d2a065ebdd8df0c13 not found: not found Mar 17 21:53:13.911822 systemd-networkd[1025]: lxc_health: Link UP Mar 17 21:53:13.963589 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Mar 17 21:53:13.964241 systemd-networkd[1025]: lxc_health: Gained carrier Mar 17 21:53:14.280569 systemd[1]: run-containerd-runc-k8s.io-cc745bff82170c14be8e3366172e46fb27842570854e21f8ef98cfeaaa09ed30-runc.4It8xX.mount: Deactivated successfully. Mar 17 21:53:15.073753 kubelet[1934]: W0317 21:53:15.073503 1934 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod38802fba_5dbd_497b_8cf9_dae0edeaeb88.slice/cri-containerd-904988622a1d911f7490b6801c85ce3b8d2a9c7c21aade81c1ded658eaf5b22f.scope WatchSource:0}: task 904988622a1d911f7490b6801c85ce3b8d2a9c7c21aade81c1ded658eaf5b22f not found: not found Mar 17 21:53:15.500572 systemd-networkd[1025]: lxc_health: Gained IPv6LL Mar 17 21:53:18.185994 kubelet[1934]: W0317 21:53:18.185887 1934 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod38802fba_5dbd_497b_8cf9_dae0edeaeb88.slice/cri-containerd-2695e7b2dfff52389ccfbcbec3e05dc8dbd54742e70315e66c89d3deab4e2874.scope WatchSource:0}: task 2695e7b2dfff52389ccfbcbec3e05dc8dbd54742e70315e66c89d3deab4e2874 not found: not found Mar 17 21:53:21.021190 systemd[1]: run-containerd-runc-k8s.io-cc745bff82170c14be8e3366172e46fb27842570854e21f8ef98cfeaaa09ed30-runc.Vo95Z8.mount: Deactivated successfully. Mar 17 21:53:21.245815 sshd[3941]: pam_unix(sshd:session): session closed for user core Mar 17 21:53:21.251874 systemd-logind[1183]: Session 24 logged out. Waiting for processes to exit. Mar 17 21:53:21.253514 systemd[1]: sshd@23-10.230.29.198:22-139.178.89.65:48740.service: Deactivated successfully. Mar 17 21:53:21.254749 systemd[1]: session-24.scope: Deactivated successfully. 
Mar 17 21:53:21.256207 systemd-logind[1183]: Removed session 24.