May 10 00:43:35.873157 kernel: Linux version 5.15.181-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri May 9 23:12:23 -00 2025 May 10 00:43:35.878144 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=39569409b30be1967efab22b453b92a780dcf0fe8e1448a18bf235b5cf33e54a May 10 00:43:35.878160 kernel: BIOS-provided physical RAM map: May 10 00:43:35.878167 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable May 10 00:43:35.878173 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved May 10 00:43:35.878179 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved May 10 00:43:35.878187 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable May 10 00:43:35.878194 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved May 10 00:43:35.878200 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved May 10 00:43:35.878206 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved May 10 00:43:35.878215 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved May 10 00:43:35.878221 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved May 10 00:43:35.878244 kernel: NX (Execute Disable) protection: active May 10 00:43:35.878251 kernel: SMBIOS 2.8 present. May 10 00:43:35.878260 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014 May 10 00:43:35.878267 kernel: Hypervisor detected: KVM May 10 00:43:35.878277 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 May 10 00:43:35.878284 kernel: kvm-clock: cpu 0, msr 6d196001, primary cpu clock May 10 00:43:35.878291 kernel: kvm-clock: using sched offset of 4147258352 cycles May 10 00:43:35.878298 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns May 10 00:43:35.878306 kernel: tsc: Detected 2294.608 MHz processor May 10 00:43:35.878313 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved May 10 00:43:35.878321 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable May 10 00:43:35.878328 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000 May 10 00:43:35.878335 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT May 10 00:43:35.878345 kernel: Using GB pages for direct mapping May 10 00:43:35.878352 kernel: ACPI: Early table checksum verification disabled May 10 00:43:35.878359 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS ) May 10 00:43:35.878367 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 10 00:43:35.878374 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 10 00:43:35.878381 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001) May 10 00:43:35.878388 kernel: ACPI: FACS 0x000000007FFDFD40 000040 May 10 00:43:35.878395 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 10 00:43:35.878402 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 10 00:43:35.878411 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 
00000001) May 10 00:43:35.878418 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 10 00:43:35.878425 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480] May 10 00:43:35.878432 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c] May 10 00:43:35.878439 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f] May 10 00:43:35.878447 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570] May 10 00:43:35.878457 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740] May 10 00:43:35.878467 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c] May 10 00:43:35.878475 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4] May 10 00:43:35.878483 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 May 10 00:43:35.878490 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 May 10 00:43:35.878498 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0 May 10 00:43:35.878506 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0 May 10 00:43:35.878513 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0 May 10 00:43:35.878523 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0 May 10 00:43:35.878530 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0 May 10 00:43:35.878538 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0 May 10 00:43:35.878545 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0 May 10 00:43:35.878553 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0 May 10 00:43:35.878561 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0 May 10 00:43:35.878568 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0 May 10 00:43:35.878576 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0 May 10 00:43:35.878584 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0 May 10 00:43:35.878591 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0 May 10 00:43:35.878601 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0 May 10 00:43:35.878608 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] May 10 00:43:35.878616 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] May 10 00:43:35.878624 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug May 10 00:43:35.878631 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff] May 10 00:43:35.878639 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff] May 10 00:43:35.878647 kernel: Zone ranges: May 10 00:43:35.878655 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] May 10 00:43:35.878663 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff] May 10 00:43:35.878673 kernel: Normal empty May 10 00:43:35.878680 kernel: Movable zone start for each node May 10 00:43:35.878688 kernel: Early memory node ranges May 10 00:43:35.878696 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] May 10 00:43:35.878703 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff] May 10 00:43:35.878711 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff] May 10 00:43:35.878719 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 10 00:43:35.878726 kernel: On node 0, zone DMA: 97 pages in unavailable ranges May 10 00:43:35.878734 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges May 10 00:43:35.878744 kernel: ACPI: PM-Timer IO Port: 0x608 May 10 00:43:35.878751 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) May 10 00:43:35.878759 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 May 10 00:43:35.878767 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) May 10 00:43:35.878774 
kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) May 10 00:43:35.878782 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) May 10 00:43:35.878790 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) May 10 00:43:35.878797 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) May 10 00:43:35.878805 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 10 00:43:35.878815 kernel: TSC deadline timer available May 10 00:43:35.878823 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs May 10 00:43:35.878830 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices May 10 00:43:35.878838 kernel: Booting paravirtualized kernel on KVM May 10 00:43:35.878846 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 10 00:43:35.878854 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:16 nr_node_ids:1 May 10 00:43:35.878862 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u262144 May 10 00:43:35.878869 kernel: pcpu-alloc: s188696 r8192 d32488 u262144 alloc=1*2097152 May 10 00:43:35.878877 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 May 10 00:43:35.878887 kernel: kvm-guest: stealtime: cpu 0, msr 7da1c0c0 May 10 00:43:35.878895 kernel: kvm-guest: PV spinlocks enabled May 10 00:43:35.878902 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) May 10 00:43:35.878910 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804 May 10 00:43:35.878918 kernel: Policy zone: DMA32 May 10 00:43:35.878927 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=39569409b30be1967efab22b453b92a780dcf0fe8e1448a18bf235b5cf33e54a May 10 00:43:35.878935 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 10 00:43:35.878943 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 10 00:43:35.878953 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) May 10 00:43:35.878961 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 10 00:43:35.878969 kernel: Memory: 1903832K/2096616K available (12294K kernel code, 2276K rwdata, 13724K rodata, 47456K init, 4124K bss, 192524K reserved, 0K cma-reserved) May 10 00:43:35.878977 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 May 10 00:43:35.878984 kernel: ftrace: allocating 34584 entries in 136 pages May 10 00:43:35.878992 kernel: ftrace: allocated 136 pages with 2 groups May 10 00:43:35.878999 kernel: rcu: Hierarchical RCU implementation. May 10 00:43:35.879008 kernel: rcu: RCU event tracing is enabled. May 10 00:43:35.879016 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. May 10 00:43:35.879026 kernel: Rude variant of Tasks RCU enabled. May 10 00:43:35.879034 kernel: Tracing variant of Tasks RCU enabled. May 10 00:43:35.879042 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
May 10 00:43:35.879050 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 May 10 00:43:35.879057 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16 May 10 00:43:35.879065 kernel: random: crng init done May 10 00:43:35.879073 kernel: Console: colour VGA+ 80x25 May 10 00:43:35.879091 kernel: printk: console [tty0] enabled May 10 00:43:35.879106 kernel: printk: console [ttyS0] enabled May 10 00:43:35.879114 kernel: ACPI: Core revision 20210730 May 10 00:43:35.879122 kernel: APIC: Switch to symmetric I/O mode setup May 10 00:43:35.879130 kernel: x2apic enabled May 10 00:43:35.879141 kernel: Switched APIC routing to physical x2apic. May 10 00:43:35.879150 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x21134f58f0d, max_idle_ns: 440795217993 ns May 10 00:43:35.879158 kernel: Calibrating delay loop (skipped) preset value.. 4589.21 BogoMIPS (lpj=2294608) May 10 00:43:35.879167 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated May 10 00:43:35.879175 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 May 10 00:43:35.879185 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 May 10 00:43:35.879193 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 10 00:43:35.879202 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! May 10 00:43:35.879210 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit May 10 00:43:35.879218 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall May 10 00:43:35.879238 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS May 10 00:43:35.879246 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT May 10 00:43:35.879254 kernel: RETBleed: Mitigation: Enhanced IBRS May 10 00:43:35.879262 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier May 10 00:43:35.879271 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp May 10 00:43:35.879281 kernel: TAA: Mitigation: Clear CPU buffers May 10 00:43:35.879289 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode May 10 00:43:35.879297 kernel: GDS: Unknown: Dependent on hypervisor status May 10 00:43:35.879305 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' May 10 00:43:35.879313 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' May 10 00:43:35.879321 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' May 10 00:43:35.879330 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' May 10 00:43:35.879353 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' May 10 00:43:35.879362 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' May 10 00:43:35.879371 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' May 10 00:43:35.879380 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 May 10 00:43:35.879392 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 May 10 00:43:35.879401 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 May 10 00:43:35.879409 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 May 10 00:43:35.879418 kernel: x86/fpu: xstate_offset[9]: 2432, xstate_sizes[9]: 8 May 10 00:43:35.879427 kernel: x86/fpu: Enabled xstate features 0x2e7, context size is 2440 bytes, using 'compacted' format. 
May 10 00:43:35.879436 kernel: Freeing SMP alternatives memory: 32K May 10 00:43:35.879445 kernel: pid_max: default: 32768 minimum: 301 May 10 00:43:35.879454 kernel: LSM: Security Framework initializing May 10 00:43:35.879463 kernel: SELinux: Initializing. May 10 00:43:35.879472 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) May 10 00:43:35.879481 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) May 10 00:43:35.879490 kernel: smpboot: CPU0: Intel Xeon Processor (Cascadelake) (family: 0x6, model: 0x55, stepping: 0x6) May 10 00:43:35.879502 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. May 10 00:43:35.879511 kernel: signal: max sigframe size: 3632 May 10 00:43:35.879520 kernel: rcu: Hierarchical SRCU implementation. May 10 00:43:35.879529 kernel: NMI watchdog: Perf NMI watchdog permanently disabled May 10 00:43:35.879538 kernel: smp: Bringing up secondary CPUs ... May 10 00:43:35.879547 kernel: x86: Booting SMP configuration: May 10 00:43:35.879556 kernel: .... node #0, CPUs: #1 May 10 00:43:35.879565 kernel: kvm-clock: cpu 1, msr 6d196041, secondary cpu clock May 10 00:43:35.879574 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 May 10 00:43:35.879586 kernel: kvm-guest: stealtime: cpu 1, msr 7da5c0c0 May 10 00:43:35.879595 kernel: smp: Brought up 1 node, 2 CPUs May 10 00:43:35.879604 kernel: smpboot: Max logical packages: 16 May 10 00:43:35.879613 kernel: smpboot: Total of 2 processors activated (9178.43 BogoMIPS) May 10 00:43:35.879622 kernel: devtmpfs: initialized May 10 00:43:35.879631 kernel: x86/mm: Memory block size: 128MB May 10 00:43:35.879640 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 10 00:43:35.879650 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) May 10 00:43:35.879659 kernel: pinctrl core: initialized pinctrl subsystem May 10 00:43:35.879670 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 10 00:43:35.879679 kernel: audit: initializing netlink subsys (disabled) May 10 00:43:35.879689 kernel: audit: type=2000 audit(1746837814.833:1): state=initialized audit_enabled=0 res=1 May 10 00:43:35.879697 kernel: thermal_sys: Registered thermal governor 'step_wise' May 10 00:43:35.879706 kernel: thermal_sys: Registered thermal governor 'user_space' May 10 00:43:35.879715 kernel: cpuidle: using governor menu May 10 00:43:35.879724 kernel: ACPI: bus type PCI registered May 10 00:43:35.879733 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 10 00:43:35.879743 kernel: dca service started, version 1.12.1 May 10 00:43:35.879754 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) May 10 00:43:35.879763 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820 May 10 00:43:35.879772 kernel: PCI: Using configuration type 1 for base access May 10 00:43:35.879781 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
May 10 00:43:35.879790 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages May 10 00:43:35.879799 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages May 10 00:43:35.879809 kernel: ACPI: Added _OSI(Module Device) May 10 00:43:35.879818 kernel: ACPI: Added _OSI(Processor Device) May 10 00:43:35.879827 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 10 00:43:35.879838 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 10 00:43:35.879847 kernel: ACPI: Added _OSI(Linux-Dell-Video) May 10 00:43:35.879856 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) May 10 00:43:35.879865 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) May 10 00:43:35.879874 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 10 00:43:35.879883 kernel: ACPI: Interpreter enabled May 10 00:43:35.879892 kernel: ACPI: PM: (supports S0 S5) May 10 00:43:35.879901 kernel: ACPI: Using IOAPIC for interrupt routing May 10 00:43:35.879910 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 10 00:43:35.879919 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F May 10 00:43:35.879931 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 10 00:43:35.880104 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 10 00:43:35.880193 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] May 10 00:43:35.880297 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] May 10 00:43:35.880310 kernel: PCI host bridge to bus 0000:00 May 10 00:43:35.880404 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 10 00:43:35.880483 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] May 10 00:43:35.880558 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 10 00:43:35.880632 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window] May 10 00:43:35.880706 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] May 10 00:43:35.880778 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window] May 10 00:43:35.880852 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 10 00:43:35.880948 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 May 10 00:43:35.881044 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000 May 10 00:43:35.881137 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref] May 10 00:43:35.881220 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff] May 10 00:43:35.881318 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref] May 10 00:43:35.881393 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] May 10 00:43:35.881503 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 May 10 00:43:35.881583 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff] May 10 00:43:35.881675 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 May 10 00:43:35.881752 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff] May 10 00:43:35.881833 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 May 10 00:43:35.881908 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff] May 10 00:43:35.881988 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 May 10 00:43:35.882063 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff] May 10 00:43:35.882181 kernel: pci 0000:00:02.4: 
[1b36:000c] type 01 class 0x060400 May 10 00:43:35.889258 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff] May 10 00:43:35.889401 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 May 10 00:43:35.889493 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff] May 10 00:43:35.889588 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 May 10 00:43:35.889674 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff] May 10 00:43:35.889769 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 May 10 00:43:35.889853 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff] May 10 00:43:35.889943 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 May 10 00:43:35.890026 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df] May 10 00:43:35.890122 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff] May 10 00:43:35.890203 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref] May 10 00:43:35.890305 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref] May 10 00:43:35.890407 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 May 10 00:43:35.890483 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] May 10 00:43:35.890558 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff] May 10 00:43:35.890633 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref] May 10 00:43:35.890713 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 May 10 00:43:35.890789 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO May 10 00:43:35.890875 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 May 10 00:43:35.890951 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff] May 10 00:43:35.891026 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff] May 10 00:43:35.891112 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 May 10 00:43:35.891189 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] May 10 00:43:35.891296 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 May 10 00:43:35.891376 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit] May 10 00:43:35.891458 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] May 10 00:43:35.891532 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] May 10 00:43:35.891608 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] May 10 00:43:35.891693 kernel: pci_bus 0000:02: extended config space not accessible May 10 00:43:35.891784 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000 May 10 00:43:35.891867 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f] May 10 00:43:35.891950 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] May 10 00:43:35.892028 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] May 10 00:43:35.892135 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330 May 10 00:43:35.892224 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit] May 10 00:43:35.892318 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] May 10 00:43:35.892401 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] May 10 00:43:35.892485 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] May 10 00:43:35.892589 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00 May 10 00:43:35.892677 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref] May 10 00:43:35.892763 kernel: pci 
0000:00:02.2: PCI bridge to [bus 04] May 10 00:43:35.892847 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] May 10 00:43:35.892931 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] May 10 00:43:35.893014 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] May 10 00:43:35.893108 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] May 10 00:43:35.893210 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] May 10 00:43:35.893311 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] May 10 00:43:35.893387 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] May 10 00:43:35.893461 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] May 10 00:43:35.893538 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] May 10 00:43:35.893635 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] May 10 00:43:35.893718 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] May 10 00:43:35.893801 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] May 10 00:43:35.893887 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] May 10 00:43:35.893970 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] May 10 00:43:35.894052 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] May 10 00:43:35.894144 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] May 10 00:43:35.895280 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] May 10 00:43:35.895303 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 May 10 00:43:35.895313 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 May 10 00:43:35.895321 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 May 10 00:43:35.895330 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 May 10 00:43:35.895343 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 May 10 00:43:35.895351 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 May 10 00:43:35.895359 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 May 10 00:43:35.895368 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 May 10 00:43:35.895376 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 May 10 00:43:35.895385 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 May 10 00:43:35.895393 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 May 10 00:43:35.895401 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 May 10 00:43:35.895410 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 May 10 00:43:35.895421 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 May 10 00:43:35.895429 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 May 10 00:43:35.895438 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 May 10 00:43:35.895446 kernel: iommu: Default domain type: Translated May 10 00:43:35.895455 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 10 00:43:35.895560 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device May 10 00:43:35.895638 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 10 00:43:35.895732 kernel: pci 0000:00:01.0: vgaarb: bridge control possible May 10 00:43:35.895747 kernel: vgaarb: loaded May 10 00:43:35.895757 kernel: pps_core: LinuxPPS API ver. 1 registered May 10 00:43:35.895766 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti May 10 00:43:35.895776 kernel: PTP clock support registered May 10 00:43:35.895785 kernel: PCI: Using ACPI for IRQ routing May 10 00:43:35.895794 kernel: PCI: pci_cache_line_size set to 64 bytes May 10 00:43:35.895803 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] May 10 00:43:35.895812 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff] May 10 00:43:35.895821 kernel: clocksource: Switched to clocksource kvm-clock May 10 00:43:35.895833 kernel: VFS: Disk quotas dquot_6.6.0 May 10 00:43:35.895842 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 10 00:43:35.895852 kernel: pnp: PnP ACPI init May 10 00:43:35.895941 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved May 10 00:43:35.895955 kernel: pnp: PnP ACPI: found 5 devices May 10 00:43:35.895964 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 10 00:43:35.895973 kernel: NET: Registered PF_INET protocol family May 10 00:43:35.895983 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) May 10 00:43:35.895995 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) May 10 00:43:35.896004 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 10 00:43:35.896014 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) May 10 00:43:35.896023 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear) May 10 00:43:35.896032 kernel: TCP: Hash tables configured (established 16384 bind 16384) May 10 00:43:35.896041 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) May 10 00:43:35.896051 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) May 10 00:43:35.896060 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 10 00:43:35.896069 kernel: NET: Registered PF_XDP protocol family May 10 00:43:35.896167 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000 May 10 00:43:35.898297 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 May 10 00:43:35.898393 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 May 10 00:43:35.898475 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 May 10 00:43:35.898554 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 May 10 00:43:35.898631 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 May 10 00:43:35.898711 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 May 10 00:43:35.898788 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 May 10 00:43:35.898864 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff] May 10 00:43:35.898939 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff] May 10 00:43:35.899013 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff] May 10 00:43:35.899087 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff] May 10 00:43:35.899171 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff] May 10 00:43:35.899279 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff] May 10 00:43:35.899356 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff] May 10 00:43:35.899431 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff] May 10 
00:43:35.899511 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] May 10 00:43:35.899589 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] May 10 00:43:35.899664 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] May 10 00:43:35.899740 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] May 10 00:43:35.899815 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] May 10 00:43:35.899894 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] May 10 00:43:35.899970 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] May 10 00:43:35.900044 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] May 10 00:43:35.900127 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] May 10 00:43:35.900203 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] May 10 00:43:35.901334 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] May 10 00:43:35.901426 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] May 10 00:43:35.901506 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] May 10 00:43:35.901581 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] May 10 00:43:35.901657 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] May 10 00:43:35.901732 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] May 10 00:43:35.901807 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] May 10 00:43:35.901882 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] May 10 00:43:35.901958 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] May 10 00:43:35.902033 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] May 10 00:43:35.902119 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] May 10 00:43:35.902194 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] May 10 00:43:35.903332 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] May 10 00:43:35.903411 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] May 10 00:43:35.903479 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] May 10 00:43:35.903545 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] May 10 00:43:35.903640 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] May 10 00:43:35.903719 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff] May 10 00:43:35.903796 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] May 10 00:43:35.903873 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] May 10 00:43:35.903948 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] May 10 00:43:35.904022 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] May 10 00:43:35.904127 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] May 10 00:43:35.904209 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] May 10 00:43:35.904302 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] May 10 00:43:35.904377 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] May 10 00:43:35.904451 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] May 10 00:43:35.904524 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window] May 10 00:43:35.904597 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] May 10 00:43:35.904670 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window] May 10 00:43:35.904759 kernel: pci_bus 0000:01: resource 0 [io 
0x1000-0x1fff] May 10 00:43:35.904842 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff] May 10 00:43:35.904921 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref] May 10 00:43:35.905005 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff] May 10 00:43:35.905106 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff] May 10 00:43:35.905178 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff] May 10 00:43:35.905255 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref] May 10 00:43:35.905333 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff] May 10 00:43:35.905407 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff] May 10 00:43:35.905478 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref] May 10 00:43:35.905560 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff] May 10 00:43:35.905631 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff] May 10 00:43:35.905701 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref] May 10 00:43:35.905777 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff] May 10 00:43:35.905848 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff] May 10 00:43:35.905921 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref] May 10 00:43:35.905998 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff] May 10 00:43:35.906071 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff] May 10 00:43:35.906148 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] May 10 00:43:35.906224 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff] May 10 00:43:35.906304 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff] May 10 00:43:35.906375 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] May 10 00:43:35.906453 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff] May 10 00:43:35.906524 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff] May 10 00:43:35.906595 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] May 10 00:43:35.906607 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 May 10 00:43:35.906617 kernel: PCI: CLS 0 bytes, default 64 May 10 00:43:35.906626 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) May 10 00:43:35.906635 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) May 10 00:43:35.906645 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer May 10 00:43:35.906657 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x21134f58f0d, max_idle_ns: 440795217993 ns May 10 00:43:35.906666 kernel: Initialise system trusted keyrings May 10 00:43:35.906675 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 May 10 00:43:35.906684 kernel: Key type asymmetric registered May 10 00:43:35.906692 kernel: Asymmetric key parser 'x509' registered May 10 00:43:35.906701 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) May 10 00:43:35.906710 kernel: io scheduler mq-deadline registered May 10 00:43:35.906719 kernel: io scheduler kyber registered May 10 00:43:35.906728 kernel: io scheduler bfq registered May 10 00:43:35.906807 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 May 10 00:43:35.906886 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 May 10 00:43:35.906963 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ 
Interlock+ NoCompl- IbPresDis- LLActRep+ May 10 00:43:35.907042 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 May 10 00:43:35.907127 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 May 10 00:43:35.907203 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 10 00:43:35.908776 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 May 10 00:43:35.908879 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 May 10 00:43:35.908975 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 10 00:43:35.909056 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 May 10 00:43:35.909144 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 May 10 00:43:35.909221 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 10 00:43:35.909313 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 May 10 00:43:35.909391 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 May 10 00:43:35.909472 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 10 00:43:35.909551 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 May 10 00:43:35.909628 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 May 10 00:43:35.909704 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 10 00:43:35.909786 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 May 10 00:43:35.909860 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 May 10 00:43:35.909936 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 10 00:43:35.910014 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 May 10 00:43:35.910089 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 May 10 00:43:35.910172 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 10 00:43:35.910187 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 10 00:43:35.910197 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 May 10 00:43:35.910207 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 May 10 00:43:35.910215 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 10 00:43:35.910225 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 10 00:43:35.910251 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 May 10 00:43:35.910260 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 May 10 00:43:35.910269 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 10 00:43:35.910281 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 May 10 00:43:35.910369 kernel: rtc_cmos 00:03: RTC can wake from S4 May 10 00:43:35.910442 kernel: rtc_cmos 00:03: registered as rtc0 May 10 00:43:35.910515 kernel: rtc_cmos 00:03: setting system clock to 2025-05-10T00:43:35 UTC (1746837815) May 10 00:43:35.910585 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram May 10 00:43:35.910596 kernel: intel_pstate: CPU model not supported May 10 00:43:35.910605 kernel: 
NET: Registered PF_INET6 protocol family May 10 00:43:35.910614 kernel: Segment Routing with IPv6 May 10 00:43:35.910626 kernel: In-situ OAM (IOAM) with IPv6 May 10 00:43:35.910635 kernel: NET: Registered PF_PACKET protocol family May 10 00:43:35.910644 kernel: Key type dns_resolver registered May 10 00:43:35.910653 kernel: IPI shorthand broadcast: enabled May 10 00:43:35.910662 kernel: sched_clock: Marking stable (703150788, 118804169)->(1019089480, -197134523) May 10 00:43:35.910671 kernel: registered taskstats version 1 May 10 00:43:35.910680 kernel: Loading compiled-in X.509 certificates May 10 00:43:35.910689 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.181-flatcar: 0c62a22cd9157131d2e97d5a2e1bd9023e187117' May 10 00:43:35.910697 kernel: Key type .fscrypt registered May 10 00:43:35.910708 kernel: Key type fscrypt-provisioning registered May 10 00:43:35.910717 kernel: ima: No TPM chip found, activating TPM-bypass! May 10 00:43:35.910726 kernel: ima: Allocated hash algorithm: sha1 May 10 00:43:35.910735 kernel: ima: No architecture policies found May 10 00:43:35.910743 kernel: clk: Disabling unused clocks May 10 00:43:35.910752 kernel: Freeing unused kernel image (initmem) memory: 47456K May 10 00:43:35.910761 kernel: Write protecting the kernel read-only data: 28672k May 10 00:43:35.910770 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K May 10 00:43:35.910779 kernel: Freeing unused kernel image (rodata/data gap) memory: 612K May 10 00:43:35.910790 kernel: Run /init as init process May 10 00:43:35.910799 kernel: with arguments: May 10 00:43:35.910809 kernel: /init May 10 00:43:35.910817 kernel: with environment: May 10 00:43:35.910826 kernel: HOME=/ May 10 00:43:35.910834 kernel: TERM=linux May 10 00:43:35.910843 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 10 00:43:35.910854 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 10 00:43:35.910868 systemd[1]: Detected virtualization kvm. May 10 00:43:35.910878 systemd[1]: Detected architecture x86-64. May 10 00:43:35.910887 systemd[1]: Running in initrd. May 10 00:43:35.910896 systemd[1]: No hostname configured, using default hostname. May 10 00:43:35.910905 systemd[1]: Hostname set to . May 10 00:43:35.910915 systemd[1]: Initializing machine ID from VM UUID. May 10 00:43:35.910924 systemd[1]: Queued start job for default target initrd.target. May 10 00:43:35.910935 systemd[1]: Started systemd-ask-password-console.path. May 10 00:43:35.910946 systemd[1]: Reached target cryptsetup.target. May 10 00:43:35.910955 systemd[1]: Reached target paths.target. May 10 00:43:35.910964 systemd[1]: Reached target slices.target. May 10 00:43:35.910973 systemd[1]: Reached target swap.target. May 10 00:43:35.910982 systemd[1]: Reached target timers.target. May 10 00:43:35.910992 systemd[1]: Listening on iscsid.socket. May 10 00:43:35.911001 systemd[1]: Listening on iscsiuio.socket. May 10 00:43:35.911011 systemd[1]: Listening on systemd-journald-audit.socket. May 10 00:43:35.911022 systemd[1]: Listening on systemd-journald-dev-log.socket. May 10 00:43:35.911032 systemd[1]: Listening on systemd-journald.socket. May 10 00:43:35.911041 systemd[1]: Listening on systemd-networkd.socket. 
May 10 00:43:35.911050 systemd[1]: Listening on systemd-udevd-control.socket. May 10 00:43:35.911060 systemd[1]: Listening on systemd-udevd-kernel.socket. May 10 00:43:35.911069 systemd[1]: Reached target sockets.target. May 10 00:43:35.911078 systemd[1]: Starting kmod-static-nodes.service... May 10 00:43:35.911087 systemd[1]: Finished network-cleanup.service. May 10 00:43:35.911103 systemd[1]: Starting systemd-fsck-usr.service... May 10 00:43:35.911114 systemd[1]: Starting systemd-journald.service... May 10 00:43:35.911124 systemd[1]: Starting systemd-modules-load.service... May 10 00:43:35.911133 systemd[1]: Starting systemd-resolved.service... May 10 00:43:35.911142 systemd[1]: Starting systemd-vconsole-setup.service... May 10 00:43:35.911151 systemd[1]: Finished kmod-static-nodes.service. May 10 00:43:35.911161 systemd[1]: Finished systemd-fsck-usr.service. May 10 00:43:35.911170 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... May 10 00:43:35.911184 systemd-journald[201]: Journal started May 10 00:43:35.911268 systemd-journald[201]: Runtime Journal (/run/log/journal/d4fd95ae2e9745b8998e3a8d1114b46c) is 4.7M, max 38.1M, 33.3M free. May 10 00:43:35.876687 systemd-modules-load[202]: Inserted module 'overlay' May 10 00:43:35.897617 systemd-resolved[203]: Positive Trust Anchors: May 10 00:43:35.897630 systemd-resolved[203]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 10 00:43:35.936242 systemd[1]: Started systemd-resolved.service. May 10 00:43:35.936267 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 10 00:43:35.897667 systemd-resolved[203]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 10 00:43:35.944751 kernel: audit: type=1130 audit(1746837815.936:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:35.944773 systemd[1]: Started systemd-journald.service. May 10 00:43:35.944787 kernel: audit: type=1130 audit(1746837815.940:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:35.944806 kernel: Bridge firewalling registered May 10 00:43:35.936000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:35.940000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:35.907719 systemd-resolved[203]: Defaulting to hostname 'linux'. May 10 00:43:35.954167 kernel: audit: type=1130 audit(1746837815.945:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 10 00:43:35.954198 kernel: audit: type=1130 audit(1746837815.946:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:35.945000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:35.946000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:35.943707 systemd[1]: Finished systemd-vconsole-setup.service. May 10 00:43:35.945502 systemd-modules-load[202]: Inserted module 'br_netfilter' May 10 00:43:35.946095 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. May 10 00:43:35.946539 systemd[1]: Reached target nss-lookup.target. May 10 00:43:35.947683 systemd[1]: Starting dracut-cmdline-ask.service... May 10 00:43:35.967062 systemd[1]: Finished dracut-cmdline-ask.service. May 10 00:43:35.967000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:35.968253 systemd[1]: Starting dracut-cmdline.service... May 10 00:43:35.971878 kernel: audit: type=1130 audit(1746837815.967:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:35.976248 kernel: SCSI subsystem initialized May 10 00:43:35.978252 dracut-cmdline[219]: dracut-dracut-053 May 10 00:43:35.980575 dracut-cmdline[219]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=39569409b30be1967efab22b453b92a780dcf0fe8e1448a18bf235b5cf33e54a May 10 00:43:35.991447 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 10 00:43:35.991494 kernel: device-mapper: uevent: version 1.0.3 May 10 00:43:36.000504 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com May 10 00:43:36.004394 systemd-modules-load[202]: Inserted module 'dm_multipath' May 10 00:43:36.005067 systemd[1]: Finished systemd-modules-load.service. May 10 00:43:36.014360 kernel: audit: type=1130 audit(1746837816.005:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:36.005000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:36.006944 systemd[1]: Starting systemd-sysctl.service... May 10 00:43:36.020769 systemd[1]: Finished systemd-sysctl.service. 
May 10 00:43:36.020000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:36.025253 kernel: audit: type=1130 audit(1746837816.020:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:36.077263 kernel: Loading iSCSI transport class v2.0-870. May 10 00:43:36.096279 kernel: iscsi: registered transport (tcp) May 10 00:43:36.120324 kernel: iscsi: registered transport (qla4xxx) May 10 00:43:36.120420 kernel: QLogic iSCSI HBA Driver May 10 00:43:36.164384 systemd[1]: Finished dracut-cmdline.service. May 10 00:43:36.165669 systemd[1]: Starting dracut-pre-udev.service... May 10 00:43:36.168756 kernel: audit: type=1130 audit(1746837816.164:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:36.164000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:36.219305 kernel: raid6: avx512x4 gen() 17690 MB/s May 10 00:43:36.236350 kernel: raid6: avx512x4 xor() 8190 MB/s May 10 00:43:36.253310 kernel: raid6: avx512x2 gen() 17736 MB/s May 10 00:43:36.270321 kernel: raid6: avx512x2 xor() 22464 MB/s May 10 00:43:36.287286 kernel: raid6: avx512x1 gen() 17544 MB/s May 10 00:43:36.304305 kernel: raid6: avx512x1 xor() 19647 MB/s May 10 00:43:36.321309 kernel: raid6: avx2x4 gen() 17632 MB/s May 10 00:43:36.338292 kernel: raid6: avx2x4 xor() 7196 MB/s May 10 00:43:36.355288 kernel: raid6: avx2x2 gen() 17414 MB/s May 10 00:43:36.372283 kernel: raid6: avx2x2 xor() 16185 MB/s May 10 00:43:36.389297 kernel: raid6: avx2x1 gen() 13374 MB/s May 10 00:43:36.406304 kernel: raid6: avx2x1 xor() 14213 MB/s May 10 00:43:36.423283 kernel: raid6: sse2x4 gen() 8216 MB/s May 10 00:43:36.440299 kernel: raid6: sse2x4 xor() 5436 MB/s May 10 00:43:36.457326 kernel: raid6: sse2x2 gen() 9231 MB/s May 10 00:43:36.474319 kernel: raid6: sse2x2 xor() 5373 MB/s May 10 00:43:36.491310 kernel: raid6: sse2x1 gen() 8375 MB/s May 10 00:43:36.508898 kernel: raid6: sse2x1 xor() 4209 MB/s May 10 00:43:36.508984 kernel: raid6: using algorithm avx512x2 gen() 17736 MB/s May 10 00:43:36.509021 kernel: raid6: .... xor() 22464 MB/s, rmw enabled May 10 00:43:36.509622 kernel: raid6: using avx512x2 recovery algorithm May 10 00:43:36.524275 kernel: xor: automatically using best checksumming function avx May 10 00:43:36.623286 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no May 10 00:43:36.638051 systemd[1]: Finished dracut-pre-udev.service. May 10 00:43:36.638000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:36.639519 systemd[1]: Starting systemd-udevd.service... May 10 00:43:36.643623 kernel: audit: type=1130 audit(1746837816.638:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 10 00:43:36.638000 audit: BPF prog-id=7 op=LOAD May 10 00:43:36.638000 audit: BPF prog-id=8 op=LOAD May 10 00:43:36.655657 systemd-udevd[401]: Using default interface naming scheme 'v252'. May 10 00:43:36.660982 systemd[1]: Started systemd-udevd.service. May 10 00:43:36.664000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:36.667696 systemd[1]: Starting dracut-pre-trigger.service... May 10 00:43:36.684907 dracut-pre-trigger[416]: rd.md=0: removing MD RAID activation May 10 00:43:36.725511 systemd[1]: Finished dracut-pre-trigger.service. May 10 00:43:36.725000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:36.726801 systemd[1]: Starting systemd-udev-trigger.service... May 10 00:43:36.779000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:36.778359 systemd[1]: Finished systemd-udev-trigger.service. May 10 00:43:36.830246 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) May 10 00:43:36.875532 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 10 00:43:36.875551 kernel: GPT:17805311 != 125829119 May 10 00:43:36.875563 kernel: GPT:Alternate GPT header not at the end of the disk. May 10 00:43:36.875575 kernel: GPT:17805311 != 125829119 May 10 00:43:36.875586 kernel: GPT: Use GNU Parted to correct GPT errors. May 10 00:43:36.875598 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 10 00:43:36.875610 kernel: ACPI: bus type USB registered May 10 00:43:36.875627 kernel: usbcore: registered new interface driver usbfs May 10 00:43:36.875639 kernel: usbcore: registered new interface driver hub May 10 00:43:36.875651 kernel: usbcore: registered new device driver usb May 10 00:43:36.875663 kernel: cryptd: max_cpu_qlen set to 1000 May 10 00:43:36.903252 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (457) May 10 00:43:36.912794 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. May 10 00:43:36.973737 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller May 10 00:43:36.973918 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 May 10 00:43:36.974031 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 May 10 00:43:36.974126 kernel: libata version 3.00 loaded. May 10 00:43:36.974139 kernel: AVX2 version of gcm_enc/dec engaged. May 10 00:43:36.974152 kernel: AES CTR mode by8 optimization enabled May 10 00:43:36.974163 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller May 10 00:43:36.974272 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 May 10 00:43:36.974371 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed May 10 00:43:36.974465 kernel: hub 1-0:1.0: USB hub found May 10 00:43:36.974585 kernel: hub 1-0:1.0: 4 ports detected May 10 00:43:36.974689 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. 
May 10 00:43:36.974878 kernel: hub 2-0:1.0: USB hub found May 10 00:43:36.975002 kernel: hub 2-0:1.0: 4 ports detected May 10 00:43:36.975115 kernel: ahci 0000:00:1f.2: version 3.0 May 10 00:43:36.975213 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 May 10 00:43:36.975236 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode May 10 00:43:36.975329 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only May 10 00:43:36.975423 kernel: scsi host0: ahci May 10 00:43:36.975532 kernel: scsi host1: ahci May 10 00:43:36.975641 kernel: scsi host2: ahci May 10 00:43:36.975752 kernel: scsi host3: ahci May 10 00:43:36.975848 kernel: scsi host4: ahci May 10 00:43:36.975944 kernel: scsi host5: ahci May 10 00:43:36.976048 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 41 May 10 00:43:36.976062 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 41 May 10 00:43:36.976074 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 41 May 10 00:43:36.976086 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 41 May 10 00:43:36.976101 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 41 May 10 00:43:36.976113 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 41 May 10 00:43:36.974257 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. May 10 00:43:36.982885 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. May 10 00:43:36.986770 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. May 10 00:43:36.991235 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 10 00:43:36.992453 systemd[1]: Starting disk-uuid.service... May 10 00:43:37.000250 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 10 00:43:37.002334 disk-uuid[528]: Primary Header is updated. May 10 00:43:37.002334 disk-uuid[528]: Secondary Entries is updated. May 10 00:43:37.002334 disk-uuid[528]: Secondary Header is updated. May 10 00:43:37.017254 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 10 00:43:37.171482 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd May 10 00:43:37.259060 kernel: ata2: SATA link down (SStatus 0 SControl 300) May 10 00:43:37.259187 kernel: ata4: SATA link down (SStatus 0 SControl 300) May 10 00:43:37.264386 kernel: ata6: SATA link down (SStatus 0 SControl 300) May 10 00:43:37.264511 kernel: ata1: SATA link down (SStatus 0 SControl 300) May 10 00:43:37.266513 kernel: ata5: SATA link down (SStatus 0 SControl 300) May 10 00:43:37.266589 kernel: ata3: SATA link down (SStatus 0 SControl 300) May 10 00:43:37.317253 kernel: hid: raw HID events driver (C) Jiri Kosina May 10 00:43:37.323246 kernel: usbcore: registered new interface driver usbhid May 10 00:43:37.323286 kernel: usbhid: USB HID core driver May 10 00:43:37.329104 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 May 10 00:43:37.329136 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 May 10 00:43:38.013252 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 10 00:43:38.013626 disk-uuid[530]: The operation has completed successfully. May 10 00:43:38.059992 systemd[1]: disk-uuid.service: Deactivated successfully. 
May 10 00:43:38.060000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:38.060000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:38.060097 systemd[1]: Finished disk-uuid.service. May 10 00:43:38.061441 systemd[1]: Starting verity-setup.service... May 10 00:43:38.078250 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" May 10 00:43:38.116296 systemd[1]: Found device dev-mapper-usr.device. May 10 00:43:38.118644 systemd[1]: Mounting sysusr-usr.mount... May 10 00:43:38.128000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:38.128455 systemd[1]: Finished verity-setup.service. May 10 00:43:38.199258 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. May 10 00:43:38.199052 systemd[1]: Mounted sysusr-usr.mount. May 10 00:43:38.200017 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. May 10 00:43:38.201323 systemd[1]: Starting ignition-setup.service... May 10 00:43:38.204599 systemd[1]: Starting parse-ip-for-networkd.service... May 10 00:43:38.226641 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 10 00:43:38.226728 kernel: BTRFS info (device vda6): using free space tree May 10 00:43:38.226764 kernel: BTRFS info (device vda6): has skinny extents May 10 00:43:38.241128 systemd[1]: mnt-oem.mount: Deactivated successfully. May 10 00:43:38.245864 systemd[1]: Finished ignition-setup.service. May 10 00:43:38.246000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:38.247110 systemd[1]: Starting ignition-fetch-offline.service... May 10 00:43:38.311346 systemd[1]: Finished parse-ip-for-networkd.service. May 10 00:43:38.311000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:38.312000 audit: BPF prog-id=9 op=LOAD May 10 00:43:38.313379 systemd[1]: Starting systemd-networkd.service... May 10 00:43:38.337403 systemd-networkd[714]: lo: Link UP May 10 00:43:38.337977 systemd-networkd[714]: lo: Gained carrier May 10 00:43:38.340000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:38.339721 systemd-networkd[714]: Enumeration completed May 10 00:43:38.339926 systemd-networkd[714]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 10 00:43:38.340045 systemd[1]: Started systemd-networkd.service. May 10 00:43:38.340897 systemd[1]: Reached target network.target. May 10 00:43:38.342140 systemd[1]: Starting iscsiuio.service... 
May 10 00:43:38.345738 systemd-networkd[714]: eth0: Link UP May 10 00:43:38.345743 systemd-networkd[714]: eth0: Gained carrier May 10 00:43:38.353000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:38.353354 systemd[1]: Started iscsiuio.service. May 10 00:43:38.354659 systemd[1]: Starting iscsid.service... May 10 00:43:38.359879 iscsid[719]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi May 10 00:43:38.359879 iscsid[719]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. May 10 00:43:38.359879 iscsid[719]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. May 10 00:43:38.359879 iscsid[719]: If using hardware iscsi like qla4xxx this message can be ignored. May 10 00:43:38.359879 iscsid[719]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi May 10 00:43:38.359879 iscsid[719]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf May 10 00:43:38.363000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:38.362698 systemd[1]: Started iscsid.service. May 10 00:43:38.364385 systemd[1]: Starting dracut-initqueue.service... May 10 00:43:38.378319 ignition[652]: Ignition 2.14.0 May 10 00:43:38.378336 ignition[652]: Stage: fetch-offline May 10 00:43:38.378420 ignition[652]: reading system config file "/usr/lib/ignition/base.d/base.ign" May 10 00:43:38.378470 ignition[652]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a May 10 00:43:38.379684 ignition[652]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 10 00:43:38.382000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:38.380346 systemd-networkd[714]: eth0: DHCPv4 address 10.244.93.58/30, gateway 10.244.93.57 acquired from 10.244.93.57 May 10 00:43:38.379822 ignition[652]: parsed url from cmdline: "" May 10 00:43:38.381183 systemd[1]: Finished ignition-fetch-offline.service. May 10 00:43:38.379826 ignition[652]: no config URL provided May 10 00:43:38.383132 systemd[1]: Starting ignition-fetch.service... May 10 00:43:38.379833 ignition[652]: reading system config file "/usr/lib/ignition/user.ign" May 10 00:43:38.379843 ignition[652]: no config at "/usr/lib/ignition/user.ign" May 10 00:43:38.379850 ignition[652]: failed to fetch config: resource requires networking May 10 00:43:38.380007 ignition[652]: Ignition finished successfully May 10 00:43:38.387000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:38.386996 systemd[1]: Finished dracut-initqueue.service. 
May 10 00:43:38.387442 systemd[1]: Reached target remote-fs-pre.target. May 10 00:43:38.387741 systemd[1]: Reached target remote-cryptsetup.target. May 10 00:43:38.388095 systemd[1]: Reached target remote-fs.target. May 10 00:43:38.389201 systemd[1]: Starting dracut-pre-mount.service... May 10 00:43:38.402188 ignition[728]: Ignition 2.14.0 May 10 00:43:38.405000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:38.404424 systemd[1]: Finished dracut-pre-mount.service. May 10 00:43:38.402209 ignition[728]: Stage: fetch May 10 00:43:38.402339 ignition[728]: reading system config file "/usr/lib/ignition/base.d/base.ign" May 10 00:43:38.402358 ignition[728]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a May 10 00:43:38.404083 ignition[728]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 10 00:43:38.404189 ignition[728]: parsed url from cmdline: "" May 10 00:43:38.404193 ignition[728]: no config URL provided May 10 00:43:38.404199 ignition[728]: reading system config file "/usr/lib/ignition/user.ign" May 10 00:43:38.404207 ignition[728]: no config at "/usr/lib/ignition/user.ign" May 10 00:43:38.406994 ignition[728]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... May 10 00:43:38.407028 ignition[728]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... May 10 00:43:38.407067 ignition[728]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 May 10 00:43:38.423791 ignition[728]: GET result: OK May 10 00:43:38.424027 ignition[728]: parsing config with SHA512: 2ced6ccd32070a2f9d521555d1cce4817a92876405febb6fe5ac1a3151d938334582c50883c106ca7a9fe58b76991a1e89bf3a3ca8aa3033aeb985102bfed816 May 10 00:43:38.440659 unknown[728]: fetched base config from "system" May 10 00:43:38.440674 unknown[728]: fetched base config from "system" May 10 00:43:38.441335 ignition[728]: fetch: fetch complete May 10 00:43:38.440682 unknown[728]: fetched user config from "openstack" May 10 00:43:38.443000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:38.441342 ignition[728]: fetch: fetch passed May 10 00:43:38.443215 systemd[1]: Finished ignition-fetch.service. May 10 00:43:38.441389 ignition[728]: Ignition finished successfully May 10 00:43:38.445136 systemd[1]: Starting ignition-kargs.service... May 10 00:43:38.457425 ignition[739]: Ignition 2.14.0 May 10 00:43:38.457440 ignition[739]: Stage: kargs May 10 00:43:38.457552 ignition[739]: reading system config file "/usr/lib/ignition/base.d/base.ign" May 10 00:43:38.461000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:38.457569 ignition[739]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a May 10 00:43:38.461067 systemd[1]: Finished ignition-kargs.service. May 10 00:43:38.458423 ignition[739]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 10 00:43:38.463195 systemd[1]: Starting ignition-disks.service... 
May 10 00:43:38.459737 ignition[739]: kargs: kargs passed May 10 00:43:38.459799 ignition[739]: Ignition finished successfully May 10 00:43:38.472738 ignition[745]: Ignition 2.14.0 May 10 00:43:38.472750 ignition[745]: Stage: disks May 10 00:43:38.472892 ignition[745]: reading system config file "/usr/lib/ignition/base.d/base.ign" May 10 00:43:38.472914 ignition[745]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a May 10 00:43:38.473997 ignition[745]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 10 00:43:38.475332 ignition[745]: disks: disks passed May 10 00:43:38.475412 ignition[745]: Ignition finished successfully May 10 00:43:38.476000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:38.476382 systemd[1]: Finished ignition-disks.service. May 10 00:43:38.477005 systemd[1]: Reached target initrd-root-device.target. May 10 00:43:38.477558 systemd[1]: Reached target local-fs-pre.target. May 10 00:43:38.478160 systemd[1]: Reached target local-fs.target. May 10 00:43:38.478850 systemd[1]: Reached target sysinit.target. May 10 00:43:38.479483 systemd[1]: Reached target basic.target. May 10 00:43:38.481112 systemd[1]: Starting systemd-fsck-root.service... May 10 00:43:38.498949 systemd-fsck[752]: ROOT: clean, 623/1628000 files, 124060/1617920 blocks May 10 00:43:38.503006 systemd[1]: Finished systemd-fsck-root.service. May 10 00:43:38.503000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:38.504781 systemd[1]: Mounting sysroot.mount... May 10 00:43:38.512245 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. May 10 00:43:38.512490 systemd[1]: Mounted sysroot.mount. May 10 00:43:38.513313 systemd[1]: Reached target initrd-root-fs.target. May 10 00:43:38.515276 systemd[1]: Mounting sysroot-usr.mount... May 10 00:43:38.516551 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. May 10 00:43:38.517857 systemd[1]: Starting flatcar-openstack-hostname.service... May 10 00:43:38.518758 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 10 00:43:38.519343 systemd[1]: Reached target ignition-diskful.target. May 10 00:43:38.521746 systemd[1]: Mounted sysroot-usr.mount. May 10 00:43:38.524478 systemd[1]: Starting initrd-setup-root.service... May 10 00:43:38.529186 initrd-setup-root[763]: cut: /sysroot/etc/passwd: No such file or directory May 10 00:43:38.542692 initrd-setup-root[771]: cut: /sysroot/etc/group: No such file or directory May 10 00:43:38.550076 initrd-setup-root[779]: cut: /sysroot/etc/shadow: No such file or directory May 10 00:43:38.559919 initrd-setup-root[788]: cut: /sysroot/etc/gshadow: No such file or directory May 10 00:43:38.620000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:38.620535 systemd[1]: Finished initrd-setup-root.service. May 10 00:43:38.621888 systemd[1]: Starting ignition-mount.service... 
May 10 00:43:38.624434 systemd[1]: Starting sysroot-boot.service... May 10 00:43:38.634012 bash[806]: umount: /sysroot/usr/share/oem: not mounted. May 10 00:43:38.645397 coreos-metadata[758]: May 10 00:43:38.645 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 May 10 00:43:38.657566 ignition[808]: INFO : Ignition 2.14.0 May 10 00:43:38.658214 ignition[808]: INFO : Stage: mount May 10 00:43:38.658752 ignition[808]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" May 10 00:43:38.659319 ignition[808]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a May 10 00:43:38.661424 ignition[808]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 10 00:43:38.663388 systemd[1]: Finished sysroot-boot.service. May 10 00:43:38.664205 coreos-metadata[758]: May 10 00:43:38.663 INFO Fetch successful May 10 00:43:38.664621 ignition[808]: INFO : mount: mount passed May 10 00:43:38.664621 ignition[808]: INFO : Ignition finished successfully May 10 00:43:38.663000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:38.665523 coreos-metadata[758]: May 10 00:43:38.665 INFO wrote hostname srv-2i5m2.gb1.brightbox.com to /sysroot/etc/hostname May 10 00:43:38.665196 systemd[1]: Finished ignition-mount.service. May 10 00:43:38.666000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:38.667870 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. May 10 00:43:38.667959 systemd[1]: Finished flatcar-openstack-hostname.service. May 10 00:43:38.668000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:38.668000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:39.136613 systemd[1]: Mounting sysroot-usr-share-oem.mount... May 10 00:43:39.150716 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (815) May 10 00:43:39.153219 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 10 00:43:39.153278 kernel: BTRFS info (device vda6): using free space tree May 10 00:43:39.153312 kernel: BTRFS info (device vda6): has skinny extents May 10 00:43:39.161464 systemd[1]: Mounted sysroot-usr-share-oem.mount. May 10 00:43:39.163716 systemd[1]: Starting ignition-files.service... 
May 10 00:43:39.183415 ignition[835]: INFO : Ignition 2.14.0 May 10 00:43:39.184022 ignition[835]: INFO : Stage: files May 10 00:43:39.184539 ignition[835]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" May 10 00:43:39.185030 ignition[835]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a May 10 00:43:39.186746 ignition[835]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 10 00:43:39.189201 ignition[835]: DEBUG : files: compiled without relabeling support, skipping May 10 00:43:39.190167 ignition[835]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 10 00:43:39.190673 ignition[835]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 10 00:43:39.193814 ignition[835]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 10 00:43:39.194524 ignition[835]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 10 00:43:39.195819 unknown[835]: wrote ssh authorized keys file for user: core May 10 00:43:39.196414 ignition[835]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 10 00:43:39.197578 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" May 10 00:43:39.198284 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 May 10 00:43:39.390924 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 10 00:43:39.654899 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" May 10 00:43:39.657078 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" May 10 00:43:39.657078 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 May 10 00:43:39.958831 systemd-networkd[714]: eth0: Gained IPv6LL May 10 00:43:40.286003 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 10 00:43:40.486760 systemd-networkd[714]: eth0: Ignoring DHCPv6 address 2a02:1348:17d:174e:24:19ff:fef4:5d3a/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17d:174e:24:19ff:fef4:5d3a/64 assigned by NDisc. May 10 00:43:40.486788 systemd-networkd[714]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. 
May 10 00:43:40.813052 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" May 10 00:43:40.815109 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 10 00:43:40.817349 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 10 00:43:40.818988 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 10 00:43:40.818988 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 10 00:43:40.818988 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 10 00:43:40.818988 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 10 00:43:40.818988 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 10 00:43:40.818988 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 10 00:43:40.825510 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 10 00:43:40.825510 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 10 00:43:40.825510 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 10 00:43:40.825510 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 10 00:43:40.825510 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 10 00:43:40.825510 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1 May 10 00:43:41.373366 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK May 10 00:43:43.677210 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 10 00:43:43.681539 ignition[835]: INFO : files: op(c): [started] processing unit "coreos-metadata-sshkeys@.service" May 10 00:43:43.681539 ignition[835]: INFO : files: op(c): [finished] processing unit "coreos-metadata-sshkeys@.service" May 10 00:43:43.681539 ignition[835]: INFO : files: op(d): [started] processing unit "prepare-helm.service" May 10 00:43:43.681539 ignition[835]: INFO : files: op(d): op(e): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 10 00:43:43.689923 ignition[835]: INFO : files: op(d): op(e): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 10 00:43:43.689923 ignition[835]: INFO : files: 
op(d): [finished] processing unit "prepare-helm.service" May 10 00:43:43.689923 ignition[835]: INFO : files: op(f): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " May 10 00:43:43.689923 ignition[835]: INFO : files: op(f): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " May 10 00:43:43.689923 ignition[835]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" May 10 00:43:43.689923 ignition[835]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" May 10 00:43:43.698515 ignition[835]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" May 10 00:43:43.698515 ignition[835]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" May 10 00:43:43.698515 ignition[835]: INFO : files: files passed May 10 00:43:43.698515 ignition[835]: INFO : Ignition finished successfully May 10 00:43:43.707958 kernel: kauditd_printk_skb: 26 callbacks suppressed May 10 00:43:43.707999 kernel: audit: type=1130 audit(1746837823.698:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:43.698000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:43.695828 systemd[1]: Finished ignition-files.service. May 10 00:43:43.700889 systemd[1]: Starting initrd-setup-root-after-ignition.service... May 10 00:43:43.706881 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). May 10 00:43:43.707829 systemd[1]: Starting ignition-quench.service... May 10 00:43:43.712221 systemd[1]: ignition-quench.service: Deactivated successfully. May 10 00:43:43.712356 systemd[1]: Finished ignition-quench.service. May 10 00:43:43.712000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:43.712000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:43.719097 kernel: audit: type=1130 audit(1746837823.712:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:43.719168 kernel: audit: type=1131 audit(1746837823.712:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:43.723160 initrd-setup-root-after-ignition[860]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 10 00:43:43.726001 systemd[1]: Finished initrd-setup-root-after-ignition.service. May 10 00:43:43.732517 kernel: audit: type=1130 audit(1746837823.726:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 10 00:43:43.726000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:43.726464 systemd[1]: Reached target ignition-complete.target. May 10 00:43:43.728053 systemd[1]: Starting initrd-parse-etc.service... May 10 00:43:43.754525 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 10 00:43:43.755715 systemd[1]: Finished initrd-parse-etc.service. May 10 00:43:43.776955 kernel: audit: type=1130 audit(1746837823.764:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:43.776992 kernel: audit: type=1131 audit(1746837823.764:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:43.764000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:43.764000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:43.768380 systemd[1]: Reached target initrd-fs.target. May 10 00:43:43.775143 systemd[1]: Reached target initrd.target. May 10 00:43:43.775535 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. May 10 00:43:43.776412 systemd[1]: Starting dracut-pre-pivot.service... May 10 00:43:43.790221 systemd[1]: Finished dracut-pre-pivot.service. May 10 00:43:43.790000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:43.791315 systemd[1]: Starting initrd-cleanup.service... May 10 00:43:43.794612 kernel: audit: type=1130 audit(1746837823.790:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:43.801652 systemd[1]: Stopped target nss-lookup.target. May 10 00:43:43.802070 systemd[1]: Stopped target remote-cryptsetup.target. May 10 00:43:43.802723 systemd[1]: Stopped target timers.target. May 10 00:43:43.803364 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 10 00:43:43.803455 systemd[1]: Stopped dracut-pre-pivot.service. May 10 00:43:43.807233 kernel: audit: type=1131 audit(1746837823.803:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:43.803000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:43.803995 systemd[1]: Stopped target initrd.target. May 10 00:43:43.806967 systemd[1]: Stopped target basic.target. May 10 00:43:43.807582 systemd[1]: Stopped target ignition-complete.target. May 10 00:43:43.808179 systemd[1]: Stopped target ignition-diskful.target. 
May 10 00:43:43.808798 systemd[1]: Stopped target initrd-root-device.target. May 10 00:43:43.809434 systemd[1]: Stopped target remote-fs.target. May 10 00:43:43.810025 systemd[1]: Stopped target remote-fs-pre.target. May 10 00:43:43.810653 systemd[1]: Stopped target sysinit.target. May 10 00:43:43.811315 systemd[1]: Stopped target local-fs.target. May 10 00:43:43.811973 systemd[1]: Stopped target local-fs-pre.target. May 10 00:43:43.812644 systemd[1]: Stopped target swap.target. May 10 00:43:43.816820 kernel: audit: type=1131 audit(1746837823.813:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:43.813000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:43.813258 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 10 00:43:43.813365 systemd[1]: Stopped dracut-pre-mount.service. May 10 00:43:43.820734 kernel: audit: type=1131 audit(1746837823.817:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:43.817000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:43.814017 systemd[1]: Stopped target cryptsetup.target. May 10 00:43:43.820000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:43.817153 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 10 00:43:43.821000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:43.817257 systemd[1]: Stopped dracut-initqueue.service. May 10 00:43:43.817960 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 10 00:43:43.818056 systemd[1]: Stopped initrd-setup-root-after-ignition.service. May 10 00:43:43.821162 systemd[1]: ignition-files.service: Deactivated successfully. May 10 00:43:43.826213 iscsid[719]: iscsid shutting down. May 10 00:43:43.821273 systemd[1]: Stopped ignition-files.service. May 10 00:43:43.822886 systemd[1]: Stopping ignition-mount.service... May 10 00:43:43.823660 systemd[1]: Stopping iscsid.service... May 10 00:43:43.828000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:43.831000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:43.831000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:43.828601 systemd[1]: kmod-static-nodes.service: Deactivated successfully. 
May 10 00:43:43.828726 systemd[1]: Stopped kmod-static-nodes.service. May 10 00:43:43.830106 systemd[1]: Stopping sysroot-boot.service... May 10 00:43:43.830630 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 10 00:43:43.830890 systemd[1]: Stopped systemd-udev-trigger.service. May 10 00:43:43.836000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:43.831518 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 10 00:43:43.831651 systemd[1]: Stopped dracut-pre-trigger.service. May 10 00:43:43.839000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:43.839000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:43.835634 systemd[1]: iscsid.service: Deactivated successfully. May 10 00:43:43.836306 systemd[1]: Stopped iscsid.service. May 10 00:43:43.838615 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 10 00:43:43.838897 systemd[1]: Finished initrd-cleanup.service. May 10 00:43:43.843809 systemd[1]: Stopping iscsiuio.service... May 10 00:43:43.845000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:43.844733 systemd[1]: iscsiuio.service: Deactivated successfully. May 10 00:43:43.845213 systemd[1]: Stopped iscsiuio.service. May 10 00:43:43.849801 ignition[873]: INFO : Ignition 2.14.0 May 10 00:43:43.849801 ignition[873]: INFO : Stage: umount May 10 00:43:43.850784 ignition[873]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" May 10 00:43:43.850784 ignition[873]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a May 10 00:43:43.851980 ignition[873]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 10 00:43:43.857358 ignition[873]: INFO : umount: umount passed May 10 00:43:43.857358 ignition[873]: INFO : Ignition finished successfully May 10 00:43:43.857028 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 10 00:43:43.859000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:43.859422 systemd[1]: sysroot-boot.service: Deactivated successfully. May 10 00:43:43.860000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:43.859521 systemd[1]: Stopped sysroot-boot.service. May 10 00:43:43.861000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:43.860083 systemd[1]: ignition-mount.service: Deactivated successfully. 
May 10 00:43:43.861000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:43.860162 systemd[1]: Stopped ignition-mount.service. May 10 00:43:43.862000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:43.860703 systemd[1]: ignition-disks.service: Deactivated successfully. May 10 00:43:43.860741 systemd[1]: Stopped ignition-disks.service. May 10 00:43:43.863000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:43.861428 systemd[1]: ignition-kargs.service: Deactivated successfully. May 10 00:43:43.861465 systemd[1]: Stopped ignition-kargs.service. May 10 00:43:43.862071 systemd[1]: ignition-fetch.service: Deactivated successfully. May 10 00:43:43.862103 systemd[1]: Stopped ignition-fetch.service. May 10 00:43:43.862757 systemd[1]: Stopped target network.target. May 10 00:43:43.863437 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 10 00:43:43.863476 systemd[1]: Stopped ignition-fetch-offline.service. May 10 00:43:43.864166 systemd[1]: Stopped target paths.target. May 10 00:43:43.864783 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 10 00:43:43.868259 systemd[1]: Stopped systemd-ask-password-console.path. May 10 00:43:43.868638 systemd[1]: Stopped target slices.target. May 10 00:43:43.869377 systemd[1]: Stopped target sockets.target. May 10 00:43:43.870077 systemd[1]: iscsid.socket: Deactivated successfully. May 10 00:43:43.870110 systemd[1]: Closed iscsid.socket. May 10 00:43:43.870745 systemd[1]: iscsiuio.socket: Deactivated successfully. May 10 00:43:43.871000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:43.870776 systemd[1]: Closed iscsiuio.socket. May 10 00:43:43.872000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:43.871401 systemd[1]: ignition-setup.service: Deactivated successfully. May 10 00:43:43.871437 systemd[1]: Stopped ignition-setup.service. May 10 00:43:43.872034 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 10 00:43:43.872065 systemd[1]: Stopped initrd-setup-root.service. May 10 00:43:43.883000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:43.872814 systemd[1]: Stopping systemd-networkd.service... May 10 00:43:43.885000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:43.874212 systemd[1]: Stopping systemd-resolved.service... 
May 10 00:43:43.877408 systemd-networkd[714]: eth0: DHCPv6 lease lost May 10 00:43:43.887000 audit: BPF prog-id=9 op=UNLOAD May 10 00:43:43.880725 systemd[1]: systemd-networkd.service: Deactivated successfully. May 10 00:43:43.880962 systemd[1]: Stopped systemd-networkd.service. May 10 00:43:43.890000 audit: BPF prog-id=6 op=UNLOAD May 10 00:43:43.884299 systemd[1]: systemd-resolved.service: Deactivated successfully. May 10 00:43:43.891000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:43.884529 systemd[1]: Stopped systemd-resolved.service. May 10 00:43:43.893000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:43.886422 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 10 00:43:43.903000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:43.886472 systemd[1]: Closed systemd-networkd.socket. May 10 00:43:43.888150 systemd[1]: Stopping network-cleanup.service... May 10 00:43:43.888709 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 10 00:43:43.888793 systemd[1]: Stopped parse-ip-for-networkd.service. May 10 00:43:43.892069 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 10 00:43:43.892122 systemd[1]: Stopped systemd-sysctl.service. May 10 00:43:43.893782 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 10 00:43:43.893830 systemd[1]: Stopped systemd-modules-load.service. May 10 00:43:43.904290 systemd[1]: Stopping systemd-udevd.service... May 10 00:43:43.907000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:43.906013 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 10 00:43:43.907483 systemd[1]: systemd-udevd.service: Deactivated successfully. May 10 00:43:43.907595 systemd[1]: Stopped systemd-udevd.service. May 10 00:43:43.909181 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 10 00:43:43.909271 systemd[1]: Closed systemd-udevd-control.socket. May 10 00:43:43.909696 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 10 00:43:43.909726 systemd[1]: Closed systemd-udevd-kernel.socket. May 10 00:43:43.913166 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 10 00:43:43.913449 systemd[1]: Stopped dracut-pre-udev.service. May 10 00:43:43.914000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:43.914905 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 10 00:43:43.915000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:43.915007 systemd[1]: Stopped dracut-cmdline.service. 
May 10 00:43:43.916000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:43.916061 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 10 00:43:43.916156 systemd[1]: Stopped dracut-cmdline-ask.service. May 10 00:43:43.920000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:43.921000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:43.918881 systemd[1]: Starting initrd-udevadm-cleanup-db.service... May 10 00:43:43.919609 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 10 00:43:43.919699 systemd[1]: Stopped systemd-vconsole-setup.service. May 10 00:43:43.921021 systemd[1]: network-cleanup.service: Deactivated successfully. May 10 00:43:43.921195 systemd[1]: Stopped network-cleanup.service. May 10 00:43:43.926898 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 10 00:43:43.929000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:43.929000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:43.927130 systemd[1]: Finished initrd-udevadm-cleanup-db.service. May 10 00:43:43.929877 systemd[1]: Reached target initrd-switch-root.target. May 10 00:43:43.933761 systemd[1]: Starting initrd-switch-root.service... May 10 00:43:43.955258 systemd[1]: Switching root. May 10 00:43:43.976486 systemd-journald[201]: Journal stopped May 10 00:43:46.949937 systemd-journald[201]: Received SIGTERM from PID 1 (systemd). May 10 00:43:46.950016 kernel: SELinux: Class mctp_socket not defined in policy. May 10 00:43:46.950044 kernel: SELinux: Class anon_inode not defined in policy. May 10 00:43:46.950056 kernel: SELinux: the above unknown classes and permissions will be allowed May 10 00:43:46.950068 kernel: SELinux: policy capability network_peer_controls=1 May 10 00:43:46.950080 kernel: SELinux: policy capability open_perms=1 May 10 00:43:46.950097 kernel: SELinux: policy capability extended_socket_class=1 May 10 00:43:46.950112 kernel: SELinux: policy capability always_check_network=0 May 10 00:43:46.950124 kernel: SELinux: policy capability cgroup_seclabel=1 May 10 00:43:46.950139 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 10 00:43:46.950151 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 10 00:43:46.950167 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 10 00:43:46.950184 systemd[1]: Successfully loaded SELinux policy in 49.935ms. May 10 00:43:46.950208 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 8.259ms. 
May 10 00:43:46.950223 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 10 00:43:46.950974 systemd[1]: Detected virtualization kvm. May 10 00:43:46.950994 systemd[1]: Detected architecture x86-64. May 10 00:43:46.951012 systemd[1]: Detected first boot. May 10 00:43:46.951028 systemd[1]: Hostname set to <srv-2i5m2.gb1.brightbox.com>. May 10 00:43:46.951052 systemd[1]: Initializing machine ID from VM UUID. May 10 00:43:46.951067 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). May 10 00:43:46.951082 systemd[1]: Populated /etc with preset unit settings. May 10 00:43:46.951097 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 10 00:43:46.951112 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 10 00:43:46.951128 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 10 00:43:46.951145 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 10 00:43:46.951159 systemd[1]: Stopped initrd-switch-root.service. May 10 00:43:46.951173 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 10 00:43:46.951188 systemd[1]: Created slice system-addon\x2dconfig.slice. May 10 00:43:46.951202 systemd[1]: Created slice system-addon\x2drun.slice. May 10 00:43:46.951217 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. May 10 00:43:46.951240 systemd[1]: Created slice system-getty.slice. May 10 00:43:46.951263 systemd[1]: Created slice system-modprobe.slice. May 10 00:43:46.951278 systemd[1]: Created slice system-serial\x2dgetty.slice. May 10 00:43:46.951303 systemd[1]: Created slice system-system\x2dcloudinit.slice. May 10 00:43:46.951318 systemd[1]: Created slice system-systemd\x2dfsck.slice. May 10 00:43:46.951336 systemd[1]: Created slice user.slice. May 10 00:43:46.951350 systemd[1]: Started systemd-ask-password-console.path. May 10 00:43:46.951364 systemd[1]: Started systemd-ask-password-wall.path. May 10 00:43:46.951378 systemd[1]: Set up automount boot.automount. May 10 00:43:46.951395 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. May 10 00:43:46.951409 systemd[1]: Stopped target initrd-switch-root.target. May 10 00:43:46.951424 systemd[1]: Stopped target initrd-fs.target. May 10 00:43:46.951438 systemd[1]: Stopped target initrd-root-fs.target. May 10 00:43:46.951452 systemd[1]: Reached target integritysetup.target. May 10 00:43:46.951466 systemd[1]: Reached target remote-cryptsetup.target. May 10 00:43:46.951481 systemd[1]: Reached target remote-fs.target. May 10 00:43:46.951497 systemd[1]: Reached target slices.target. May 10 00:43:46.951512 systemd[1]: Reached target swap.target. May 10 00:43:46.951527 systemd[1]: Reached target torcx.target. May 10 00:43:46.951541 systemd[1]: Reached target veritysetup.target. May 10 00:43:46.951555 systemd[1]: Listening on systemd-coredump.socket. 
May 10 00:43:46.951569 systemd[1]: Listening on systemd-initctl.socket. May 10 00:43:46.951584 systemd[1]: Listening on systemd-networkd.socket. May 10 00:43:46.951598 systemd[1]: Listening on systemd-udevd-control.socket. May 10 00:43:46.951611 systemd[1]: Listening on systemd-udevd-kernel.socket. May 10 00:43:46.951626 systemd[1]: Listening on systemd-userdbd.socket. May 10 00:43:46.951642 systemd[1]: Mounting dev-hugepages.mount... May 10 00:43:46.951657 systemd[1]: Mounting dev-mqueue.mount... May 10 00:43:46.951671 systemd[1]: Mounting media.mount... May 10 00:43:46.951685 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 10 00:43:46.951699 systemd[1]: Mounting sys-kernel-debug.mount... May 10 00:43:46.951714 systemd[1]: Mounting sys-kernel-tracing.mount... May 10 00:43:46.951727 systemd[1]: Mounting tmp.mount... May 10 00:43:46.951741 systemd[1]: Starting flatcar-tmpfiles.service... May 10 00:43:46.951756 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 10 00:43:46.951773 systemd[1]: Starting kmod-static-nodes.service... May 10 00:43:46.951786 systemd[1]: Starting modprobe@configfs.service... May 10 00:43:46.951802 systemd[1]: Starting modprobe@dm_mod.service... May 10 00:43:46.951815 systemd[1]: Starting modprobe@drm.service... May 10 00:43:46.951830 systemd[1]: Starting modprobe@efi_pstore.service... May 10 00:43:46.951844 systemd[1]: Starting modprobe@fuse.service... May 10 00:43:46.951859 systemd[1]: Starting modprobe@loop.service... May 10 00:43:46.951873 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 10 00:43:46.951888 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 10 00:43:46.951909 systemd[1]: Stopped systemd-fsck-root.service. May 10 00:43:46.951924 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 10 00:43:46.951939 systemd[1]: Stopped systemd-fsck-usr.service. May 10 00:43:46.951953 systemd[1]: Stopped systemd-journald.service. May 10 00:43:46.951967 systemd[1]: Starting systemd-journald.service... May 10 00:43:46.951981 kernel: loop: module loaded May 10 00:43:46.951995 systemd[1]: Starting systemd-modules-load.service... May 10 00:43:46.952010 systemd[1]: Starting systemd-network-generator.service... May 10 00:43:46.952024 systemd[1]: Starting systemd-remount-fs.service... May 10 00:43:46.952050 systemd[1]: Starting systemd-udev-trigger.service... May 10 00:43:46.952065 systemd[1]: verity-setup.service: Deactivated successfully. May 10 00:43:46.953028 systemd[1]: Stopped verity-setup.service. May 10 00:43:46.953055 kernel: fuse: init (API version 7.34) May 10 00:43:46.953070 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 10 00:43:46.953090 systemd[1]: Mounted dev-hugepages.mount. May 10 00:43:46.953105 systemd[1]: Mounted dev-mqueue.mount. May 10 00:43:46.953119 systemd[1]: Mounted media.mount. May 10 00:43:46.953133 systemd[1]: Mounted sys-kernel-debug.mount. May 10 00:43:46.953151 systemd[1]: Mounted sys-kernel-tracing.mount. May 10 00:43:46.953165 systemd[1]: Mounted tmp.mount. May 10 00:43:46.953180 systemd[1]: Finished kmod-static-nodes.service. May 10 00:43:46.953194 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 10 00:43:46.953208 systemd[1]: Finished modprobe@configfs.service. 
May 10 00:43:46.953222 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 10 00:43:46.953670 systemd[1]: Finished modprobe@dm_mod.service. May 10 00:43:46.953687 systemd[1]: modprobe@drm.service: Deactivated successfully. May 10 00:43:46.953702 systemd[1]: Finished modprobe@drm.service. May 10 00:43:46.953717 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 10 00:43:46.953731 systemd[1]: Finished modprobe@efi_pstore.service. May 10 00:43:46.953745 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 10 00:43:46.953760 systemd[1]: Finished modprobe@fuse.service. May 10 00:43:46.953774 systemd[1]: modprobe@loop.service: Deactivated successfully. May 10 00:43:46.953791 systemd[1]: Finished modprobe@loop.service. May 10 00:43:46.953805 systemd[1]: Finished systemd-modules-load.service. May 10 00:43:46.953820 systemd[1]: Finished systemd-network-generator.service. May 10 00:43:46.953839 systemd-journald[984]: Journal started May 10 00:43:46.953896 systemd-journald[984]: Runtime Journal (/run/log/journal/d4fd95ae2e9745b8998e3a8d1114b46c) is 4.7M, max 38.1M, 33.3M free. May 10 00:43:44.098000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 May 10 00:43:44.149000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 10 00:43:44.149000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 10 00:43:44.149000 audit: BPF prog-id=10 op=LOAD May 10 00:43:44.149000 audit: BPF prog-id=10 op=UNLOAD May 10 00:43:44.149000 audit: BPF prog-id=11 op=LOAD May 10 00:43:44.149000 audit: BPF prog-id=11 op=UNLOAD May 10 00:43:44.246000 audit[906]: AVC avc: denied { associate } for pid=906 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" May 10 00:43:44.246000 audit[906]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001178c2 a1=c00002ae40 a2=c000029100 a3=32 items=0 ppid=889 pid=906 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 00:43:44.246000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 May 10 00:43:44.248000 audit[906]: AVC avc: denied { associate } for pid=906 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 May 10 00:43:44.248000 audit[906]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c000117999 a2=1ed a3=0 items=2 ppid=889 pid=906 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 00:43:44.248000 audit: CWD cwd="/" May 10 00:43:44.248000 audit: PATH item=0 name=(null) 
inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:44.248000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:44.248000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 May 10 00:43:46.770000 audit: BPF prog-id=12 op=LOAD May 10 00:43:46.770000 audit: BPF prog-id=3 op=UNLOAD May 10 00:43:46.770000 audit: BPF prog-id=13 op=LOAD May 10 00:43:46.770000 audit: BPF prog-id=14 op=LOAD May 10 00:43:46.770000 audit: BPF prog-id=4 op=UNLOAD May 10 00:43:46.770000 audit: BPF prog-id=5 op=UNLOAD May 10 00:43:46.772000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:46.957249 systemd[1]: Finished systemd-remount-fs.service. May 10 00:43:46.957281 systemd[1]: Started systemd-journald.service. May 10 00:43:46.774000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:46.774000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:46.782000 audit: BPF prog-id=12 op=UNLOAD May 10 00:43:46.881000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:46.884000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:46.887000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:46.887000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:46.887000 audit: BPF prog-id=15 op=LOAD May 10 00:43:46.888000 audit: BPF prog-id=16 op=LOAD May 10 00:43:46.888000 audit: BPF prog-id=17 op=LOAD May 10 00:43:46.888000 audit: BPF prog-id=13 op=UNLOAD May 10 00:43:46.888000 audit: BPF prog-id=14 op=UNLOAD May 10 00:43:46.908000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 10 00:43:46.928000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:46.932000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:46.932000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:46.936000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:46.936000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:46.940000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:46.940000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:46.944000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:46.944000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:46.946000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:46.946000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:46.948000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 May 10 00:43:46.948000 audit[984]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffc11c2dd70 a2=4000 a3=7ffc11c2de0c items=0 ppid=1 pid=984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 00:43:46.948000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" May 10 00:43:46.949000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 10 00:43:46.949000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:46.951000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:46.953000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:44.243274 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2025-05-10T00:43:44Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 10 00:43:46.768159 systemd[1]: Queued start job for default target multi-user.target. May 10 00:43:46.956000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:46.957000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:44.243951 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2025-05-10T00:43:44Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json May 10 00:43:46.768173 systemd[1]: Unnecessary job was removed for dev-vda6.device. May 10 00:43:44.243974 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2025-05-10T00:43:44Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json May 10 00:43:46.771949 systemd[1]: systemd-journald.service: Deactivated successfully. May 10 00:43:44.244022 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2025-05-10T00:43:44Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" May 10 00:43:46.958015 systemd[1]: Reached target network-pre.target. 
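The torcx-generator entries in this stretch of the log resolve which torcx profile to apply from JSON manifests under /usr/share/torcx/profiles and /etc/torcx. A sketch of what a profile such as the vendor.json found above typically contains, assuming the generic profile-manifest schema rather than the actual file on this image:

    cat /usr/share/torcx/profiles/vendor.json
    # assumed layout:
    # {
    #   "kind": "profile-manifest-v0",
    #   "value": {
    #     "images": [ { "name": "docker", "reference": "com.coreos.cl" } ]
    #   }
    # }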
May 10 00:43:44.244035 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2025-05-10T00:43:44Z" level=debug msg="skipped missing lower profile" missing profile=oem May 10 00:43:44.244077 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2025-05-10T00:43:44Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" May 10 00:43:44.244094 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2025-05-10T00:43:44Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= May 10 00:43:44.244386 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2025-05-10T00:43:44Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack May 10 00:43:44.244429 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2025-05-10T00:43:44Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json May 10 00:43:44.244445 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2025-05-10T00:43:44Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json May 10 00:43:46.959635 systemd[1]: Mounting sys-fs-fuse-connections.mount... May 10 00:43:44.245843 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2025-05-10T00:43:44Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 May 10 00:43:44.245888 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2025-05-10T00:43:44Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl May 10 00:43:44.245911 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2025-05-10T00:43:44Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.7: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.7 May 10 00:43:44.245929 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2025-05-10T00:43:44Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store May 10 00:43:44.245951 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2025-05-10T00:43:44Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.7: no such file or directory" path=/var/lib/torcx/store/3510.3.7 May 10 00:43:44.245969 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2025-05-10T00:43:44Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store May 10 00:43:46.419402 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2025-05-10T00:43:46Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 10 00:43:46.419722 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2025-05-10T00:43:46Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 10 00:43:46.419870 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2025-05-10T00:43:46Z" level=debug msg="networkd units propagated" 
assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 10 00:43:46.420103 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2025-05-10T00:43:46Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 10 00:43:46.420161 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2025-05-10T00:43:46Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= May 10 00:43:46.420235 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2025-05-10T00:43:46Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx May 10 00:43:46.964380 systemd[1]: Mounting sys-kernel-config.mount... May 10 00:43:46.966292 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 10 00:43:46.968009 systemd[1]: Starting systemd-hwdb-update.service... May 10 00:43:46.969643 systemd[1]: Starting systemd-journal-flush.service... May 10 00:43:46.970109 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 10 00:43:46.971394 systemd[1]: Starting systemd-random-seed.service... May 10 00:43:46.972299 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 10 00:43:46.973666 systemd[1]: Starting systemd-sysctl.service... May 10 00:43:46.977615 systemd[1]: Mounted sys-fs-fuse-connections.mount. May 10 00:43:46.979407 systemd[1]: Mounted sys-kernel-config.mount. May 10 00:43:46.995079 systemd[1]: Finished systemd-sysctl.service. May 10 00:43:46.995000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:46.996180 systemd-journald[984]: Time spent on flushing to /var/log/journal/d4fd95ae2e9745b8998e3a8d1114b46c is 50.151ms for 1306 entries. May 10 00:43:46.996180 systemd-journald[984]: System Journal (/var/log/journal/d4fd95ae2e9745b8998e3a8d1114b46c) is 8.0M, max 584.8M, 576.8M free. May 10 00:43:47.050884 systemd-journald[984]: Received client request to flush runtime journal. May 10 00:43:46.998000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:47.028000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:47.044000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 10 00:43:47.051000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:46.998508 systemd[1]: Finished systemd-random-seed.service. May 10 00:43:46.998918 systemd[1]: Reached target first-boot-complete.target. May 10 00:43:47.026368 systemd[1]: Finished flatcar-tmpfiles.service. May 10 00:43:47.030146 systemd[1]: Starting systemd-sysusers.service... May 10 00:43:47.044522 systemd[1]: Finished systemd-udev-trigger.service. May 10 00:43:47.046181 systemd[1]: Starting systemd-udev-settle.service... May 10 00:43:47.051669 systemd[1]: Finished systemd-journal-flush.service. May 10 00:43:47.057312 udevadm[1015]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. May 10 00:43:47.065597 systemd[1]: Finished systemd-sysusers.service. May 10 00:43:47.065000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:47.545196 systemd[1]: Finished systemd-hwdb-update.service. May 10 00:43:47.545000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:47.546000 audit: BPF prog-id=18 op=LOAD May 10 00:43:47.546000 audit: BPF prog-id=19 op=LOAD May 10 00:43:47.546000 audit: BPF prog-id=7 op=UNLOAD May 10 00:43:47.546000 audit: BPF prog-id=8 op=UNLOAD May 10 00:43:47.547252 systemd[1]: Starting systemd-udevd.service... May 10 00:43:47.575279 systemd-udevd[1017]: Using default interface naming scheme 'v252'. May 10 00:43:47.596000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:47.598000 audit: BPF prog-id=20 op=LOAD May 10 00:43:47.595556 systemd[1]: Started systemd-udevd.service. May 10 00:43:47.602776 systemd[1]: Starting systemd-networkd.service... May 10 00:43:47.617000 audit: BPF prog-id=21 op=LOAD May 10 00:43:47.617000 audit: BPF prog-id=22 op=LOAD May 10 00:43:47.617000 audit: BPF prog-id=23 op=LOAD May 10 00:43:47.619014 systemd[1]: Starting systemd-userdbd.service... May 10 00:43:47.656673 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. May 10 00:43:47.661000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:47.661663 systemd[1]: Started systemd-userdbd.service. May 10 00:43:47.734855 systemd-networkd[1031]: lo: Link UP May 10 00:43:47.735161 systemd-networkd[1031]: lo: Gained carrier May 10 00:43:47.735899 systemd-networkd[1031]: Enumeration completed May 10 00:43:47.736000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:47.736208 systemd[1]: Started systemd-networkd.service. 
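With no interface-specific .network file present, systemd-networkd matches eth0 against the catch-all zz-default.network and requests a DHCP lease, which shows up in the next entries. An approximate, assumed rendering of that catch-all plus a command to confirm the lease:

    # /usr/lib/systemd/network/zz-default.network (approximate contents, not copied from the image)
    # [Match]
    # Name=*
    #
    # [Network]
    # DHCP=yes
    networkctl status eth0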
May 10 00:43:47.736917 systemd-networkd[1031]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 10 00:43:47.738431 systemd-networkd[1031]: eth0: Link UP May 10 00:43:47.738517 systemd-networkd[1031]: eth0: Gained carrier May 10 00:43:47.749251 kernel: mousedev: PS/2 mouse device common for all mice May 10 00:43:47.750397 systemd-networkd[1031]: eth0: DHCPv4 address 10.244.93.58/30, gateway 10.244.93.57 acquired from 10.244.93.57 May 10 00:43:47.756339 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 May 10 00:43:47.759770 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 10 00:43:47.760517 kernel: ACPI: button: Power Button [PWRF] May 10 00:43:47.787000 audit[1026]: AVC avc: denied { confidentiality } for pid=1026 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 May 10 00:43:47.787000 audit[1026]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55a08d5d45b0 a1=338ac a2=7f1344611bc5 a3=5 items=110 ppid=1017 pid=1026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 00:43:47.787000 audit: CWD cwd="/" May 10 00:43:47.787000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=1 name=(null) inode=15357 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=2 name=(null) inode=15357 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=3 name=(null) inode=15358 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=4 name=(null) inode=15357 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=5 name=(null) inode=15359 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=6 name=(null) inode=15357 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=7 name=(null) inode=15360 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=8 name=(null) inode=15360 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=9 name=(null) inode=16385 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=10 
name=(null) inode=15360 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=11 name=(null) inode=16386 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=12 name=(null) inode=15360 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=13 name=(null) inode=16387 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=14 name=(null) inode=15360 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=15 name=(null) inode=16388 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=16 name=(null) inode=15360 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=17 name=(null) inode=16389 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=18 name=(null) inode=15357 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=19 name=(null) inode=16390 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=20 name=(null) inode=16390 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=21 name=(null) inode=16391 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=22 name=(null) inode=16390 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=23 name=(null) inode=16392 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=24 name=(null) inode=16390 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=25 name=(null) inode=16393 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=26 name=(null) inode=16390 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=27 name=(null) inode=16394 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=28 name=(null) inode=16390 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=29 name=(null) inode=16395 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=30 name=(null) inode=15357 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=31 name=(null) inode=16396 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=32 name=(null) inode=16396 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=33 name=(null) inode=16397 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=34 name=(null) inode=16396 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=35 name=(null) inode=16398 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=36 name=(null) inode=16396 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=37 name=(null) inode=16399 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=38 name=(null) inode=16396 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=39 name=(null) inode=16400 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=40 name=(null) inode=16396 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=41 name=(null) inode=16401 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=42 name=(null) inode=15357 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=43 name=(null) inode=16402 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=44 name=(null) inode=16402 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=45 name=(null) inode=16403 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=46 name=(null) inode=16402 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=47 name=(null) inode=16404 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=48 name=(null) inode=16402 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=49 name=(null) inode=16405 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=50 name=(null) inode=16402 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=51 name=(null) inode=16406 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=52 name=(null) inode=16402 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=53 name=(null) inode=16407 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=55 name=(null) inode=16408 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=56 name=(null) inode=16408 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=57 name=(null) inode=16409 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=58 name=(null) inode=16408 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=59 
name=(null) inode=16410 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=60 name=(null) inode=16408 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=61 name=(null) inode=16411 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=62 name=(null) inode=16411 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=63 name=(null) inode=16412 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=64 name=(null) inode=16411 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=65 name=(null) inode=16413 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=66 name=(null) inode=16411 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=67 name=(null) inode=16414 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=68 name=(null) inode=16411 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=69 name=(null) inode=16415 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=70 name=(null) inode=16411 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=71 name=(null) inode=16416 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=72 name=(null) inode=16408 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=73 name=(null) inode=16417 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=74 name=(null) inode=16417 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=75 name=(null) inode=16418 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=76 name=(null) inode=16417 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=77 name=(null) inode=16419 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=78 name=(null) inode=16417 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=79 name=(null) inode=16420 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=80 name=(null) inode=16417 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=81 name=(null) inode=16421 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=82 name=(null) inode=16417 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=83 name=(null) inode=16422 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=84 name=(null) inode=16408 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=85 name=(null) inode=16423 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=86 name=(null) inode=16423 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=87 name=(null) inode=16424 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=88 name=(null) inode=16423 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=89 name=(null) inode=16425 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=90 name=(null) inode=16423 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=91 name=(null) inode=16426 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=92 name=(null) inode=16423 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=93 name=(null) inode=16427 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=94 name=(null) inode=16423 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=95 name=(null) inode=16428 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=96 name=(null) inode=16408 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=97 name=(null) inode=16429 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=98 name=(null) inode=16429 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=99 name=(null) inode=16430 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=100 name=(null) inode=16429 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=101 name=(null) inode=16431 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=102 name=(null) inode=16429 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=103 name=(null) inode=16432 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=104 name=(null) inode=16429 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=105 name=(null) inode=16433 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=106 name=(null) inode=16429 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=107 name=(null) inode=16434 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: 
PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PATH item=109 name=(null) inode=16435 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:43:47.787000 audit: PROCTITLE proctitle="(udev-worker)" May 10 00:43:47.818262 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 May 10 00:43:47.837252 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt May 10 00:43:47.851425 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) May 10 00:43:47.851569 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD May 10 00:43:47.998587 systemd[1]: Finished systemd-udev-settle.service. May 10 00:43:47.999000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:48.003296 systemd[1]: Starting lvm2-activation-early.service... May 10 00:43:48.034107 lvm[1046]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 10 00:43:48.062889 systemd[1]: Finished lvm2-activation-early.service. May 10 00:43:48.063000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:48.064395 systemd[1]: Reached target cryptsetup.target. May 10 00:43:48.068419 systemd[1]: Starting lvm2-activation.service... May 10 00:43:48.074437 lvm[1047]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 10 00:43:48.096753 systemd[1]: Finished lvm2-activation.service. May 10 00:43:48.097000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:48.098066 systemd[1]: Reached target local-fs-pre.target. May 10 00:43:48.099101 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 10 00:43:48.099173 systemd[1]: Reached target local-fs.target. May 10 00:43:48.100141 systemd[1]: Reached target machines.target. May 10 00:43:48.104025 systemd[1]: Starting ldconfig.service... May 10 00:43:48.105499 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 10 00:43:48.105570 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 10 00:43:48.107178 systemd[1]: Starting systemd-boot-update.service... May 10 00:43:48.115339 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... May 10 00:43:48.116804 systemd[1]: Starting systemd-machine-id-commit.service... May 10 00:43:48.119952 systemd[1]: Starting systemd-sysext.service... May 10 00:43:48.125194 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1049 (bootctl) May 10 00:43:48.126260 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... 
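systemd-sysext, whose start is queued above, goes on to merge the 'kubernetes' extension into /usr; extension images are picked up from directories such as /etc/extensions and /var/lib/extensions and can be inspected or re-merged at runtime with commands shipped in systemd 252:

    systemd-sysext status     # list known extensions and whether /usr is currently merged
    systemd-sysext refresh    # unmerge and re-merge after adding or removing an image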
May 10 00:43:48.143000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:48.143720 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. May 10 00:43:48.149546 systemd[1]: Unmounting usr-share-oem.mount... May 10 00:43:48.168297 systemd[1]: usr-share-oem.mount: Deactivated successfully. May 10 00:43:48.168498 systemd[1]: Unmounted usr-share-oem.mount. May 10 00:43:48.187334 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 10 00:43:48.187886 systemd[1]: Finished systemd-machine-id-commit.service. May 10 00:43:48.188000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:48.192472 kernel: loop0: detected capacity change from 0 to 218376 May 10 00:43:48.215257 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 10 00:43:48.241248 kernel: loop1: detected capacity change from 0 to 218376 May 10 00:43:48.241993 systemd-fsck[1058]: fsck.fat 4.2 (2021-01-31) May 10 00:43:48.241993 systemd-fsck[1058]: /dev/vda1: 790 files, 120688/258078 clusters May 10 00:43:48.244955 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. May 10 00:43:48.245000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:48.246815 systemd[1]: Mounting boot.mount... May 10 00:43:48.258585 (sd-sysext)[1061]: Using extensions 'kubernetes'. May 10 00:43:48.260313 (sd-sysext)[1061]: Merged extensions into '/usr'. May 10 00:43:48.269670 systemd[1]: Mounted boot.mount. May 10 00:43:48.282007 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 10 00:43:48.284600 systemd[1]: Mounting usr-share-oem.mount... May 10 00:43:48.285190 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 10 00:43:48.287005 systemd[1]: Starting modprobe@dm_mod.service... May 10 00:43:48.289868 systemd[1]: Starting modprobe@efi_pstore.service... May 10 00:43:48.292564 systemd[1]: Starting modprobe@loop.service... May 10 00:43:48.295000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:48.295000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:48.293011 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 10 00:43:48.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 10 00:43:48.297000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:48.293158 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 10 00:43:48.293314 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 10 00:43:48.294952 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 10 00:43:48.295105 systemd[1]: Finished modprobe@dm_mod.service. May 10 00:43:48.297274 systemd[1]: modprobe@loop.service: Deactivated successfully. May 10 00:43:48.297386 systemd[1]: Finished modprobe@loop.service. May 10 00:43:48.298024 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 10 00:43:48.299152 systemd[1]: Finished systemd-boot-update.service. May 10 00:43:48.299000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:48.307206 systemd[1]: Mounted usr-share-oem.mount. May 10 00:43:48.307942 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 10 00:43:48.308063 systemd[1]: Finished modprobe@efi_pstore.service. May 10 00:43:48.308000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:48.308000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:48.308812 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 10 00:43:48.310613 systemd[1]: Finished systemd-sysext.service. May 10 00:43:48.310000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:48.312204 systemd[1]: Starting ensure-sysext.service... May 10 00:43:48.314554 systemd[1]: Starting systemd-tmpfiles-setup.service... May 10 00:43:48.326371 systemd[1]: Reloading. May 10 00:43:48.347938 systemd-tmpfiles[1069]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. May 10 00:43:48.356835 systemd-tmpfiles[1069]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 10 00:43:48.367160 systemd-tmpfiles[1069]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
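The "Duplicate line" notices from systemd-tmpfiles are harmless: a later tmpfiles.d entry re-declares a path that an earlier one already covers, and the later occurrence is ignored. The merged configuration that is actually applied can be dumped for inspection:

    # print every effective tmpfiles.d line, with "# <file>" headers naming each source file
    systemd-tmpfiles --cat-config
    # or narrow the output to the paths flagged above
    systemd-tmpfiles --cat-config | grep -n -e /run/lock -e /root -e /var/lib/systemd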
May 10 00:43:48.407822 /usr/lib/systemd/system-generators/torcx-generator[1088]: time="2025-05-10T00:43:48Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 10 00:43:48.421317 /usr/lib/systemd/system-generators/torcx-generator[1088]: time="2025-05-10T00:43:48Z" level=info msg="torcx already run" May 10 00:43:48.454771 ldconfig[1048]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 10 00:43:48.515167 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 10 00:43:48.515384 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 10 00:43:48.535075 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 10 00:43:48.588000 audit: BPF prog-id=24 op=LOAD May 10 00:43:48.588000 audit: BPF prog-id=25 op=LOAD May 10 00:43:48.588000 audit: BPF prog-id=18 op=UNLOAD May 10 00:43:48.588000 audit: BPF prog-id=19 op=UNLOAD May 10 00:43:48.590000 audit: BPF prog-id=26 op=LOAD May 10 00:43:48.590000 audit: BPF prog-id=21 op=UNLOAD May 10 00:43:48.590000 audit: BPF prog-id=27 op=LOAD May 10 00:43:48.590000 audit: BPF prog-id=28 op=LOAD May 10 00:43:48.590000 audit: BPF prog-id=22 op=UNLOAD May 10 00:43:48.590000 audit: BPF prog-id=23 op=UNLOAD May 10 00:43:48.591000 audit: BPF prog-id=29 op=LOAD May 10 00:43:48.591000 audit: BPF prog-id=15 op=UNLOAD May 10 00:43:48.591000 audit: BPF prog-id=30 op=LOAD May 10 00:43:48.591000 audit: BPF prog-id=31 op=LOAD May 10 00:43:48.591000 audit: BPF prog-id=16 op=UNLOAD May 10 00:43:48.591000 audit: BPF prog-id=17 op=UNLOAD May 10 00:43:48.593000 audit: BPF prog-id=32 op=LOAD May 10 00:43:48.593000 audit: BPF prog-id=20 op=UNLOAD May 10 00:43:48.597888 systemd[1]: Finished ldconfig.service. May 10 00:43:48.603000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:48.606910 systemd[1]: Finished systemd-tmpfiles-setup.service. May 10 00:43:48.607000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:48.612523 systemd[1]: Starting audit-rules.service... May 10 00:43:48.614495 systemd[1]: Starting clean-ca-certificates.service... May 10 00:43:48.621000 audit: BPF prog-id=33 op=LOAD May 10 00:43:48.623000 audit: BPF prog-id=34 op=LOAD May 10 00:43:48.629000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:48.616516 systemd[1]: Starting systemd-journal-catalog-update.service... May 10 00:43:48.622319 systemd[1]: Starting systemd-resolved.service... 
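The CPUShares= and MemoryLimit= warnings above are systemd's cgroup v1 to v2 directive migration: CPUWeight= and MemoryMax= are the unified-hierarchy replacements, and the /var/run/docker.sock to /run/docker.sock rewrite for docker.socket is the same kind of legacy-path cleanup. A minimal sketch (Python; the directory list and the directive map are assumptions for illustration, not read from this host) of scanning installed unit files for the deprecated directives before support is dropped:

    #!/usr/bin/env python3
    """Report systemd unit files that still use deprecated cgroup v1 directives."""
    import re
    from pathlib import Path

    # Deprecated directive -> suggested cgroup v2 replacement (per the warnings above).
    DEPRECATED = {"CPUShares": "CPUWeight", "MemoryLimit": "MemoryMax"}

    UNIT_DIRS = [Path("/etc/systemd/system"), Path("/run/systemd/system"),
                 Path("/usr/lib/systemd/system")]
    pattern = re.compile(r"^\s*(%s)\s*=" % "|".join(DEPRECATED), re.MULTILINE)

    for unit_dir in UNIT_DIRS:
        if not unit_dir.is_dir():
            continue
        for unit in sorted(unit_dir.glob("*.service")):
            try:
                text = unit.read_text(errors="replace")
            except OSError:
                continue
            for match in pattern.finditer(text):
                line_no = text.count("\n", 0, match.start()) + 1
                old = match.group(1)
                print(f"{unit}:{line_no}: {old}= is deprecated; use {DEPRECATED[old]}=")

Run against the unit quoted above, a scan like this would flag locksmithd.service lines 8 and 9.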
May 10 00:43:48.625123 systemd[1]: Starting systemd-timesyncd.service... May 10 00:43:48.628430 systemd[1]: Starting systemd-update-utmp.service... May 10 00:43:48.629507 systemd[1]: Finished clean-ca-certificates.service. May 10 00:43:48.630349 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 10 00:43:48.645382 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 10 00:43:48.646953 systemd[1]: Starting modprobe@dm_mod.service... May 10 00:43:48.649007 systemd[1]: Starting modprobe@efi_pstore.service... May 10 00:43:48.651410 systemd[1]: Starting modprobe@loop.service... May 10 00:43:48.652335 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 10 00:43:48.652454 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 10 00:43:48.652563 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 10 00:43:48.653000 audit[1142]: SYSTEM_BOOT pid=1142 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' May 10 00:43:48.655000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:48.655000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:48.656000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:48.656000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:48.658000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:48.658000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:48.654243 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 10 00:43:48.654368 systemd[1]: Finished modprobe@dm_mod.service. May 10 00:43:48.656094 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 10 00:43:48.656269 systemd[1]: Finished modprobe@efi_pstore.service. May 10 00:43:48.657091 systemd[1]: modprobe@loop.service: Deactivated successfully. May 10 00:43:48.657194 systemd[1]: Finished modprobe@loop.service. 
May 10 00:43:48.658799 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 10 00:43:48.658985 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 10 00:43:48.662366 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 10 00:43:48.664470 systemd[1]: Starting modprobe@dm_mod.service... May 10 00:43:48.666834 systemd[1]: Starting modprobe@efi_pstore.service... May 10 00:43:48.672000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:48.673000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:48.669578 systemd[1]: Starting modprobe@loop.service... May 10 00:43:48.669996 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 10 00:43:48.670106 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 10 00:43:48.670204 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 10 00:43:48.671485 systemd[1]: Finished systemd-update-utmp.service. May 10 00:43:48.672683 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 10 00:43:48.672793 systemd[1]: Finished modprobe@dm_mod.service. May 10 00:43:48.675000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:48.676000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:48.676000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:48.677000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:48.677000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:48.675711 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 10 00:43:48.675830 systemd[1]: Finished modprobe@efi_pstore.service. May 10 00:43:48.676749 systemd[1]: modprobe@loop.service: Deactivated successfully. May 10 00:43:48.676857 systemd[1]: Finished modprobe@loop.service. May 10 00:43:48.678448 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
May 10 00:43:48.678543 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 10 00:43:48.681279 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 10 00:43:48.683340 systemd[1]: Starting modprobe@dm_mod.service... May 10 00:43:48.685366 systemd[1]: Starting modprobe@drm.service... May 10 00:43:48.698481 systemd[1]: Starting modprobe@efi_pstore.service... May 10 00:43:48.700745 systemd[1]: Starting modprobe@loop.service... May 10 00:43:48.701338 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 10 00:43:48.701442 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 10 00:43:48.703012 systemd[1]: Starting systemd-networkd-wait-online.service... May 10 00:43:48.703620 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 10 00:43:48.709275 kernel: kauditd_printk_skb: 271 callbacks suppressed May 10 00:43:48.709339 kernel: audit: type=1130 audit(1746837828.706:196): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:48.706000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:48.706409 systemd[1]: Finished systemd-journal-catalog-update.service. May 10 00:43:48.711613 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 10 00:43:48.711731 systemd[1]: Finished modprobe@dm_mod.service. May 10 00:43:48.711000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:48.715253 kernel: audit: type=1130 audit(1746837828.711:197): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:48.715298 kernel: audit: type=1131 audit(1746837828.715:198): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:48.715000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:48.718310 systemd[1]: modprobe@drm.service: Deactivated successfully. May 10 00:43:48.718419 systemd[1]: Finished modprobe@drm.service. May 10 00:43:48.721670 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 10 00:43:48.721790 systemd[1]: Finished modprobe@efi_pstore.service. May 10 00:43:48.721000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 10 00:43:48.721000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:48.727710 kernel: audit: type=1130 audit(1746837828.721:199): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:48.727758 kernel: audit: type=1131 audit(1746837828.721:200): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:48.728627 systemd[1]: modprobe@loop.service: Deactivated successfully. May 10 00:43:48.728733 systemd[1]: Finished modprobe@loop.service. May 10 00:43:48.728000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:48.732253 kernel: audit: type=1130 audit(1746837828.728:201): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:48.732514 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 10 00:43:48.732610 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 10 00:43:48.734455 systemd[1]: Starting systemd-update-done.service... May 10 00:43:48.735350 systemd[1]: Finished ensure-sysext.service. May 10 00:43:48.728000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:48.738998 augenrules[1167]: No rules May 10 00:43:48.739272 kernel: audit: type=1131 audit(1746837828.728:202): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:48.740424 systemd[1]: Finished audit-rules.service. May 10 00:43:48.745247 kernel: audit: type=1130 audit(1746837828.731:203): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:48.731000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:48.731000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:48.748616 systemd[1]: Finished systemd-update-done.service. May 10 00:43:48.738000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 10 00:43:48.751938 kernel: audit: type=1131 audit(1746837828.731:204): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:48.751984 kernel: audit: type=1130 audit(1746837828.738:205): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:43:48.738000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 May 10 00:43:48.738000 audit[1167]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc24f732a0 a2=420 a3=0 items=0 ppid=1136 pid=1167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 00:43:48.738000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 May 10 00:43:48.770603 systemd-resolved[1139]: Positive Trust Anchors: May 10 00:43:48.770622 systemd-resolved[1139]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 10 00:43:48.770657 systemd-resolved[1139]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 10 00:43:48.772645 systemd[1]: Started systemd-timesyncd.service. May 10 00:43:48.773169 systemd[1]: Reached target time-set.target. May 10 00:43:48.778809 systemd-resolved[1139]: Using system hostname 'srv-2i5m2.gb1.brightbox.com'. May 10 00:43:48.780542 systemd[1]: Started systemd-resolved.service. May 10 00:43:48.780958 systemd[1]: Reached target network.target. May 10 00:43:48.781337 systemd[1]: Reached target nss-lookup.target. May 10 00:43:48.781680 systemd[1]: Reached target sysinit.target. May 10 00:43:48.782102 systemd[1]: Started motdgen.path. May 10 00:43:48.782443 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. May 10 00:43:48.782949 systemd[1]: Started logrotate.timer. May 10 00:43:48.783371 systemd[1]: Started mdadm.timer. May 10 00:43:48.783694 systemd[1]: Started systemd-tmpfiles-clean.timer. May 10 00:43:48.784043 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 10 00:43:48.784085 systemd[1]: Reached target paths.target. May 10 00:43:48.784385 systemd[1]: Reached target timers.target. May 10 00:43:48.785015 systemd[1]: Listening on dbus.socket. May 10 00:43:48.786641 systemd[1]: Starting docker.socket... May 10 00:43:48.791497 systemd[1]: Listening on sshd.socket. May 10 00:43:48.791966 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 10 00:43:48.792433 systemd[1]: Listening on docker.socket. May 10 00:43:48.792813 systemd[1]: Reached target sockets.target. 
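The positive trust anchor that systemd-resolved reports above is the root zone's DS record for the 2017 KSK (key tag 20326; algorithm 8 is RSA/SHA-256 and digest type 2 is SHA-256, per RFC 4034). A small decomposition of that record, copied verbatim from the log, just to make the fields explicit:

    # Break the logged DS trust anchor into its RFC 4034 fields.
    ds_record = (". IN DS 20326 8 2 "
                 "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")

    owner, rr_class, rr_type, key_tag, algorithm, digest_type, digest = ds_record.split()
    assert (rr_class, rr_type) == ("IN", "DS")

    print("owner name :", owner)          # "." = the DNS root zone
    print("key tag    :", key_tag)        # 20326, the root KSK introduced in 2017
    print("algorithm  :", algorithm)      # 8 = RSA/SHA-256
    print("digest type:", digest_type)    # 2 = SHA-256
    print("digest     :", digest)
    assert len(digest) == 64              # a SHA-256 digest is 32 bytes = 64 hex chars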
May 10 00:43:48.793148 systemd[1]: Reached target basic.target. May 10 00:43:48.793547 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. May 10 00:43:48.793581 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. May 10 00:43:48.794853 systemd[1]: Starting containerd.service... May 10 00:43:48.796706 systemd[1]: Starting coreos-metadata-sshkeys@core.service... May 10 00:43:48.801919 systemd[1]: Starting dbus.service... May 10 00:43:48.810989 systemd[1]: Starting enable-oem-cloudinit.service... May 10 00:43:48.815655 systemd[1]: Starting extend-filesystems.service... May 10 00:43:48.816388 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). May 10 00:43:48.819782 systemd[1]: Starting motdgen.service... May 10 00:43:48.824374 jq[1181]: false May 10 00:43:48.823714 systemd[1]: Starting prepare-helm.service... May 10 00:43:48.826426 systemd[1]: Starting ssh-key-proc-cmdline.service... May 10 00:43:48.829953 systemd[1]: Starting sshd-keygen.service... May 10 00:43:48.834982 systemd[1]: Starting systemd-logind.service... May 10 00:43:48.835492 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 10 00:43:48.835603 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 10 00:43:48.836147 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 10 00:43:48.837002 systemd[1]: Starting update-engine.service... May 10 00:43:48.839203 systemd[1]: Starting update-ssh-keys-after-ignition.service... May 10 00:43:48.844457 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 10 00:43:48.844936 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. May 10 00:43:48.853704 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 10 00:43:48.853893 systemd[1]: Finished ssh-key-proc-cmdline.service. May 10 00:43:48.861545 tar[1196]: linux-amd64/LICENSE May 10 00:43:48.861740 jq[1192]: true May 10 00:43:48.862484 tar[1196]: linux-amd64/helm May 10 00:43:48.870729 dbus-daemon[1180]: [system] SELinux support is enabled May 10 00:43:48.871116 systemd[1]: Started dbus.service. May 10 00:43:48.872432 dbus-daemon[1180]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1031 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") May 10 00:43:48.873530 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 10 00:43:48.873563 systemd[1]: Reached target system-config.target. May 10 00:43:48.873943 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 10 00:43:48.873972 systemd[1]: Reached target user-config.target. May 10 00:43:48.874310 dbus-daemon[1180]: [system] Successfully activated service 'org.freedesktop.systemd1' May 10 00:43:48.877944 systemd[1]: Starting systemd-hostnamed.service... 
May 10 00:43:48.879590 jq[1200]: true May 10 00:43:48.898190 extend-filesystems[1182]: Found loop1 May 10 00:43:48.898897 extend-filesystems[1182]: Found vda May 10 00:43:48.898897 extend-filesystems[1182]: Found vda1 May 10 00:43:48.898897 extend-filesystems[1182]: Found vda2 May 10 00:43:48.898897 extend-filesystems[1182]: Found vda3 May 10 00:43:48.898897 extend-filesystems[1182]: Found usr May 10 00:43:48.898897 extend-filesystems[1182]: Found vda4 May 10 00:43:48.898897 extend-filesystems[1182]: Found vda6 May 10 00:43:48.898897 extend-filesystems[1182]: Found vda7 May 10 00:43:48.898897 extend-filesystems[1182]: Found vda9 May 10 00:43:48.898897 extend-filesystems[1182]: Checking size of /dev/vda9 May 10 00:43:48.917238 systemd[1]: motdgen.service: Deactivated successfully. May 10 00:43:48.917446 systemd[1]: Finished motdgen.service. May 10 00:43:48.945733 extend-filesystems[1182]: Resized partition /dev/vda9 May 10 00:43:48.977203 update_engine[1191]: I0510 00:43:48.976804 1191 main.cc:92] Flatcar Update Engine starting May 10 00:43:48.980407 systemd[1]: Started update-engine.service. May 10 00:43:48.982940 systemd[1]: Started locksmithd.service. May 10 00:43:48.984110 update_engine[1191]: I0510 00:43:48.984077 1191 update_check_scheduler.cc:74] Next update check in 11m18s May 10 00:43:48.989356 extend-filesystems[1232]: resize2fs 1.46.5 (30-Dec-2021) May 10 00:43:48.998309 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks May 10 00:43:49.041858 bash[1233]: Updated "/home/core/.ssh/authorized_keys" May 10 00:43:49.045924 env[1194]: time="2025-05-10T00:43:49.045089960Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 May 10 00:43:49.054036 systemd-timesyncd[1140]: Contacted time server 85.199.214.101:123 (0.flatcar.pool.ntp.org). May 10 00:43:49.054101 systemd-timesyncd[1140]: Initial clock synchronization to Sat 2025-05-10 00:43:49.315436 UTC. May 10 00:43:49.057990 systemd-logind[1190]: Watching system buttons on /dev/input/event2 (Power Button) May 10 00:43:49.058010 systemd-logind[1190]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 10 00:43:49.058658 systemd[1]: Created slice system-sshd.slice. May 10 00:43:49.059007 systemd-logind[1190]: New seat seat0. May 10 00:43:49.086062 dbus-daemon[1180]: [system] Successfully activated service 'org.freedesktop.hostname1' May 10 00:43:49.087257 kernel: EXT4-fs (vda9): resized filesystem to 15121403 May 10 00:43:49.088113 systemd[1]: Finished update-ssh-keys-after-ignition.service. May 10 00:43:49.089290 dbus-daemon[1180]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1204 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") May 10 00:43:49.095708 systemd[1]: Started systemd-logind.service. May 10 00:43:49.096279 systemd[1]: Started systemd-hostnamed.service. May 10 00:43:49.099734 systemd[1]: Starting polkit.service... May 10 00:43:49.107310 extend-filesystems[1232]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 10 00:43:49.107310 extend-filesystems[1232]: old_desc_blocks = 1, new_desc_blocks = 8 May 10 00:43:49.107310 extend-filesystems[1232]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. May 10 00:43:49.111066 extend-filesystems[1182]: Resized filesystem in /dev/vda9 May 10 00:43:49.108077 systemd[1]: extend-filesystems.service: Deactivated successfully. 
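The resize2fs output above records an online grow of /dev/vda9 from 1617920 to 15121403 blocks at the 4 KiB block size it reports, i.e. from roughly 6.2 GiB to roughly 57.7 GiB. A quick back-of-the-envelope check of those figures (pure arithmetic on the numbers in the log):

    # Sanity-check the ext4 online-resize figures logged for /dev/vda9.
    BLOCK_SIZE = 4096            # "(4k) blocks" per the resize2fs message above
    old_blocks = 1_617_920       # blocks before the resize
    new_blocks = 15_121_403      # blocks after the resize

    def gib(blocks: int) -> float:
        return blocks * BLOCK_SIZE / 2**30

    print(f"before: {gib(old_blocks):6.2f} GiB")                 # ~6.17 GiB
    print(f"after : {gib(new_blocks):6.2f} GiB")                 # ~57.68 GiB
    print(f"growth: {gib(new_blocks - old_blocks):6.2f} GiB")    # ~51.51 GiB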
May 10 00:43:49.108230 systemd[1]: Finished extend-filesystems.service. May 10 00:43:49.123194 polkitd[1238]: Started polkitd version 121 May 10 00:43:49.126467 env[1194]: time="2025-05-10T00:43:49.126431482Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 10 00:43:49.128655 env[1194]: time="2025-05-10T00:43:49.128632352Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 10 00:43:49.132803 env[1194]: time="2025-05-10T00:43:49.132469532Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.181-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 10 00:43:49.134486 env[1194]: time="2025-05-10T00:43:49.134465223Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 10 00:43:49.134788 env[1194]: time="2025-05-10T00:43:49.134767808Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 10 00:43:49.141698 env[1194]: time="2025-05-10T00:43:49.141384555Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 10 00:43:49.141698 env[1194]: time="2025-05-10T00:43:49.141414268Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" May 10 00:43:49.141698 env[1194]: time="2025-05-10T00:43:49.141438339Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 10 00:43:49.142511 polkitd[1238]: Loading rules from directory /etc/polkit-1/rules.d May 10 00:43:49.143403 env[1194]: time="2025-05-10T00:43:49.142654036Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 10 00:43:49.143403 env[1194]: time="2025-05-10T00:43:49.143007938Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 10 00:43:49.143403 env[1194]: time="2025-05-10T00:43:49.143163365Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 10 00:43:49.143403 env[1194]: time="2025-05-10T00:43:49.143187820Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 10 00:43:49.143403 env[1194]: time="2025-05-10T00:43:49.143253363Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" May 10 00:43:49.143403 env[1194]: time="2025-05-10T00:43:49.143274110Z" level=info msg="metadata content store policy set" policy=shared May 10 00:43:49.143664 polkitd[1238]: Loading rules from directory /usr/share/polkit-1/rules.d May 10 00:43:49.144919 polkitd[1238]: Finished loading, compiling and executing 2 rules May 10 00:43:49.145244 env[1194]: time="2025-05-10T00:43:49.145039644Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 May 10 00:43:49.145244 env[1194]: time="2025-05-10T00:43:49.145077266Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 10 00:43:49.145244 env[1194]: time="2025-05-10T00:43:49.145089622Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 10 00:43:49.145244 env[1194]: time="2025-05-10T00:43:49.145144551Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 10 00:43:49.145244 env[1194]: time="2025-05-10T00:43:49.145158587Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 10 00:43:49.145244 env[1194]: time="2025-05-10T00:43:49.145180524Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 10 00:43:49.145244 env[1194]: time="2025-05-10T00:43:49.145192567Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 10 00:43:49.145244 env[1194]: time="2025-05-10T00:43:49.145206056Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 10 00:43:49.145244 env[1194]: time="2025-05-10T00:43:49.145218626Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 May 10 00:43:49.145621 env[1194]: time="2025-05-10T00:43:49.145478230Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 10 00:43:49.145621 env[1194]: time="2025-05-10T00:43:49.145496872Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 10 00:43:49.145621 env[1194]: time="2025-05-10T00:43:49.145509800Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 10 00:43:49.145621 env[1194]: time="2025-05-10T00:43:49.145604787Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 10 00:43:49.145842 env[1194]: time="2025-05-10T00:43:49.145829277Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 10 00:43:49.146192 env[1194]: time="2025-05-10T00:43:49.146172171Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 10 00:43:49.146378 env[1194]: time="2025-05-10T00:43:49.146363344Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 10 00:43:49.146696 dbus-daemon[1180]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' May 10 00:43:49.146853 systemd[1]: Started polkit.service. May 10 00:43:49.146960 env[1194]: time="2025-05-10T00:43:49.146943214Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 10 00:43:49.147115 env[1194]: time="2025-05-10T00:43:49.147101279Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 10 00:43:49.147185 env[1194]: time="2025-05-10T00:43:49.147173953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 10 00:43:49.147592 env[1194]: time="2025-05-10T00:43:49.147575223Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 May 10 00:43:49.147713 env[1194]: time="2025-05-10T00:43:49.147698281Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 10 00:43:49.147790 env[1194]: time="2025-05-10T00:43:49.147777219Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 10 00:43:49.147882 polkitd[1238]: Acquired the name org.freedesktop.PolicyKit1 on the system bus May 10 00:43:49.147972 env[1194]: time="2025-05-10T00:43:49.147958749Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 10 00:43:49.148075 env[1194]: time="2025-05-10T00:43:49.148058318Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 10 00:43:49.148476 env[1194]: time="2025-05-10T00:43:49.148460381Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 10 00:43:49.148571 env[1194]: time="2025-05-10T00:43:49.148558119Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 10 00:43:49.148758 env[1194]: time="2025-05-10T00:43:49.148744061Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 10 00:43:49.148839 env[1194]: time="2025-05-10T00:43:49.148810765Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 10 00:43:49.148900 env[1194]: time="2025-05-10T00:43:49.148889467Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 10 00:43:49.148969 env[1194]: time="2025-05-10T00:43:49.148957859Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 10 00:43:49.149033 env[1194]: time="2025-05-10T00:43:49.149019155Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 May 10 00:43:49.149085 env[1194]: time="2025-05-10T00:43:49.149073113Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 10 00:43:49.149156 env[1194]: time="2025-05-10T00:43:49.149144035Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" May 10 00:43:49.149252 env[1194]: time="2025-05-10T00:43:49.149223680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 May 10 00:43:49.149749 env[1194]: time="2025-05-10T00:43:49.149695304Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 10 00:43:49.151185 env[1194]: time="2025-05-10T00:43:49.149908448Z" level=info msg="Connect containerd service" May 10 00:43:49.151185 env[1194]: time="2025-05-10T00:43:49.149969528Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 10 00:43:49.151547 env[1194]: time="2025-05-10T00:43:49.151526445Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 10 00:43:49.152253 env[1194]: time="2025-05-10T00:43:49.152217879Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc May 10 00:43:49.159125 env[1194]: time="2025-05-10T00:43:49.159075810Z" level=info msg="Start subscribing containerd event" May 10 00:43:49.159185 env[1194]: time="2025-05-10T00:43:49.159136001Z" level=info msg="Start recovering state" May 10 00:43:49.159277 env[1194]: time="2025-05-10T00:43:49.159218829Z" level=info msg="Start event monitor" May 10 00:43:49.159322 env[1194]: time="2025-05-10T00:43:49.159293894Z" level=info msg="Start snapshots syncer" May 10 00:43:49.159355 env[1194]: time="2025-05-10T00:43:49.159319855Z" level=info msg="Start cni network conf syncer for default" May 10 00:43:49.159355 env[1194]: time="2025-05-10T00:43:49.159330558Z" level=info msg="Start streaming server" May 10 00:43:49.159568 env[1194]: time="2025-05-10T00:43:49.159253950Z" level=info msg=serving... address=/run/containerd/containerd.sock May 10 00:43:49.159723 env[1194]: time="2025-05-10T00:43:49.159709004Z" level=info msg="containerd successfully booted in 0.121270s" May 10 00:43:49.159765 systemd[1]: Started containerd.service. May 10 00:43:49.168801 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 10 00:43:49.168875 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 10 00:43:49.174851 systemd-networkd[1031]: eth0: Gained IPv6LL May 10 00:43:49.177688 systemd[1]: Finished systemd-networkd-wait-online.service. May 10 00:43:49.178390 systemd[1]: Reached target network-online.target. May 10 00:43:49.180920 systemd[1]: Starting kubelet.service... May 10 00:43:49.194887 systemd-hostnamed[1204]: Hostname set to (static) May 10 00:43:49.733562 tar[1196]: linux-amd64/README.md May 10 00:43:49.739674 systemd[1]: Finished prepare-helm.service. May 10 00:43:49.864741 locksmithd[1234]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 10 00:43:50.216328 systemd-networkd[1031]: eth0: Ignoring DHCPv6 address 2a02:1348:17d:174e:24:19ff:fef4:5d3a/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17d:174e:24:19ff:fef4:5d3a/64 assigned by NDisc. May 10 00:43:50.216337 systemd-networkd[1031]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. May 10 00:43:50.284416 systemd[1]: Started kubelet.service. May 10 00:43:50.764264 sshd_keygen[1195]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 10 00:43:50.790705 systemd[1]: Finished sshd-keygen.service. May 10 00:43:50.796442 systemd[1]: Starting issuegen.service... May 10 00:43:50.801238 systemd[1]: Started sshd@0-10.244.93.58:22-139.178.68.195:49944.service. May 10 00:43:50.807009 systemd[1]: issuegen.service: Deactivated successfully. May 10 00:43:50.807162 systemd[1]: Finished issuegen.service. May 10 00:43:50.809237 systemd[1]: Starting systemd-user-sessions.service... May 10 00:43:50.820549 systemd[1]: Finished systemd-user-sessions.service. May 10 00:43:50.822469 systemd[1]: Started getty@tty1.service. May 10 00:43:50.824443 systemd[1]: Started serial-getty@ttyS0.service. May 10 00:43:50.825030 systemd[1]: Reached target getty.target. 
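The long "Start cri plugin with config {...}" dump above is containerd's effective CRI configuration: the overlayfs snapshotter, runc with SystemdCgroup:true, sandbox image registry.k8s.io/pause:3.6, and CNI config expected under /etc/cni/net.d, which is why the "no network config found" error is logged until something (typically the cluster's CNI plugin) writes a config there. A minimal sketch of reading those same settings back out of a containerd config.toml; the path and the key layout are the conventional containerd 1.6 ones, assumed rather than taken from this host:

    # Inspect a containerd config.toml for the CRI settings seen in the log.
    # Requires Python 3.11+ for the stdlib tomllib parser.
    import tomllib

    CONFIG_PATH = "/etc/containerd/config.toml"   # assumed conventional path

    with open(CONFIG_PATH, "rb") as fh:
        config = tomllib.load(fh)

    cri = config.get("plugins", {}).get("io.containerd.grpc.v1.cri", {})
    runc_opts = (cri.get("containerd", {})
                    .get("runtimes", {})
                    .get("runc", {})
                    .get("options", {}))

    print("snapshotter   :", cri.get("containerd", {}).get("snapshotter"))
    print("sandbox image :", cri.get("sandbox_image"))
    print("SystemdCgroup :", runc_opts.get("SystemdCgroup"))
    print("CNI conf dir  :", cri.get("cni", {}).get("conf_dir"))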
May 10 00:43:50.890142 kubelet[1260]: E0510 00:43:50.890097 1260 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 10 00:43:50.894547 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 10 00:43:50.894941 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 10 00:43:50.895695 systemd[1]: kubelet.service: Consumed 1.101s CPU time. May 10 00:43:51.741146 sshd[1274]: Accepted publickey for core from 139.178.68.195 port 49944 ssh2: RSA SHA256:YQmh9kay2Fbwp/WeJvefEh7C1hXKeGuPiyso2bRkh84 May 10 00:43:51.745951 sshd[1274]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:43:51.769446 systemd[1]: Created slice user-500.slice. May 10 00:43:51.771764 systemd[1]: Starting user-runtime-dir@500.service... May 10 00:43:51.778882 systemd-logind[1190]: New session 1 of user core. May 10 00:43:51.786295 systemd[1]: Finished user-runtime-dir@500.service. May 10 00:43:51.789175 systemd[1]: Starting user@500.service... May 10 00:43:51.793176 (systemd)[1282]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 10 00:43:51.868745 systemd[1282]: Queued start job for default target default.target. May 10 00:43:51.869228 systemd[1282]: Reached target paths.target. May 10 00:43:51.869246 systemd[1282]: Reached target sockets.target. May 10 00:43:51.869269 systemd[1282]: Reached target timers.target. May 10 00:43:51.869290 systemd[1282]: Reached target basic.target. May 10 00:43:51.869328 systemd[1282]: Reached target default.target. May 10 00:43:51.869358 systemd[1282]: Startup finished in 68ms. May 10 00:43:51.872020 systemd[1]: Started user@500.service. May 10 00:43:51.876028 systemd[1]: Started session-1.scope. May 10 00:43:52.525197 systemd[1]: Started sshd@1-10.244.93.58:22-139.178.68.195:49950.service. May 10 00:43:53.446358 sshd[1292]: Accepted publickey for core from 139.178.68.195 port 49950 ssh2: RSA SHA256:YQmh9kay2Fbwp/WeJvefEh7C1hXKeGuPiyso2bRkh84 May 10 00:43:53.450163 sshd[1292]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:43:53.461206 systemd-logind[1190]: New session 2 of user core. May 10 00:43:53.461883 systemd[1]: Started session-2.scope. May 10 00:43:54.085812 sshd[1292]: pam_unix(sshd:session): session closed for user core May 10 00:43:54.093887 systemd-logind[1190]: Session 2 logged out. Waiting for processes to exit. May 10 00:43:54.095525 systemd[1]: sshd@1-10.244.93.58:22-139.178.68.195:49950.service: Deactivated successfully. May 10 00:43:54.097309 systemd[1]: session-2.scope: Deactivated successfully. May 10 00:43:54.098667 systemd-logind[1190]: Removed session 2. May 10 00:43:54.242173 systemd[1]: Started sshd@2-10.244.93.58:22-139.178.68.195:49962.service. May 10 00:43:55.178947 sshd[1298]: Accepted publickey for core from 139.178.68.195 port 49962 ssh2: RSA SHA256:YQmh9kay2Fbwp/WeJvefEh7C1hXKeGuPiyso2bRkh84 May 10 00:43:55.181700 sshd[1298]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:43:55.191344 systemd-logind[1190]: New session 3 of user core. May 10 00:43:55.192497 systemd[1]: Started session-3.scope. 
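The kubelet error above (repeated at each restart later in this log) is the normal pre-bootstrap state of a node that ships kubelet.service but has not yet been joined to a cluster: /var/lib/kubelet/config.yaml is only written by 'kubeadm init'/'kubeadm join' (or equivalent provisioning), so systemd keeps restarting the unit until it appears. A small read-only diagnostic sketch along those lines; the interpretation of this host's intent is an assumption, while the systemctl properties queried are standard ones:

    # Report whether kubelet's config file exists yet and how systemd sees the unit.
    import subprocess
    from pathlib import Path

    KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")

    if KUBELET_CONFIG.exists():
        print(f"{KUBELET_CONFIG} present ({KUBELET_CONFIG.stat().st_size} bytes)")
    else:
        print(f"{KUBELET_CONFIG} missing - kubelet exits until kubeadm "
              "init/join (or other provisioning) writes it")

    for prop in ("ActiveState", "SubState", "NRestarts", "Result"):
        out = subprocess.run(
            ["systemctl", "show", "kubelet.service", f"--property={prop}"],
            capture_output=True, text=True, check=False)
        print(out.stdout.strip())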
May 10 00:43:55.821030 sshd[1298]: pam_unix(sshd:session): session closed for user core May 10 00:43:55.829228 systemd[1]: sshd@2-10.244.93.58:22-139.178.68.195:49962.service: Deactivated successfully. May 10 00:43:55.830371 systemd-logind[1190]: Session 3 logged out. Waiting for processes to exit. May 10 00:43:55.831871 systemd[1]: session-3.scope: Deactivated successfully. May 10 00:43:55.849406 systemd-logind[1190]: Removed session 3. May 10 00:43:55.892235 coreos-metadata[1177]: May 10 00:43:55.891 WARN failed to locate config-drive, using the metadata service API instead May 10 00:43:55.948027 coreos-metadata[1177]: May 10 00:43:55.947 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 May 10 00:43:55.973522 coreos-metadata[1177]: May 10 00:43:55.973 INFO Fetch successful May 10 00:43:55.973883 coreos-metadata[1177]: May 10 00:43:55.973 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 May 10 00:43:56.000770 coreos-metadata[1177]: May 10 00:43:56.000 INFO Fetch successful May 10 00:43:56.003734 unknown[1177]: wrote ssh authorized keys file for user: core May 10 00:43:56.021450 update-ssh-keys[1305]: Updated "/home/core/.ssh/authorized_keys" May 10 00:43:56.023092 systemd[1]: Finished coreos-metadata-sshkeys@core.service. May 10 00:43:56.024111 systemd[1]: Reached target multi-user.target. May 10 00:43:56.028900 systemd[1]: Starting systemd-update-utmp-runlevel.service... May 10 00:43:56.040016 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. May 10 00:43:56.040204 systemd[1]: Finished systemd-update-utmp-runlevel.service. May 10 00:43:56.043302 systemd[1]: Startup finished in 859ms (kernel) + 8.376s (initrd) + 12.000s (userspace) = 21.237s. May 10 00:44:00.956695 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 10 00:44:00.957158 systemd[1]: Stopped kubelet.service. May 10 00:44:00.957285 systemd[1]: kubelet.service: Consumed 1.101s CPU time. May 10 00:44:00.960915 systemd[1]: Starting kubelet.service... May 10 00:44:01.085490 systemd[1]: Started kubelet.service. May 10 00:44:01.135942 kubelet[1311]: E0510 00:44:01.135896 1311 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 10 00:44:01.141389 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 10 00:44:01.141597 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 10 00:44:06.061731 systemd[1]: Started sshd@3-10.244.93.58:22-139.178.68.195:57716.service. May 10 00:44:06.968177 sshd[1318]: Accepted publickey for core from 139.178.68.195 port 57716 ssh2: RSA SHA256:YQmh9kay2Fbwp/WeJvefEh7C1hXKeGuPiyso2bRkh84 May 10 00:44:06.971867 sshd[1318]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:44:06.982968 systemd[1]: Started session-4.scope. May 10 00:44:06.983467 systemd-logind[1190]: New session 4 of user core. May 10 00:44:07.600220 sshd[1318]: pam_unix(sshd:session): session closed for user core May 10 00:44:07.608097 systemd[1]: sshd@3-10.244.93.58:22-139.178.68.195:57716.service: Deactivated successfully. May 10 00:44:07.610157 systemd[1]: session-4.scope: Deactivated successfully. May 10 00:44:07.611581 systemd-logind[1190]: Session 4 logged out. 
Waiting for processes to exit. May 10 00:44:07.613302 systemd-logind[1190]: Removed session 4. May 10 00:44:07.751124 systemd[1]: Started sshd@4-10.244.93.58:22-139.178.68.195:57720.service. May 10 00:44:08.651484 sshd[1324]: Accepted publickey for core from 139.178.68.195 port 57720 ssh2: RSA SHA256:YQmh9kay2Fbwp/WeJvefEh7C1hXKeGuPiyso2bRkh84 May 10 00:44:08.655924 sshd[1324]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:44:08.666349 systemd-logind[1190]: New session 5 of user core. May 10 00:44:08.666476 systemd[1]: Started session-5.scope. May 10 00:44:09.272313 sshd[1324]: pam_unix(sshd:session): session closed for user core May 10 00:44:09.279478 systemd-logind[1190]: Session 5 logged out. Waiting for processes to exit. May 10 00:44:09.280110 systemd[1]: sshd@4-10.244.93.58:22-139.178.68.195:57720.service: Deactivated successfully. May 10 00:44:09.281666 systemd[1]: session-5.scope: Deactivated successfully. May 10 00:44:09.283741 systemd-logind[1190]: Removed session 5. May 10 00:44:09.423504 systemd[1]: Started sshd@5-10.244.93.58:22-139.178.68.195:57736.service. May 10 00:44:10.315563 sshd[1330]: Accepted publickey for core from 139.178.68.195 port 57736 ssh2: RSA SHA256:YQmh9kay2Fbwp/WeJvefEh7C1hXKeGuPiyso2bRkh84 May 10 00:44:10.319967 sshd[1330]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:44:10.330012 systemd-logind[1190]: New session 6 of user core. May 10 00:44:10.330131 systemd[1]: Started session-6.scope. May 10 00:44:10.938838 sshd[1330]: pam_unix(sshd:session): session closed for user core May 10 00:44:10.945780 systemd-logind[1190]: Session 6 logged out. Waiting for processes to exit. May 10 00:44:10.947041 systemd[1]: sshd@5-10.244.93.58:22-139.178.68.195:57736.service: Deactivated successfully. May 10 00:44:10.948304 systemd[1]: session-6.scope: Deactivated successfully. May 10 00:44:10.949974 systemd-logind[1190]: Removed session 6. May 10 00:44:11.090604 systemd[1]: Started sshd@6-10.244.93.58:22-139.178.68.195:57744.service. May 10 00:44:11.206900 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 10 00:44:11.207419 systemd[1]: Stopped kubelet.service. May 10 00:44:11.211098 systemd[1]: Starting kubelet.service... May 10 00:44:11.320436 systemd[1]: Started kubelet.service. May 10 00:44:11.366465 kubelet[1342]: E0510 00:44:11.366354 1342 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 10 00:44:11.371207 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 10 00:44:11.371393 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 10 00:44:11.992104 sshd[1336]: Accepted publickey for core from 139.178.68.195 port 57744 ssh2: RSA SHA256:YQmh9kay2Fbwp/WeJvefEh7C1hXKeGuPiyso2bRkh84 May 10 00:44:11.995750 sshd[1336]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:44:12.006482 systemd-logind[1190]: New session 7 of user core. May 10 00:44:12.008012 systemd[1]: Started session-7.scope. 
May 10 00:44:12.491159 sudo[1348]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 10 00:44:12.492197 sudo[1348]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 10 00:44:12.537796 systemd[1]: Starting docker.service... May 10 00:44:12.611070 env[1358]: time="2025-05-10T00:44:12.610926451Z" level=info msg="Starting up" May 10 00:44:12.614719 env[1358]: time="2025-05-10T00:44:12.614650915Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 10 00:44:12.614719 env[1358]: time="2025-05-10T00:44:12.614685467Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 10 00:44:12.614719 env[1358]: time="2025-05-10T00:44:12.614708041Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 10 00:44:12.614719 env[1358]: time="2025-05-10T00:44:12.614722948Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 10 00:44:12.620465 env[1358]: time="2025-05-10T00:44:12.620414366Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 10 00:44:12.620465 env[1358]: time="2025-05-10T00:44:12.620437858Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 10 00:44:12.620465 env[1358]: time="2025-05-10T00:44:12.620454526Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 10 00:44:12.620465 env[1358]: time="2025-05-10T00:44:12.620466897Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 10 00:44:12.629461 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport68187860-merged.mount: Deactivated successfully. May 10 00:44:12.650718 env[1358]: time="2025-05-10T00:44:12.650662212Z" level=info msg="Loading containers: start." May 10 00:44:12.802097 kernel: Initializing XFRM netlink socket May 10 00:44:12.852017 env[1358]: time="2025-05-10T00:44:12.851915237Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" May 10 00:44:12.954702 systemd-networkd[1031]: docker0: Link UP May 10 00:44:12.969698 env[1358]: time="2025-05-10T00:44:12.969649028Z" level=info msg="Loading containers: done." May 10 00:44:12.984306 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck269059298-merged.mount: Deactivated successfully. May 10 00:44:12.992596 env[1358]: time="2025-05-10T00:44:12.992442299Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 10 00:44:12.993039 env[1358]: time="2025-05-10T00:44:12.992983298Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 May 10 00:44:12.993356 env[1358]: time="2025-05-10T00:44:12.993318765Z" level=info msg="Daemon has completed initialization" May 10 00:44:13.015499 systemd[1]: Started docker.service. May 10 00:44:13.028634 env[1358]: time="2025-05-10T00:44:13.028551756Z" level=info msg="API listen on /run/docker.sock" May 10 00:44:14.347471 env[1194]: time="2025-05-10T00:44:14.347354617Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\"" May 10 00:44:15.334399 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1474140691.mount: Deactivated successfully. 
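Docker comes up above with the overlay2 graph driver, the default docker0 bridge on 172.17.0.0/16, and its API listening on /run/docker.sock (the path the earlier docker.socket rewrite pointed at). A minimal sketch of querying that daemon over the unix socket with only the standard library; the /info endpoint and its Driver/ServerVersion fields come from the Docker Engine API, not from this log:

    # Ask the local Docker daemon for its /info report via the unix socket.
    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection that talks to a unix domain socket instead of TCP."""
        def __init__(self, socket_path: str):
            super().__init__("localhost")
            self.socket_path = socket_path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self.socket_path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/docker.sock")   # socket path from the log above
    conn.request("GET", "/info")
    info = json.loads(conn.getresponse().read())
    print("storage driver:", info.get("Driver"))            # overlay2, per the log
    print("server version:", info.get("ServerVersion"))     # 20.10.23, per the log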
May 10 00:44:16.902621 env[1194]: time="2025-05-10T00:44:16.902502733Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.32.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:44:16.904844 env[1194]: time="2025-05-10T00:44:16.904781051Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:44:16.907430 env[1194]: time="2025-05-10T00:44:16.907367778Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.32.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:44:16.909678 env[1194]: time="2025-05-10T00:44:16.909623155Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:44:16.910649 env[1194]: time="2025-05-10T00:44:16.910589091Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\" returns image reference \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\"" May 10 00:44:16.912642 env[1194]: time="2025-05-10T00:44:16.912577847Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\"" May 10 00:44:19.491779 env[1194]: time="2025-05-10T00:44:19.491670208Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.32.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:44:19.496298 env[1194]: time="2025-05-10T00:44:19.496196533Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:44:19.499574 env[1194]: time="2025-05-10T00:44:19.499535161Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.32.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:44:19.502679 env[1194]: time="2025-05-10T00:44:19.502618166Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:44:19.503917 env[1194]: time="2025-05-10T00:44:19.503854832Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\" returns image reference \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\"" May 10 00:44:19.506599 env[1194]: time="2025-05-10T00:44:19.506561586Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\"" May 10 00:44:20.259564 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
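Each successful pull above surfaces three forms of the same image: a tag (registry.k8s.io/kube-apiserver:v1.32.4), an image ID (sha256:&lt;64 hex&gt;), and a repo digest (name@sha256:&lt;64 hex&gt;). The rough Python classifier below is an illustration rather than containerd's own reference parser:

```python
"""Split the image references that containerd logs above."""
import re

ID_RE = re.compile(r"^sha256:[0-9a-f]{64}$")
DIGEST_RE = re.compile(r"^(?P<name>.+)@(?P<digest>sha256:[0-9a-f]{64})$")

def classify(ref: str) -> tuple[str, str]:
    if ID_RE.match(ref):
        return ("image-id", ref)
    m = DIGEST_RE.match(ref)
    if m:
        return ("repo-digest", m.group("name"))
    return ("tag", ref)

for ref in (
    "registry.k8s.io/kube-apiserver:v1.32.4",
    "sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5",
    "registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f",
):
    print(classify(ref))
```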
May 10 00:44:21.033407 env[1194]: time="2025-05-10T00:44:21.033289265Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.32.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:44:21.035415 env[1194]: time="2025-05-10T00:44:21.035340329Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:44:21.037448 env[1194]: time="2025-05-10T00:44:21.037395392Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.32.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:44:21.039333 env[1194]: time="2025-05-10T00:44:21.039262159Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:44:21.040122 env[1194]: time="2025-05-10T00:44:21.040069494Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\" returns image reference \"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\"" May 10 00:44:21.041591 env[1194]: time="2025-05-10T00:44:21.041541474Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\"" May 10 00:44:21.456797 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. May 10 00:44:21.457480 systemd[1]: Stopped kubelet.service. May 10 00:44:21.461985 systemd[1]: Starting kubelet.service... May 10 00:44:21.594397 systemd[1]: Started kubelet.service. May 10 00:44:21.732777 kubelet[1492]: E0510 00:44:21.732250 1492 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 10 00:44:21.735084 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 10 00:44:21.735245 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 10 00:44:22.306496 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2123935203.mount: Deactivated successfully. 
May 10 00:44:23.064830 env[1194]: time="2025-05-10T00:44:23.064765265Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.32.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:44:23.067645 env[1194]: time="2025-05-10T00:44:23.067613929Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:44:23.068982 env[1194]: time="2025-05-10T00:44:23.068959083Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.32.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:44:23.071220 env[1194]: time="2025-05-10T00:44:23.071178394Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:44:23.072527 env[1194]: time="2025-05-10T00:44:23.072460697Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\"" May 10 00:44:23.073716 env[1194]: time="2025-05-10T00:44:23.073656594Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 10 00:44:23.327847 systemd[1]: Started sshd@7-10.244.93.58:22-80.94.95.115:62104.service. May 10 00:44:23.687556 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1605532099.mount: Deactivated successfully. May 10 00:44:24.827372 env[1194]: time="2025-05-10T00:44:24.827265310Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:44:24.830156 env[1194]: time="2025-05-10T00:44:24.830098491Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:44:24.834393 env[1194]: time="2025-05-10T00:44:24.834334207Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:44:24.837337 env[1194]: time="2025-05-10T00:44:24.837268809Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:44:24.838366 env[1194]: time="2025-05-10T00:44:24.838331935Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" May 10 00:44:24.839043 env[1194]: time="2025-05-10T00:44:24.839018607Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 10 00:44:25.388982 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3724620592.mount: Deactivated successfully. 
May 10 00:44:25.399886 env[1194]: time="2025-05-10T00:44:25.399831409Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:44:25.401016 env[1194]: time="2025-05-10T00:44:25.400989259Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:44:25.402873 env[1194]: time="2025-05-10T00:44:25.402852247Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:44:25.404328 env[1194]: time="2025-05-10T00:44:25.404288170Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:44:25.404882 env[1194]: time="2025-05-10T00:44:25.404854146Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 10 00:44:25.405631 env[1194]: time="2025-05-10T00:44:25.405582971Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" May 10 00:44:26.002537 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2611341363.mount: Deactivated successfully. May 10 00:44:26.332378 sshd[1498]: Invalid user backups from 80.94.95.115 port 62104 May 10 00:44:26.660502 sshd[1498]: pam_faillock(sshd:auth): User unknown May 10 00:44:26.662761 sshd[1498]: pam_unix(sshd:auth): check pass; user unknown May 10 00:44:26.662883 sshd[1498]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.95.115 May 10 00:44:26.664780 sshd[1498]: pam_faillock(sshd:auth): User unknown May 10 00:44:28.851927 env[1194]: time="2025-05-10T00:44:28.851840006Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:44:28.853972 env[1194]: time="2025-05-10T00:44:28.853942845Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:44:28.857295 env[1194]: time="2025-05-10T00:44:28.857214864Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:44:28.859659 env[1194]: time="2025-05-10T00:44:28.859633451Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:44:28.865211 env[1194]: time="2025-05-10T00:44:28.862045039Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" May 10 00:44:29.232729 sshd[1498]: Failed password for invalid user backups from 80.94.95.115 port 62104 ssh2 May 10 00:44:30.962718 sshd[1498]: Connection closed by invalid user backups 80.94.95.115 port 62104 [preauth] May 10 00:44:30.966180 systemd[1]: 
sshd@7-10.244.93.58:22-80.94.95.115:62104.service: Deactivated successfully. May 10 00:44:31.956549 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. May 10 00:44:31.957009 systemd[1]: Stopped kubelet.service. May 10 00:44:31.961554 systemd[1]: Starting kubelet.service... May 10 00:44:32.358523 systemd[1]: Started kubelet.service. May 10 00:44:32.463890 kubelet[1523]: E0510 00:44:32.463842 1523 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 10 00:44:32.466132 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 10 00:44:32.466280 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 10 00:44:32.714591 systemd[1]: Stopped kubelet.service. May 10 00:44:32.717615 systemd[1]: Starting kubelet.service... May 10 00:44:32.755501 systemd[1]: Reloading. May 10 00:44:32.893030 /usr/lib/systemd/system-generators/torcx-generator[1558]: time="2025-05-10T00:44:32Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 10 00:44:32.894301 /usr/lib/systemd/system-generators/torcx-generator[1558]: time="2025-05-10T00:44:32Z" level=info msg="torcx already run" May 10 00:44:32.972142 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 10 00:44:32.972459 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 10 00:44:32.990902 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 10 00:44:33.087683 systemd[1]: Stopping kubelet.service... May 10 00:44:33.088478 systemd[1]: kubelet.service: Deactivated successfully. May 10 00:44:33.088764 systemd[1]: Stopped kubelet.service. May 10 00:44:33.091074 systemd[1]: Starting kubelet.service... May 10 00:44:33.192453 systemd[1]: Started kubelet.service. May 10 00:44:33.238686 kubelet[1609]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 10 00:44:33.238686 kubelet[1609]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 10 00:44:33.238686 kubelet[1609]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
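The sshd entries from 80.94.95.115 above (invalid user "backups", failed password, disconnect before auth) are an ordinary internet scan. A small, hedged Python sketch for tallying such probes per source address from journal text:

```python
"""Tally 'Invalid user' sshd lines per source address.

The sample line is copied from the journal above; in practice feed the
function real journal output.
"""
import re
from collections import Counter

INVALID_USER = re.compile(r"Invalid user (?P<user>\S+) from (?P<ip>[\d.]+) port (?P<port>\d+)")

def count_probes(journal_text: str) -> Counter:
    hits = Counter()
    for m in INVALID_USER.finditer(journal_text):
        hits[m.group("ip")] += 1
    return hits

sample = "May 10 00:44:26.332378 sshd[1498]: Invalid user backups from 80.94.95.115 port 62104"
print(count_probes(sample))  # Counter({'80.94.95.115': 1})
```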
May 10 00:44:33.239068 kubelet[1609]: I0510 00:44:33.238645 1609 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 10 00:44:33.583652 kubelet[1609]: I0510 00:44:33.583467 1609 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 10 00:44:33.584130 kubelet[1609]: I0510 00:44:33.584115 1609 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 10 00:44:33.584631 kubelet[1609]: I0510 00:44:33.584610 1609 server.go:954] "Client rotation is on, will bootstrap in background" May 10 00:44:33.645678 kubelet[1609]: E0510 00:44:33.645627 1609 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.244.93.58:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.244.93.58:6443: connect: connection refused" logger="UnhandledError" May 10 00:44:33.646515 kubelet[1609]: I0510 00:44:33.646488 1609 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 10 00:44:33.656555 kubelet[1609]: E0510 00:44:33.656518 1609 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 10 00:44:33.656555 kubelet[1609]: I0510 00:44:33.656554 1609 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 10 00:44:33.660756 kubelet[1609]: I0510 00:44:33.660728 1609 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 10 00:44:33.662352 kubelet[1609]: I0510 00:44:33.662299 1609 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 10 00:44:33.662617 kubelet[1609]: I0510 00:44:33.662352 1609 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-2i5m2.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 10 00:44:33.662762 kubelet[1609]: I0510 00:44:33.662637 1609 topology_manager.go:138] "Creating topology manager with none policy" May 10 00:44:33.662762 kubelet[1609]: I0510 00:44:33.662650 1609 container_manager_linux.go:304] "Creating device plugin manager" May 10 00:44:33.662829 kubelet[1609]: I0510 00:44:33.662801 1609 state_mem.go:36] "Initialized new in-memory state store" May 10 00:44:33.667964 kubelet[1609]: I0510 00:44:33.667946 1609 kubelet.go:446] "Attempting to sync node with API server" May 10 00:44:33.668029 kubelet[1609]: I0510 00:44:33.667968 1609 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 10 00:44:33.668029 kubelet[1609]: I0510 00:44:33.667993 1609 kubelet.go:352] "Adding apiserver pod source" May 10 00:44:33.668029 kubelet[1609]: I0510 00:44:33.668006 1609 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 10 00:44:33.680402 kubelet[1609]: W0510 00:44:33.680176 1609 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.244.93.58:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.244.93.58:6443: connect: connection refused May 10 00:44:33.680402 kubelet[1609]: E0510 00:44:33.680244 1609 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.244.93.58:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.244.93.58:6443: connect: connection refused" logger="UnhandledError" May 10 00:44:33.680402 kubelet[1609]: W0510 
00:44:33.680329 1609 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.244.93.58:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-2i5m2.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.244.93.58:6443: connect: connection refused May 10 00:44:33.680402 kubelet[1609]: E0510 00:44:33.680359 1609 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.244.93.58:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-2i5m2.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.244.93.58:6443: connect: connection refused" logger="UnhandledError" May 10 00:44:33.680598 kubelet[1609]: I0510 00:44:33.680436 1609 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 10 00:44:33.680862 kubelet[1609]: I0510 00:44:33.680847 1609 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 10 00:44:33.681568 kubelet[1609]: W0510 00:44:33.681547 1609 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 10 00:44:33.691335 kubelet[1609]: I0510 00:44:33.691299 1609 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 10 00:44:33.691410 kubelet[1609]: I0510 00:44:33.691354 1609 server.go:1287] "Started kubelet" May 10 00:44:33.693924 kubelet[1609]: I0510 00:44:33.693881 1609 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 10 00:44:33.695018 kubelet[1609]: I0510 00:44:33.695001 1609 server.go:490] "Adding debug handlers to kubelet server" May 10 00:44:33.696630 kubelet[1609]: I0510 00:44:33.696578 1609 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 10 00:44:33.696841 kubelet[1609]: I0510 00:44:33.696827 1609 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 10 00:44:33.701386 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
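The container manager dump above includes the node's HardEvictionThresholds as JSON. The Python sketch below, working from a trimmed copy of that dump (two of the five thresholds), lists each eviction signal and its threshold; it is illustrative only.

```python
"""List the hard-eviction signals from the nodeConfig JSON dumped above."""
import json

node_config = json.loads("""
{"NodeName":"srv-2i5m2.gb1.brightbox.com",
 "HardEvictionThresholds":[
   {"Signal":"memory.available","Operator":"LessThan",
    "Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},
   {"Signal":"nodefs.available","Operator":"LessThan",
    "Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}]}
""")

for t in node_config["HardEvictionThresholds"]:
    value = t["Value"]["Quantity"] or f'{t["Value"]["Percentage"]:.0%}'
    print(f'{t["Signal"]}: evict when {t["Operator"]} {value}')
# memory.available: evict when LessThan 100Mi
# nodefs.available: evict when LessThan 10%
```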
May 10 00:44:33.701815 kubelet[1609]: I0510 00:44:33.701777 1609 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 10 00:44:33.702307 kubelet[1609]: E0510 00:44:33.697002 1609 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.244.93.58:6443/api/v1/namespaces/default/events\": dial tcp 10.244.93.58:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-2i5m2.gb1.brightbox.com.183e03ca7d91d7f7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-2i5m2.gb1.brightbox.com,UID:srv-2i5m2.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-2i5m2.gb1.brightbox.com,},FirstTimestamp:2025-05-10 00:44:33.691318263 +0000 UTC m=+0.495543612,LastTimestamp:2025-05-10 00:44:33.691318263 +0000 UTC m=+0.495543612,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-2i5m2.gb1.brightbox.com,}" May 10 00:44:33.704803 kubelet[1609]: I0510 00:44:33.704745 1609 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 10 00:44:33.711045 kubelet[1609]: E0510 00:44:33.711008 1609 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 10 00:44:33.711298 kubelet[1609]: E0510 00:44:33.711276 1609 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"srv-2i5m2.gb1.brightbox.com\" not found" May 10 00:44:33.711355 kubelet[1609]: I0510 00:44:33.711347 1609 volume_manager.go:297] "Starting Kubelet Volume Manager" May 10 00:44:33.711655 kubelet[1609]: I0510 00:44:33.711633 1609 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 10 00:44:33.711758 kubelet[1609]: I0510 00:44:33.711742 1609 reconciler.go:26] "Reconciler: start to sync state" May 10 00:44:33.712959 kubelet[1609]: W0510 00:44:33.712885 1609 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.244.93.58:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.244.93.58:6443: connect: connection refused May 10 00:44:33.713029 kubelet[1609]: E0510 00:44:33.712979 1609 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.244.93.58:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.244.93.58:6443: connect: connection refused" logger="UnhandledError" May 10 00:44:33.715415 kubelet[1609]: I0510 00:44:33.715385 1609 factory.go:221] Registration of the containerd container factory successfully May 10 00:44:33.715415 kubelet[1609]: I0510 00:44:33.715415 1609 factory.go:221] Registration of the systemd container factory successfully May 10 00:44:33.715570 kubelet[1609]: I0510 00:44:33.715546 1609 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 10 00:44:33.722583 kubelet[1609]: E0510 00:44:33.722465 1609 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.244.93.58:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-2i5m2.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.93.58:6443: connect: connection refused" interval="200ms" May 10 00:44:33.737284 kubelet[1609]: I0510 00:44:33.735610 1609 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 10 00:44:33.739241 kubelet[1609]: I0510 00:44:33.739203 1609 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 10 00:44:33.739323 kubelet[1609]: I0510 00:44:33.739252 1609 status_manager.go:227] "Starting to sync pod status with apiserver" May 10 00:44:33.739323 kubelet[1609]: I0510 00:44:33.739293 1609 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 10 00:44:33.739323 kubelet[1609]: I0510 00:44:33.739303 1609 kubelet.go:2388] "Starting kubelet main sync loop" May 10 00:44:33.739582 kubelet[1609]: E0510 00:44:33.739558 1609 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 10 00:44:33.741524 kubelet[1609]: W0510 00:44:33.741478 1609 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.244.93.58:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.244.93.58:6443: connect: connection refused May 10 00:44:33.741651 kubelet[1609]: E0510 00:44:33.741634 1609 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.244.93.58:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.244.93.58:6443: connect: connection refused" logger="UnhandledError" May 10 00:44:33.745798 kubelet[1609]: I0510 00:44:33.745784 1609 cpu_manager.go:221] "Starting CPU manager" policy="none" May 10 00:44:33.745900 kubelet[1609]: I0510 00:44:33.745890 1609 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 10 00:44:33.745975 kubelet[1609]: I0510 00:44:33.745968 1609 state_mem.go:36] "Initialized new in-memory state store" May 10 00:44:33.747832 kubelet[1609]: I0510 00:44:33.747818 1609 policy_none.go:49] "None policy: Start" May 10 00:44:33.747965 kubelet[1609]: I0510 00:44:33.747952 1609 memory_manager.go:186] "Starting memorymanager" policy="None" May 10 00:44:33.748075 kubelet[1609]: I0510 00:44:33.748067 1609 state_mem.go:35] "Initializing new in-memory state store" May 10 00:44:33.754164 systemd[1]: Created slice kubepods.slice. May 10 00:44:33.758891 systemd[1]: Created slice kubepods-burstable.slice. May 10 00:44:33.761678 systemd[1]: Created slice kubepods-besteffort.slice. 
May 10 00:44:33.767031 kubelet[1609]: I0510 00:44:33.767013 1609 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 10 00:44:33.768810 kubelet[1609]: I0510 00:44:33.768795 1609 eviction_manager.go:189] "Eviction manager: starting control loop" May 10 00:44:33.769501 kubelet[1609]: I0510 00:44:33.769463 1609 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 10 00:44:33.769842 kubelet[1609]: I0510 00:44:33.769819 1609 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 10 00:44:33.770518 kubelet[1609]: E0510 00:44:33.770488 1609 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 10 00:44:33.770607 kubelet[1609]: E0510 00:44:33.770546 1609 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-2i5m2.gb1.brightbox.com\" not found" May 10 00:44:33.807779 update_engine[1191]: I0510 00:44:33.807575 1191 update_attempter.cc:509] Updating boot flags... May 10 00:44:33.849516 systemd[1]: Created slice kubepods-burstable-pod2d7bda3fccb2f467ef8176a78294173d.slice. May 10 00:44:33.856620 kubelet[1609]: E0510 00:44:33.854938 1609 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-2i5m2.gb1.brightbox.com\" not found" node="srv-2i5m2.gb1.brightbox.com" May 10 00:44:33.861647 systemd[1]: Created slice kubepods-burstable-pod075c9df3fa88c8b40b67d797286022a8.slice. May 10 00:44:33.863576 kubelet[1609]: E0510 00:44:33.863544 1609 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-2i5m2.gb1.brightbox.com\" not found" node="srv-2i5m2.gb1.brightbox.com" May 10 00:44:33.865724 systemd[1]: Created slice kubepods-burstable-pod056117dd1bc37372770bb8d75e717325.slice. 
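Every "connection refused" above points at https://10.244.93.58:6443: the kubelet is running before the kube-apiserver static pod it is about to launch. A minimal TCP probe of that endpoint, purely as an illustration of the state being logged:

```python
"""Probe the API server endpoint the kubelet keeps failing to reach.

Address and port are the ones in the log; a refused TCP connect is
exactly the 'connection refused' condition logged above.
"""
import socket

def apiserver_reachable(host: str = "10.244.93.58", port: int = 6443, timeout: float = 1.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(apiserver_reachable())  # False until the kube-apiserver static pod is up
```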
May 10 00:44:33.867723 kubelet[1609]: E0510 00:44:33.867702 1609 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-2i5m2.gb1.brightbox.com\" not found" node="srv-2i5m2.gb1.brightbox.com" May 10 00:44:33.878437 kubelet[1609]: I0510 00:44:33.878413 1609 kubelet_node_status.go:76] "Attempting to register node" node="srv-2i5m2.gb1.brightbox.com" May 10 00:44:33.878822 kubelet[1609]: E0510 00:44:33.878796 1609 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.244.93.58:6443/api/v1/nodes\": dial tcp 10.244.93.58:6443: connect: connection refused" node="srv-2i5m2.gb1.brightbox.com" May 10 00:44:33.924238 kubelet[1609]: I0510 00:44:33.923968 1609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2d7bda3fccb2f467ef8176a78294173d-ca-certs\") pod \"kube-apiserver-srv-2i5m2.gb1.brightbox.com\" (UID: \"2d7bda3fccb2f467ef8176a78294173d\") " pod="kube-system/kube-apiserver-srv-2i5m2.gb1.brightbox.com" May 10 00:44:33.924238 kubelet[1609]: I0510 00:44:33.923998 1609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2d7bda3fccb2f467ef8176a78294173d-k8s-certs\") pod \"kube-apiserver-srv-2i5m2.gb1.brightbox.com\" (UID: \"2d7bda3fccb2f467ef8176a78294173d\") " pod="kube-system/kube-apiserver-srv-2i5m2.gb1.brightbox.com" May 10 00:44:33.924238 kubelet[1609]: I0510 00:44:33.924022 1609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2d7bda3fccb2f467ef8176a78294173d-usr-share-ca-certificates\") pod \"kube-apiserver-srv-2i5m2.gb1.brightbox.com\" (UID: \"2d7bda3fccb2f467ef8176a78294173d\") " pod="kube-system/kube-apiserver-srv-2i5m2.gb1.brightbox.com" May 10 00:44:33.924238 kubelet[1609]: I0510 00:44:33.924040 1609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/075c9df3fa88c8b40b67d797286022a8-ca-certs\") pod \"kube-controller-manager-srv-2i5m2.gb1.brightbox.com\" (UID: \"075c9df3fa88c8b40b67d797286022a8\") " pod="kube-system/kube-controller-manager-srv-2i5m2.gb1.brightbox.com" May 10 00:44:33.924238 kubelet[1609]: I0510 00:44:33.924056 1609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/075c9df3fa88c8b40b67d797286022a8-kubeconfig\") pod \"kube-controller-manager-srv-2i5m2.gb1.brightbox.com\" (UID: \"075c9df3fa88c8b40b67d797286022a8\") " pod="kube-system/kube-controller-manager-srv-2i5m2.gb1.brightbox.com" May 10 00:44:33.924503 kubelet[1609]: I0510 00:44:33.924072 1609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/075c9df3fa88c8b40b67d797286022a8-flexvolume-dir\") pod \"kube-controller-manager-srv-2i5m2.gb1.brightbox.com\" (UID: \"075c9df3fa88c8b40b67d797286022a8\") " pod="kube-system/kube-controller-manager-srv-2i5m2.gb1.brightbox.com" May 10 00:44:33.924503 kubelet[1609]: I0510 00:44:33.924090 1609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/075c9df3fa88c8b40b67d797286022a8-k8s-certs\") pod 
\"kube-controller-manager-srv-2i5m2.gb1.brightbox.com\" (UID: \"075c9df3fa88c8b40b67d797286022a8\") " pod="kube-system/kube-controller-manager-srv-2i5m2.gb1.brightbox.com" May 10 00:44:33.924503 kubelet[1609]: I0510 00:44:33.924105 1609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/075c9df3fa88c8b40b67d797286022a8-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-2i5m2.gb1.brightbox.com\" (UID: \"075c9df3fa88c8b40b67d797286022a8\") " pod="kube-system/kube-controller-manager-srv-2i5m2.gb1.brightbox.com" May 10 00:44:33.924503 kubelet[1609]: I0510 00:44:33.924121 1609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/056117dd1bc37372770bb8d75e717325-kubeconfig\") pod \"kube-scheduler-srv-2i5m2.gb1.brightbox.com\" (UID: \"056117dd1bc37372770bb8d75e717325\") " pod="kube-system/kube-scheduler-srv-2i5m2.gb1.brightbox.com" May 10 00:44:33.924631 kubelet[1609]: E0510 00:44:33.924515 1609 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.93.58:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-2i5m2.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.93.58:6443: connect: connection refused" interval="400ms" May 10 00:44:34.084020 kubelet[1609]: I0510 00:44:34.083842 1609 kubelet_node_status.go:76] "Attempting to register node" node="srv-2i5m2.gb1.brightbox.com" May 10 00:44:34.084836 kubelet[1609]: E0510 00:44:34.084750 1609 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.244.93.58:6443/api/v1/nodes\": dial tcp 10.244.93.58:6443: connect: connection refused" node="srv-2i5m2.gb1.brightbox.com" May 10 00:44:34.160553 env[1194]: time="2025-05-10T00:44:34.158814086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-2i5m2.gb1.brightbox.com,Uid:2d7bda3fccb2f467ef8176a78294173d,Namespace:kube-system,Attempt:0,}" May 10 00:44:34.166310 env[1194]: time="2025-05-10T00:44:34.166039962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-2i5m2.gb1.brightbox.com,Uid:075c9df3fa88c8b40b67d797286022a8,Namespace:kube-system,Attempt:0,}" May 10 00:44:34.169972 env[1194]: time="2025-05-10T00:44:34.169475849Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-2i5m2.gb1.brightbox.com,Uid:056117dd1bc37372770bb8d75e717325,Namespace:kube-system,Attempt:0,}" May 10 00:44:34.326125 kubelet[1609]: E0510 00:44:34.325980 1609 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.93.58:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-2i5m2.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.93.58:6443: connect: connection refused" interval="800ms" May 10 00:44:34.506483 kubelet[1609]: I0510 00:44:34.506387 1609 kubelet_node_status.go:76] "Attempting to register node" node="srv-2i5m2.gb1.brightbox.com" May 10 00:44:34.507631 kubelet[1609]: E0510 00:44:34.507583 1609 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.244.93.58:6443/api/v1/nodes\": dial tcp 10.244.93.58:6443: connect: connection refused" node="srv-2i5m2.gb1.brightbox.com" May 10 00:44:34.724742 env[1194]: time="2025-05-10T00:44:34.724681515Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:44:34.725045 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2467493186.mount: Deactivated successfully. May 10 00:44:34.727862 env[1194]: time="2025-05-10T00:44:34.727833389Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:44:34.729805 env[1194]: time="2025-05-10T00:44:34.729781470Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:44:34.732423 env[1194]: time="2025-05-10T00:44:34.732395983Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:44:34.733377 env[1194]: time="2025-05-10T00:44:34.733346842Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:44:34.736675 env[1194]: time="2025-05-10T00:44:34.736634107Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:44:34.739966 env[1194]: time="2025-05-10T00:44:34.739928608Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:44:34.740558 env[1194]: time="2025-05-10T00:44:34.740537570Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:44:34.741141 env[1194]: time="2025-05-10T00:44:34.741116576Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:44:34.741757 env[1194]: time="2025-05-10T00:44:34.741724357Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:44:34.742287 env[1194]: time="2025-05-10T00:44:34.742265759Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:44:34.742804 env[1194]: time="2025-05-10T00:44:34.742782594Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:44:34.783322 env[1194]: time="2025-05-10T00:44:34.783128541Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:44:34.783497 env[1194]: time="2025-05-10T00:44:34.783180018Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:44:34.783497 env[1194]: time="2025-05-10T00:44:34.783201427Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:44:34.784078 env[1194]: time="2025-05-10T00:44:34.783949017Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2e7ff9654357729b6b9ba52a521828d976427dfe09e5567d25c5327f8e62f7d2 pid=1674 runtime=io.containerd.runc.v2 May 10 00:44:34.786391 env[1194]: time="2025-05-10T00:44:34.786330170Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:44:34.786539 env[1194]: time="2025-05-10T00:44:34.786514472Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:44:34.786639 env[1194]: time="2025-05-10T00:44:34.786618536Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:44:34.786881 env[1194]: time="2025-05-10T00:44:34.786835089Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:44:34.786990 env[1194]: time="2025-05-10T00:44:34.786969363Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:44:34.787075 env[1194]: time="2025-05-10T00:44:34.787056899Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:44:34.787326 env[1194]: time="2025-05-10T00:44:34.787294700Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/68c11b7213b5460fe393f653a121eba0cd2f14f35c37c7cc1f771ff9328aa389 pid=1680 runtime=io.containerd.runc.v2 May 10 00:44:34.787518 env[1194]: time="2025-05-10T00:44:34.787492750Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fb9f6ab94c50dda046d102ce3037caf8a2492ab0e80f0c8fb295d6166f4bda58 pid=1681 runtime=io.containerd.runc.v2 May 10 00:44:34.803422 systemd[1]: Started cri-containerd-2e7ff9654357729b6b9ba52a521828d976427dfe09e5567d25c5327f8e62f7d2.scope. May 10 00:44:34.835072 systemd[1]: Started cri-containerd-68c11b7213b5460fe393f653a121eba0cd2f14f35c37c7cc1f771ff9328aa389.scope. May 10 00:44:34.841817 systemd[1]: Started cri-containerd-fb9f6ab94c50dda046d102ce3037caf8a2492ab0e80f0c8fb295d6166f4bda58.scope. 
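Each containerd shim above logs "starting signal loop" with its sandbox path under /run/containerd/io.containerd.runtime.v2.task/k8s.io/&lt;id&gt; and its pid. A small illustrative regex (not a containerd API) for pulling those IDs out of such lines:

```python
"""Extract sandbox IDs from the 'starting signal loop' shim entries above."""
import re

SHIM_RE = re.compile(
    r"starting signal loop.*?"
    r"path=/run/containerd/io\.containerd\.runtime\.v2\.task/k8s\.io/(?P<sandbox>[0-9a-f]{64}) "
    r"pid=(?P<pid>\d+)"
)

line = ('level=info msg="starting signal loop" namespace=k8s.io '
        'path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/'
        '2e7ff9654357729b6b9ba52a521828d976427dfe09e5567d25c5327f8e62f7d2 pid=1674 '
        'runtime=io.containerd.runc.v2')

m = SHIM_RE.search(line)
if m:
    print(m.group("sandbox"), m.group("pid"))  # sandbox id and shim pid
```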
May 10 00:44:34.890537 kubelet[1609]: W0510 00:44:34.890407 1609 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.244.93.58:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.244.93.58:6443: connect: connection refused May 10 00:44:34.890537 kubelet[1609]: E0510 00:44:34.890495 1609 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.244.93.58:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.244.93.58:6443: connect: connection refused" logger="UnhandledError" May 10 00:44:34.916761 env[1194]: time="2025-05-10T00:44:34.916690284Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-2i5m2.gb1.brightbox.com,Uid:075c9df3fa88c8b40b67d797286022a8,Namespace:kube-system,Attempt:0,} returns sandbox id \"2e7ff9654357729b6b9ba52a521828d976427dfe09e5567d25c5327f8e62f7d2\"" May 10 00:44:34.921030 env[1194]: time="2025-05-10T00:44:34.920994952Z" level=info msg="CreateContainer within sandbox \"2e7ff9654357729b6b9ba52a521828d976427dfe09e5567d25c5327f8e62f7d2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 10 00:44:34.928360 env[1194]: time="2025-05-10T00:44:34.928321076Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-2i5m2.gb1.brightbox.com,Uid:2d7bda3fccb2f467ef8176a78294173d,Namespace:kube-system,Attempt:0,} returns sandbox id \"68c11b7213b5460fe393f653a121eba0cd2f14f35c37c7cc1f771ff9328aa389\"" May 10 00:44:34.932224 kubelet[1609]: W0510 00:44:34.932160 1609 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.244.93.58:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-2i5m2.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.244.93.58:6443: connect: connection refused May 10 00:44:34.932350 kubelet[1609]: E0510 00:44:34.932252 1609 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.244.93.58:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-2i5m2.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.244.93.58:6443: connect: connection refused" logger="UnhandledError" May 10 00:44:34.946966 env[1194]: time="2025-05-10T00:44:34.946893240Z" level=info msg="CreateContainer within sandbox \"68c11b7213b5460fe393f653a121eba0cd2f14f35c37c7cc1f771ff9328aa389\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 10 00:44:34.948575 env[1194]: time="2025-05-10T00:44:34.948465992Z" level=info msg="CreateContainer within sandbox \"2e7ff9654357729b6b9ba52a521828d976427dfe09e5567d25c5327f8e62f7d2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"87756cedcbe4617e00e37445a837d253841a87c0b1025c5205e52389dc5861d3\"" May 10 00:44:34.949624 env[1194]: time="2025-05-10T00:44:34.949597496Z" level=info msg="StartContainer for \"87756cedcbe4617e00e37445a837d253841a87c0b1025c5205e52389dc5861d3\"" May 10 00:44:34.961645 env[1194]: time="2025-05-10T00:44:34.961601853Z" level=info msg="CreateContainer within sandbox \"68c11b7213b5460fe393f653a121eba0cd2f14f35c37c7cc1f771ff9328aa389\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"31a6884790eba074419322f332336a80c80a4c263009a1cfcc708b0f9a2645ec\"" May 10 00:44:34.962370 env[1194]: 
time="2025-05-10T00:44:34.962343643Z" level=info msg="StartContainer for \"31a6884790eba074419322f332336a80c80a4c263009a1cfcc708b0f9a2645ec\"" May 10 00:44:34.968795 env[1194]: time="2025-05-10T00:44:34.968763999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-2i5m2.gb1.brightbox.com,Uid:056117dd1bc37372770bb8d75e717325,Namespace:kube-system,Attempt:0,} returns sandbox id \"fb9f6ab94c50dda046d102ce3037caf8a2492ab0e80f0c8fb295d6166f4bda58\"" May 10 00:44:34.971208 env[1194]: time="2025-05-10T00:44:34.971180595Z" level=info msg="CreateContainer within sandbox \"fb9f6ab94c50dda046d102ce3037caf8a2492ab0e80f0c8fb295d6166f4bda58\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 10 00:44:34.984379 systemd[1]: Started cri-containerd-87756cedcbe4617e00e37445a837d253841a87c0b1025c5205e52389dc5861d3.scope. May 10 00:44:34.992620 env[1194]: time="2025-05-10T00:44:34.992566872Z" level=info msg="CreateContainer within sandbox \"fb9f6ab94c50dda046d102ce3037caf8a2492ab0e80f0c8fb295d6166f4bda58\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"8e03f216012a4f74abc0bb9155e3c48b639c38b0c78fbd158c5a9a3d34275895\"" May 10 00:44:34.994864 env[1194]: time="2025-05-10T00:44:34.994836158Z" level=info msg="StartContainer for \"8e03f216012a4f74abc0bb9155e3c48b639c38b0c78fbd158c5a9a3d34275895\"" May 10 00:44:35.001427 systemd[1]: Started cri-containerd-31a6884790eba074419322f332336a80c80a4c263009a1cfcc708b0f9a2645ec.scope. May 10 00:44:35.031692 systemd[1]: Started cri-containerd-8e03f216012a4f74abc0bb9155e3c48b639c38b0c78fbd158c5a9a3d34275895.scope. May 10 00:44:35.081136 env[1194]: time="2025-05-10T00:44:35.081024611Z" level=info msg="StartContainer for \"87756cedcbe4617e00e37445a837d253841a87c0b1025c5205e52389dc5861d3\" returns successfully" May 10 00:44:35.100773 env[1194]: time="2025-05-10T00:44:35.100716694Z" level=info msg="StartContainer for \"31a6884790eba074419322f332336a80c80a4c263009a1cfcc708b0f9a2645ec\" returns successfully" May 10 00:44:35.118850 env[1194]: time="2025-05-10T00:44:35.118793454Z" level=info msg="StartContainer for \"8e03f216012a4f74abc0bb9155e3c48b639c38b0c78fbd158c5a9a3d34275895\" returns successfully" May 10 00:44:35.127330 kubelet[1609]: E0510 00:44:35.127267 1609 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.93.58:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-2i5m2.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.93.58:6443: connect: connection refused" interval="1.6s" May 10 00:44:35.282175 kubelet[1609]: W0510 00:44:35.282090 1609 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.244.93.58:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.244.93.58:6443: connect: connection refused May 10 00:44:35.282449 kubelet[1609]: E0510 00:44:35.282420 1609 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.244.93.58:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.244.93.58:6443: connect: connection refused" logger="UnhandledError" May 10 00:44:35.292102 kubelet[1609]: W0510 00:44:35.292052 1609 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.244.93.58:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": 
dial tcp 10.244.93.58:6443: connect: connection refused May 10 00:44:35.292257 kubelet[1609]: E0510 00:44:35.292223 1609 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.244.93.58:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.244.93.58:6443: connect: connection refused" logger="UnhandledError" May 10 00:44:35.310848 kubelet[1609]: I0510 00:44:35.310814 1609 kubelet_node_status.go:76] "Attempting to register node" node="srv-2i5m2.gb1.brightbox.com" May 10 00:44:35.311367 kubelet[1609]: E0510 00:44:35.311331 1609 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.244.93.58:6443/api/v1/nodes\": dial tcp 10.244.93.58:6443: connect: connection refused" node="srv-2i5m2.gb1.brightbox.com" May 10 00:44:35.737099 kubelet[1609]: E0510 00:44:35.737040 1609 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.244.93.58:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.244.93.58:6443: connect: connection refused" logger="UnhandledError" May 10 00:44:35.759812 kubelet[1609]: E0510 00:44:35.759782 1609 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-2i5m2.gb1.brightbox.com\" not found" node="srv-2i5m2.gb1.brightbox.com" May 10 00:44:35.760402 kubelet[1609]: E0510 00:44:35.760384 1609 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-2i5m2.gb1.brightbox.com\" not found" node="srv-2i5m2.gb1.brightbox.com" May 10 00:44:35.764392 kubelet[1609]: E0510 00:44:35.764374 1609 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-2i5m2.gb1.brightbox.com\" not found" node="srv-2i5m2.gb1.brightbox.com" May 10 00:44:36.765432 kubelet[1609]: E0510 00:44:36.765392 1609 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-2i5m2.gb1.brightbox.com\" not found" node="srv-2i5m2.gb1.brightbox.com" May 10 00:44:36.765836 kubelet[1609]: E0510 00:44:36.765817 1609 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-2i5m2.gb1.brightbox.com\" not found" node="srv-2i5m2.gb1.brightbox.com" May 10 00:44:36.914250 kubelet[1609]: I0510 00:44:36.914214 1609 kubelet_node_status.go:76] "Attempting to register node" node="srv-2i5m2.gb1.brightbox.com" May 10 00:44:37.357414 kubelet[1609]: E0510 00:44:37.357364 1609 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-2i5m2.gb1.brightbox.com\" not found" node="srv-2i5m2.gb1.brightbox.com" May 10 00:44:37.407449 kubelet[1609]: I0510 00:44:37.407365 1609 kubelet_node_status.go:79] "Successfully registered node" node="srv-2i5m2.gb1.brightbox.com" May 10 00:44:37.412916 kubelet[1609]: I0510 00:44:37.412878 1609 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-2i5m2.gb1.brightbox.com" May 10 00:44:37.487480 kubelet[1609]: E0510 00:44:37.487407 1609 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-2i5m2.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was 
found" pod="kube-system/kube-apiserver-srv-2i5m2.gb1.brightbox.com" May 10 00:44:37.487480 kubelet[1609]: I0510 00:44:37.487470 1609 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-2i5m2.gb1.brightbox.com" May 10 00:44:37.492395 kubelet[1609]: E0510 00:44:37.492359 1609 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-2i5m2.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-srv-2i5m2.gb1.brightbox.com" May 10 00:44:37.492570 kubelet[1609]: I0510 00:44:37.492558 1609 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-2i5m2.gb1.brightbox.com" May 10 00:44:37.495277 kubelet[1609]: E0510 00:44:37.495258 1609 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-2i5m2.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-2i5m2.gb1.brightbox.com" May 10 00:44:37.682087 kubelet[1609]: I0510 00:44:37.681078 1609 apiserver.go:52] "Watching apiserver" May 10 00:44:37.712220 kubelet[1609]: I0510 00:44:37.712163 1609 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 10 00:44:39.688927 systemd[1]: Reloading. May 10 00:44:39.823076 kubelet[1609]: I0510 00:44:39.823040 1609 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-2i5m2.gb1.brightbox.com" May 10 00:44:39.824080 /usr/lib/systemd/system-generators/torcx-generator[1916]: time="2025-05-10T00:44:39Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 10 00:44:39.824109 /usr/lib/systemd/system-generators/torcx-generator[1916]: time="2025-05-10T00:44:39Z" level=info msg="torcx already run" May 10 00:44:39.834473 kubelet[1609]: W0510 00:44:39.833949 1609 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 10 00:44:39.933965 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 10 00:44:39.934295 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 10 00:44:39.958265 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 10 00:44:40.077039 kubelet[1609]: I0510 00:44:40.076983 1609 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 10 00:44:40.077819 systemd[1]: Stopping kubelet.service... May 10 00:44:40.101969 systemd[1]: kubelet.service: Deactivated successfully. May 10 00:44:40.102966 systemd[1]: Stopped kubelet.service. May 10 00:44:40.109745 systemd[1]: Starting kubelet.service... May 10 00:44:41.145815 systemd[1]: Started kubelet.service. 
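The lease-creation retries earlier in the log back off by doubling: 200ms, 400ms, 800ms, then 1.6s. A tiny sketch of that doubling pattern; the cap and step count are assumptions for illustration, not values taken from the log.

```python
"""Reproduce the doubling retry intervals seen in the lease errors above."""

def backoff_intervals(start: float = 0.2, factor: float = 2.0, cap: float = 7.0, steps: int = 6):
    interval = start
    for _ in range(steps):
        yield min(interval, cap)
        interval *= factor

print([f"{i:g}s" for i in backoff_intervals()])
# ['0.2s', '0.4s', '0.8s', '1.6s', '3.2s', '6.4s']
```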
May 10 00:44:41.235967 kubelet[1968]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 10 00:44:41.235967 kubelet[1968]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 10 00:44:41.235967 kubelet[1968]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 10 00:44:41.236509 kubelet[1968]: I0510 00:44:41.236085 1968 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 10 00:44:41.249033 sudo[1979]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 10 00:44:41.249319 sudo[1979]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) May 10 00:44:41.253382 kubelet[1968]: I0510 00:44:41.253349 1968 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 10 00:44:41.253545 kubelet[1968]: I0510 00:44:41.253523 1968 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 10 00:44:41.255418 kubelet[1968]: I0510 00:44:41.255401 1968 server.go:954] "Client rotation is on, will bootstrap in background" May 10 00:44:41.260003 kubelet[1968]: I0510 00:44:41.259985 1968 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 10 00:44:41.264613 kubelet[1968]: I0510 00:44:41.264590 1968 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 10 00:44:41.287607 kubelet[1968]: E0510 00:44:41.287195 1968 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 10 00:44:41.287607 kubelet[1968]: I0510 00:44:41.287223 1968 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 10 00:44:41.291536 kubelet[1968]: I0510 00:44:41.291518 1968 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 10 00:44:41.291933 kubelet[1968]: I0510 00:44:41.291904 1968 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 10 00:44:41.292212 kubelet[1968]: I0510 00:44:41.292018 1968 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-2i5m2.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 10 00:44:41.292507 kubelet[1968]: I0510 00:44:41.292495 1968 topology_manager.go:138] "Creating topology manager with none policy" May 10 00:44:41.292591 kubelet[1968]: I0510 00:44:41.292582 1968 container_manager_linux.go:304] "Creating device plugin manager" May 10 00:44:41.292715 kubelet[1968]: I0510 00:44:41.292707 1968 state_mem.go:36] "Initialized new in-memory state store" May 10 00:44:41.292930 kubelet[1968]: I0510 00:44:41.292921 1968 kubelet.go:446] "Attempting to sync node with API server" May 10 00:44:41.293084 kubelet[1968]: I0510 00:44:41.293076 1968 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 10 00:44:41.293156 kubelet[1968]: I0510 00:44:41.293148 1968 kubelet.go:352] "Adding apiserver pod source" May 10 00:44:41.293220 kubelet[1968]: I0510 00:44:41.293212 1968 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 10 00:44:41.300703 kubelet[1968]: I0510 00:44:41.300685 1968 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 10 00:44:41.301275 kubelet[1968]: I0510 00:44:41.301258 1968 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 10 00:44:41.302789 kubelet[1968]: I0510 00:44:41.302771 1968 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 10 00:44:41.302904 kubelet[1968]: I0510 00:44:41.302893 1968 server.go:1287] "Started kubelet" May 10 00:44:41.309743 kubelet[1968]: I0510 00:44:41.309685 1968 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 10 00:44:41.315063 kubelet[1968]: I0510 00:44:41.315003 1968 
ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 10 00:44:41.316171 kubelet[1968]: I0510 00:44:41.316155 1968 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 10 00:44:41.318311 kubelet[1968]: I0510 00:44:41.318293 1968 server.go:490] "Adding debug handlers to kubelet server" May 10 00:44:41.327322 kubelet[1968]: I0510 00:44:41.327306 1968 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 10 00:44:41.343060 kubelet[1968]: I0510 00:44:41.343035 1968 volume_manager.go:297] "Starting Kubelet Volume Manager" May 10 00:44:41.343301 kubelet[1968]: E0510 00:44:41.343283 1968 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"srv-2i5m2.gb1.brightbox.com\" not found" May 10 00:44:41.343550 kubelet[1968]: I0510 00:44:41.343524 1968 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 10 00:44:41.343681 kubelet[1968]: I0510 00:44:41.343670 1968 reconciler.go:26] "Reconciler: start to sync state" May 10 00:44:41.351173 kubelet[1968]: I0510 00:44:41.351150 1968 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 10 00:44:41.361056 kubelet[1968]: I0510 00:44:41.361032 1968 factory.go:221] Registration of the containerd container factory successfully May 10 00:44:41.361263 kubelet[1968]: I0510 00:44:41.361250 1968 factory.go:221] Registration of the systemd container factory successfully May 10 00:44:41.361443 kubelet[1968]: I0510 00:44:41.361424 1968 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 10 00:44:41.370770 kubelet[1968]: I0510 00:44:41.370730 1968 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 10 00:44:41.371893 kubelet[1968]: I0510 00:44:41.371874 1968 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 10 00:44:41.372024 kubelet[1968]: I0510 00:44:41.372012 1968 status_manager.go:227] "Starting to sync pod status with apiserver" May 10 00:44:41.372114 kubelet[1968]: I0510 00:44:41.372104 1968 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 10 00:44:41.372176 kubelet[1968]: I0510 00:44:41.372167 1968 kubelet.go:2388] "Starting kubelet main sync loop" May 10 00:44:41.372304 kubelet[1968]: E0510 00:44:41.372288 1968 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 10 00:44:41.381031 kubelet[1968]: E0510 00:44:41.381013 1968 kubelet.go:1561] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 10 00:44:41.433994 kubelet[1968]: I0510 00:44:41.433881 1968 cpu_manager.go:221] "Starting CPU manager" policy="none" May 10 00:44:41.435721 kubelet[1968]: I0510 00:44:41.435696 1968 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 10 00:44:41.435856 kubelet[1968]: I0510 00:44:41.435846 1968 state_mem.go:36] "Initialized new in-memory state store" May 10 00:44:41.436132 kubelet[1968]: I0510 00:44:41.436116 1968 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 10 00:44:41.436247 kubelet[1968]: I0510 00:44:41.436208 1968 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 10 00:44:41.436328 kubelet[1968]: I0510 00:44:41.436319 1968 policy_none.go:49] "None policy: Start" May 10 00:44:41.436421 kubelet[1968]: I0510 00:44:41.436411 1968 memory_manager.go:186] "Starting memorymanager" policy="None" May 10 00:44:41.436500 kubelet[1968]: I0510 00:44:41.436491 1968 state_mem.go:35] "Initializing new in-memory state store" May 10 00:44:41.436705 kubelet[1968]: I0510 00:44:41.436695 1968 state_mem.go:75] "Updated machine memory state" May 10 00:44:41.441322 kubelet[1968]: I0510 00:44:41.441302 1968 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 10 00:44:41.441653 kubelet[1968]: I0510 00:44:41.441639 1968 eviction_manager.go:189] "Eviction manager: starting control loop" May 10 00:44:41.441764 kubelet[1968]: I0510 00:44:41.441730 1968 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 10 00:44:41.442757 kubelet[1968]: I0510 00:44:41.442741 1968 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 10 00:44:41.444947 kubelet[1968]: E0510 00:44:41.444931 1968 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 10 00:44:41.473232 kubelet[1968]: I0510 00:44:41.473193 1968 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-2i5m2.gb1.brightbox.com" May 10 00:44:41.475284 kubelet[1968]: I0510 00:44:41.473709 1968 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-2i5m2.gb1.brightbox.com" May 10 00:44:41.476349 kubelet[1968]: I0510 00:44:41.476327 1968 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-2i5m2.gb1.brightbox.com" May 10 00:44:41.481247 kubelet[1968]: W0510 00:44:41.479881 1968 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 10 00:44:41.481247 kubelet[1968]: W0510 00:44:41.480616 1968 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 10 00:44:41.481247 kubelet[1968]: E0510 00:44:41.480662 1968 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-2i5m2.gb1.brightbox.com\" already exists" pod="kube-system/kube-scheduler-srv-2i5m2.gb1.brightbox.com" May 10 00:44:41.483051 kubelet[1968]: W0510 00:44:41.482418 1968 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 10 00:44:41.547980 kubelet[1968]: I0510 00:44:41.547929 1968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2d7bda3fccb2f467ef8176a78294173d-ca-certs\") pod \"kube-apiserver-srv-2i5m2.gb1.brightbox.com\" (UID: \"2d7bda3fccb2f467ef8176a78294173d\") " pod="kube-system/kube-apiserver-srv-2i5m2.gb1.brightbox.com" May 10 00:44:41.547980 kubelet[1968]: I0510 00:44:41.547984 1968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2d7bda3fccb2f467ef8176a78294173d-k8s-certs\") pod \"kube-apiserver-srv-2i5m2.gb1.brightbox.com\" (UID: \"2d7bda3fccb2f467ef8176a78294173d\") " pod="kube-system/kube-apiserver-srv-2i5m2.gb1.brightbox.com" May 10 00:44:41.548261 kubelet[1968]: I0510 00:44:41.548016 1968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2d7bda3fccb2f467ef8176a78294173d-usr-share-ca-certificates\") pod \"kube-apiserver-srv-2i5m2.gb1.brightbox.com\" (UID: \"2d7bda3fccb2f467ef8176a78294173d\") " pod="kube-system/kube-apiserver-srv-2i5m2.gb1.brightbox.com" May 10 00:44:41.548261 kubelet[1968]: I0510 00:44:41.548048 1968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/075c9df3fa88c8b40b67d797286022a8-flexvolume-dir\") pod \"kube-controller-manager-srv-2i5m2.gb1.brightbox.com\" (UID: \"075c9df3fa88c8b40b67d797286022a8\") " pod="kube-system/kube-controller-manager-srv-2i5m2.gb1.brightbox.com" May 10 00:44:41.548261 kubelet[1968]: I0510 00:44:41.548075 1968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/075c9df3fa88c8b40b67d797286022a8-kubeconfig\") pod 
\"kube-controller-manager-srv-2i5m2.gb1.brightbox.com\" (UID: \"075c9df3fa88c8b40b67d797286022a8\") " pod="kube-system/kube-controller-manager-srv-2i5m2.gb1.brightbox.com" May 10 00:44:41.548261 kubelet[1968]: I0510 00:44:41.548102 1968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/075c9df3fa88c8b40b67d797286022a8-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-2i5m2.gb1.brightbox.com\" (UID: \"075c9df3fa88c8b40b67d797286022a8\") " pod="kube-system/kube-controller-manager-srv-2i5m2.gb1.brightbox.com" May 10 00:44:41.548261 kubelet[1968]: I0510 00:44:41.548128 1968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/056117dd1bc37372770bb8d75e717325-kubeconfig\") pod \"kube-scheduler-srv-2i5m2.gb1.brightbox.com\" (UID: \"056117dd1bc37372770bb8d75e717325\") " pod="kube-system/kube-scheduler-srv-2i5m2.gb1.brightbox.com" May 10 00:44:41.548464 kubelet[1968]: I0510 00:44:41.548160 1968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/075c9df3fa88c8b40b67d797286022a8-ca-certs\") pod \"kube-controller-manager-srv-2i5m2.gb1.brightbox.com\" (UID: \"075c9df3fa88c8b40b67d797286022a8\") " pod="kube-system/kube-controller-manager-srv-2i5m2.gb1.brightbox.com" May 10 00:44:41.548464 kubelet[1968]: I0510 00:44:41.548185 1968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/075c9df3fa88c8b40b67d797286022a8-k8s-certs\") pod \"kube-controller-manager-srv-2i5m2.gb1.brightbox.com\" (UID: \"075c9df3fa88c8b40b67d797286022a8\") " pod="kube-system/kube-controller-manager-srv-2i5m2.gb1.brightbox.com" May 10 00:44:41.557290 kubelet[1968]: I0510 00:44:41.557261 1968 kubelet_node_status.go:76] "Attempting to register node" node="srv-2i5m2.gb1.brightbox.com" May 10 00:44:41.565835 kubelet[1968]: I0510 00:44:41.565616 1968 kubelet_node_status.go:125] "Node was previously registered" node="srv-2i5m2.gb1.brightbox.com" May 10 00:44:41.565835 kubelet[1968]: I0510 00:44:41.565699 1968 kubelet_node_status.go:79] "Successfully registered node" node="srv-2i5m2.gb1.brightbox.com" May 10 00:44:42.057996 sudo[1979]: pam_unix(sudo:session): session closed for user root May 10 00:44:42.314153 kubelet[1968]: I0510 00:44:42.314049 1968 apiserver.go:52] "Watching apiserver" May 10 00:44:42.343769 kubelet[1968]: I0510 00:44:42.343710 1968 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 10 00:44:42.407729 kubelet[1968]: I0510 00:44:42.407678 1968 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-2i5m2.gb1.brightbox.com" May 10 00:44:42.408477 kubelet[1968]: I0510 00:44:42.408423 1968 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-2i5m2.gb1.brightbox.com" May 10 00:44:42.421353 kubelet[1968]: W0510 00:44:42.421312 1968 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 10 00:44:42.421508 kubelet[1968]: E0510 00:44:42.421394 1968 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-2i5m2.gb1.brightbox.com\" already exists" 
pod="kube-system/kube-scheduler-srv-2i5m2.gb1.brightbox.com" May 10 00:44:42.431040 kubelet[1968]: W0510 00:44:42.431007 1968 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 10 00:44:42.431185 kubelet[1968]: E0510 00:44:42.431068 1968 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-2i5m2.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-2i5m2.gb1.brightbox.com" May 10 00:44:42.443773 kubelet[1968]: I0510 00:44:42.443710 1968 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-2i5m2.gb1.brightbox.com" podStartSLOduration=3.44368129 podStartE2EDuration="3.44368129s" podCreationTimestamp="2025-05-10 00:44:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:44:42.443375536 +0000 UTC m=+1.283658290" watchObservedRunningTime="2025-05-10 00:44:42.44368129 +0000 UTC m=+1.283964080" May 10 00:44:42.469087 kubelet[1968]: I0510 00:44:42.469016 1968 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-2i5m2.gb1.brightbox.com" podStartSLOduration=1.4689925449999999 podStartE2EDuration="1.468992545s" podCreationTimestamp="2025-05-10 00:44:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:44:42.455398389 +0000 UTC m=+1.295681148" watchObservedRunningTime="2025-05-10 00:44:42.468992545 +0000 UTC m=+1.309275278" May 10 00:44:42.469407 kubelet[1968]: I0510 00:44:42.469136 1968 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-2i5m2.gb1.brightbox.com" podStartSLOduration=1.469130501 podStartE2EDuration="1.469130501s" podCreationTimestamp="2025-05-10 00:44:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:44:42.467620144 +0000 UTC m=+1.307902878" watchObservedRunningTime="2025-05-10 00:44:42.469130501 +0000 UTC m=+1.309413258" May 10 00:44:44.204948 kubelet[1968]: I0510 00:44:44.204910 1968 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 10 00:44:44.205813 env[1194]: time="2025-05-10T00:44:44.205758257Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 10 00:44:44.206115 kubelet[1968]: I0510 00:44:44.205988 1968 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 10 00:44:44.546649 sudo[1348]: pam_unix(sudo:session): session closed for user root May 10 00:44:44.693530 sshd[1336]: pam_unix(sshd:session): session closed for user core May 10 00:44:44.700864 systemd[1]: sshd@6-10.244.93.58:22-139.178.68.195:57744.service: Deactivated successfully. May 10 00:44:44.702379 systemd[1]: session-7.scope: Deactivated successfully. May 10 00:44:44.702656 systemd[1]: session-7.scope: Consumed 6.602s CPU time. May 10 00:44:44.703496 systemd-logind[1190]: Session 7 logged out. Waiting for processes to exit. May 10 00:44:44.705008 systemd-logind[1190]: Removed session 7. May 10 00:44:45.257225 systemd[1]: Created slice kubepods-besteffort-podc7adef98_a18b_4bc4_acf9_e26aae68cce5.slice. 
May 10 00:44:45.267339 kubelet[1968]: W0510 00:44:45.267305 1968 reflector.go:569] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:srv-2i5m2.gb1.brightbox.com" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'srv-2i5m2.gb1.brightbox.com' and this object May 10 00:44:45.268138 kubelet[1968]: E0510 00:44:45.267987 1968 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:srv-2i5m2.gb1.brightbox.com\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'srv-2i5m2.gb1.brightbox.com' and this object" logger="UnhandledError" May 10 00:44:45.268856 systemd[1]: Created slice kubepods-burstable-podd3e6647e_6994_46e7_b646_0cac556f27a4.slice. May 10 00:44:45.273319 kubelet[1968]: I0510 00:44:45.273295 1968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d3e6647e-6994-46e7-b646-0cac556f27a4-xtables-lock\") pod \"cilium-hxqkb\" (UID: \"d3e6647e-6994-46e7-b646-0cac556f27a4\") " pod="kube-system/cilium-hxqkb" May 10 00:44:45.273442 kubelet[1968]: I0510 00:44:45.273333 1968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqcr9\" (UniqueName: \"kubernetes.io/projected/d3e6647e-6994-46e7-b646-0cac556f27a4-kube-api-access-kqcr9\") pod \"cilium-hxqkb\" (UID: \"d3e6647e-6994-46e7-b646-0cac556f27a4\") " pod="kube-system/cilium-hxqkb" May 10 00:44:45.273442 kubelet[1968]: I0510 00:44:45.273357 1968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d3e6647e-6994-46e7-b646-0cac556f27a4-cilium-run\") pod \"cilium-hxqkb\" (UID: \"d3e6647e-6994-46e7-b646-0cac556f27a4\") " pod="kube-system/cilium-hxqkb" May 10 00:44:45.273442 kubelet[1968]: I0510 00:44:45.273378 1968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d3e6647e-6994-46e7-b646-0cac556f27a4-clustermesh-secrets\") pod \"cilium-hxqkb\" (UID: \"d3e6647e-6994-46e7-b646-0cac556f27a4\") " pod="kube-system/cilium-hxqkb" May 10 00:44:45.273442 kubelet[1968]: I0510 00:44:45.273406 1968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d3e6647e-6994-46e7-b646-0cac556f27a4-host-proc-sys-net\") pod \"cilium-hxqkb\" (UID: \"d3e6647e-6994-46e7-b646-0cac556f27a4\") " pod="kube-system/cilium-hxqkb" May 10 00:44:45.273442 kubelet[1968]: I0510 00:44:45.273425 1968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d3e6647e-6994-46e7-b646-0cac556f27a4-host-proc-sys-kernel\") pod \"cilium-hxqkb\" (UID: \"d3e6647e-6994-46e7-b646-0cac556f27a4\") " pod="kube-system/cilium-hxqkb" May 10 00:44:45.273653 kubelet[1968]: I0510 00:44:45.273444 1968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/c7adef98-a18b-4bc4-acf9-e26aae68cce5-xtables-lock\") pod \"kube-proxy-jxww4\" (UID: \"c7adef98-a18b-4bc4-acf9-e26aae68cce5\") " pod="kube-system/kube-proxy-jxww4" May 10 00:44:45.273653 kubelet[1968]: I0510 00:44:45.273472 1968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmwqf\" (UniqueName: \"kubernetes.io/projected/c7adef98-a18b-4bc4-acf9-e26aae68cce5-kube-api-access-bmwqf\") pod \"kube-proxy-jxww4\" (UID: \"c7adef98-a18b-4bc4-acf9-e26aae68cce5\") " pod="kube-system/kube-proxy-jxww4" May 10 00:44:45.273653 kubelet[1968]: I0510 00:44:45.273555 1968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c7adef98-a18b-4bc4-acf9-e26aae68cce5-lib-modules\") pod \"kube-proxy-jxww4\" (UID: \"c7adef98-a18b-4bc4-acf9-e26aae68cce5\") " pod="kube-system/kube-proxy-jxww4" May 10 00:44:45.273653 kubelet[1968]: I0510 00:44:45.273604 1968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d3e6647e-6994-46e7-b646-0cac556f27a4-bpf-maps\") pod \"cilium-hxqkb\" (UID: \"d3e6647e-6994-46e7-b646-0cac556f27a4\") " pod="kube-system/cilium-hxqkb" May 10 00:44:45.273653 kubelet[1968]: I0510 00:44:45.273632 1968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d3e6647e-6994-46e7-b646-0cac556f27a4-cni-path\") pod \"cilium-hxqkb\" (UID: \"d3e6647e-6994-46e7-b646-0cac556f27a4\") " pod="kube-system/cilium-hxqkb" May 10 00:44:45.273653 kubelet[1968]: I0510 00:44:45.273649 1968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d3e6647e-6994-46e7-b646-0cac556f27a4-hubble-tls\") pod \"cilium-hxqkb\" (UID: \"d3e6647e-6994-46e7-b646-0cac556f27a4\") " pod="kube-system/cilium-hxqkb" May 10 00:44:45.273849 kubelet[1968]: I0510 00:44:45.273666 1968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d3e6647e-6994-46e7-b646-0cac556f27a4-etc-cni-netd\") pod \"cilium-hxqkb\" (UID: \"d3e6647e-6994-46e7-b646-0cac556f27a4\") " pod="kube-system/cilium-hxqkb" May 10 00:44:45.273849 kubelet[1968]: I0510 00:44:45.273685 1968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d3e6647e-6994-46e7-b646-0cac556f27a4-cilium-config-path\") pod \"cilium-hxqkb\" (UID: \"d3e6647e-6994-46e7-b646-0cac556f27a4\") " pod="kube-system/cilium-hxqkb" May 10 00:44:45.273849 kubelet[1968]: I0510 00:44:45.273713 1968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d3e6647e-6994-46e7-b646-0cac556f27a4-hostproc\") pod \"cilium-hxqkb\" (UID: \"d3e6647e-6994-46e7-b646-0cac556f27a4\") " pod="kube-system/cilium-hxqkb" May 10 00:44:45.273849 kubelet[1968]: I0510 00:44:45.273733 1968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d3e6647e-6994-46e7-b646-0cac556f27a4-cilium-cgroup\") pod \"cilium-hxqkb\" (UID: \"d3e6647e-6994-46e7-b646-0cac556f27a4\") " pod="kube-system/cilium-hxqkb" May 10 
00:44:45.273849 kubelet[1968]: I0510 00:44:45.273750 1968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d3e6647e-6994-46e7-b646-0cac556f27a4-lib-modules\") pod \"cilium-hxqkb\" (UID: \"d3e6647e-6994-46e7-b646-0cac556f27a4\") " pod="kube-system/cilium-hxqkb" May 10 00:44:45.273849 kubelet[1968]: I0510 00:44:45.273781 1968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c7adef98-a18b-4bc4-acf9-e26aae68cce5-kube-proxy\") pod \"kube-proxy-jxww4\" (UID: \"c7adef98-a18b-4bc4-acf9-e26aae68cce5\") " pod="kube-system/kube-proxy-jxww4" May 10 00:44:45.304363 systemd[1]: Created slice kubepods-besteffort-pod9511a17b_4a5d_4d4b_8317_73a7918e5d8e.slice. May 10 00:44:45.375249 kubelet[1968]: I0510 00:44:45.375193 1968 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" May 10 00:44:45.375717 kubelet[1968]: I0510 00:44:45.375683 1968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mldfd\" (UniqueName: \"kubernetes.io/projected/9511a17b-4a5d-4d4b-8317-73a7918e5d8e-kube-api-access-mldfd\") pod \"cilium-operator-6c4d7847fc-gcpv7\" (UID: \"9511a17b-4a5d-4d4b-8317-73a7918e5d8e\") " pod="kube-system/cilium-operator-6c4d7847fc-gcpv7" May 10 00:44:45.375844 kubelet[1968]: I0510 00:44:45.375830 1968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9511a17b-4a5d-4d4b-8317-73a7918e5d8e-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-gcpv7\" (UID: \"9511a17b-4a5d-4d4b-8317-73a7918e5d8e\") " pod="kube-system/cilium-operator-6c4d7847fc-gcpv7" May 10 00:44:45.569297 env[1194]: time="2025-05-10T00:44:45.568087471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jxww4,Uid:c7adef98-a18b-4bc4-acf9-e26aae68cce5,Namespace:kube-system,Attempt:0,}" May 10 00:44:45.601496 env[1194]: time="2025-05-10T00:44:45.601367588Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:44:45.601496 env[1194]: time="2025-05-10T00:44:45.601437524Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:44:45.601496 env[1194]: time="2025-05-10T00:44:45.601449451Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:44:45.602091 env[1194]: time="2025-05-10T00:44:45.602025095Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3f537d33a5d917efd437349dac4abe2b45da826c84e6fdf77001f868e6e8e87a pid=2048 runtime=io.containerd.runc.v2 May 10 00:44:45.608759 env[1194]: time="2025-05-10T00:44:45.608718280Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-gcpv7,Uid:9511a17b-4a5d-4d4b-8317-73a7918e5d8e,Namespace:kube-system,Attempt:0,}" May 10 00:44:45.622820 systemd[1]: Started cri-containerd-3f537d33a5d917efd437349dac4abe2b45da826c84e6fdf77001f868e6e8e87a.scope. 
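Each reconciler_common entry in the long run above announces one VerifyControllerAttachedVolume operation, so grouping them by their pod field gives a quick inventory of what every pod mounts (bpf-maps, cni-path, hubble-tls and friends for cilium-hxqkb; kube-proxy, xtables-lock, lib-modules for kube-proxy-jxww4). A throwaway Python parser over lines in this journal format; the regular expression is an assumption about the quoting seen here, not a format the kubelet guarantees:

    import re
    from collections import defaultdict

    # Matches entries like: ... started for volume \"bpf-maps\" ... pod="kube-system/cilium-hxqkb"
    ENTRY = re.compile(r'started for volume \\"(?P<volume>[^"\\]+)\\".*?pod="(?P<pod>[^"]+)"')

    def volumes_by_pod(journal_lines):
        """Group announced volume names by the pod that mounts them."""
        grouped = defaultdict(list)
        for line in journal_lines:
            for match in ENTRY.finditer(line):
                grouped[match.group("pod")].append(match.group("volume"))
        return dict(grouped)

    sample = [
        r'... started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/x\") " pod="kube-system/cilium-hxqkb"',
        r'... started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/y\") " pod="kube-system/kube-proxy-jxww4"',
    ]
    print(volumes_by_pod(sample))
    # {'kube-system/cilium-hxqkb': ['bpf-maps'], 'kube-system/kube-proxy-jxww4': ['kube-proxy']}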
May 10 00:44:45.636596 env[1194]: time="2025-05-10T00:44:45.634536578Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:44:45.636596 env[1194]: time="2025-05-10T00:44:45.634575455Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:44:45.636596 env[1194]: time="2025-05-10T00:44:45.634586974Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:44:45.636596 env[1194]: time="2025-05-10T00:44:45.634718521Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/db9a312db111016371dc4a2331ad181bdaeb001a7e1c961a208835b8429f8f78 pid=2077 runtime=io.containerd.runc.v2 May 10 00:44:45.658812 systemd[1]: Started cri-containerd-db9a312db111016371dc4a2331ad181bdaeb001a7e1c961a208835b8429f8f78.scope. May 10 00:44:45.664437 env[1194]: time="2025-05-10T00:44:45.664399615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jxww4,Uid:c7adef98-a18b-4bc4-acf9-e26aae68cce5,Namespace:kube-system,Attempt:0,} returns sandbox id \"3f537d33a5d917efd437349dac4abe2b45da826c84e6fdf77001f868e6e8e87a\"" May 10 00:44:45.669759 env[1194]: time="2025-05-10T00:44:45.669706192Z" level=info msg="CreateContainer within sandbox \"3f537d33a5d917efd437349dac4abe2b45da826c84e6fdf77001f868e6e8e87a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 10 00:44:45.697058 env[1194]: time="2025-05-10T00:44:45.697018946Z" level=info msg="CreateContainer within sandbox \"3f537d33a5d917efd437349dac4abe2b45da826c84e6fdf77001f868e6e8e87a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5e099cd143609ff937f6f22d2138070d5368c1ceb20665bc0fc0fb3fd90ea151\"" May 10 00:44:45.699351 env[1194]: time="2025-05-10T00:44:45.699308744Z" level=info msg="StartContainer for \"5e099cd143609ff937f6f22d2138070d5368c1ceb20665bc0fc0fb3fd90ea151\"" May 10 00:44:45.718966 env[1194]: time="2025-05-10T00:44:45.718922310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-gcpv7,Uid:9511a17b-4a5d-4d4b-8317-73a7918e5d8e,Namespace:kube-system,Attempt:0,} returns sandbox id \"db9a312db111016371dc4a2331ad181bdaeb001a7e1c961a208835b8429f8f78\"" May 10 00:44:45.722480 env[1194]: time="2025-05-10T00:44:45.722442315Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 10 00:44:45.732888 systemd[1]: Started cri-containerd-5e099cd143609ff937f6f22d2138070d5368c1ceb20665bc0fc0fb3fd90ea151.scope. 
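The PullImage request above names quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f..., a reference that pins both a tag and a digest, and the later ImageCreate / "PullImage ... returns" entries resolve it to the sha256 image ID that containerd actually stores. A rough sketch of taking such a reference apart; this is plain string splitting, not the full reference grammar containerd implements:

    def split_image_reference(ref: str) -> dict:
        """Split 'repository[:tag][@digest]' (simplified; ignores registries with a port)."""
        digest = None
        if "@" in ref:
            ref, digest = ref.split("@", 1)
        repository, _, tag = ref.partition(":")
        return {"repository": repository, "tag": tag or None, "digest": digest}

    print(split_image_reference(
        "quay.io/cilium/operator-generic:v1.12.5"
        "@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e"))
    # {'repository': 'quay.io/cilium/operator-generic', 'tag': 'v1.12.5',
    #  'digest': 'sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e'}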
May 10 00:44:45.766846 env[1194]: time="2025-05-10T00:44:45.766800234Z" level=info msg="StartContainer for \"5e099cd143609ff937f6f22d2138070d5368c1ceb20665bc0fc0fb3fd90ea151\" returns successfully" May 10 00:44:46.380641 kubelet[1968]: E0510 00:44:46.380600 1968 projected.go:263] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition May 10 00:44:46.381162 kubelet[1968]: E0510 00:44:46.381144 1968 projected.go:194] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-hxqkb: failed to sync secret cache: timed out waiting for the condition May 10 00:44:46.381389 kubelet[1968]: E0510 00:44:46.381366 1968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d3e6647e-6994-46e7-b646-0cac556f27a4-hubble-tls podName:d3e6647e-6994-46e7-b646-0cac556f27a4 nodeName:}" failed. No retries permitted until 2025-05-10 00:44:46.881336648 +0000 UTC m=+5.721619396 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/d3e6647e-6994-46e7-b646-0cac556f27a4-hubble-tls") pod "cilium-hxqkb" (UID: "d3e6647e-6994-46e7-b646-0cac556f27a4") : failed to sync secret cache: timed out waiting for the condition May 10 00:44:46.432547 kubelet[1968]: I0510 00:44:46.432492 1968 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jxww4" podStartSLOduration=1.432465184 podStartE2EDuration="1.432465184s" podCreationTimestamp="2025-05-10 00:44:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:44:46.431435946 +0000 UTC m=+5.271718705" watchObservedRunningTime="2025-05-10 00:44:46.432465184 +0000 UTC m=+5.272747943" May 10 00:44:47.072750 env[1194]: time="2025-05-10T00:44:47.072664469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hxqkb,Uid:d3e6647e-6994-46e7-b646-0cac556f27a4,Namespace:kube-system,Attempt:0,}" May 10 00:44:47.093580 env[1194]: time="2025-05-10T00:44:47.093501666Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:44:47.093580 env[1194]: time="2025-05-10T00:44:47.093547614Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:44:47.093858 env[1194]: time="2025-05-10T00:44:47.093822159Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:44:47.094145 env[1194]: time="2025-05-10T00:44:47.094109159Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b9ede0a8e91d0ab33d28bb836d871c0021392d9e6ecc44d9878f04087ae7e41c pid=2297 runtime=io.containerd.runc.v2 May 10 00:44:47.110246 systemd[1]: Started cri-containerd-b9ede0a8e91d0ab33d28bb836d871c0021392d9e6ecc44d9878f04087ae7e41c.scope. May 10 00:44:47.145806 env[1194]: time="2025-05-10T00:44:47.145766774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hxqkb,Uid:d3e6647e-6994-46e7-b646-0cac556f27a4,Namespace:kube-system,Attempt:0,} returns sandbox id \"b9ede0a8e91d0ab33d28bb836d871c0021392d9e6ecc44d9878f04087ae7e41c\"" May 10 00:44:47.509813 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount248271516.mount: Deactivated successfully. 
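The timestamps in the "Observed pod startup duration" entry above carry an "m=+5.27..." suffix; that reads as Go's monotonic-clock offset, counted from roughly when the kubelet process started (an interpretation, not something the log states, though it lines up with systemd starting kubelet.service at 00:44:41.14 further up). Subtracting the offset from the wall-clock value recovers that start time:

    from datetime import datetime, timedelta, timezone

    # Values copied from the watchObservedRunningTime field above, truncated to microseconds.
    wall_clock = datetime(2025, 5, 10, 0, 44, 46, 432465, tzinfo=timezone.utc)   # 00:44:46.432465184
    monotonic_offset = timedelta(seconds=5.272747943)                            # the "m=+5.272747943" suffix

    process_start = wall_clock - monotonic_offset
    print(process_start.isoformat())
    # 2025-05-10T00:44:41.159717+00:00, which falls between "Started kubelet.service" at
    # 00:44:41.145815 and the new kubelet's first log line at 00:44:41.235967.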
May 10 00:44:51.203946 env[1194]: time="2025-05-10T00:44:51.203870622Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:44:51.205223 env[1194]: time="2025-05-10T00:44:51.205184429Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:44:51.206877 env[1194]: time="2025-05-10T00:44:51.206847581Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:44:51.207556 env[1194]: time="2025-05-10T00:44:51.207529365Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" May 10 00:44:51.212307 env[1194]: time="2025-05-10T00:44:51.211024802Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 10 00:44:51.212307 env[1194]: time="2025-05-10T00:44:51.212288355Z" level=info msg="CreateContainer within sandbox \"db9a312db111016371dc4a2331ad181bdaeb001a7e1c961a208835b8429f8f78\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 10 00:44:51.224906 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1895299158.mount: Deactivated successfully. May 10 00:44:51.233567 env[1194]: time="2025-05-10T00:44:51.233479831Z" level=info msg="CreateContainer within sandbox \"db9a312db111016371dc4a2331ad181bdaeb001a7e1c961a208835b8429f8f78\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"dad62a3e76b1a46636fbe4bea2edd0fbe1bdfc3e361742ba224df1466dc4001e\"" May 10 00:44:51.235953 env[1194]: time="2025-05-10T00:44:51.235897231Z" level=info msg="StartContainer for \"dad62a3e76b1a46636fbe4bea2edd0fbe1bdfc3e361742ba224df1466dc4001e\"" May 10 00:44:51.288780 systemd[1]: Started cri-containerd-dad62a3e76b1a46636fbe4bea2edd0fbe1bdfc3e361742ba224df1466dc4001e.scope. May 10 00:44:51.330784 env[1194]: time="2025-05-10T00:44:51.330734060Z" level=info msg="StartContainer for \"dad62a3e76b1a46636fbe4bea2edd0fbe1bdfc3e361742ba224df1466dc4001e\" returns successfully" May 10 00:44:53.163065 kubelet[1968]: I0510 00:44:53.162898 1968 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-gcpv7" podStartSLOduration=2.671870598 podStartE2EDuration="8.160789181s" podCreationTimestamp="2025-05-10 00:44:45 +0000 UTC" firstStartedPulling="2025-05-10 00:44:45.72050637 +0000 UTC m=+4.560789101" lastFinishedPulling="2025-05-10 00:44:51.209424941 +0000 UTC m=+10.049707684" observedRunningTime="2025-05-10 00:44:51.474670185 +0000 UTC m=+10.314952952" watchObservedRunningTime="2025-05-10 00:44:53.160789181 +0000 UTC m=+12.001072031" May 10 00:45:06.466378 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1491858423.mount: Deactivated successfully. 
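In the cilium-operator startup entry above, podStartE2EDuration spans pod creation to the pod being observed running, while podStartSLOduration appears to leave out the time spent pulling the operator image; that split is inferred from the numbers rather than stated in the log, but the arithmetic reproduces the logged value to within rounding of the printed timestamps:

    from datetime import datetime, timezone

    def ts(value: str) -> datetime:
        # Timestamps copied from the entry above, truncated to microseconds for datetime.
        return datetime.strptime(value, "%Y-%m-%d %H:%M:%S.%f").replace(tzinfo=timezone.utc)

    first_started_pulling = ts("2025-05-10 00:44:45.720506")
    last_finished_pulling = ts("2025-05-10 00:44:51.209424")
    pull_seconds = (last_finished_pulling - first_started_pulling).total_seconds()

    e2e_seconds = 8.160789181                    # podStartE2EDuration from the log
    print(round(pull_seconds, 6))                # ~5.488918 s spent pulling the image
    print(round(e2e_seconds - pull_seconds, 6))  # ~2.671871, within rounding of podStartSLOduration=2.671870598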
May 10 00:45:09.802395 env[1194]: time="2025-05-10T00:45:09.802336637Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:45:09.804940 env[1194]: time="2025-05-10T00:45:09.804909919Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:45:09.806972 env[1194]: time="2025-05-10T00:45:09.806942080Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:45:09.808510 env[1194]: time="2025-05-10T00:45:09.808478170Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" May 10 00:45:09.816408 env[1194]: time="2025-05-10T00:45:09.816348378Z" level=info msg="CreateContainer within sandbox \"b9ede0a8e91d0ab33d28bb836d871c0021392d9e6ecc44d9878f04087ae7e41c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 10 00:45:09.832149 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount347470954.mount: Deactivated successfully. May 10 00:45:09.838593 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3960767325.mount: Deactivated successfully. May 10 00:45:09.840459 env[1194]: time="2025-05-10T00:45:09.840391585Z" level=info msg="CreateContainer within sandbox \"b9ede0a8e91d0ab33d28bb836d871c0021392d9e6ecc44d9878f04087ae7e41c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a55cade29f1171f1a8f928c2242b3591a8dd9a1a08c1f0946d64dd45f77bd888\"" May 10 00:45:09.843900 env[1194]: time="2025-05-10T00:45:09.843809943Z" level=info msg="StartContainer for \"a55cade29f1171f1a8f928c2242b3591a8dd9a1a08c1f0946d64dd45f77bd888\"" May 10 00:45:09.875394 systemd[1]: Started cri-containerd-a55cade29f1171f1a8f928c2242b3591a8dd9a1a08c1f0946d64dd45f77bd888.scope. May 10 00:45:09.922163 env[1194]: time="2025-05-10T00:45:09.922112031Z" level=info msg="StartContainer for \"a55cade29f1171f1a8f928c2242b3591a8dd9a1a08c1f0946d64dd45f77bd888\" returns successfully" May 10 00:45:09.938737 systemd[1]: cri-containerd-a55cade29f1171f1a8f928c2242b3591a8dd9a1a08c1f0946d64dd45f77bd888.scope: Deactivated successfully. 
May 10 00:45:10.013181 env[1194]: time="2025-05-10T00:45:10.013089610Z" level=info msg="shim disconnected" id=a55cade29f1171f1a8f928c2242b3591a8dd9a1a08c1f0946d64dd45f77bd888 May 10 00:45:10.013181 env[1194]: time="2025-05-10T00:45:10.013161783Z" level=warning msg="cleaning up after shim disconnected" id=a55cade29f1171f1a8f928c2242b3591a8dd9a1a08c1f0946d64dd45f77bd888 namespace=k8s.io May 10 00:45:10.013181 env[1194]: time="2025-05-10T00:45:10.013176045Z" level=info msg="cleaning up dead shim" May 10 00:45:10.025549 env[1194]: time="2025-05-10T00:45:10.025505838Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:45:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2421 runtime=io.containerd.runc.v2\n" May 10 00:45:10.508143 env[1194]: time="2025-05-10T00:45:10.507857521Z" level=info msg="CreateContainer within sandbox \"b9ede0a8e91d0ab33d28bb836d871c0021392d9e6ecc44d9878f04087ae7e41c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 10 00:45:10.523910 env[1194]: time="2025-05-10T00:45:10.520317676Z" level=info msg="CreateContainer within sandbox \"b9ede0a8e91d0ab33d28bb836d871c0021392d9e6ecc44d9878f04087ae7e41c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f276365b4ca88ae9ccd29e06647237c32fe882553acde124a8d2c4dbf090afb1\"" May 10 00:45:10.533200 env[1194]: time="2025-05-10T00:45:10.533147724Z" level=info msg="StartContainer for \"f276365b4ca88ae9ccd29e06647237c32fe882553acde124a8d2c4dbf090afb1\"" May 10 00:45:10.557878 systemd[1]: Started cri-containerd-f276365b4ca88ae9ccd29e06647237c32fe882553acde124a8d2c4dbf090afb1.scope. May 10 00:45:10.588096 env[1194]: time="2025-05-10T00:45:10.588045285Z" level=info msg="StartContainer for \"f276365b4ca88ae9ccd29e06647237c32fe882553acde124a8d2c4dbf090afb1\" returns successfully" May 10 00:45:10.622604 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 10 00:45:10.622828 systemd[1]: Stopped systemd-sysctl.service. May 10 00:45:10.623774 systemd[1]: Stopping systemd-sysctl.service... May 10 00:45:10.625837 systemd[1]: Starting systemd-sysctl.service... May 10 00:45:10.643197 systemd[1]: cri-containerd-f276365b4ca88ae9ccd29e06647237c32fe882553acde124a8d2c4dbf090afb1.scope: Deactivated successfully. May 10 00:45:10.676029 systemd[1]: Finished systemd-sysctl.service. May 10 00:45:10.695203 env[1194]: time="2025-05-10T00:45:10.695125597Z" level=info msg="shim disconnected" id=f276365b4ca88ae9ccd29e06647237c32fe882553acde124a8d2c4dbf090afb1 May 10 00:45:10.695611 env[1194]: time="2025-05-10T00:45:10.695577217Z" level=warning msg="cleaning up after shim disconnected" id=f276365b4ca88ae9ccd29e06647237c32fe882553acde124a8d2c4dbf090afb1 namespace=k8s.io May 10 00:45:10.695762 env[1194]: time="2025-05-10T00:45:10.695736863Z" level=info msg="cleaning up dead shim" May 10 00:45:10.705190 env[1194]: time="2025-05-10T00:45:10.705143037Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:45:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2488 runtime=io.containerd.runc.v2\n" May 10 00:45:10.831763 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a55cade29f1171f1a8f928c2242b3591a8dd9a1a08c1f0946d64dd45f77bd888-rootfs.mount: Deactivated successfully. 
May 10 00:45:11.517329 env[1194]: time="2025-05-10T00:45:11.514281210Z" level=info msg="CreateContainer within sandbox \"b9ede0a8e91d0ab33d28bb836d871c0021392d9e6ecc44d9878f04087ae7e41c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 10 00:45:11.535895 env[1194]: time="2025-05-10T00:45:11.535850293Z" level=info msg="CreateContainer within sandbox \"b9ede0a8e91d0ab33d28bb836d871c0021392d9e6ecc44d9878f04087ae7e41c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"003aa5d27962726658dafcec188347cf6740f7c7ab76dc0a22087caa02c2aabd\"" May 10 00:45:11.536748 env[1194]: time="2025-05-10T00:45:11.536721949Z" level=info msg="StartContainer for \"003aa5d27962726658dafcec188347cf6740f7c7ab76dc0a22087caa02c2aabd\"" May 10 00:45:11.570706 systemd[1]: Started cri-containerd-003aa5d27962726658dafcec188347cf6740f7c7ab76dc0a22087caa02c2aabd.scope. May 10 00:45:11.619310 env[1194]: time="2025-05-10T00:45:11.615507893Z" level=info msg="StartContainer for \"003aa5d27962726658dafcec188347cf6740f7c7ab76dc0a22087caa02c2aabd\" returns successfully" May 10 00:45:11.623689 systemd[1]: cri-containerd-003aa5d27962726658dafcec188347cf6740f7c7ab76dc0a22087caa02c2aabd.scope: Deactivated successfully. May 10 00:45:11.649000 env[1194]: time="2025-05-10T00:45:11.648948238Z" level=info msg="shim disconnected" id=003aa5d27962726658dafcec188347cf6740f7c7ab76dc0a22087caa02c2aabd May 10 00:45:11.649000 env[1194]: time="2025-05-10T00:45:11.648996436Z" level=warning msg="cleaning up after shim disconnected" id=003aa5d27962726658dafcec188347cf6740f7c7ab76dc0a22087caa02c2aabd namespace=k8s.io May 10 00:45:11.649000 env[1194]: time="2025-05-10T00:45:11.649006683Z" level=info msg="cleaning up dead shim" May 10 00:45:11.657505 env[1194]: time="2025-05-10T00:45:11.657468153Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:45:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2544 runtime=io.containerd.runc.v2\n" May 10 00:45:11.832590 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-003aa5d27962726658dafcec188347cf6740f7c7ab76dc0a22087caa02c2aabd-rootfs.mount: Deactivated successfully. May 10 00:45:12.522351 env[1194]: time="2025-05-10T00:45:12.519948493Z" level=info msg="CreateContainer within sandbox \"b9ede0a8e91d0ab33d28bb836d871c0021392d9e6ecc44d9878f04087ae7e41c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 10 00:45:12.534640 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1283446494.mount: Deactivated successfully. May 10 00:45:12.539777 env[1194]: time="2025-05-10T00:45:12.539743637Z" level=info msg="CreateContainer within sandbox \"b9ede0a8e91d0ab33d28bb836d871c0021392d9e6ecc44d9878f04087ae7e41c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f83948521531677220ee6b16d2189dce81bc585fd181932e75b1fd75c42a0976\"" May 10 00:45:12.540839 env[1194]: time="2025-05-10T00:45:12.540806055Z" level=info msg="StartContainer for \"f83948521531677220ee6b16d2189dce81bc585fd181932e75b1fd75c42a0976\"" May 10 00:45:12.541407 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1991750614.mount: Deactivated successfully. May 10 00:45:12.565023 systemd[1]: Started cri-containerd-f83948521531677220ee6b16d2189dce81bc585fd181932e75b1fd75c42a0976.scope. May 10 00:45:12.598160 systemd[1]: cri-containerd-f83948521531677220ee6b16d2189dce81bc585fd181932e75b1fd75c42a0976.scope: Deactivated successfully. 
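The cilium-hxqkb pod above is working through its init containers in order: mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs and clean-cilium-state each start, exit, and have their shim cleaned up before the next one runs. Judging by its name, mount-bpf-fs makes sure a BPF filesystem is mounted (conventionally at /sys/fs/bpf); that purpose is inferred, the log only shows the container running. A small Linux-only check for bpffs mounts from the host:

    def bpffs_mounts(mounts_file: str = "/proc/self/mounts") -> list:
        """Return mount points whose filesystem type is 'bpf' (i.e. bpffs instances)."""
        mounted = []
        with open(mounts_file) as handle:
            for line in handle:
                fields = line.split()
                if len(fields) >= 3 and fields[2] == "bpf":
                    mounted.append(fields[1])
        return mounted

    if __name__ == "__main__":
        print(bpffs_mounts() or "no bpffs mounted")   # a Cilium node typically shows ['/sys/fs/bpf']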
May 10 00:45:12.600646 env[1194]: time="2025-05-10T00:45:12.600565744Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3e6647e_6994_46e7_b646_0cac556f27a4.slice/cri-containerd-f83948521531677220ee6b16d2189dce81bc585fd181932e75b1fd75c42a0976.scope/memory.events\": no such file or directory" May 10 00:45:12.601252 env[1194]: time="2025-05-10T00:45:12.601197433Z" level=info msg="StartContainer for \"f83948521531677220ee6b16d2189dce81bc585fd181932e75b1fd75c42a0976\" returns successfully" May 10 00:45:12.627355 env[1194]: time="2025-05-10T00:45:12.627199602Z" level=info msg="shim disconnected" id=f83948521531677220ee6b16d2189dce81bc585fd181932e75b1fd75c42a0976 May 10 00:45:12.627355 env[1194]: time="2025-05-10T00:45:12.627354385Z" level=warning msg="cleaning up after shim disconnected" id=f83948521531677220ee6b16d2189dce81bc585fd181932e75b1fd75c42a0976 namespace=k8s.io May 10 00:45:12.627585 env[1194]: time="2025-05-10T00:45:12.627365580Z" level=info msg="cleaning up dead shim" May 10 00:45:12.636680 env[1194]: time="2025-05-10T00:45:12.636628939Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:45:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2600 runtime=io.containerd.runc.v2\n" May 10 00:45:13.521942 env[1194]: time="2025-05-10T00:45:13.521864433Z" level=info msg="CreateContainer within sandbox \"b9ede0a8e91d0ab33d28bb836d871c0021392d9e6ecc44d9878f04087ae7e41c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 10 00:45:13.536046 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2483093946.mount: Deactivated successfully. May 10 00:45:13.542372 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount538191778.mount: Deactivated successfully. May 10 00:45:13.543166 env[1194]: time="2025-05-10T00:45:13.543130577Z" level=info msg="CreateContainer within sandbox \"b9ede0a8e91d0ab33d28bb836d871c0021392d9e6ecc44d9878f04087ae7e41c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"937ecaf0c892b712a2bf4062e1c1b6e9875dbd1c7b2c73bdf341de3bc80ccaeb\"" May 10 00:45:13.552810 env[1194]: time="2025-05-10T00:45:13.551469857Z" level=info msg="StartContainer for \"937ecaf0c892b712a2bf4062e1c1b6e9875dbd1c7b2c73bdf341de3bc80ccaeb\"" May 10 00:45:13.583318 systemd[1]: Started cri-containerd-937ecaf0c892b712a2bf4062e1c1b6e9875dbd1c7b2c73bdf341de3bc80ccaeb.scope. May 10 00:45:13.626455 env[1194]: time="2025-05-10T00:45:13.626402342Z" level=info msg="StartContainer for \"937ecaf0c892b712a2bf4062e1c1b6e9875dbd1c7b2c73bdf341de3bc80ccaeb\" returns successfully" May 10 00:45:13.787378 kubelet[1968]: I0510 00:45:13.787250 1968 kubelet_node_status.go:502] "Fast updating node status as it just became ready" May 10 00:45:13.802596 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! May 10 00:45:13.883845 systemd[1]: Created slice kubepods-burstable-podd13bdef6_fb5b_4b3b_8d17_7a65fa797254.slice. May 10 00:45:13.889005 systemd[1]: Created slice kubepods-burstable-pod28a5f344_ef8e_4ed0_b7ea_aa3fd96a1371.slice. 
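The cgroupsv2 warning above spells out where a burstable pod's container lives in the unified hierarchy: kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod<UID with dashes replaced by underscores>.slice/cri-containerd-<container ID>.scope. The warning fired because that scope was already gone when the inotify watch was added, the clean-cilium-state container having exited. A sketch that rebuilds the same path and reads its memory.events, using the pod UID and container ID from the log:

    from pathlib import Path

    CGROUP_ROOT = Path("/sys/fs/cgroup")

    def burstable_container_cgroup(pod_uid: str, container_id: str) -> Path:
        """Container scope path for a burstable pod, following the layout in the warning above."""
        pod_slice = "kubepods-burstable-pod" + pod_uid.replace("-", "_") + ".slice"
        return (CGROUP_ROOT / "kubepods.slice" / "kubepods-burstable.slice"
                / pod_slice / f"cri-containerd-{container_id}.scope")

    path = burstable_container_cgroup(
        "d3e6647e-6994-46e7-b646-0cac556f27a4",                               # cilium-hxqkb pod UID
        "f83948521531677220ee6b16d2189dce81bc585fd181932e75b1fd75c42a0976")   # clean-cilium-state container
    events = path / "memory.events"
    print(events.read_text() if events.exists() else f"{events} is gone (the container already exited)")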
May 10 00:45:13.917812 kubelet[1968]: I0510 00:45:13.917779 1968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/28a5f344-ef8e-4ed0-b7ea-aa3fd96a1371-config-volume\") pod \"coredns-668d6bf9bc-s27nw\" (UID: \"28a5f344-ef8e-4ed0-b7ea-aa3fd96a1371\") " pod="kube-system/coredns-668d6bf9bc-s27nw" May 10 00:45:13.920918 kubelet[1968]: I0510 00:45:13.920890 1968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d13bdef6-fb5b-4b3b-8d17-7a65fa797254-config-volume\") pod \"coredns-668d6bf9bc-ftnx4\" (UID: \"d13bdef6-fb5b-4b3b-8d17-7a65fa797254\") " pod="kube-system/coredns-668d6bf9bc-ftnx4" May 10 00:45:13.921043 kubelet[1968]: I0510 00:45:13.920926 1968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dssjt\" (UniqueName: \"kubernetes.io/projected/28a5f344-ef8e-4ed0-b7ea-aa3fd96a1371-kube-api-access-dssjt\") pod \"coredns-668d6bf9bc-s27nw\" (UID: \"28a5f344-ef8e-4ed0-b7ea-aa3fd96a1371\") " pod="kube-system/coredns-668d6bf9bc-s27nw" May 10 00:45:13.921043 kubelet[1968]: I0510 00:45:13.920950 1968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qtgj\" (UniqueName: \"kubernetes.io/projected/d13bdef6-fb5b-4b3b-8d17-7a65fa797254-kube-api-access-4qtgj\") pod \"coredns-668d6bf9bc-ftnx4\" (UID: \"d13bdef6-fb5b-4b3b-8d17-7a65fa797254\") " pod="kube-system/coredns-668d6bf9bc-ftnx4" May 10 00:45:14.188750 env[1194]: time="2025-05-10T00:45:14.188637633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ftnx4,Uid:d13bdef6-fb5b-4b3b-8d17-7a65fa797254,Namespace:kube-system,Attempt:0,}" May 10 00:45:14.193347 env[1194]: time="2025-05-10T00:45:14.193022435Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-s27nw,Uid:28a5f344-ef8e-4ed0-b7ea-aa3fd96a1371,Namespace:kube-system,Attempt:0,}" May 10 00:45:14.239258 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! 
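The kernel's Spectre V2 warning above fires when unprivileged eBPF is enabled on an eIBRS system. Whether that is the case on a node is visible through the kernel.unprivileged_bpf_disabled sysctl, where 0 means unprivileged bpf() is allowed and 1 or 2 mean it is off; a small check, assuming a Linux /proc layout:

    MEANINGS = {
        "0": "unprivileged bpf() is allowed (the state the kernel warns about)",
        "1": "unprivileged bpf() is disabled (can be re-enabled at runtime)",
        "2": "unprivileged bpf() is permanently disabled for this boot",
    }

    def unprivileged_bpf_state(path: str = "/proc/sys/kernel/unprivileged_bpf_disabled") -> str:
        with open(path) as handle:
            value = handle.read().strip()
        return f"kernel.unprivileged_bpf_disabled={value}: {MEANINGS.get(value, 'unknown value')}"

    if __name__ == "__main__":
        print(unprivileged_bpf_state())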
May 10 00:45:14.563189 kubelet[1968]: I0510 00:45:14.560704 1968 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-hxqkb" podStartSLOduration=6.894346473 podStartE2EDuration="29.558424494s" podCreationTimestamp="2025-05-10 00:44:45 +0000 UTC" firstStartedPulling="2025-05-10 00:44:47.147285665 +0000 UTC m=+5.987568398" lastFinishedPulling="2025-05-10 00:45:09.811363671 +0000 UTC m=+28.651646419" observedRunningTime="2025-05-10 00:45:14.556119133 +0000 UTC m=+33.396401890" watchObservedRunningTime="2025-05-10 00:45:14.558424494 +0000 UTC m=+33.398707252" May 10 00:45:15.974367 systemd-networkd[1031]: cilium_host: Link UP May 10 00:45:15.996340 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready May 10 00:45:15.996447 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready May 10 00:45:15.976121 systemd-networkd[1031]: cilium_net: Link UP May 10 00:45:15.984680 systemd-networkd[1031]: cilium_net: Gained carrier May 10 00:45:15.984950 systemd-networkd[1031]: cilium_host: Gained carrier May 10 00:45:15.985100 systemd-networkd[1031]: cilium_net: Gained IPv6LL May 10 00:45:16.124385 systemd-networkd[1031]: cilium_vxlan: Link UP May 10 00:45:16.124393 systemd-networkd[1031]: cilium_vxlan: Gained carrier May 10 00:45:16.483697 kernel: NET: Registered PF_ALG protocol family May 10 00:45:16.854710 systemd-networkd[1031]: cilium_host: Gained IPv6LL May 10 00:45:17.321909 systemd-networkd[1031]: cilium_vxlan: Gained IPv6LL May 10 00:45:17.325859 systemd-networkd[1031]: lxc_health: Link UP May 10 00:45:17.337256 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready May 10 00:45:17.335383 systemd-networkd[1031]: lxc_health: Gained carrier May 10 00:45:17.767791 systemd-networkd[1031]: lxcc3789598cf74: Link UP May 10 00:45:17.776506 systemd-networkd[1031]: lxca5cd98cd2463: Link UP May 10 00:45:17.788520 kernel: eth0: renamed from tmpb4419 May 10 00:45:17.797601 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxca5cd98cd2463: link becomes ready May 10 00:45:17.797695 kernel: eth0: renamed from tmp98b73 May 10 00:45:17.796868 systemd-networkd[1031]: lxca5cd98cd2463: Gained carrier May 10 00:45:17.806320 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcc3789598cf74: link becomes ready May 10 00:45:17.806413 systemd-networkd[1031]: lxcc3789598cf74: Gained carrier May 10 00:45:18.409455 systemd-networkd[1031]: lxc_health: Gained IPv6LL May 10 00:45:19.175484 systemd-networkd[1031]: lxca5cd98cd2463: Gained IPv6LL May 10 00:45:19.542519 systemd-networkd[1031]: lxcc3789598cf74: Gained IPv6LL May 10 00:45:21.909682 env[1194]: time="2025-05-10T00:45:21.909453407Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:45:21.909682 env[1194]: time="2025-05-10T00:45:21.909508973Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:45:21.909682 env[1194]: time="2025-05-10T00:45:21.909522256Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:45:21.910827 env[1194]: time="2025-05-10T00:45:21.909706504Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/98b7338316d67998e94ba86e2412fc0efd7b83611708a34a5796530e6afc8568 pid=3162 runtime=io.containerd.runc.v2 May 10 00:45:21.941360 systemd[1]: Started cri-containerd-98b7338316d67998e94ba86e2412fc0efd7b83611708a34a5796530e6afc8568.scope. May 10 00:45:22.007733 env[1194]: time="2025-05-10T00:45:22.007667046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-s27nw,Uid:28a5f344-ef8e-4ed0-b7ea-aa3fd96a1371,Namespace:kube-system,Attempt:0,} returns sandbox id \"98b7338316d67998e94ba86e2412fc0efd7b83611708a34a5796530e6afc8568\"" May 10 00:45:22.012259 env[1194]: time="2025-05-10T00:45:22.012187500Z" level=info msg="CreateContainer within sandbox \"98b7338316d67998e94ba86e2412fc0efd7b83611708a34a5796530e6afc8568\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 10 00:45:22.024929 env[1194]: time="2025-05-10T00:45:22.023810856Z" level=info msg="CreateContainer within sandbox \"98b7338316d67998e94ba86e2412fc0efd7b83611708a34a5796530e6afc8568\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6a933065d3721b31c6b0a4d322c8b0ac6c0ed6a7c52f1b058c8107e22daf945e\"" May 10 00:45:22.025309 env[1194]: time="2025-05-10T00:45:22.025284589Z" level=info msg="StartContainer for \"6a933065d3721b31c6b0a4d322c8b0ac6c0ed6a7c52f1b058c8107e22daf945e\"" May 10 00:45:22.051672 systemd[1]: Started cri-containerd-6a933065d3721b31c6b0a4d322c8b0ac6c0ed6a7c52f1b058c8107e22daf945e.scope. May 10 00:45:22.094104 env[1194]: time="2025-05-10T00:45:22.094007736Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:45:22.094104 env[1194]: time="2025-05-10T00:45:22.094055721Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:45:22.094425 env[1194]: time="2025-05-10T00:45:22.094074854Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:45:22.094673 env[1194]: time="2025-05-10T00:45:22.094635572Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b44197d93594053cc83798d3c891e9fe71258a1a355d7bfbff0dd56217de85fc pid=3226 runtime=io.containerd.runc.v2 May 10 00:45:22.113835 systemd[1]: Started cri-containerd-b44197d93594053cc83798d3c891e9fe71258a1a355d7bfbff0dd56217de85fc.scope. 
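[Editor's note, not part of the log] Each "starting signal loop" entry above is a containerd runc-v2 shim coming up for one of those sandboxes (98b733… with pid 3162, then b44197… with pid 3226), and the kubelet's CreateContainer/StartContainer calls run inside them. For orientation only, the same state can be inspected from the node with ctr, assuming the CLI that ships with containerd and its default socket:

# Illustrative sketch: list the containers and shim tasks containerd tracks in the
# k8s.io namespace, matching the "starting signal loop" entries in this log.
import subprocess

def ctr(*args: str) -> str:
    return subprocess.run(
        ["ctr", "--namespace", "k8s.io", *args],
        check=True, capture_output=True, text=True,
    ).stdout

print(ctr("containers", "list"))   # container records, e.g. the 98b733... and b44197... sandboxes
print(ctr("tasks", "list"))        # running tasks with their shim PIDs (3162, 3226 above)

crictl pods and crictl ps show the same objects keyed by Kubernetes pod and container names rather than raw IDs.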
May 10 00:45:22.133442 env[1194]: time="2025-05-10T00:45:22.133393109Z" level=info msg="StartContainer for \"6a933065d3721b31c6b0a4d322c8b0ac6c0ed6a7c52f1b058c8107e22daf945e\" returns successfully" May 10 00:45:22.176844 env[1194]: time="2025-05-10T00:45:22.176737410Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ftnx4,Uid:d13bdef6-fb5b-4b3b-8d17-7a65fa797254,Namespace:kube-system,Attempt:0,} returns sandbox id \"b44197d93594053cc83798d3c891e9fe71258a1a355d7bfbff0dd56217de85fc\"" May 10 00:45:22.180893 env[1194]: time="2025-05-10T00:45:22.180851707Z" level=info msg="CreateContainer within sandbox \"b44197d93594053cc83798d3c891e9fe71258a1a355d7bfbff0dd56217de85fc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 10 00:45:22.188259 env[1194]: time="2025-05-10T00:45:22.188202422Z" level=info msg="CreateContainer within sandbox \"b44197d93594053cc83798d3c891e9fe71258a1a355d7bfbff0dd56217de85fc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f0214414cd9d9e6216efd261ff61957aab354fecef560a7c607b38fc8f5d809e\"" May 10 00:45:22.188791 env[1194]: time="2025-05-10T00:45:22.188730095Z" level=info msg="StartContainer for \"f0214414cd9d9e6216efd261ff61957aab354fecef560a7c607b38fc8f5d809e\"" May 10 00:45:22.205785 systemd[1]: Started cri-containerd-f0214414cd9d9e6216efd261ff61957aab354fecef560a7c607b38fc8f5d809e.scope. May 10 00:45:22.251680 env[1194]: time="2025-05-10T00:45:22.251633034Z" level=info msg="StartContainer for \"f0214414cd9d9e6216efd261ff61957aab354fecef560a7c607b38fc8f5d809e\" returns successfully" May 10 00:45:22.578300 kubelet[1968]: I0510 00:45:22.578134 1968 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-s27nw" podStartSLOduration=37.578047983 podStartE2EDuration="37.578047983s" podCreationTimestamp="2025-05-10 00:44:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:45:22.576431006 +0000 UTC m=+41.416713784" watchObservedRunningTime="2025-05-10 00:45:22.578047983 +0000 UTC m=+41.418330765" May 10 00:45:22.596667 kubelet[1968]: I0510 00:45:22.596548 1968 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-ftnx4" podStartSLOduration=37.596512153 podStartE2EDuration="37.596512153s" podCreationTimestamp="2025-05-10 00:44:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:45:22.596486033 +0000 UTC m=+41.436768806" watchObservedRunningTime="2025-05-10 00:45:22.596512153 +0000 UTC m=+41.436794956" May 10 00:46:11.790739 systemd[1]: Started sshd@8-10.244.93.58:22-139.178.68.195:38710.service. May 10 00:46:12.723401 sshd[3326]: Accepted publickey for core from 139.178.68.195 port 38710 ssh2: RSA SHA256:YQmh9kay2Fbwp/WeJvefEh7C1hXKeGuPiyso2bRkh84 May 10 00:46:12.725584 sshd[3326]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:46:12.733031 systemd[1]: Started session-8.scope. May 10 00:46:12.733445 systemd-logind[1190]: New session 8 of user core. May 10 00:46:13.582776 sshd[3326]: pam_unix(sshd:session): session closed for user core May 10 00:46:13.590468 systemd[1]: sshd@8-10.244.93.58:22-139.178.68.195:38710.service: Deactivated successfully. May 10 00:46:13.592557 systemd[1]: session-8.scope: Deactivated successfully. 
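[Editor's note, not part of the log] The pod_startup_latency_tracker entries above (cilium-hxqkb earlier, the two coredns pods here) report podStartE2EDuration together with podStartSLOduration. Going only by the numbers in this log, the SLO figure is the end-to-end duration minus time spent pulling images: the coredns pods pulled nothing, so both values coincide (37.578s / 37.596s), while cilium-hxqkb spent about 22.66s pulling and its SLO drops to 6.894s. The small check below (timestamps truncated to microseconds; the "E2E minus pull time" reading is inferred from these figures, not from kubelet source) reproduces that value:

# Reproduce podStartSLOduration for kube-system/cilium-hxqkb from the timestamps logged above.
from datetime import datetime

def ts(s: str) -> datetime:
    return datetime.strptime(s, "%Y-%m-%d %H:%M:%S.%f")

created   = ts("2025-05-10 00:44:45.000000")   # podCreationTimestamp
running   = ts("2025-05-10 00:45:14.558424")   # watchObservedRunningTime (truncated)
pull_from = ts("2025-05-10 00:44:47.147285")   # firstStartedPulling
pull_to   = ts("2025-05-10 00:45:09.811363")   # lastFinishedPulling

e2e = (running - created).total_seconds()              # ~29.558s == podStartE2EDuration
slo = e2e - (pull_to - pull_from).total_seconds()      # ~6.894s  == podStartSLOduration
print(f"E2E {e2e:.3f}s, SLO {slo:.3f}s")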
May 10 00:46:13.594122 systemd-logind[1190]: Session 8 logged out. Waiting for processes to exit. May 10 00:46:13.596278 systemd-logind[1190]: Removed session 8. May 10 00:46:18.738100 systemd[1]: Started sshd@9-10.244.93.58:22-139.178.68.195:47800.service. May 10 00:46:19.634271 sshd[3341]: Accepted publickey for core from 139.178.68.195 port 47800 ssh2: RSA SHA256:YQmh9kay2Fbwp/WeJvefEh7C1hXKeGuPiyso2bRkh84 May 10 00:46:19.638750 sshd[3341]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:46:19.648317 systemd-logind[1190]: New session 9 of user core. May 10 00:46:19.649286 systemd[1]: Started session-9.scope. May 10 00:46:20.354333 sshd[3341]: pam_unix(sshd:session): session closed for user core May 10 00:46:20.361774 systemd[1]: sshd@9-10.244.93.58:22-139.178.68.195:47800.service: Deactivated successfully. May 10 00:46:20.363730 systemd[1]: session-9.scope: Deactivated successfully. May 10 00:46:20.365350 systemd-logind[1190]: Session 9 logged out. Waiting for processes to exit. May 10 00:46:20.367734 systemd-logind[1190]: Removed session 9. May 10 00:46:25.505130 systemd[1]: Started sshd@10-10.244.93.58:22-139.178.68.195:40416.service. May 10 00:46:26.403651 sshd[3355]: Accepted publickey for core from 139.178.68.195 port 40416 ssh2: RSA SHA256:YQmh9kay2Fbwp/WeJvefEh7C1hXKeGuPiyso2bRkh84 May 10 00:46:26.407487 sshd[3355]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:46:26.418084 systemd[1]: Started session-10.scope. May 10 00:46:26.418897 systemd-logind[1190]: New session 10 of user core. May 10 00:46:27.125169 sshd[3355]: pam_unix(sshd:session): session closed for user core May 10 00:46:27.132423 systemd[1]: sshd@10-10.244.93.58:22-139.178.68.195:40416.service: Deactivated successfully. May 10 00:46:27.134370 systemd[1]: session-10.scope: Deactivated successfully. May 10 00:46:27.135768 systemd-logind[1190]: Session 10 logged out. Waiting for processes to exit. May 10 00:46:27.138086 systemd-logind[1190]: Removed session 10. May 10 00:46:32.277245 systemd[1]: Started sshd@11-10.244.93.58:22-139.178.68.195:40418.service. May 10 00:46:33.180038 sshd[3368]: Accepted publickey for core from 139.178.68.195 port 40418 ssh2: RSA SHA256:YQmh9kay2Fbwp/WeJvefEh7C1hXKeGuPiyso2bRkh84 May 10 00:46:33.183636 sshd[3368]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:46:33.194364 systemd-logind[1190]: New session 11 of user core. May 10 00:46:33.195430 systemd[1]: Started session-11.scope. May 10 00:46:33.915138 sshd[3368]: pam_unix(sshd:session): session closed for user core May 10 00:46:33.919777 systemd[1]: sshd@11-10.244.93.58:22-139.178.68.195:40418.service: Deactivated successfully. May 10 00:46:33.921794 systemd[1]: session-11.scope: Deactivated successfully. May 10 00:46:33.923002 systemd-logind[1190]: Session 11 logged out. Waiting for processes to exit. May 10 00:46:33.923953 systemd-logind[1190]: Removed session 11. May 10 00:46:34.064298 systemd[1]: Started sshd@12-10.244.93.58:22-139.178.68.195:40426.service. May 10 00:46:34.972748 sshd[3381]: Accepted publickey for core from 139.178.68.195 port 40426 ssh2: RSA SHA256:YQmh9kay2Fbwp/WeJvefEh7C1hXKeGuPiyso2bRkh84 May 10 00:46:34.977036 sshd[3381]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:46:34.988403 systemd-logind[1190]: New session 12 of user core. May 10 00:46:34.989300 systemd[1]: Started session-12.scope. 
May 10 00:46:35.739470 sshd[3381]: pam_unix(sshd:session): session closed for user core May 10 00:46:35.749115 systemd[1]: sshd@12-10.244.93.58:22-139.178.68.195:40426.service: Deactivated successfully. May 10 00:46:35.749385 systemd-logind[1190]: Session 12 logged out. Waiting for processes to exit. May 10 00:46:35.750744 systemd[1]: session-12.scope: Deactivated successfully. May 10 00:46:35.752712 systemd-logind[1190]: Removed session 12. May 10 00:46:35.887833 systemd[1]: Started sshd@13-10.244.93.58:22-139.178.68.195:57530.service. May 10 00:46:36.789754 sshd[3391]: Accepted publickey for core from 139.178.68.195 port 57530 ssh2: RSA SHA256:YQmh9kay2Fbwp/WeJvefEh7C1hXKeGuPiyso2bRkh84 May 10 00:46:36.793705 sshd[3391]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:46:36.805667 systemd-logind[1190]: New session 13 of user core. May 10 00:46:36.808543 systemd[1]: Started session-13.scope. May 10 00:46:37.523428 sshd[3391]: pam_unix(sshd:session): session closed for user core May 10 00:46:37.530834 systemd[1]: sshd@13-10.244.93.58:22-139.178.68.195:57530.service: Deactivated successfully. May 10 00:46:37.531002 systemd-logind[1190]: Session 13 logged out. Waiting for processes to exit. May 10 00:46:37.532376 systemd[1]: session-13.scope: Deactivated successfully. May 10 00:46:37.534990 systemd-logind[1190]: Removed session 13. May 10 00:46:42.678479 systemd[1]: Started sshd@14-10.244.93.58:22-139.178.68.195:57534.service. May 10 00:46:43.580322 sshd[3405]: Accepted publickey for core from 139.178.68.195 port 57534 ssh2: RSA SHA256:YQmh9kay2Fbwp/WeJvefEh7C1hXKeGuPiyso2bRkh84 May 10 00:46:43.582119 sshd[3405]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:46:43.591317 systemd-logind[1190]: New session 14 of user core. May 10 00:46:43.591664 systemd[1]: Started session-14.scope. May 10 00:46:44.311498 sshd[3405]: pam_unix(sshd:session): session closed for user core May 10 00:46:44.317270 systemd-logind[1190]: Session 14 logged out. Waiting for processes to exit. May 10 00:46:44.318199 systemd[1]: sshd@14-10.244.93.58:22-139.178.68.195:57534.service: Deactivated successfully. May 10 00:46:44.319198 systemd[1]: session-14.scope: Deactivated successfully. May 10 00:46:44.320029 systemd-logind[1190]: Removed session 14. May 10 00:46:49.460009 systemd[1]: Started sshd@15-10.244.93.58:22-139.178.68.195:42852.service. May 10 00:46:50.353361 sshd[3418]: Accepted publickey for core from 139.178.68.195 port 42852 ssh2: RSA SHA256:YQmh9kay2Fbwp/WeJvefEh7C1hXKeGuPiyso2bRkh84 May 10 00:46:50.357641 sshd[3418]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:46:50.367552 systemd[1]: Started session-15.scope. May 10 00:46:50.367920 systemd-logind[1190]: New session 15 of user core. May 10 00:46:51.060216 sshd[3418]: pam_unix(sshd:session): session closed for user core May 10 00:46:51.066411 systemd[1]: sshd@15-10.244.93.58:22-139.178.68.195:42852.service: Deactivated successfully. May 10 00:46:51.067840 systemd[1]: session-15.scope: Deactivated successfully. May 10 00:46:51.069025 systemd-logind[1190]: Session 15 logged out. Waiting for processes to exit. May 10 00:46:51.070489 systemd-logind[1190]: Removed session 15. May 10 00:46:56.214942 systemd[1]: Started sshd@16-10.244.93.58:22-139.178.68.195:51732.service. 
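[Editor's note, not part of the log] The sshd and systemd-logind entries through this stretch repeat one lifecycle: a per-connection sshd@N-<node>:22-<peer>:<port>.service starts, pam_unix opens the session, logind creates session-N.scope, and both are torn down when the client disconnects. To pull session durations out of lines like these, a throwaway parser over an exported, one-entry-per-line copy of the journal would look roughly as below (journal.log is a hypothetical export; the year is assumed, since these timestamps carry none):

# Sketch: pair "New session N" with "Removed session N" lines and print each duration.
import re
from datetime import datetime

OPEN  = re.compile(r"^(\w+ +\d+ [\d:.]+) .*New session (\d+) of user (\S+)\.")
CLOSE = re.compile(r"^(\w+ +\d+ [\d:.]+) .*Removed session (\d+)\.")

def ts(raw: str) -> datetime:
    return datetime.strptime(f"2025 {raw}", "%Y %b %d %H:%M:%S.%f")  # assumed year

opened = {}   # session id -> (start time, user)
with open("journal.log") as fh:   # hypothetical export of these log lines
    for line in fh:
        if m := OPEN.search(line):
            opened[m.group(2)] = (ts(m.group(1)), m.group(3))
        elif (m := CLOSE.search(line)) and m.group(2) in opened:
            start, user = opened.pop(m.group(2))
            secs = (ts(m.group(1)) - start).total_seconds()
            print(f"session {m.group(2)} ({user}): {secs:.1f}s")

Session 8 above, for instance, comes out at roughly 0.9 seconds between its New and Removed entries.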
May 10 00:46:57.116877 sshd[3431]: Accepted publickey for core from 139.178.68.195 port 51732 ssh2: RSA SHA256:YQmh9kay2Fbwp/WeJvefEh7C1hXKeGuPiyso2bRkh84 May 10 00:46:57.120086 sshd[3431]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:46:57.129320 systemd-logind[1190]: New session 16 of user core. May 10 00:46:57.130183 systemd[1]: Started session-16.scope. May 10 00:46:57.821870 sshd[3431]: pam_unix(sshd:session): session closed for user core May 10 00:46:57.827215 systemd[1]: sshd@16-10.244.93.58:22-139.178.68.195:51732.service: Deactivated successfully. May 10 00:46:57.828177 systemd[1]: session-16.scope: Deactivated successfully. May 10 00:46:57.829411 systemd-logind[1190]: Session 16 logged out. Waiting for processes to exit. May 10 00:46:57.830870 systemd-logind[1190]: Removed session 16. May 10 00:46:57.974568 systemd[1]: Started sshd@17-10.244.93.58:22-139.178.68.195:51738.service. May 10 00:46:58.871144 sshd[3443]: Accepted publickey for core from 139.178.68.195 port 51738 ssh2: RSA SHA256:YQmh9kay2Fbwp/WeJvefEh7C1hXKeGuPiyso2bRkh84 May 10 00:46:58.876503 sshd[3443]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:46:58.885611 systemd[1]: Started session-17.scope. May 10 00:46:58.886496 systemd-logind[1190]: New session 17 of user core. May 10 00:46:59.882190 sshd[3443]: pam_unix(sshd:session): session closed for user core May 10 00:46:59.896537 systemd[1]: sshd@17-10.244.93.58:22-139.178.68.195:51738.service: Deactivated successfully. May 10 00:46:59.897470 systemd[1]: session-17.scope: Deactivated successfully. May 10 00:46:59.898096 systemd-logind[1190]: Session 17 logged out. Waiting for processes to exit. May 10 00:46:59.898960 systemd-logind[1190]: Removed session 17. May 10 00:47:00.033298 systemd[1]: Started sshd@18-10.244.93.58:22-139.178.68.195:51754.service. May 10 00:47:00.938774 sshd[3453]: Accepted publickey for core from 139.178.68.195 port 51754 ssh2: RSA SHA256:YQmh9kay2Fbwp/WeJvefEh7C1hXKeGuPiyso2bRkh84 May 10 00:47:00.943145 sshd[3453]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:47:00.957153 systemd[1]: Started session-18.scope. May 10 00:47:00.957750 systemd-logind[1190]: New session 18 of user core. May 10 00:47:02.750694 sshd[3453]: pam_unix(sshd:session): session closed for user core May 10 00:47:02.759283 systemd[1]: sshd@18-10.244.93.58:22-139.178.68.195:51754.service: Deactivated successfully. May 10 00:47:02.760460 systemd[1]: session-18.scope: Deactivated successfully. May 10 00:47:02.761425 systemd-logind[1190]: Session 18 logged out. Waiting for processes to exit. May 10 00:47:02.762458 systemd-logind[1190]: Removed session 18. May 10 00:47:02.899985 systemd[1]: Started sshd@19-10.244.93.58:22-139.178.68.195:51770.service. May 10 00:47:03.802347 sshd[3470]: Accepted publickey for core from 139.178.68.195 port 51770 ssh2: RSA SHA256:YQmh9kay2Fbwp/WeJvefEh7C1hXKeGuPiyso2bRkh84 May 10 00:47:03.806007 sshd[3470]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:47:03.818401 systemd-logind[1190]: New session 19 of user core. May 10 00:47:03.818987 systemd[1]: Started session-19.scope. May 10 00:47:04.687606 sshd[3470]: pam_unix(sshd:session): session closed for user core May 10 00:47:04.694395 systemd-logind[1190]: Session 19 logged out. Waiting for processes to exit. May 10 00:47:04.695429 systemd[1]: sshd@19-10.244.93.58:22-139.178.68.195:51770.service: Deactivated successfully. 
May 10 00:47:04.697045 systemd[1]: session-19.scope: Deactivated successfully. May 10 00:47:04.698164 systemd-logind[1190]: Removed session 19. May 10 00:47:04.839690 systemd[1]: Started sshd@20-10.244.93.58:22-139.178.68.195:51776.service. May 10 00:47:05.743715 sshd[3480]: Accepted publickey for core from 139.178.68.195 port 51776 ssh2: RSA SHA256:YQmh9kay2Fbwp/WeJvefEh7C1hXKeGuPiyso2bRkh84 May 10 00:47:05.747381 sshd[3480]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:47:05.759412 systemd-logind[1190]: New session 20 of user core. May 10 00:47:05.760566 systemd[1]: Started session-20.scope. May 10 00:47:06.461972 sshd[3480]: pam_unix(sshd:session): session closed for user core May 10 00:47:06.468770 systemd[1]: sshd@20-10.244.93.58:22-139.178.68.195:51776.service: Deactivated successfully. May 10 00:47:06.470340 systemd[1]: session-20.scope: Deactivated successfully. May 10 00:47:06.471558 systemd-logind[1190]: Session 20 logged out. Waiting for processes to exit. May 10 00:47:06.473498 systemd-logind[1190]: Removed session 20. May 10 00:47:11.613665 systemd[1]: Started sshd@21-10.244.93.58:22-139.178.68.195:47296.service. May 10 00:47:12.507749 sshd[3494]: Accepted publickey for core from 139.178.68.195 port 47296 ssh2: RSA SHA256:YQmh9kay2Fbwp/WeJvefEh7C1hXKeGuPiyso2bRkh84 May 10 00:47:12.511466 sshd[3494]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:47:12.522581 systemd-logind[1190]: New session 21 of user core. May 10 00:47:12.523688 systemd[1]: Started session-21.scope. May 10 00:47:13.212528 sshd[3494]: pam_unix(sshd:session): session closed for user core May 10 00:47:13.222400 systemd[1]: sshd@21-10.244.93.58:22-139.178.68.195:47296.service: Deactivated successfully. May 10 00:47:13.223521 systemd[1]: session-21.scope: Deactivated successfully. May 10 00:47:13.224851 systemd-logind[1190]: Session 21 logged out. Waiting for processes to exit. May 10 00:47:13.226128 systemd-logind[1190]: Removed session 21. May 10 00:47:18.363794 systemd[1]: Started sshd@22-10.244.93.58:22-139.178.68.195:42200.service. May 10 00:47:19.258970 sshd[3508]: Accepted publickey for core from 139.178.68.195 port 42200 ssh2: RSA SHA256:YQmh9kay2Fbwp/WeJvefEh7C1hXKeGuPiyso2bRkh84 May 10 00:47:19.261968 sshd[3508]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:47:19.270936 systemd-logind[1190]: New session 22 of user core. May 10 00:47:19.271174 systemd[1]: Started session-22.scope. May 10 00:47:19.960224 sshd[3508]: pam_unix(sshd:session): session closed for user core May 10 00:47:19.967107 systemd[1]: sshd@22-10.244.93.58:22-139.178.68.195:42200.service: Deactivated successfully. May 10 00:47:19.968866 systemd[1]: session-22.scope: Deactivated successfully. May 10 00:47:19.970137 systemd-logind[1190]: Session 22 logged out. Waiting for processes to exit. May 10 00:47:19.971293 systemd-logind[1190]: Removed session 22. May 10 00:47:25.113673 systemd[1]: Started sshd@23-10.244.93.58:22-139.178.68.195:42206.service. May 10 00:47:26.017401 sshd[3520]: Accepted publickey for core from 139.178.68.195 port 42206 ssh2: RSA SHA256:YQmh9kay2Fbwp/WeJvefEh7C1hXKeGuPiyso2bRkh84 May 10 00:47:26.021921 sshd[3520]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:47:26.031369 systemd-logind[1190]: New session 23 of user core. May 10 00:47:26.031910 systemd[1]: Started session-23.scope. 
May 10 00:47:26.727854 sshd[3520]: pam_unix(sshd:session): session closed for user core May 10 00:47:26.734633 systemd[1]: sshd@23-10.244.93.58:22-139.178.68.195:42206.service: Deactivated successfully. May 10 00:47:26.736117 systemd[1]: session-23.scope: Deactivated successfully. May 10 00:47:26.737502 systemd-logind[1190]: Session 23 logged out. Waiting for processes to exit. May 10 00:47:26.739409 systemd-logind[1190]: Removed session 23. May 10 00:47:26.877279 systemd[1]: Started sshd@24-10.244.93.58:22-139.178.68.195:56626.service. May 10 00:47:27.778947 sshd[3532]: Accepted publickey for core from 139.178.68.195 port 56626 ssh2: RSA SHA256:YQmh9kay2Fbwp/WeJvefEh7C1hXKeGuPiyso2bRkh84 May 10 00:47:27.783556 sshd[3532]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:47:27.794636 systemd-logind[1190]: New session 24 of user core. May 10 00:47:27.794900 systemd[1]: Started session-24.scope. May 10 00:47:29.675336 env[1194]: time="2025-05-10T00:47:29.674553512Z" level=info msg="StopContainer for \"dad62a3e76b1a46636fbe4bea2edd0fbe1bdfc3e361742ba224df1466dc4001e\" with timeout 30 (s)" May 10 00:47:29.675336 env[1194]: time="2025-05-10T00:47:29.675014029Z" level=info msg="Stop container \"dad62a3e76b1a46636fbe4bea2edd0fbe1bdfc3e361742ba224df1466dc4001e\" with signal terminated" May 10 00:47:29.705916 systemd[1]: run-containerd-runc-k8s.io-937ecaf0c892b712a2bf4062e1c1b6e9875dbd1c7b2c73bdf341de3bc80ccaeb-runc.DqV92U.mount: Deactivated successfully. May 10 00:47:29.708409 systemd[1]: cri-containerd-dad62a3e76b1a46636fbe4bea2edd0fbe1bdfc3e361742ba224df1466dc4001e.scope: Deactivated successfully. May 10 00:47:29.734687 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dad62a3e76b1a46636fbe4bea2edd0fbe1bdfc3e361742ba224df1466dc4001e-rootfs.mount: Deactivated successfully. 
May 10 00:47:29.739998 env[1194]: time="2025-05-10T00:47:29.739948359Z" level=info msg="shim disconnected" id=dad62a3e76b1a46636fbe4bea2edd0fbe1bdfc3e361742ba224df1466dc4001e May 10 00:47:29.740288 env[1194]: time="2025-05-10T00:47:29.740266836Z" level=warning msg="cleaning up after shim disconnected" id=dad62a3e76b1a46636fbe4bea2edd0fbe1bdfc3e361742ba224df1466dc4001e namespace=k8s.io May 10 00:47:29.740422 env[1194]: time="2025-05-10T00:47:29.740408812Z" level=info msg="cleaning up dead shim" May 10 00:47:29.743477 env[1194]: time="2025-05-10T00:47:29.743428836Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 10 00:47:29.748139 env[1194]: time="2025-05-10T00:47:29.748112806Z" level=info msg="StopContainer for \"937ecaf0c892b712a2bf4062e1c1b6e9875dbd1c7b2c73bdf341de3bc80ccaeb\" with timeout 2 (s)" May 10 00:47:29.748499 env[1194]: time="2025-05-10T00:47:29.748478269Z" level=info msg="Stop container \"937ecaf0c892b712a2bf4062e1c1b6e9875dbd1c7b2c73bdf341de3bc80ccaeb\" with signal terminated" May 10 00:47:29.755423 systemd-networkd[1031]: lxc_health: Link DOWN May 10 00:47:29.755431 systemd-networkd[1031]: lxc_health: Lost carrier May 10 00:47:29.757622 env[1194]: time="2025-05-10T00:47:29.757594399Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:47:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3574 runtime=io.containerd.runc.v2\n" May 10 00:47:29.759544 env[1194]: time="2025-05-10T00:47:29.759517662Z" level=info msg="StopContainer for \"dad62a3e76b1a46636fbe4bea2edd0fbe1bdfc3e361742ba224df1466dc4001e\" returns successfully" May 10 00:47:29.762058 env[1194]: time="2025-05-10T00:47:29.762033160Z" level=info msg="StopPodSandbox for \"db9a312db111016371dc4a2331ad181bdaeb001a7e1c961a208835b8429f8f78\"" May 10 00:47:29.762263 env[1194]: time="2025-05-10T00:47:29.762233751Z" level=info msg="Container to stop \"dad62a3e76b1a46636fbe4bea2edd0fbe1bdfc3e361742ba224df1466dc4001e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 10 00:47:29.764194 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-db9a312db111016371dc4a2331ad181bdaeb001a7e1c961a208835b8429f8f78-shm.mount: Deactivated successfully. May 10 00:47:29.786014 systemd[1]: cri-containerd-db9a312db111016371dc4a2331ad181bdaeb001a7e1c961a208835b8429f8f78.scope: Deactivated successfully. May 10 00:47:29.805531 systemd[1]: cri-containerd-937ecaf0c892b712a2bf4062e1c1b6e9875dbd1c7b2c73bdf341de3bc80ccaeb.scope: Deactivated successfully. May 10 00:47:29.805793 systemd[1]: cri-containerd-937ecaf0c892b712a2bf4062e1c1b6e9875dbd1c7b2c73bdf341de3bc80ccaeb.scope: Consumed 7.739s CPU time. 
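[Editor's note, not part of the log] The block above is the kubelet tearing the two cilium pods down: StopContainer for the operator container (dad62…) with a 30-second grace period, then StopContainer for the agent (937ecaf…) with 2 seconds, followed by StopPodSandbox for the operator's sandbox (db9a312…) and the scope/mount cleanup systemd reports. For orientation only, the same CRI calls can be issued by hand with crictl; the truncated IDs below stand in for the full ones in the log:

# Sketch only: drive the same stop sequence manually through crictl.
import subprocess

def crictl(*args: str) -> None:
    subprocess.run(["crictl", *args], check=True)

operator_container = "dad62a3e76b1"   # truncated placeholder; substitute the full container ID
operator_sandbox   = "db9a312db111"   # truncated placeholder; substitute the full sandbox ID

crictl("stop", "--timeout", "30", operator_container)  # graceful stop; force-killed after 30s
crictl("stopp", operator_sandbox)                      # StopPodSandbox: tears down the pod's network
crictl("rmp", operator_sandbox)                        # remove the stopped sandbox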
May 10 00:47:29.830241 env[1194]: time="2025-05-10T00:47:29.830141879Z" level=info msg="shim disconnected" id=db9a312db111016371dc4a2331ad181bdaeb001a7e1c961a208835b8429f8f78 May 10 00:47:29.830241 env[1194]: time="2025-05-10T00:47:29.830190783Z" level=warning msg="cleaning up after shim disconnected" id=db9a312db111016371dc4a2331ad181bdaeb001a7e1c961a208835b8429f8f78 namespace=k8s.io May 10 00:47:29.830241 env[1194]: time="2025-05-10T00:47:29.830201071Z" level=info msg="cleaning up dead shim" May 10 00:47:29.838552 env[1194]: time="2025-05-10T00:47:29.838507384Z" level=info msg="shim disconnected" id=937ecaf0c892b712a2bf4062e1c1b6e9875dbd1c7b2c73bdf341de3bc80ccaeb May 10 00:47:29.838552 env[1194]: time="2025-05-10T00:47:29.838549251Z" level=warning msg="cleaning up after shim disconnected" id=937ecaf0c892b712a2bf4062e1c1b6e9875dbd1c7b2c73bdf341de3bc80ccaeb namespace=k8s.io May 10 00:47:29.838779 env[1194]: time="2025-05-10T00:47:29.838559073Z" level=info msg="cleaning up dead shim" May 10 00:47:29.848850 env[1194]: time="2025-05-10T00:47:29.848810843Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:47:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3626 runtime=io.containerd.runc.v2\n" May 10 00:47:29.849954 env[1194]: time="2025-05-10T00:47:29.849917038Z" level=info msg="TearDown network for sandbox \"db9a312db111016371dc4a2331ad181bdaeb001a7e1c961a208835b8429f8f78\" successfully" May 10 00:47:29.850069 env[1194]: time="2025-05-10T00:47:29.850048750Z" level=info msg="StopPodSandbox for \"db9a312db111016371dc4a2331ad181bdaeb001a7e1c961a208835b8429f8f78\" returns successfully" May 10 00:47:29.853747 env[1194]: time="2025-05-10T00:47:29.853414129Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:47:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3633 runtime=io.containerd.runc.v2\n" May 10 00:47:29.855432 env[1194]: time="2025-05-10T00:47:29.855406901Z" level=info msg="StopContainer for \"937ecaf0c892b712a2bf4062e1c1b6e9875dbd1c7b2c73bdf341de3bc80ccaeb\" returns successfully" May 10 00:47:29.855872 env[1194]: time="2025-05-10T00:47:29.855847953Z" level=info msg="StopPodSandbox for \"b9ede0a8e91d0ab33d28bb836d871c0021392d9e6ecc44d9878f04087ae7e41c\"" May 10 00:47:29.856218 env[1194]: time="2025-05-10T00:47:29.856197234Z" level=info msg="Container to stop \"f276365b4ca88ae9ccd29e06647237c32fe882553acde124a8d2c4dbf090afb1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 10 00:47:29.856345 env[1194]: time="2025-05-10T00:47:29.856327113Z" level=info msg="Container to stop \"a55cade29f1171f1a8f928c2242b3591a8dd9a1a08c1f0946d64dd45f77bd888\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 10 00:47:29.856435 env[1194]: time="2025-05-10T00:47:29.856412516Z" level=info msg="Container to stop \"003aa5d27962726658dafcec188347cf6740f7c7ab76dc0a22087caa02c2aabd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 10 00:47:29.856503 env[1194]: time="2025-05-10T00:47:29.856488589Z" level=info msg="Container to stop \"f83948521531677220ee6b16d2189dce81bc585fd181932e75b1fd75c42a0976\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 10 00:47:29.856579 env[1194]: time="2025-05-10T00:47:29.856564557Z" level=info msg="Container to stop \"937ecaf0c892b712a2bf4062e1c1b6e9875dbd1c7b2c73bdf341de3bc80ccaeb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 10 00:47:29.865127 systemd[1]: 
cri-containerd-b9ede0a8e91d0ab33d28bb836d871c0021392d9e6ecc44d9878f04087ae7e41c.scope: Deactivated successfully. May 10 00:47:29.888484 env[1194]: time="2025-05-10T00:47:29.888422760Z" level=info msg="shim disconnected" id=b9ede0a8e91d0ab33d28bb836d871c0021392d9e6ecc44d9878f04087ae7e41c May 10 00:47:29.888484 env[1194]: time="2025-05-10T00:47:29.888474953Z" level=warning msg="cleaning up after shim disconnected" id=b9ede0a8e91d0ab33d28bb836d871c0021392d9e6ecc44d9878f04087ae7e41c namespace=k8s.io May 10 00:47:29.888484 env[1194]: time="2025-05-10T00:47:29.888485006Z" level=info msg="cleaning up dead shim" May 10 00:47:29.897019 env[1194]: time="2025-05-10T00:47:29.896966974Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:47:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3672 runtime=io.containerd.runc.v2\ntime=\"2025-05-10T00:47:29Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" May 10 00:47:29.901468 env[1194]: time="2025-05-10T00:47:29.901439478Z" level=info msg="TearDown network for sandbox \"b9ede0a8e91d0ab33d28bb836d871c0021392d9e6ecc44d9878f04087ae7e41c\" successfully" May 10 00:47:29.901604 env[1194]: time="2025-05-10T00:47:29.901587096Z" level=info msg="StopPodSandbox for \"b9ede0a8e91d0ab33d28bb836d871c0021392d9e6ecc44d9878f04087ae7e41c\" returns successfully" May 10 00:47:29.943763 kubelet[1968]: I0510 00:47:29.940790 1968 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mldfd\" (UniqueName: \"kubernetes.io/projected/9511a17b-4a5d-4d4b-8317-73a7918e5d8e-kube-api-access-mldfd\") pod \"9511a17b-4a5d-4d4b-8317-73a7918e5d8e\" (UID: \"9511a17b-4a5d-4d4b-8317-73a7918e5d8e\") " May 10 00:47:29.943763 kubelet[1968]: I0510 00:47:29.940959 1968 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9511a17b-4a5d-4d4b-8317-73a7918e5d8e-cilium-config-path\") pod \"9511a17b-4a5d-4d4b-8317-73a7918e5d8e\" (UID: \"9511a17b-4a5d-4d4b-8317-73a7918e5d8e\") " May 10 00:47:29.958108 kubelet[1968]: I0510 00:47:29.957100 1968 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9511a17b-4a5d-4d4b-8317-73a7918e5d8e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9511a17b-4a5d-4d4b-8317-73a7918e5d8e" (UID: "9511a17b-4a5d-4d4b-8317-73a7918e5d8e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 10 00:47:29.958979 kubelet[1968]: I0510 00:47:29.957091 1968 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9511a17b-4a5d-4d4b-8317-73a7918e5d8e-kube-api-access-mldfd" (OuterVolumeSpecName: "kube-api-access-mldfd") pod "9511a17b-4a5d-4d4b-8317-73a7918e5d8e" (UID: "9511a17b-4a5d-4d4b-8317-73a7918e5d8e"). InnerVolumeSpecName "kube-api-access-mldfd". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 10 00:47:29.981987 systemd[1]: Removed slice kubepods-besteffort-pod9511a17b_4a5d_4d4b_8317_73a7918e5d8e.slice. 
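[Editor's note, not part of the log] With the operator pod gone, systemd removes its per-pod cgroup slice (kubepods-besteffort-pod9511a17b….slice above); the agent's burstable slice follows later, together with a note of the CPU time it consumed. As a side note, the pod slices still present on a node can be listed through systemd; this is only an illustrative query and assumes systemd-managed cgroups, as on this host:

# Illustrative only: list the per-pod slices systemd currently knows about.
import subprocess

out = subprocess.run(
    ["systemctl", "list-units", "--type=slice", "--all", "kubepods*"],
    check=True, capture_output=True, text=True,
).stdout
print(out)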
May 10 00:47:29.985510 kubelet[1968]: I0510 00:47:29.985452 1968 scope.go:117] "RemoveContainer" containerID="dad62a3e76b1a46636fbe4bea2edd0fbe1bdfc3e361742ba224df1466dc4001e" May 10 00:47:29.989281 env[1194]: time="2025-05-10T00:47:29.989113013Z" level=info msg="RemoveContainer for \"dad62a3e76b1a46636fbe4bea2edd0fbe1bdfc3e361742ba224df1466dc4001e\"" May 10 00:47:29.992282 env[1194]: time="2025-05-10T00:47:29.992126033Z" level=info msg="RemoveContainer for \"dad62a3e76b1a46636fbe4bea2edd0fbe1bdfc3e361742ba224df1466dc4001e\" returns successfully" May 10 00:47:29.992496 kubelet[1968]: I0510 00:47:29.992479 1968 scope.go:117] "RemoveContainer" containerID="dad62a3e76b1a46636fbe4bea2edd0fbe1bdfc3e361742ba224df1466dc4001e" May 10 00:47:29.993982 env[1194]: time="2025-05-10T00:47:29.993748662Z" level=error msg="ContainerStatus for \"dad62a3e76b1a46636fbe4bea2edd0fbe1bdfc3e361742ba224df1466dc4001e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dad62a3e76b1a46636fbe4bea2edd0fbe1bdfc3e361742ba224df1466dc4001e\": not found" May 10 00:47:29.994744 kubelet[1968]: E0510 00:47:29.994683 1968 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dad62a3e76b1a46636fbe4bea2edd0fbe1bdfc3e361742ba224df1466dc4001e\": not found" containerID="dad62a3e76b1a46636fbe4bea2edd0fbe1bdfc3e361742ba224df1466dc4001e" May 10 00:47:30.001570 kubelet[1968]: I0510 00:47:29.998529 1968 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"dad62a3e76b1a46636fbe4bea2edd0fbe1bdfc3e361742ba224df1466dc4001e"} err="failed to get container status \"dad62a3e76b1a46636fbe4bea2edd0fbe1bdfc3e361742ba224df1466dc4001e\": rpc error: code = NotFound desc = an error occurred when try to find container \"dad62a3e76b1a46636fbe4bea2edd0fbe1bdfc3e361742ba224df1466dc4001e\": not found" May 10 00:47:30.001861 kubelet[1968]: I0510 00:47:30.001825 1968 scope.go:117] "RemoveContainer" containerID="937ecaf0c892b712a2bf4062e1c1b6e9875dbd1c7b2c73bdf341de3bc80ccaeb" May 10 00:47:30.005486 env[1194]: time="2025-05-10T00:47:30.005387893Z" level=info msg="RemoveContainer for \"937ecaf0c892b712a2bf4062e1c1b6e9875dbd1c7b2c73bdf341de3bc80ccaeb\"" May 10 00:47:30.010514 env[1194]: time="2025-05-10T00:47:30.010462341Z" level=info msg="RemoveContainer for \"937ecaf0c892b712a2bf4062e1c1b6e9875dbd1c7b2c73bdf341de3bc80ccaeb\" returns successfully" May 10 00:47:30.010909 kubelet[1968]: I0510 00:47:30.010886 1968 scope.go:117] "RemoveContainer" containerID="f83948521531677220ee6b16d2189dce81bc585fd181932e75b1fd75c42a0976" May 10 00:47:30.013485 env[1194]: time="2025-05-10T00:47:30.013146930Z" level=info msg="RemoveContainer for \"f83948521531677220ee6b16d2189dce81bc585fd181932e75b1fd75c42a0976\"" May 10 00:47:30.016171 env[1194]: time="2025-05-10T00:47:30.016142615Z" level=info msg="RemoveContainer for \"f83948521531677220ee6b16d2189dce81bc585fd181932e75b1fd75c42a0976\" returns successfully" May 10 00:47:30.016505 kubelet[1968]: I0510 00:47:30.016489 1968 scope.go:117] "RemoveContainer" containerID="003aa5d27962726658dafcec188347cf6740f7c7ab76dc0a22087caa02c2aabd" May 10 00:47:30.018171 env[1194]: time="2025-05-10T00:47:30.018148599Z" level=info msg="RemoveContainer for \"003aa5d27962726658dafcec188347cf6740f7c7ab76dc0a22087caa02c2aabd\"" May 10 00:47:30.020537 env[1194]: time="2025-05-10T00:47:30.020510345Z" level=info msg="RemoveContainer for 
\"003aa5d27962726658dafcec188347cf6740f7c7ab76dc0a22087caa02c2aabd\" returns successfully" May 10 00:47:30.020887 kubelet[1968]: I0510 00:47:30.020870 1968 scope.go:117] "RemoveContainer" containerID="f276365b4ca88ae9ccd29e06647237c32fe882553acde124a8d2c4dbf090afb1" May 10 00:47:30.024292 env[1194]: time="2025-05-10T00:47:30.024268005Z" level=info msg="RemoveContainer for \"f276365b4ca88ae9ccd29e06647237c32fe882553acde124a8d2c4dbf090afb1\"" May 10 00:47:30.028635 env[1194]: time="2025-05-10T00:47:30.028601749Z" level=info msg="RemoveContainer for \"f276365b4ca88ae9ccd29e06647237c32fe882553acde124a8d2c4dbf090afb1\" returns successfully" May 10 00:47:30.029011 kubelet[1968]: I0510 00:47:30.028964 1968 scope.go:117] "RemoveContainer" containerID="a55cade29f1171f1a8f928c2242b3591a8dd9a1a08c1f0946d64dd45f77bd888" May 10 00:47:30.032263 env[1194]: time="2025-05-10T00:47:30.032166582Z" level=info msg="RemoveContainer for \"a55cade29f1171f1a8f928c2242b3591a8dd9a1a08c1f0946d64dd45f77bd888\"" May 10 00:47:30.034289 env[1194]: time="2025-05-10T00:47:30.034263514Z" level=info msg="RemoveContainer for \"a55cade29f1171f1a8f928c2242b3591a8dd9a1a08c1f0946d64dd45f77bd888\" returns successfully" May 10 00:47:30.034531 kubelet[1968]: I0510 00:47:30.034509 1968 scope.go:117] "RemoveContainer" containerID="937ecaf0c892b712a2bf4062e1c1b6e9875dbd1c7b2c73bdf341de3bc80ccaeb" May 10 00:47:30.034764 env[1194]: time="2025-05-10T00:47:30.034708694Z" level=error msg="ContainerStatus for \"937ecaf0c892b712a2bf4062e1c1b6e9875dbd1c7b2c73bdf341de3bc80ccaeb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"937ecaf0c892b712a2bf4062e1c1b6e9875dbd1c7b2c73bdf341de3bc80ccaeb\": not found" May 10 00:47:30.034974 kubelet[1968]: E0510 00:47:30.034943 1968 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"937ecaf0c892b712a2bf4062e1c1b6e9875dbd1c7b2c73bdf341de3bc80ccaeb\": not found" containerID="937ecaf0c892b712a2bf4062e1c1b6e9875dbd1c7b2c73bdf341de3bc80ccaeb" May 10 00:47:30.035027 kubelet[1968]: I0510 00:47:30.034978 1968 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"937ecaf0c892b712a2bf4062e1c1b6e9875dbd1c7b2c73bdf341de3bc80ccaeb"} err="failed to get container status \"937ecaf0c892b712a2bf4062e1c1b6e9875dbd1c7b2c73bdf341de3bc80ccaeb\": rpc error: code = NotFound desc = an error occurred when try to find container \"937ecaf0c892b712a2bf4062e1c1b6e9875dbd1c7b2c73bdf341de3bc80ccaeb\": not found" May 10 00:47:30.035027 kubelet[1968]: I0510 00:47:30.035002 1968 scope.go:117] "RemoveContainer" containerID="f83948521531677220ee6b16d2189dce81bc585fd181932e75b1fd75c42a0976" May 10 00:47:30.035247 env[1194]: time="2025-05-10T00:47:30.035194942Z" level=error msg="ContainerStatus for \"f83948521531677220ee6b16d2189dce81bc585fd181932e75b1fd75c42a0976\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f83948521531677220ee6b16d2189dce81bc585fd181932e75b1fd75c42a0976\": not found" May 10 00:47:30.035436 kubelet[1968]: E0510 00:47:30.035413 1968 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f83948521531677220ee6b16d2189dce81bc585fd181932e75b1fd75c42a0976\": not found" containerID="f83948521531677220ee6b16d2189dce81bc585fd181932e75b1fd75c42a0976" May 10 00:47:30.035490 kubelet[1968]: I0510 00:47:30.035460 1968 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f83948521531677220ee6b16d2189dce81bc585fd181932e75b1fd75c42a0976"} err="failed to get container status \"f83948521531677220ee6b16d2189dce81bc585fd181932e75b1fd75c42a0976\": rpc error: code = NotFound desc = an error occurred when try to find container \"f83948521531677220ee6b16d2189dce81bc585fd181932e75b1fd75c42a0976\": not found" May 10 00:47:30.035490 kubelet[1968]: I0510 00:47:30.035478 1968 scope.go:117] "RemoveContainer" containerID="003aa5d27962726658dafcec188347cf6740f7c7ab76dc0a22087caa02c2aabd" May 10 00:47:30.035752 env[1194]: time="2025-05-10T00:47:30.035705642Z" level=error msg="ContainerStatus for \"003aa5d27962726658dafcec188347cf6740f7c7ab76dc0a22087caa02c2aabd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"003aa5d27962726658dafcec188347cf6740f7c7ab76dc0a22087caa02c2aabd\": not found" May 10 00:47:30.036020 kubelet[1968]: E0510 00:47:30.035986 1968 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"003aa5d27962726658dafcec188347cf6740f7c7ab76dc0a22087caa02c2aabd\": not found" containerID="003aa5d27962726658dafcec188347cf6740f7c7ab76dc0a22087caa02c2aabd" May 10 00:47:30.036097 kubelet[1968]: I0510 00:47:30.036023 1968 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"003aa5d27962726658dafcec188347cf6740f7c7ab76dc0a22087caa02c2aabd"} err="failed to get container status \"003aa5d27962726658dafcec188347cf6740f7c7ab76dc0a22087caa02c2aabd\": rpc error: code = NotFound desc = an error occurred when try to find container \"003aa5d27962726658dafcec188347cf6740f7c7ab76dc0a22087caa02c2aabd\": not found" May 10 00:47:30.036097 kubelet[1968]: I0510 00:47:30.036040 1968 scope.go:117] "RemoveContainer" containerID="f276365b4ca88ae9ccd29e06647237c32fe882553acde124a8d2c4dbf090afb1" May 10 00:47:30.036363 env[1194]: time="2025-05-10T00:47:30.036274526Z" level=error msg="ContainerStatus for \"f276365b4ca88ae9ccd29e06647237c32fe882553acde124a8d2c4dbf090afb1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f276365b4ca88ae9ccd29e06647237c32fe882553acde124a8d2c4dbf090afb1\": not found" May 10 00:47:30.036510 kubelet[1968]: E0510 00:47:30.036468 1968 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f276365b4ca88ae9ccd29e06647237c32fe882553acde124a8d2c4dbf090afb1\": not found" containerID="f276365b4ca88ae9ccd29e06647237c32fe882553acde124a8d2c4dbf090afb1" May 10 00:47:30.036826 kubelet[1968]: I0510 00:47:30.036538 1968 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f276365b4ca88ae9ccd29e06647237c32fe882553acde124a8d2c4dbf090afb1"} err="failed to get container status \"f276365b4ca88ae9ccd29e06647237c32fe882553acde124a8d2c4dbf090afb1\": rpc error: code = NotFound desc = an error occurred when try to find container \"f276365b4ca88ae9ccd29e06647237c32fe882553acde124a8d2c4dbf090afb1\": not found" May 10 00:47:30.036826 kubelet[1968]: I0510 00:47:30.036557 1968 scope.go:117] "RemoveContainer" containerID="a55cade29f1171f1a8f928c2242b3591a8dd9a1a08c1f0946d64dd45f77bd888" May 10 00:47:30.036910 env[1194]: time="2025-05-10T00:47:30.036724366Z" level=error msg="ContainerStatus for 
\"a55cade29f1171f1a8f928c2242b3591a8dd9a1a08c1f0946d64dd45f77bd888\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a55cade29f1171f1a8f928c2242b3591a8dd9a1a08c1f0946d64dd45f77bd888\": not found" May 10 00:47:30.037084 kubelet[1968]: E0510 00:47:30.037064 1968 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a55cade29f1171f1a8f928c2242b3591a8dd9a1a08c1f0946d64dd45f77bd888\": not found" containerID="a55cade29f1171f1a8f928c2242b3591a8dd9a1a08c1f0946d64dd45f77bd888" May 10 00:47:30.037146 kubelet[1968]: I0510 00:47:30.037087 1968 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a55cade29f1171f1a8f928c2242b3591a8dd9a1a08c1f0946d64dd45f77bd888"} err="failed to get container status \"a55cade29f1171f1a8f928c2242b3591a8dd9a1a08c1f0946d64dd45f77bd888\": rpc error: code = NotFound desc = an error occurred when try to find container \"a55cade29f1171f1a8f928c2242b3591a8dd9a1a08c1f0946d64dd45f77bd888\": not found" May 10 00:47:30.041355 kubelet[1968]: I0510 00:47:30.041333 1968 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d3e6647e-6994-46e7-b646-0cac556f27a4-etc-cni-netd\") pod \"d3e6647e-6994-46e7-b646-0cac556f27a4\" (UID: \"d3e6647e-6994-46e7-b646-0cac556f27a4\") " May 10 00:47:30.041424 kubelet[1968]: I0510 00:47:30.041364 1968 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d3e6647e-6994-46e7-b646-0cac556f27a4-cni-path\") pod \"d3e6647e-6994-46e7-b646-0cac556f27a4\" (UID: \"d3e6647e-6994-46e7-b646-0cac556f27a4\") " May 10 00:47:30.041424 kubelet[1968]: I0510 00:47:30.041389 1968 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d3e6647e-6994-46e7-b646-0cac556f27a4-hubble-tls\") pod \"d3e6647e-6994-46e7-b646-0cac556f27a4\" (UID: \"d3e6647e-6994-46e7-b646-0cac556f27a4\") " May 10 00:47:30.041424 kubelet[1968]: I0510 00:47:30.041411 1968 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d3e6647e-6994-46e7-b646-0cac556f27a4-cilium-config-path\") pod \"d3e6647e-6994-46e7-b646-0cac556f27a4\" (UID: \"d3e6647e-6994-46e7-b646-0cac556f27a4\") " May 10 00:47:30.041527 kubelet[1968]: I0510 00:47:30.041427 1968 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d3e6647e-6994-46e7-b646-0cac556f27a4-hostproc\") pod \"d3e6647e-6994-46e7-b646-0cac556f27a4\" (UID: \"d3e6647e-6994-46e7-b646-0cac556f27a4\") " May 10 00:47:30.041527 kubelet[1968]: I0510 00:47:30.041444 1968 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d3e6647e-6994-46e7-b646-0cac556f27a4-host-proc-sys-kernel\") pod \"d3e6647e-6994-46e7-b646-0cac556f27a4\" (UID: \"d3e6647e-6994-46e7-b646-0cac556f27a4\") " May 10 00:47:30.041527 kubelet[1968]: I0510 00:47:30.041461 1968 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d3e6647e-6994-46e7-b646-0cac556f27a4-xtables-lock\") pod \"d3e6647e-6994-46e7-b646-0cac556f27a4\" (UID: \"d3e6647e-6994-46e7-b646-0cac556f27a4\") " May 10 
00:47:30.041527 kubelet[1968]: I0510 00:47:30.041476 1968 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d3e6647e-6994-46e7-b646-0cac556f27a4-cilium-run\") pod \"d3e6647e-6994-46e7-b646-0cac556f27a4\" (UID: \"d3e6647e-6994-46e7-b646-0cac556f27a4\") " May 10 00:47:30.041527 kubelet[1968]: I0510 00:47:30.041489 1968 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d3e6647e-6994-46e7-b646-0cac556f27a4-host-proc-sys-net\") pod \"d3e6647e-6994-46e7-b646-0cac556f27a4\" (UID: \"d3e6647e-6994-46e7-b646-0cac556f27a4\") " May 10 00:47:30.041527 kubelet[1968]: I0510 00:47:30.041515 1968 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kqcr9\" (UniqueName: \"kubernetes.io/projected/d3e6647e-6994-46e7-b646-0cac556f27a4-kube-api-access-kqcr9\") pod \"d3e6647e-6994-46e7-b646-0cac556f27a4\" (UID: \"d3e6647e-6994-46e7-b646-0cac556f27a4\") " May 10 00:47:30.041754 kubelet[1968]: I0510 00:47:30.041532 1968 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d3e6647e-6994-46e7-b646-0cac556f27a4-clustermesh-secrets\") pod \"d3e6647e-6994-46e7-b646-0cac556f27a4\" (UID: \"d3e6647e-6994-46e7-b646-0cac556f27a4\") " May 10 00:47:30.041754 kubelet[1968]: I0510 00:47:30.041551 1968 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d3e6647e-6994-46e7-b646-0cac556f27a4-cilium-cgroup\") pod \"d3e6647e-6994-46e7-b646-0cac556f27a4\" (UID: \"d3e6647e-6994-46e7-b646-0cac556f27a4\") " May 10 00:47:30.041754 kubelet[1968]: I0510 00:47:30.041568 1968 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d3e6647e-6994-46e7-b646-0cac556f27a4-lib-modules\") pod \"d3e6647e-6994-46e7-b646-0cac556f27a4\" (UID: \"d3e6647e-6994-46e7-b646-0cac556f27a4\") " May 10 00:47:30.041754 kubelet[1968]: I0510 00:47:30.041586 1968 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d3e6647e-6994-46e7-b646-0cac556f27a4-bpf-maps\") pod \"d3e6647e-6994-46e7-b646-0cac556f27a4\" (UID: \"d3e6647e-6994-46e7-b646-0cac556f27a4\") " May 10 00:47:30.041754 kubelet[1968]: I0510 00:47:30.041629 1968 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mldfd\" (UniqueName: \"kubernetes.io/projected/9511a17b-4a5d-4d4b-8317-73a7918e5d8e-kube-api-access-mldfd\") on node \"srv-2i5m2.gb1.brightbox.com\" DevicePath \"\"" May 10 00:47:30.041754 kubelet[1968]: I0510 00:47:30.041642 1968 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9511a17b-4a5d-4d4b-8317-73a7918e5d8e-cilium-config-path\") on node \"srv-2i5m2.gb1.brightbox.com\" DevicePath \"\"" May 10 00:47:30.041973 kubelet[1968]: I0510 00:47:30.041918 1968 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3e6647e-6994-46e7-b646-0cac556f27a4-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "d3e6647e-6994-46e7-b646-0cac556f27a4" (UID: "d3e6647e-6994-46e7-b646-0cac556f27a4"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 10 00:47:30.041973 kubelet[1968]: I0510 00:47:30.041951 1968 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3e6647e-6994-46e7-b646-0cac556f27a4-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "d3e6647e-6994-46e7-b646-0cac556f27a4" (UID: "d3e6647e-6994-46e7-b646-0cac556f27a4"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 10 00:47:30.041973 kubelet[1968]: I0510 00:47:30.041966 1968 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3e6647e-6994-46e7-b646-0cac556f27a4-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "d3e6647e-6994-46e7-b646-0cac556f27a4" (UID: "d3e6647e-6994-46e7-b646-0cac556f27a4"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 10 00:47:30.042073 kubelet[1968]: I0510 00:47:30.042024 1968 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3e6647e-6994-46e7-b646-0cac556f27a4-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "d3e6647e-6994-46e7-b646-0cac556f27a4" (UID: "d3e6647e-6994-46e7-b646-0cac556f27a4"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 10 00:47:30.042073 kubelet[1968]: I0510 00:47:30.042046 1968 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3e6647e-6994-46e7-b646-0cac556f27a4-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "d3e6647e-6994-46e7-b646-0cac556f27a4" (UID: "d3e6647e-6994-46e7-b646-0cac556f27a4"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 10 00:47:30.042073 kubelet[1968]: I0510 00:47:30.042062 1968 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3e6647e-6994-46e7-b646-0cac556f27a4-cni-path" (OuterVolumeSpecName: "cni-path") pod "d3e6647e-6994-46e7-b646-0cac556f27a4" (UID: "d3e6647e-6994-46e7-b646-0cac556f27a4"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 10 00:47:30.042485 kubelet[1968]: I0510 00:47:30.042457 1968 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3e6647e-6994-46e7-b646-0cac556f27a4-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "d3e6647e-6994-46e7-b646-0cac556f27a4" (UID: "d3e6647e-6994-46e7-b646-0cac556f27a4"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 10 00:47:30.042558 kubelet[1968]: I0510 00:47:30.042486 1968 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3e6647e-6994-46e7-b646-0cac556f27a4-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "d3e6647e-6994-46e7-b646-0cac556f27a4" (UID: "d3e6647e-6994-46e7-b646-0cac556f27a4"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 10 00:47:30.042627 kubelet[1968]: I0510 00:47:30.042465 1968 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3e6647e-6994-46e7-b646-0cac556f27a4-hostproc" (OuterVolumeSpecName: "hostproc") pod "d3e6647e-6994-46e7-b646-0cac556f27a4" (UID: "d3e6647e-6994-46e7-b646-0cac556f27a4"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 10 00:47:30.042745 kubelet[1968]: I0510 00:47:30.042727 1968 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3e6647e-6994-46e7-b646-0cac556f27a4-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "d3e6647e-6994-46e7-b646-0cac556f27a4" (UID: "d3e6647e-6994-46e7-b646-0cac556f27a4"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 10 00:47:30.045378 kubelet[1968]: I0510 00:47:30.045347 1968 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3e6647e-6994-46e7-b646-0cac556f27a4-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d3e6647e-6994-46e7-b646-0cac556f27a4" (UID: "d3e6647e-6994-46e7-b646-0cac556f27a4"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 10 00:47:30.047960 kubelet[1968]: I0510 00:47:30.047923 1968 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3e6647e-6994-46e7-b646-0cac556f27a4-kube-api-access-kqcr9" (OuterVolumeSpecName: "kube-api-access-kqcr9") pod "d3e6647e-6994-46e7-b646-0cac556f27a4" (UID: "d3e6647e-6994-46e7-b646-0cac556f27a4"). InnerVolumeSpecName "kube-api-access-kqcr9". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 10 00:47:30.048542 kubelet[1968]: I0510 00:47:30.048514 1968 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3e6647e-6994-46e7-b646-0cac556f27a4-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "d3e6647e-6994-46e7-b646-0cac556f27a4" (UID: "d3e6647e-6994-46e7-b646-0cac556f27a4"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 10 00:47:30.052119 kubelet[1968]: I0510 00:47:30.052057 1968 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3e6647e-6994-46e7-b646-0cac556f27a4-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "d3e6647e-6994-46e7-b646-0cac556f27a4" (UID: "d3e6647e-6994-46e7-b646-0cac556f27a4"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" May 10 00:47:30.143043 kubelet[1968]: I0510 00:47:30.142963 1968 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d3e6647e-6994-46e7-b646-0cac556f27a4-xtables-lock\") on node \"srv-2i5m2.gb1.brightbox.com\" DevicePath \"\"" May 10 00:47:30.143043 kubelet[1968]: I0510 00:47:30.143039 1968 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d3e6647e-6994-46e7-b646-0cac556f27a4-cilium-run\") on node \"srv-2i5m2.gb1.brightbox.com\" DevicePath \"\"" May 10 00:47:30.143331 kubelet[1968]: I0510 00:47:30.143069 1968 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d3e6647e-6994-46e7-b646-0cac556f27a4-host-proc-sys-net\") on node \"srv-2i5m2.gb1.brightbox.com\" DevicePath \"\"" May 10 00:47:30.143331 kubelet[1968]: I0510 00:47:30.143094 1968 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d3e6647e-6994-46e7-b646-0cac556f27a4-cilium-cgroup\") on node \"srv-2i5m2.gb1.brightbox.com\" DevicePath \"\"" May 10 00:47:30.143331 kubelet[1968]: I0510 00:47:30.143127 1968 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d3e6647e-6994-46e7-b646-0cac556f27a4-lib-modules\") on node \"srv-2i5m2.gb1.brightbox.com\" DevicePath \"\"" May 10 00:47:30.143331 kubelet[1968]: I0510 00:47:30.143156 1968 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kqcr9\" (UniqueName: \"kubernetes.io/projected/d3e6647e-6994-46e7-b646-0cac556f27a4-kube-api-access-kqcr9\") on node \"srv-2i5m2.gb1.brightbox.com\" DevicePath \"\"" May 10 00:47:30.143331 kubelet[1968]: I0510 00:47:30.143180 1968 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d3e6647e-6994-46e7-b646-0cac556f27a4-clustermesh-secrets\") on node \"srv-2i5m2.gb1.brightbox.com\" DevicePath \"\"" May 10 00:47:30.143331 kubelet[1968]: I0510 00:47:30.143204 1968 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d3e6647e-6994-46e7-b646-0cac556f27a4-bpf-maps\") on node \"srv-2i5m2.gb1.brightbox.com\" DevicePath \"\"" May 10 00:47:30.143331 kubelet[1968]: I0510 00:47:30.143271 1968 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d3e6647e-6994-46e7-b646-0cac556f27a4-etc-cni-netd\") on node \"srv-2i5m2.gb1.brightbox.com\" DevicePath \"\"" May 10 00:47:30.143331 kubelet[1968]: I0510 00:47:30.143299 1968 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d3e6647e-6994-46e7-b646-0cac556f27a4-cni-path\") on node \"srv-2i5m2.gb1.brightbox.com\" DevicePath \"\"" May 10 00:47:30.143691 kubelet[1968]: I0510 00:47:30.143320 1968 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d3e6647e-6994-46e7-b646-0cac556f27a4-hubble-tls\") on node \"srv-2i5m2.gb1.brightbox.com\" DevicePath \"\"" May 10 00:47:30.143691 kubelet[1968]: I0510 00:47:30.143343 1968 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d3e6647e-6994-46e7-b646-0cac556f27a4-cilium-config-path\") on node \"srv-2i5m2.gb1.brightbox.com\" DevicePath \"\"" May 10 00:47:30.143691 kubelet[1968]: I0510 
00:47:30.143365 1968 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d3e6647e-6994-46e7-b646-0cac556f27a4-hostproc\") on node \"srv-2i5m2.gb1.brightbox.com\" DevicePath \"\"" May 10 00:47:30.143691 kubelet[1968]: I0510 00:47:30.143389 1968 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d3e6647e-6994-46e7-b646-0cac556f27a4-host-proc-sys-kernel\") on node \"srv-2i5m2.gb1.brightbox.com\" DevicePath \"\"" May 10 00:47:30.294507 systemd[1]: Removed slice kubepods-burstable-podd3e6647e_6994_46e7_b646_0cac556f27a4.slice. May 10 00:47:30.294788 systemd[1]: kubepods-burstable-podd3e6647e_6994_46e7_b646_0cac556f27a4.slice: Consumed 7.877s CPU time. May 10 00:47:30.701655 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-937ecaf0c892b712a2bf4062e1c1b6e9875dbd1c7b2c73bdf341de3bc80ccaeb-rootfs.mount: Deactivated successfully. May 10 00:47:30.702548 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b9ede0a8e91d0ab33d28bb836d871c0021392d9e6ecc44d9878f04087ae7e41c-rootfs.mount: Deactivated successfully. May 10 00:47:30.702906 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b9ede0a8e91d0ab33d28bb836d871c0021392d9e6ecc44d9878f04087ae7e41c-shm.mount: Deactivated successfully. May 10 00:47:30.703239 systemd[1]: var-lib-kubelet-pods-d3e6647e\x2d6994\x2d46e7\x2db646\x2d0cac556f27a4-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 10 00:47:30.703528 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-db9a312db111016371dc4a2331ad181bdaeb001a7e1c961a208835b8429f8f78-rootfs.mount: Deactivated successfully. May 10 00:47:30.703805 systemd[1]: var-lib-kubelet-pods-9511a17b\x2d4a5d\x2d4d4b\x2d8317\x2d73a7918e5d8e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmldfd.mount: Deactivated successfully. May 10 00:47:30.704054 systemd[1]: var-lib-kubelet-pods-d3e6647e\x2d6994\x2d46e7\x2db646\x2d0cac556f27a4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkqcr9.mount: Deactivated successfully. May 10 00:47:30.704334 systemd[1]: var-lib-kubelet-pods-d3e6647e\x2d6994\x2d46e7\x2db646\x2d0cac556f27a4-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 10 00:47:31.379626 kubelet[1968]: I0510 00:47:31.379550 1968 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9511a17b-4a5d-4d4b-8317-73a7918e5d8e" path="/var/lib/kubelet/pods/9511a17b-4a5d-4d4b-8317-73a7918e5d8e/volumes" May 10 00:47:31.384277 kubelet[1968]: I0510 00:47:31.384196 1968 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3e6647e-6994-46e7-b646-0cac556f27a4" path="/var/lib/kubelet/pods/d3e6647e-6994-46e7-b646-0cac556f27a4/volumes" May 10 00:47:31.496331 kubelet[1968]: E0510 00:47:31.496218 1968 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 10 00:47:31.760389 sshd[3532]: pam_unix(sshd:session): session closed for user core May 10 00:47:31.768265 systemd[1]: sshd@24-10.244.93.58:22-139.178.68.195:56626.service: Deactivated successfully. May 10 00:47:31.770520 systemd[1]: session-24.scope: Deactivated successfully. May 10 00:47:31.772017 systemd-logind[1190]: Session 24 logged out. Waiting for processes to exit. May 10 00:47:31.773582 systemd-logind[1190]: Removed session 24. 
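The systemd mount units deactivated above are just escaped forms of the kubelet volume paths: '/' becomes '-', '-' becomes \x2d, '~' becomes \x7e. A minimal sketch of decoding one of them with systemd-escape (unit name copied from this log, .mount suffix dropped); it should print the original /var/lib/kubelet/pods/... path:

  systemd-escape --unescape --path \
    'var-lib-kubelet-pods-d3e6647e\x2d6994\x2d46e7\x2db646\x2d0cac556f27a4-volumes-kubernetes.io\x7eprojected-hubble\x2dtls'
  # expected: /var/lib/kubelet/pods/d3e6647e-6994-46e7-b646-0cac556f27a4/volumes/kubernetes.io~projected/hubble-tls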
May 10 00:47:31.914546 systemd[1]: Started sshd@25-10.244.93.58:22-139.178.68.195:56628.service. May 10 00:47:32.828958 sshd[3692]: Accepted publickey for core from 139.178.68.195 port 56628 ssh2: RSA SHA256:YQmh9kay2Fbwp/WeJvefEh7C1hXKeGuPiyso2bRkh84 May 10 00:47:32.834068 sshd[3692]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:47:32.846622 systemd-logind[1190]: New session 25 of user core. May 10 00:47:32.848037 systemd[1]: Started session-25.scope. May 10 00:47:33.872507 kubelet[1968]: I0510 00:47:33.872454 1968 memory_manager.go:355] "RemoveStaleState removing state" podUID="9511a17b-4a5d-4d4b-8317-73a7918e5d8e" containerName="cilium-operator" May 10 00:47:33.872507 kubelet[1968]: I0510 00:47:33.872490 1968 memory_manager.go:355] "RemoveStaleState removing state" podUID="d3e6647e-6994-46e7-b646-0cac556f27a4" containerName="cilium-agent" May 10 00:47:33.883134 systemd[1]: Created slice kubepods-burstable-pod18b01533_1e08_41a2_a46d_705e9c85a62c.slice. May 10 00:47:33.889465 kubelet[1968]: I0510 00:47:33.889416 1968 status_manager.go:890] "Failed to get status for pod" podUID="18b01533-1e08-41a2-a46d-705e9c85a62c" pod="kube-system/cilium-k5tmh" err="pods \"cilium-k5tmh\" is forbidden: User \"system:node:srv-2i5m2.gb1.brightbox.com\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'srv-2i5m2.gb1.brightbox.com' and this object" May 10 00:47:33.889960 kubelet[1968]: W0510 00:47:33.889935 1968 reflector.go:569] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:srv-2i5m2.gb1.brightbox.com" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'srv-2i5m2.gb1.brightbox.com' and this object May 10 00:47:33.890067 kubelet[1968]: E0510 00:47:33.889985 1968 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-ipsec-keys\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-ipsec-keys\" is forbidden: User \"system:node:srv-2i5m2.gb1.brightbox.com\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'srv-2i5m2.gb1.brightbox.com' and this object" logger="UnhandledError" May 10 00:47:33.970807 kubelet[1968]: I0510 00:47:33.970704 1968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/18b01533-1e08-41a2-a46d-705e9c85a62c-cilium-config-path\") pod \"cilium-k5tmh\" (UID: \"18b01533-1e08-41a2-a46d-705e9c85a62c\") " pod="kube-system/cilium-k5tmh" May 10 00:47:33.970807 kubelet[1968]: I0510 00:47:33.970821 1968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/18b01533-1e08-41a2-a46d-705e9c85a62c-cilium-cgroup\") pod \"cilium-k5tmh\" (UID: \"18b01533-1e08-41a2-a46d-705e9c85a62c\") " pod="kube-system/cilium-k5tmh" May 10 00:47:33.971202 kubelet[1968]: I0510 00:47:33.970867 1968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/18b01533-1e08-41a2-a46d-705e9c85a62c-lib-modules\") pod \"cilium-k5tmh\" (UID: \"18b01533-1e08-41a2-a46d-705e9c85a62c\") " pod="kube-system/cilium-k5tmh" May 10 00:47:33.971202 kubelet[1968]: I0510 00:47:33.970908 1968 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/18b01533-1e08-41a2-a46d-705e9c85a62c-hostproc\") pod \"cilium-k5tmh\" (UID: \"18b01533-1e08-41a2-a46d-705e9c85a62c\") " pod="kube-system/cilium-k5tmh" May 10 00:47:33.971202 kubelet[1968]: I0510 00:47:33.970957 1968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/18b01533-1e08-41a2-a46d-705e9c85a62c-cilium-run\") pod \"cilium-k5tmh\" (UID: \"18b01533-1e08-41a2-a46d-705e9c85a62c\") " pod="kube-system/cilium-k5tmh" May 10 00:47:33.971202 kubelet[1968]: I0510 00:47:33.971072 1968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/18b01533-1e08-41a2-a46d-705e9c85a62c-hubble-tls\") pod \"cilium-k5tmh\" (UID: \"18b01533-1e08-41a2-a46d-705e9c85a62c\") " pod="kube-system/cilium-k5tmh" May 10 00:47:33.971202 kubelet[1968]: I0510 00:47:33.971114 1968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/18b01533-1e08-41a2-a46d-705e9c85a62c-bpf-maps\") pod \"cilium-k5tmh\" (UID: \"18b01533-1e08-41a2-a46d-705e9c85a62c\") " pod="kube-system/cilium-k5tmh" May 10 00:47:33.971202 kubelet[1968]: I0510 00:47:33.971153 1968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/18b01533-1e08-41a2-a46d-705e9c85a62c-host-proc-sys-net\") pod \"cilium-k5tmh\" (UID: \"18b01533-1e08-41a2-a46d-705e9c85a62c\") " pod="kube-system/cilium-k5tmh" May 10 00:47:33.971571 kubelet[1968]: I0510 00:47:33.971195 1968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/18b01533-1e08-41a2-a46d-705e9c85a62c-cilium-ipsec-secrets\") pod \"cilium-k5tmh\" (UID: \"18b01533-1e08-41a2-a46d-705e9c85a62c\") " pod="kube-system/cilium-k5tmh" May 10 00:47:33.971571 kubelet[1968]: I0510 00:47:33.971279 1968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/18b01533-1e08-41a2-a46d-705e9c85a62c-host-proc-sys-kernel\") pod \"cilium-k5tmh\" (UID: \"18b01533-1e08-41a2-a46d-705e9c85a62c\") " pod="kube-system/cilium-k5tmh" May 10 00:47:33.971571 kubelet[1968]: I0510 00:47:33.971341 1968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/18b01533-1e08-41a2-a46d-705e9c85a62c-cni-path\") pod \"cilium-k5tmh\" (UID: \"18b01533-1e08-41a2-a46d-705e9c85a62c\") " pod="kube-system/cilium-k5tmh" May 10 00:47:33.971571 kubelet[1968]: I0510 00:47:33.971380 1968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/18b01533-1e08-41a2-a46d-705e9c85a62c-etc-cni-netd\") pod \"cilium-k5tmh\" (UID: \"18b01533-1e08-41a2-a46d-705e9c85a62c\") " pod="kube-system/cilium-k5tmh" May 10 00:47:33.971571 kubelet[1968]: I0510 00:47:33.971475 1968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/18b01533-1e08-41a2-a46d-705e9c85a62c-clustermesh-secrets\") pod \"cilium-k5tmh\" (UID: \"18b01533-1e08-41a2-a46d-705e9c85a62c\") " pod="kube-system/cilium-k5tmh" May 10 00:47:33.971859 kubelet[1968]: I0510 00:47:33.971517 1968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ml2rq\" (UniqueName: \"kubernetes.io/projected/18b01533-1e08-41a2-a46d-705e9c85a62c-kube-api-access-ml2rq\") pod \"cilium-k5tmh\" (UID: \"18b01533-1e08-41a2-a46d-705e9c85a62c\") " pod="kube-system/cilium-k5tmh" May 10 00:47:33.971859 kubelet[1968]: I0510 00:47:33.971578 1968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/18b01533-1e08-41a2-a46d-705e9c85a62c-xtables-lock\") pod \"cilium-k5tmh\" (UID: \"18b01533-1e08-41a2-a46d-705e9c85a62c\") " pod="kube-system/cilium-k5tmh" May 10 00:47:34.020853 sshd[3692]: pam_unix(sshd:session): session closed for user core May 10 00:47:34.027601 systemd[1]: sshd@25-10.244.93.58:22-139.178.68.195:56628.service: Deactivated successfully. May 10 00:47:34.028921 systemd[1]: session-25.scope: Deactivated successfully. May 10 00:47:34.029852 systemd-logind[1190]: Session 25 logged out. Waiting for processes to exit. May 10 00:47:34.031418 systemd-logind[1190]: Removed session 25. May 10 00:47:34.173102 systemd[1]: Started sshd@26-10.244.93.58:22-139.178.68.195:56632.service. May 10 00:47:35.066965 sshd[3706]: Accepted publickey for core from 139.178.68.195 port 56632 ssh2: RSA SHA256:YQmh9kay2Fbwp/WeJvefEh7C1hXKeGuPiyso2bRkh84 May 10 00:47:35.070681 sshd[3706]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:47:35.074381 kubelet[1968]: E0510 00:47:35.074297 1968 secret.go:189] Couldn't get secret kube-system/cilium-ipsec-keys: failed to sync secret cache: timed out waiting for the condition May 10 00:47:35.076863 kubelet[1968]: E0510 00:47:35.076436 1968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/18b01533-1e08-41a2-a46d-705e9c85a62c-cilium-ipsec-secrets podName:18b01533-1e08-41a2-a46d-705e9c85a62c nodeName:}" failed. No retries permitted until 2025-05-10 00:47:35.576384665 +0000 UTC m=+174.416667423 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-ipsec-secrets" (UniqueName: "kubernetes.io/secret/18b01533-1e08-41a2-a46d-705e9c85a62c-cilium-ipsec-secrets") pod "cilium-k5tmh" (UID: "18b01533-1e08-41a2-a46d-705e9c85a62c") : failed to sync secret cache: timed out waiting for the condition May 10 00:47:35.081471 systemd-logind[1190]: New session 26 of user core. May 10 00:47:35.082536 systemd[1]: Started session-26.scope. 
May 10 00:47:35.133399 kubelet[1968]: I0510 00:47:35.133313 1968 setters.go:602] "Node became not ready" node="srv-2i5m2.gb1.brightbox.com" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-10T00:47:35Z","lastTransitionTime":"2025-05-10T00:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} May 10 00:47:35.690108 env[1194]: time="2025-05-10T00:47:35.689974828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-k5tmh,Uid:18b01533-1e08-41a2-a46d-705e9c85a62c,Namespace:kube-system,Attempt:0,}" May 10 00:47:35.712345 env[1194]: time="2025-05-10T00:47:35.712208837Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:47:35.712345 env[1194]: time="2025-05-10T00:47:35.712317314Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:47:35.712345 env[1194]: time="2025-05-10T00:47:35.712341746Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:47:35.712558 env[1194]: time="2025-05-10T00:47:35.712533856Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8b64d562fd4fedcd47623b5394776431fa4cfbe617bcc6e8516ebdff63a4815d pid=3724 runtime=io.containerd.runc.v2 May 10 00:47:35.748021 systemd[1]: Started cri-containerd-8b64d562fd4fedcd47623b5394776431fa4cfbe617bcc6e8516ebdff63a4815d.scope. May 10 00:47:35.775624 env[1194]: time="2025-05-10T00:47:35.775570056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-k5tmh,Uid:18b01533-1e08-41a2-a46d-705e9c85a62c,Namespace:kube-system,Attempt:0,} returns sandbox id \"8b64d562fd4fedcd47623b5394776431fa4cfbe617bcc6e8516ebdff63a4815d\"" May 10 00:47:35.778162 env[1194]: time="2025-05-10T00:47:35.778133884Z" level=info msg="CreateContainer within sandbox \"8b64d562fd4fedcd47623b5394776431fa4cfbe617bcc6e8516ebdff63a4815d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 10 00:47:35.789436 env[1194]: time="2025-05-10T00:47:35.789393219Z" level=info msg="CreateContainer within sandbox \"8b64d562fd4fedcd47623b5394776431fa4cfbe617bcc6e8516ebdff63a4815d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1665dd5ef559961abccd6758892d6d302c142bc94cba4683f064f81718cd20ba\"" May 10 00:47:35.792247 env[1194]: time="2025-05-10T00:47:35.790292730Z" level=info msg="StartContainer for \"1665dd5ef559961abccd6758892d6d302c142bc94cba4683f064f81718cd20ba\"" May 10 00:47:35.827085 systemd[1]: Started cri-containerd-1665dd5ef559961abccd6758892d6d302c142bc94cba4683f064f81718cd20ba.scope. May 10 00:47:35.848947 systemd[1]: cri-containerd-1665dd5ef559961abccd6758892d6d302c142bc94cba4683f064f81718cd20ba.scope: Deactivated successfully. 
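The Ready=False condition recorded just above means the kubelet marks the node NotReady solely because no CNI plugin is initialized; the cilium-k5tmh pod whose sandbox is created next is what is expected to restore it. One way to read that condition back from the API (node name taken from the log):

  kubectl get node srv-2i5m2.gb1.brightbox.com \
    -o jsonpath='{.status.conditions[?(@.type=="Ready")].message}{"\n"}'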
May 10 00:47:35.863073 env[1194]: time="2025-05-10T00:47:35.863006993Z" level=info msg="shim disconnected" id=1665dd5ef559961abccd6758892d6d302c142bc94cba4683f064f81718cd20ba May 10 00:47:35.863299 env[1194]: time="2025-05-10T00:47:35.863076104Z" level=warning msg="cleaning up after shim disconnected" id=1665dd5ef559961abccd6758892d6d302c142bc94cba4683f064f81718cd20ba namespace=k8s.io May 10 00:47:35.863299 env[1194]: time="2025-05-10T00:47:35.863087844Z" level=info msg="cleaning up dead shim" May 10 00:47:35.874525 env[1194]: time="2025-05-10T00:47:35.874448799Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:47:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3782 runtime=io.containerd.runc.v2\ntime=\"2025-05-10T00:47:35Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/1665dd5ef559961abccd6758892d6d302c142bc94cba4683f064f81718cd20ba/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" May 10 00:47:35.874955 env[1194]: time="2025-05-10T00:47:35.874808248Z" level=error msg="copy shim log" error="read /proc/self/fd/34: file already closed" May 10 00:47:35.877364 env[1194]: time="2025-05-10T00:47:35.877322094Z" level=error msg="Failed to pipe stderr of container \"1665dd5ef559961abccd6758892d6d302c142bc94cba4683f064f81718cd20ba\"" error="reading from a closed fifo" May 10 00:47:35.877543 env[1194]: time="2025-05-10T00:47:35.877491555Z" level=error msg="Failed to pipe stdout of container \"1665dd5ef559961abccd6758892d6d302c142bc94cba4683f064f81718cd20ba\"" error="reading from a closed fifo" May 10 00:47:35.878679 env[1194]: time="2025-05-10T00:47:35.878615551Z" level=error msg="StartContainer for \"1665dd5ef559961abccd6758892d6d302c142bc94cba4683f064f81718cd20ba\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" May 10 00:47:35.879099 kubelet[1968]: E0510 00:47:35.879046 1968 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="1665dd5ef559961abccd6758892d6d302c142bc94cba4683f064f81718cd20ba" May 10 00:47:35.881494 kubelet[1968]: E0510 00:47:35.881457 1968 kuberuntime_manager.go:1341] "Unhandled Error" err=< May 10 00:47:35.881494 kubelet[1968]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; May 10 00:47:35.881494 kubelet[1968]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; May 10 00:47:35.881494 kubelet[1968]: rm /hostbin/cilium-mount May 10 00:47:35.881708 kubelet[1968]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ml2rq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-k5tmh_kube-system(18b01533-1e08-41a2-a46d-705e9c85a62c): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown May 10 00:47:35.881708 kubelet[1968]: > logger="UnhandledError" May 10 00:47:35.883621 kubelet[1968]: E0510 00:47:35.883559 1968 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-k5tmh" podUID="18b01533-1e08-41a2-a46d-705e9c85a62c" May 10 00:47:35.914021 sshd[3706]: pam_unix(sshd:session): session closed for user core May 10 00:47:35.921515 systemd[1]: sshd@26-10.244.93.58:22-139.178.68.195:56632.service: Deactivated successfully. May 10 00:47:35.922493 systemd[1]: session-26.scope: Deactivated successfully. May 10 00:47:35.923552 systemd-logind[1190]: Session 26 logged out. Waiting for processes to exit. May 10 00:47:35.925033 systemd-logind[1190]: Removed session 26. May 10 00:47:36.024078 env[1194]: time="2025-05-10T00:47:36.022891107Z" level=info msg="StopPodSandbox for \"8b64d562fd4fedcd47623b5394776431fa4cfbe617bcc6e8516ebdff63a4815d\"" May 10 00:47:36.024078 env[1194]: time="2025-05-10T00:47:36.023121446Z" level=info msg="Container to stop \"1665dd5ef559961abccd6758892d6d302c142bc94cba4683f064f81718cd20ba\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 10 00:47:36.040524 systemd[1]: cri-containerd-8b64d562fd4fedcd47623b5394776431fa4cfbe617bcc6e8516ebdff63a4815d.scope: Deactivated successfully. May 10 00:47:36.062978 systemd[1]: Started sshd@27-10.244.93.58:22-139.178.68.195:57058.service. 
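The RunContainerError above is runc failing while it writes the container's SELinux process label to /proc/self/attr/keycreate, i.e. before the mount-cgroup script ever executes; the spec dumped by the kubelet requests SELinuxOptions type spc_t and an Unconfined AppArmor profile. For reference, the init command that never got to run, reconstructed verbatim from that spec, plus a first hedged check of the host's SELinux mode:

  # Env from the spec: CGROUP_ROOT=/run/cilium/cgroupv2, BIN_PATH=/opt/cni/bin
  sh -ec 'cp /usr/bin/cilium-mount /hostbin/cilium-mount;
    nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt \
      "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
    rm /hostbin/cilium-mount'

  # 0 = permissive, 1 = enforcing (only present if selinuxfs is mounted)
  cat /sys/fs/selinux/enforce

Note that the same init container succeeds for the replacement pod cilium-hlq9t further down, so the failure is specific to this attempt rather than to the image.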
May 10 00:47:36.077422 env[1194]: time="2025-05-10T00:47:36.077375467Z" level=info msg="shim disconnected" id=8b64d562fd4fedcd47623b5394776431fa4cfbe617bcc6e8516ebdff63a4815d May 10 00:47:36.077601 env[1194]: time="2025-05-10T00:47:36.077423436Z" level=warning msg="cleaning up after shim disconnected" id=8b64d562fd4fedcd47623b5394776431fa4cfbe617bcc6e8516ebdff63a4815d namespace=k8s.io May 10 00:47:36.077601 env[1194]: time="2025-05-10T00:47:36.077436298Z" level=info msg="cleaning up dead shim" May 10 00:47:36.087356 env[1194]: time="2025-05-10T00:47:36.087307905Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:47:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3817 runtime=io.containerd.runc.v2\n" May 10 00:47:36.087826 env[1194]: time="2025-05-10T00:47:36.087796800Z" level=info msg="TearDown network for sandbox \"8b64d562fd4fedcd47623b5394776431fa4cfbe617bcc6e8516ebdff63a4815d\" successfully" May 10 00:47:36.087932 env[1194]: time="2025-05-10T00:47:36.087915005Z" level=info msg="StopPodSandbox for \"8b64d562fd4fedcd47623b5394776431fa4cfbe617bcc6e8516ebdff63a4815d\" returns successfully" May 10 00:47:36.196582 kubelet[1968]: I0510 00:47:36.196470 1968 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/18b01533-1e08-41a2-a46d-705e9c85a62c-cilium-ipsec-secrets\") pod \"18b01533-1e08-41a2-a46d-705e9c85a62c\" (UID: \"18b01533-1e08-41a2-a46d-705e9c85a62c\") " May 10 00:47:36.197783 kubelet[1968]: I0510 00:47:36.197717 1968 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/18b01533-1e08-41a2-a46d-705e9c85a62c-clustermesh-secrets\") pod \"18b01533-1e08-41a2-a46d-705e9c85a62c\" (UID: \"18b01533-1e08-41a2-a46d-705e9c85a62c\") " May 10 00:47:36.197783 kubelet[1968]: I0510 00:47:36.197783 1968 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/18b01533-1e08-41a2-a46d-705e9c85a62c-etc-cni-netd\") pod \"18b01533-1e08-41a2-a46d-705e9c85a62c\" (UID: \"18b01533-1e08-41a2-a46d-705e9c85a62c\") " May 10 00:47:36.198024 kubelet[1968]: I0510 00:47:36.197823 1968 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/18b01533-1e08-41a2-a46d-705e9c85a62c-hostproc\") pod \"18b01533-1e08-41a2-a46d-705e9c85a62c\" (UID: \"18b01533-1e08-41a2-a46d-705e9c85a62c\") " May 10 00:47:36.198024 kubelet[1968]: I0510 00:47:36.197868 1968 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/18b01533-1e08-41a2-a46d-705e9c85a62c-lib-modules\") pod \"18b01533-1e08-41a2-a46d-705e9c85a62c\" (UID: \"18b01533-1e08-41a2-a46d-705e9c85a62c\") " May 10 00:47:36.198024 kubelet[1968]: I0510 00:47:36.197905 1968 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/18b01533-1e08-41a2-a46d-705e9c85a62c-cilium-run\") pod \"18b01533-1e08-41a2-a46d-705e9c85a62c\" (UID: \"18b01533-1e08-41a2-a46d-705e9c85a62c\") " May 10 00:47:36.198024 kubelet[1968]: I0510 00:47:36.197943 1968 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/18b01533-1e08-41a2-a46d-705e9c85a62c-xtables-lock\") pod \"18b01533-1e08-41a2-a46d-705e9c85a62c\" (UID: 
\"18b01533-1e08-41a2-a46d-705e9c85a62c\") " May 10 00:47:36.198024 kubelet[1968]: I0510 00:47:36.197990 1968 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/18b01533-1e08-41a2-a46d-705e9c85a62c-cilium-cgroup\") pod \"18b01533-1e08-41a2-a46d-705e9c85a62c\" (UID: \"18b01533-1e08-41a2-a46d-705e9c85a62c\") " May 10 00:47:36.198024 kubelet[1968]: I0510 00:47:36.198026 1968 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/18b01533-1e08-41a2-a46d-705e9c85a62c-bpf-maps\") pod \"18b01533-1e08-41a2-a46d-705e9c85a62c\" (UID: \"18b01533-1e08-41a2-a46d-705e9c85a62c\") " May 10 00:47:36.198685 kubelet[1968]: I0510 00:47:36.198062 1968 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/18b01533-1e08-41a2-a46d-705e9c85a62c-host-proc-sys-kernel\") pod \"18b01533-1e08-41a2-a46d-705e9c85a62c\" (UID: \"18b01533-1e08-41a2-a46d-705e9c85a62c\") " May 10 00:47:36.198685 kubelet[1968]: I0510 00:47:36.198110 1968 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ml2rq\" (UniqueName: \"kubernetes.io/projected/18b01533-1e08-41a2-a46d-705e9c85a62c-kube-api-access-ml2rq\") pod \"18b01533-1e08-41a2-a46d-705e9c85a62c\" (UID: \"18b01533-1e08-41a2-a46d-705e9c85a62c\") " May 10 00:47:36.198685 kubelet[1968]: I0510 00:47:36.198153 1968 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/18b01533-1e08-41a2-a46d-705e9c85a62c-cilium-config-path\") pod \"18b01533-1e08-41a2-a46d-705e9c85a62c\" (UID: \"18b01533-1e08-41a2-a46d-705e9c85a62c\") " May 10 00:47:36.198685 kubelet[1968]: I0510 00:47:36.198197 1968 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/18b01533-1e08-41a2-a46d-705e9c85a62c-hubble-tls\") pod \"18b01533-1e08-41a2-a46d-705e9c85a62c\" (UID: \"18b01533-1e08-41a2-a46d-705e9c85a62c\") " May 10 00:47:36.198685 kubelet[1968]: I0510 00:47:36.198258 1968 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/18b01533-1e08-41a2-a46d-705e9c85a62c-host-proc-sys-net\") pod \"18b01533-1e08-41a2-a46d-705e9c85a62c\" (UID: \"18b01533-1e08-41a2-a46d-705e9c85a62c\") " May 10 00:47:36.198685 kubelet[1968]: I0510 00:47:36.198297 1968 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/18b01533-1e08-41a2-a46d-705e9c85a62c-cni-path\") pod \"18b01533-1e08-41a2-a46d-705e9c85a62c\" (UID: \"18b01533-1e08-41a2-a46d-705e9c85a62c\") " May 10 00:47:36.198685 kubelet[1968]: I0510 00:47:36.198405 1968 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18b01533-1e08-41a2-a46d-705e9c85a62c-cni-path" (OuterVolumeSpecName: "cni-path") pod "18b01533-1e08-41a2-a46d-705e9c85a62c" (UID: "18b01533-1e08-41a2-a46d-705e9c85a62c"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 10 00:47:36.200638 kubelet[1968]: I0510 00:47:36.199715 1968 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18b01533-1e08-41a2-a46d-705e9c85a62c-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "18b01533-1e08-41a2-a46d-705e9c85a62c" (UID: "18b01533-1e08-41a2-a46d-705e9c85a62c"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 10 00:47:36.200638 kubelet[1968]: I0510 00:47:36.199810 1968 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18b01533-1e08-41a2-a46d-705e9c85a62c-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "18b01533-1e08-41a2-a46d-705e9c85a62c" (UID: "18b01533-1e08-41a2-a46d-705e9c85a62c"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 10 00:47:36.200638 kubelet[1968]: I0510 00:47:36.199907 1968 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18b01533-1e08-41a2-a46d-705e9c85a62c-hostproc" (OuterVolumeSpecName: "hostproc") pod "18b01533-1e08-41a2-a46d-705e9c85a62c" (UID: "18b01533-1e08-41a2-a46d-705e9c85a62c"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 10 00:47:36.200638 kubelet[1968]: I0510 00:47:36.199958 1968 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18b01533-1e08-41a2-a46d-705e9c85a62c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "18b01533-1e08-41a2-a46d-705e9c85a62c" (UID: "18b01533-1e08-41a2-a46d-705e9c85a62c"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 10 00:47:36.200638 kubelet[1968]: I0510 00:47:36.200005 1968 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18b01533-1e08-41a2-a46d-705e9c85a62c-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "18b01533-1e08-41a2-a46d-705e9c85a62c" (UID: "18b01533-1e08-41a2-a46d-705e9c85a62c"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 10 00:47:36.200638 kubelet[1968]: I0510 00:47:36.200049 1968 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18b01533-1e08-41a2-a46d-705e9c85a62c-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "18b01533-1e08-41a2-a46d-705e9c85a62c" (UID: "18b01533-1e08-41a2-a46d-705e9c85a62c"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 10 00:47:36.202694 kubelet[1968]: I0510 00:47:36.202617 1968 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18b01533-1e08-41a2-a46d-705e9c85a62c-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "18b01533-1e08-41a2-a46d-705e9c85a62c" (UID: "18b01533-1e08-41a2-a46d-705e9c85a62c"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 10 00:47:36.202876 kubelet[1968]: I0510 00:47:36.202706 1968 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18b01533-1e08-41a2-a46d-705e9c85a62c-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "18b01533-1e08-41a2-a46d-705e9c85a62c" (UID: "18b01533-1e08-41a2-a46d-705e9c85a62c"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 10 00:47:36.204002 kubelet[1968]: I0510 00:47:36.203947 1968 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18b01533-1e08-41a2-a46d-705e9c85a62c-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "18b01533-1e08-41a2-a46d-705e9c85a62c" (UID: "18b01533-1e08-41a2-a46d-705e9c85a62c"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 10 00:47:36.209825 kubelet[1968]: I0510 00:47:36.209785 1968 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18b01533-1e08-41a2-a46d-705e9c85a62c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "18b01533-1e08-41a2-a46d-705e9c85a62c" (UID: "18b01533-1e08-41a2-a46d-705e9c85a62c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 10 00:47:36.209999 kubelet[1968]: I0510 00:47:36.209900 1968 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18b01533-1e08-41a2-a46d-705e9c85a62c-kube-api-access-ml2rq" (OuterVolumeSpecName: "kube-api-access-ml2rq") pod "18b01533-1e08-41a2-a46d-705e9c85a62c" (UID: "18b01533-1e08-41a2-a46d-705e9c85a62c"). InnerVolumeSpecName "kube-api-access-ml2rq". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 10 00:47:36.211053 kubelet[1968]: I0510 00:47:36.210974 1968 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18b01533-1e08-41a2-a46d-705e9c85a62c-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "18b01533-1e08-41a2-a46d-705e9c85a62c" (UID: "18b01533-1e08-41a2-a46d-705e9c85a62c"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 10 00:47:36.212465 kubelet[1968]: I0510 00:47:36.212428 1968 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18b01533-1e08-41a2-a46d-705e9c85a62c-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "18b01533-1e08-41a2-a46d-705e9c85a62c" (UID: "18b01533-1e08-41a2-a46d-705e9c85a62c"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 10 00:47:36.214097 kubelet[1968]: I0510 00:47:36.214063 1968 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18b01533-1e08-41a2-a46d-705e9c85a62c-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "18b01533-1e08-41a2-a46d-705e9c85a62c" (UID: "18b01533-1e08-41a2-a46d-705e9c85a62c"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" May 10 00:47:36.299688 kubelet[1968]: I0510 00:47:36.299453 1968 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/18b01533-1e08-41a2-a46d-705e9c85a62c-cilium-cgroup\") on node \"srv-2i5m2.gb1.brightbox.com\" DevicePath \"\"" May 10 00:47:36.300095 kubelet[1968]: I0510 00:47:36.300061 1968 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/18b01533-1e08-41a2-a46d-705e9c85a62c-bpf-maps\") on node \"srv-2i5m2.gb1.brightbox.com\" DevicePath \"\"" May 10 00:47:36.300352 kubelet[1968]: I0510 00:47:36.300301 1968 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/18b01533-1e08-41a2-a46d-705e9c85a62c-host-proc-sys-kernel\") on node \"srv-2i5m2.gb1.brightbox.com\" DevicePath \"\"" May 10 00:47:36.300552 kubelet[1968]: I0510 00:47:36.300522 1968 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ml2rq\" (UniqueName: \"kubernetes.io/projected/18b01533-1e08-41a2-a46d-705e9c85a62c-kube-api-access-ml2rq\") on node \"srv-2i5m2.gb1.brightbox.com\" DevicePath \"\"" May 10 00:47:36.300776 kubelet[1968]: I0510 00:47:36.300745 1968 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/18b01533-1e08-41a2-a46d-705e9c85a62c-xtables-lock\") on node \"srv-2i5m2.gb1.brightbox.com\" DevicePath \"\"" May 10 00:47:36.300959 kubelet[1968]: I0510 00:47:36.300932 1968 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/18b01533-1e08-41a2-a46d-705e9c85a62c-hubble-tls\") on node \"srv-2i5m2.gb1.brightbox.com\" DevicePath \"\"" May 10 00:47:36.301607 kubelet[1968]: I0510 00:47:36.301570 1968 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/18b01533-1e08-41a2-a46d-705e9c85a62c-host-proc-sys-net\") on node \"srv-2i5m2.gb1.brightbox.com\" DevicePath \"\"" May 10 00:47:36.302079 kubelet[1968]: I0510 00:47:36.301864 1968 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/18b01533-1e08-41a2-a46d-705e9c85a62c-cilium-config-path\") on node \"srv-2i5m2.gb1.brightbox.com\" DevicePath \"\"" May 10 00:47:36.302375 kubelet[1968]: I0510 00:47:36.302354 1968 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/18b01533-1e08-41a2-a46d-705e9c85a62c-cni-path\") on node \"srv-2i5m2.gb1.brightbox.com\" DevicePath \"\"" May 10 00:47:36.302524 kubelet[1968]: I0510 00:47:36.302504 1968 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/18b01533-1e08-41a2-a46d-705e9c85a62c-cilium-ipsec-secrets\") on node \"srv-2i5m2.gb1.brightbox.com\" DevicePath \"\"" May 10 00:47:36.302820 kubelet[1968]: I0510 00:47:36.302801 1968 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/18b01533-1e08-41a2-a46d-705e9c85a62c-etc-cni-netd\") on node \"srv-2i5m2.gb1.brightbox.com\" DevicePath \"\"" May 10 00:47:36.303006 kubelet[1968]: I0510 00:47:36.302987 1968 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/18b01533-1e08-41a2-a46d-705e9c85a62c-clustermesh-secrets\") on node \"srv-2i5m2.gb1.brightbox.com\" DevicePath \"\"" May 10 
00:47:36.303187 kubelet[1968]: I0510 00:47:36.303168 1968 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/18b01533-1e08-41a2-a46d-705e9c85a62c-hostproc\") on node \"srv-2i5m2.gb1.brightbox.com\" DevicePath \"\"" May 10 00:47:36.303390 kubelet[1968]: I0510 00:47:36.303371 1968 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/18b01533-1e08-41a2-a46d-705e9c85a62c-lib-modules\") on node \"srv-2i5m2.gb1.brightbox.com\" DevicePath \"\"" May 10 00:47:36.303587 kubelet[1968]: I0510 00:47:36.303569 1968 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/18b01533-1e08-41a2-a46d-705e9c85a62c-cilium-run\") on node \"srv-2i5m2.gb1.brightbox.com\" DevicePath \"\"" May 10 00:47:36.498497 kubelet[1968]: E0510 00:47:36.498371 1968 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 10 00:47:36.600176 systemd[1]: run-containerd-runc-k8s.io-8b64d562fd4fedcd47623b5394776431fa4cfbe617bcc6e8516ebdff63a4815d-runc.74Sa9X.mount: Deactivated successfully. May 10 00:47:36.600602 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8b64d562fd4fedcd47623b5394776431fa4cfbe617bcc6e8516ebdff63a4815d-rootfs.mount: Deactivated successfully. May 10 00:47:36.600724 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8b64d562fd4fedcd47623b5394776431fa4cfbe617bcc6e8516ebdff63a4815d-shm.mount: Deactivated successfully. May 10 00:47:36.600818 systemd[1]: var-lib-kubelet-pods-18b01533\x2d1e08\x2d41a2\x2da46d\x2d705e9c85a62c-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. May 10 00:47:36.600910 systemd[1]: var-lib-kubelet-pods-18b01533\x2d1e08\x2d41a2\x2da46d\x2d705e9c85a62c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dml2rq.mount: Deactivated successfully. May 10 00:47:36.601002 systemd[1]: var-lib-kubelet-pods-18b01533\x2d1e08\x2d41a2\x2da46d\x2d705e9c85a62c-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 10 00:47:36.601104 systemd[1]: var-lib-kubelet-pods-18b01533\x2d1e08\x2d41a2\x2da46d\x2d705e9c85a62c-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 10 00:47:36.960158 sshd[3810]: Accepted publickey for core from 139.178.68.195 port 57058 ssh2: RSA SHA256:YQmh9kay2Fbwp/WeJvefEh7C1hXKeGuPiyso2bRkh84 May 10 00:47:36.964686 sshd[3810]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:47:36.974979 systemd[1]: Started session-27.scope. May 10 00:47:36.975585 systemd-logind[1190]: New session 27 of user core. May 10 00:47:37.028902 kubelet[1968]: I0510 00:47:37.028831 1968 scope.go:117] "RemoveContainer" containerID="1665dd5ef559961abccd6758892d6d302c142bc94cba4683f064f81718cd20ba" May 10 00:47:37.035014 env[1194]: time="2025-05-10T00:47:37.034945156Z" level=info msg="RemoveContainer for \"1665dd5ef559961abccd6758892d6d302c142bc94cba4683f064f81718cd20ba\"" May 10 00:47:37.039823 env[1194]: time="2025-05-10T00:47:37.039745428Z" level=info msg="RemoveContainer for \"1665dd5ef559961abccd6758892d6d302c142bc94cba4683f064f81718cd20ba\" returns successfully" May 10 00:47:37.043120 systemd[1]: Removed slice kubepods-burstable-pod18b01533_1e08_41a2_a46d_705e9c85a62c.slice. 
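Each pod gets its own transient slice under kubepods-burstable.slice, named from the pod UID with '-' replaced by '_'; the slice for the failed cilium-k5tmh pod was removed just above, exactly as d3e6647e's was earlier. A sketch for inspecting these on a node using the systemd cgroup driver, as this one evidently does:

  systemctl list-units --type=slice 'kubepods-burstable-pod*'
  # the slice removed above should now report not-found:
  systemctl status kubepods-burstable-pod18b01533_1e08_41a2_a46d_705e9c85a62c.slice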
May 10 00:47:37.091069 kubelet[1968]: I0510 00:47:37.091021 1968 memory_manager.go:355] "RemoveStaleState removing state" podUID="18b01533-1e08-41a2-a46d-705e9c85a62c" containerName="mount-cgroup" May 10 00:47:37.097126 systemd[1]: Created slice kubepods-burstable-pod5fc7f11a_432d_4112_8385_2d257e83acad.slice. May 10 00:47:37.210494 kubelet[1968]: I0510 00:47:37.209996 1968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5fc7f11a-432d-4112-8385-2d257e83acad-cni-path\") pod \"cilium-hlq9t\" (UID: \"5fc7f11a-432d-4112-8385-2d257e83acad\") " pod="kube-system/cilium-hlq9t" May 10 00:47:37.211802 kubelet[1968]: I0510 00:47:37.211584 1968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5fc7f11a-432d-4112-8385-2d257e83acad-etc-cni-netd\") pod \"cilium-hlq9t\" (UID: \"5fc7f11a-432d-4112-8385-2d257e83acad\") " pod="kube-system/cilium-hlq9t" May 10 00:47:37.211802 kubelet[1968]: I0510 00:47:37.211706 1968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5fc7f11a-432d-4112-8385-2d257e83acad-hubble-tls\") pod \"cilium-hlq9t\" (UID: \"5fc7f11a-432d-4112-8385-2d257e83acad\") " pod="kube-system/cilium-hlq9t" May 10 00:47:37.211802 kubelet[1968]: I0510 00:47:37.211781 1968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5fc7f11a-432d-4112-8385-2d257e83acad-bpf-maps\") pod \"cilium-hlq9t\" (UID: \"5fc7f11a-432d-4112-8385-2d257e83acad\") " pod="kube-system/cilium-hlq9t" May 10 00:47:37.212255 kubelet[1968]: I0510 00:47:37.211952 1968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5fc7f11a-432d-4112-8385-2d257e83acad-lib-modules\") pod \"cilium-hlq9t\" (UID: \"5fc7f11a-432d-4112-8385-2d257e83acad\") " pod="kube-system/cilium-hlq9t" May 10 00:47:37.212255 kubelet[1968]: I0510 00:47:37.212040 1968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5fc7f11a-432d-4112-8385-2d257e83acad-cilium-config-path\") pod \"cilium-hlq9t\" (UID: \"5fc7f11a-432d-4112-8385-2d257e83acad\") " pod="kube-system/cilium-hlq9t" May 10 00:47:37.212255 kubelet[1968]: I0510 00:47:37.212118 1968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5fc7f11a-432d-4112-8385-2d257e83acad-cilium-run\") pod \"cilium-hlq9t\" (UID: \"5fc7f11a-432d-4112-8385-2d257e83acad\") " pod="kube-system/cilium-hlq9t" May 10 00:47:37.212522 kubelet[1968]: I0510 00:47:37.212294 1968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5fc7f11a-432d-4112-8385-2d257e83acad-xtables-lock\") pod \"cilium-hlq9t\" (UID: \"5fc7f11a-432d-4112-8385-2d257e83acad\") " pod="kube-system/cilium-hlq9t" May 10 00:47:37.212522 kubelet[1968]: I0510 00:47:37.212369 1968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5fc7f11a-432d-4112-8385-2d257e83acad-clustermesh-secrets\") pod 
\"cilium-hlq9t\" (UID: \"5fc7f11a-432d-4112-8385-2d257e83acad\") " pod="kube-system/cilium-hlq9t" May 10 00:47:37.212522 kubelet[1968]: I0510 00:47:37.212415 1968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sr7b\" (UniqueName: \"kubernetes.io/projected/5fc7f11a-432d-4112-8385-2d257e83acad-kube-api-access-5sr7b\") pod \"cilium-hlq9t\" (UID: \"5fc7f11a-432d-4112-8385-2d257e83acad\") " pod="kube-system/cilium-hlq9t" May 10 00:47:37.212522 kubelet[1968]: I0510 00:47:37.212493 1968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5fc7f11a-432d-4112-8385-2d257e83acad-host-proc-sys-kernel\") pod \"cilium-hlq9t\" (UID: \"5fc7f11a-432d-4112-8385-2d257e83acad\") " pod="kube-system/cilium-hlq9t" May 10 00:47:37.212862 kubelet[1968]: I0510 00:47:37.212559 1968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5fc7f11a-432d-4112-8385-2d257e83acad-cilium-cgroup\") pod \"cilium-hlq9t\" (UID: \"5fc7f11a-432d-4112-8385-2d257e83acad\") " pod="kube-system/cilium-hlq9t" May 10 00:47:37.212862 kubelet[1968]: I0510 00:47:37.212621 1968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5fc7f11a-432d-4112-8385-2d257e83acad-cilium-ipsec-secrets\") pod \"cilium-hlq9t\" (UID: \"5fc7f11a-432d-4112-8385-2d257e83acad\") " pod="kube-system/cilium-hlq9t" May 10 00:47:37.212862 kubelet[1968]: I0510 00:47:37.212747 1968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5fc7f11a-432d-4112-8385-2d257e83acad-hostproc\") pod \"cilium-hlq9t\" (UID: \"5fc7f11a-432d-4112-8385-2d257e83acad\") " pod="kube-system/cilium-hlq9t" May 10 00:47:37.212862 kubelet[1968]: I0510 00:47:37.212828 1968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5fc7f11a-432d-4112-8385-2d257e83acad-host-proc-sys-net\") pod \"cilium-hlq9t\" (UID: \"5fc7f11a-432d-4112-8385-2d257e83acad\") " pod="kube-system/cilium-hlq9t" May 10 00:47:37.373774 kubelet[1968]: E0510 00:47:37.373632 1968 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-ftnx4" podUID="d13bdef6-fb5b-4b3b-8d17-7a65fa797254" May 10 00:47:37.380550 kubelet[1968]: I0510 00:47:37.380506 1968 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18b01533-1e08-41a2-a46d-705e9c85a62c" path="/var/lib/kubelet/pods/18b01533-1e08-41a2-a46d-705e9c85a62c/volumes" May 10 00:47:37.400270 env[1194]: time="2025-05-10T00:47:37.400186312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hlq9t,Uid:5fc7f11a-432d-4112-8385-2d257e83acad,Namespace:kube-system,Attempt:0,}" May 10 00:47:37.420763 env[1194]: time="2025-05-10T00:47:37.420522681Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:47:37.420763 env[1194]: time="2025-05-10T00:47:37.420575109Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:47:37.420763 env[1194]: time="2025-05-10T00:47:37.420593251Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:47:37.421254 env[1194]: time="2025-05-10T00:47:37.421150980Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ab1115aa571d9cdb71b776a5375e471aae2e534ab1dfc59425e7323c0c9f0787 pid=3847 runtime=io.containerd.runc.v2 May 10 00:47:37.443195 systemd[1]: Started cri-containerd-ab1115aa571d9cdb71b776a5375e471aae2e534ab1dfc59425e7323c0c9f0787.scope. May 10 00:47:37.480745 env[1194]: time="2025-05-10T00:47:37.480614511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hlq9t,Uid:5fc7f11a-432d-4112-8385-2d257e83acad,Namespace:kube-system,Attempt:0,} returns sandbox id \"ab1115aa571d9cdb71b776a5375e471aae2e534ab1dfc59425e7323c0c9f0787\"" May 10 00:47:37.485986 env[1194]: time="2025-05-10T00:47:37.485947685Z" level=info msg="CreateContainer within sandbox \"ab1115aa571d9cdb71b776a5375e471aae2e534ab1dfc59425e7323c0c9f0787\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 10 00:47:37.493467 env[1194]: time="2025-05-10T00:47:37.493422683Z" level=info msg="CreateContainer within sandbox \"ab1115aa571d9cdb71b776a5375e471aae2e534ab1dfc59425e7323c0c9f0787\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b9252e57d81a8158530ec747a8805e8eb60672f879726056d2993e5da8d6ed06\"" May 10 00:47:37.495507 env[1194]: time="2025-05-10T00:47:37.495471384Z" level=info msg="StartContainer for \"b9252e57d81a8158530ec747a8805e8eb60672f879726056d2993e5da8d6ed06\"" May 10 00:47:37.515922 systemd[1]: Started cri-containerd-b9252e57d81a8158530ec747a8805e8eb60672f879726056d2993e5da8d6ed06.scope. May 10 00:47:37.576259 env[1194]: time="2025-05-10T00:47:37.575553569Z" level=info msg="StartContainer for \"b9252e57d81a8158530ec747a8805e8eb60672f879726056d2993e5da8d6ed06\" returns successfully" May 10 00:47:37.620881 systemd[1]: cri-containerd-b9252e57d81a8158530ec747a8805e8eb60672f879726056d2993e5da8d6ed06.scope: Deactivated successfully. May 10 00:47:37.647635 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b9252e57d81a8158530ec747a8805e8eb60672f879726056d2993e5da8d6ed06-rootfs.mount: Deactivated successfully. 
May 10 00:47:37.657534 env[1194]: time="2025-05-10T00:47:37.657485647Z" level=info msg="shim disconnected" id=b9252e57d81a8158530ec747a8805e8eb60672f879726056d2993e5da8d6ed06 May 10 00:47:37.657534 env[1194]: time="2025-05-10T00:47:37.657533145Z" level=warning msg="cleaning up after shim disconnected" id=b9252e57d81a8158530ec747a8805e8eb60672f879726056d2993e5da8d6ed06 namespace=k8s.io May 10 00:47:37.657534 env[1194]: time="2025-05-10T00:47:37.657543713Z" level=info msg="cleaning up dead shim" May 10 00:47:37.675386 env[1194]: time="2025-05-10T00:47:37.675336622Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:47:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3932 runtime=io.containerd.runc.v2\n" May 10 00:47:38.041931 env[1194]: time="2025-05-10T00:47:38.041863839Z" level=info msg="CreateContainer within sandbox \"ab1115aa571d9cdb71b776a5375e471aae2e534ab1dfc59425e7323c0c9f0787\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 10 00:47:38.059817 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2513426826.mount: Deactivated successfully. May 10 00:47:38.064770 env[1194]: time="2025-05-10T00:47:38.064691757Z" level=info msg="CreateContainer within sandbox \"ab1115aa571d9cdb71b776a5375e471aae2e534ab1dfc59425e7323c0c9f0787\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"81ca203736c27b30fe86b71b4aad61a88c13a5b9f6f49c2b97f63f92a8bc5af6\"" May 10 00:47:38.065633 env[1194]: time="2025-05-10T00:47:38.065604496Z" level=info msg="StartContainer for \"81ca203736c27b30fe86b71b4aad61a88c13a5b9f6f49c2b97f63f92a8bc5af6\"" May 10 00:47:38.102957 systemd[1]: Started cri-containerd-81ca203736c27b30fe86b71b4aad61a88c13a5b9f6f49c2b97f63f92a8bc5af6.scope. May 10 00:47:38.148325 env[1194]: time="2025-05-10T00:47:38.148274811Z" level=info msg="StartContainer for \"81ca203736c27b30fe86b71b4aad61a88c13a5b9f6f49c2b97f63f92a8bc5af6\" returns successfully" May 10 00:47:38.158736 systemd[1]: cri-containerd-81ca203736c27b30fe86b71b4aad61a88c13a5b9f6f49c2b97f63f92a8bc5af6.scope: Deactivated successfully. May 10 00:47:38.193713 env[1194]: time="2025-05-10T00:47:38.193640011Z" level=info msg="shim disconnected" id=81ca203736c27b30fe86b71b4aad61a88c13a5b9f6f49c2b97f63f92a8bc5af6 May 10 00:47:38.194060 env[1194]: time="2025-05-10T00:47:38.194040142Z" level=warning msg="cleaning up after shim disconnected" id=81ca203736c27b30fe86b71b4aad61a88c13a5b9f6f49c2b97f63f92a8bc5af6 namespace=k8s.io May 10 00:47:38.194136 env[1194]: time="2025-05-10T00:47:38.194123101Z" level=info msg="cleaning up dead shim" May 10 00:47:38.212501 env[1194]: time="2025-05-10T00:47:38.212425137Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:47:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3990 runtime=io.containerd.runc.v2\n" May 10 00:47:38.599844 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-81ca203736c27b30fe86b71b4aad61a88c13a5b9f6f49c2b97f63f92a8bc5af6-rootfs.mount: Deactivated successfully. 
May 10 00:47:38.971927 kubelet[1968]: W0510 00:47:38.971823 1968 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod18b01533_1e08_41a2_a46d_705e9c85a62c.slice/cri-containerd-1665dd5ef559961abccd6758892d6d302c142bc94cba4683f064f81718cd20ba.scope WatchSource:0}: container "1665dd5ef559961abccd6758892d6d302c142bc94cba4683f064f81718cd20ba" in namespace "k8s.io": not found May 10 00:47:39.051028 env[1194]: time="2025-05-10T00:47:39.049903594Z" level=info msg="CreateContainer within sandbox \"ab1115aa571d9cdb71b776a5375e471aae2e534ab1dfc59425e7323c0c9f0787\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 10 00:47:39.068164 env[1194]: time="2025-05-10T00:47:39.068106983Z" level=info msg="CreateContainer within sandbox \"ab1115aa571d9cdb71b776a5375e471aae2e534ab1dfc59425e7323c0c9f0787\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a8960cd846c7184b002fd5b4ceed091762ee7d5657702de54b59a4a929ffbde4\"" May 10 00:47:39.068944 env[1194]: time="2025-05-10T00:47:39.068923062Z" level=info msg="StartContainer for \"a8960cd846c7184b002fd5b4ceed091762ee7d5657702de54b59a4a929ffbde4\"" May 10 00:47:39.115441 systemd[1]: Started cri-containerd-a8960cd846c7184b002fd5b4ceed091762ee7d5657702de54b59a4a929ffbde4.scope. May 10 00:47:39.150547 env[1194]: time="2025-05-10T00:47:39.150478555Z" level=info msg="StartContainer for \"a8960cd846c7184b002fd5b4ceed091762ee7d5657702de54b59a4a929ffbde4\" returns successfully" May 10 00:47:39.154746 systemd[1]: cri-containerd-a8960cd846c7184b002fd5b4ceed091762ee7d5657702de54b59a4a929ffbde4.scope: Deactivated successfully. May 10 00:47:39.183363 env[1194]: time="2025-05-10T00:47:39.183178538Z" level=info msg="shim disconnected" id=a8960cd846c7184b002fd5b4ceed091762ee7d5657702de54b59a4a929ffbde4 May 10 00:47:39.183363 env[1194]: time="2025-05-10T00:47:39.183243806Z" level=warning msg="cleaning up after shim disconnected" id=a8960cd846c7184b002fd5b4ceed091762ee7d5657702de54b59a4a929ffbde4 namespace=k8s.io May 10 00:47:39.183363 env[1194]: time="2025-05-10T00:47:39.183255854Z" level=info msg="cleaning up dead shim" May 10 00:47:39.198124 env[1194]: time="2025-05-10T00:47:39.198075567Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:47:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4047 runtime=io.containerd.runc.v2\n" May 10 00:47:39.374099 kubelet[1968]: E0510 00:47:39.373879 1968 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-ftnx4" podUID="d13bdef6-fb5b-4b3b-8d17-7a65fa797254" May 10 00:47:39.600227 systemd[1]: run-containerd-runc-k8s.io-a8960cd846c7184b002fd5b4ceed091762ee7d5657702de54b59a4a929ffbde4-runc.X2THsi.mount: Deactivated successfully. May 10 00:47:39.600341 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a8960cd846c7184b002fd5b4ceed091762ee7d5657702de54b59a4a929ffbde4-rootfs.mount: Deactivated successfully. May 10 00:47:40.056960 env[1194]: time="2025-05-10T00:47:40.056642882Z" level=info msg="CreateContainer within sandbox \"ab1115aa571d9cdb71b776a5375e471aae2e534ab1dfc59425e7323c0c9f0787\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 10 00:47:40.070821 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2263154777.mount: Deactivated successfully. 
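The mount-bpf-fs init step just ran to completion for cilium-hlq9t; if it did its usual job it mounted the BPF filesystem on the host. A hedged check, assuming Cilium's default mount point of /sys/fs/bpf:

  mount -t bpf
  # or:
  grep /sys/fs/bpf /proc/mounts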
May 10 00:47:40.079848 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2644826945.mount: Deactivated successfully.
May 10 00:47:40.083773 env[1194]: time="2025-05-10T00:47:40.083731148Z" level=info msg="CreateContainer within sandbox \"ab1115aa571d9cdb71b776a5375e471aae2e534ab1dfc59425e7323c0c9f0787\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d8d9b8ee0cc059e2630c982a3341d047cf17edcf3e829298fdf22404cccbc515\""
May 10 00:47:40.084666 env[1194]: time="2025-05-10T00:47:40.084628888Z" level=info msg="StartContainer for \"d8d9b8ee0cc059e2630c982a3341d047cf17edcf3e829298fdf22404cccbc515\""
May 10 00:47:40.108656 systemd[1]: Started cri-containerd-d8d9b8ee0cc059e2630c982a3341d047cf17edcf3e829298fdf22404cccbc515.scope.
May 10 00:47:40.157458 systemd[1]: cri-containerd-d8d9b8ee0cc059e2630c982a3341d047cf17edcf3e829298fdf22404cccbc515.scope: Deactivated successfully.
May 10 00:47:40.162591 env[1194]: time="2025-05-10T00:47:40.162318628Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fc7f11a_432d_4112_8385_2d257e83acad.slice/cri-containerd-d8d9b8ee0cc059e2630c982a3341d047cf17edcf3e829298fdf22404cccbc515.scope/memory.events\": no such file or directory"
May 10 00:47:40.163432 env[1194]: time="2025-05-10T00:47:40.163373220Z" level=info msg="StartContainer for \"d8d9b8ee0cc059e2630c982a3341d047cf17edcf3e829298fdf22404cccbc515\" returns successfully"
May 10 00:47:40.189372 env[1194]: time="2025-05-10T00:47:40.189313694Z" level=info msg="shim disconnected" id=d8d9b8ee0cc059e2630c982a3341d047cf17edcf3e829298fdf22404cccbc515
May 10 00:47:40.189659 env[1194]: time="2025-05-10T00:47:40.189642291Z" level=warning msg="cleaning up after shim disconnected" id=d8d9b8ee0cc059e2630c982a3341d047cf17edcf3e829298fdf22404cccbc515 namespace=k8s.io
May 10 00:47:40.189765 env[1194]: time="2025-05-10T00:47:40.189749942Z" level=info msg="cleaning up dead shim"
May 10 00:47:40.200140 env[1194]: time="2025-05-10T00:47:40.200093957Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:47:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4101 runtime=io.containerd.runc.v2\n"
May 10 00:47:41.067622 env[1194]: time="2025-05-10T00:47:41.067548101Z" level=info msg="CreateContainer within sandbox \"ab1115aa571d9cdb71b776a5375e471aae2e534ab1dfc59425e7323c0c9f0787\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 10 00:47:41.087647 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3233193783.mount: Deactivated successfully.
May 10 00:47:41.094030 env[1194]: time="2025-05-10T00:47:41.093966727Z" level=info msg="CreateContainer within sandbox \"ab1115aa571d9cdb71b776a5375e471aae2e534ab1dfc59425e7323c0c9f0787\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b1b89102b18fe1dc7a2b2e03f7619c2f45dbadef5d41437efc59bebb29beb2de\""
May 10 00:47:41.096261 env[1194]: time="2025-05-10T00:47:41.095249388Z" level=info msg="StartContainer for \"b1b89102b18fe1dc7a2b2e03f7619c2f45dbadef5d41437efc59bebb29beb2de\""
May 10 00:47:41.121331 systemd[1]: Started cri-containerd-b1b89102b18fe1dc7a2b2e03f7619c2f45dbadef5d41437efc59bebb29beb2de.scope.
May 10 00:47:41.162659 env[1194]: time="2025-05-10T00:47:41.162601796Z" level=info msg="StartContainer for \"b1b89102b18fe1dc7a2b2e03f7619c2f45dbadef5d41437efc59bebb29beb2de\" returns successfully"
May 10 00:47:41.373579 kubelet[1968]: E0510 00:47:41.373420 1968 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-ftnx4" podUID="d13bdef6-fb5b-4b3b-8d17-7a65fa797254"
May 10 00:47:41.390322 env[1194]: time="2025-05-10T00:47:41.390057067Z" level=info msg="StopPodSandbox for \"b9ede0a8e91d0ab33d28bb836d871c0021392d9e6ecc44d9878f04087ae7e41c\""
May 10 00:47:41.390322 env[1194]: time="2025-05-10T00:47:41.390178467Z" level=info msg="TearDown network for sandbox \"b9ede0a8e91d0ab33d28bb836d871c0021392d9e6ecc44d9878f04087ae7e41c\" successfully"
May 10 00:47:41.390322 env[1194]: time="2025-05-10T00:47:41.390249228Z" level=info msg="StopPodSandbox for \"b9ede0a8e91d0ab33d28bb836d871c0021392d9e6ecc44d9878f04087ae7e41c\" returns successfully"
May 10 00:47:41.391094 env[1194]: time="2025-05-10T00:47:41.391065295Z" level=info msg="RemovePodSandbox for \"b9ede0a8e91d0ab33d28bb836d871c0021392d9e6ecc44d9878f04087ae7e41c\""
May 10 00:47:41.391186 env[1194]: time="2025-05-10T00:47:41.391098808Z" level=info msg="Forcibly stopping sandbox \"b9ede0a8e91d0ab33d28bb836d871c0021392d9e6ecc44d9878f04087ae7e41c\""
May 10 00:47:41.391244 env[1194]: time="2025-05-10T00:47:41.391184305Z" level=info msg="TearDown network for sandbox \"b9ede0a8e91d0ab33d28bb836d871c0021392d9e6ecc44d9878f04087ae7e41c\" successfully"
May 10 00:47:41.394536 env[1194]: time="2025-05-10T00:47:41.394492067Z" level=info msg="RemovePodSandbox \"b9ede0a8e91d0ab33d28bb836d871c0021392d9e6ecc44d9878f04087ae7e41c\" returns successfully"
May 10 00:47:41.395135 env[1194]: time="2025-05-10T00:47:41.394968967Z" level=info msg="StopPodSandbox for \"8b64d562fd4fedcd47623b5394776431fa4cfbe617bcc6e8516ebdff63a4815d\""
May 10 00:47:41.395135 env[1194]: time="2025-05-10T00:47:41.395044758Z" level=info msg="TearDown network for sandbox \"8b64d562fd4fedcd47623b5394776431fa4cfbe617bcc6e8516ebdff63a4815d\" successfully"
May 10 00:47:41.395135 env[1194]: time="2025-05-10T00:47:41.395076979Z" level=info msg="StopPodSandbox for \"8b64d562fd4fedcd47623b5394776431fa4cfbe617bcc6e8516ebdff63a4815d\" returns successfully"
May 10 00:47:41.396712 env[1194]: time="2025-05-10T00:47:41.395538729Z" level=info msg="RemovePodSandbox for \"8b64d562fd4fedcd47623b5394776431fa4cfbe617bcc6e8516ebdff63a4815d\""
May 10 00:47:41.396712 env[1194]: time="2025-05-10T00:47:41.395561624Z" level=info msg="Forcibly stopping sandbox \"8b64d562fd4fedcd47623b5394776431fa4cfbe617bcc6e8516ebdff63a4815d\""
May 10 00:47:41.396712 env[1194]: time="2025-05-10T00:47:41.395627419Z" level=info msg="TearDown network for sandbox \"8b64d562fd4fedcd47623b5394776431fa4cfbe617bcc6e8516ebdff63a4815d\" successfully"
May 10 00:47:41.397962 env[1194]: time="2025-05-10T00:47:41.397926947Z" level=info msg="RemovePodSandbox \"8b64d562fd4fedcd47623b5394776431fa4cfbe617bcc6e8516ebdff63a4815d\" returns successfully"
May 10 00:47:41.398399 env[1194]: time="2025-05-10T00:47:41.398377271Z" level=info msg="StopPodSandbox for \"db9a312db111016371dc4a2331ad181bdaeb001a7e1c961a208835b8429f8f78\""
May 10 00:47:41.398603 env[1194]: time="2025-05-10T00:47:41.398537690Z" level=info msg="TearDown network for sandbox \"db9a312db111016371dc4a2331ad181bdaeb001a7e1c961a208835b8429f8f78\" successfully"
May 10 00:47:41.398680 env[1194]: time="2025-05-10T00:47:41.398664210Z" level=info msg="StopPodSandbox for \"db9a312db111016371dc4a2331ad181bdaeb001a7e1c961a208835b8429f8f78\" returns successfully"
May 10 00:47:41.399095 env[1194]: time="2025-05-10T00:47:41.399076112Z" level=info msg="RemovePodSandbox for \"db9a312db111016371dc4a2331ad181bdaeb001a7e1c961a208835b8429f8f78\""
May 10 00:47:41.399218 env[1194]: time="2025-05-10T00:47:41.399187426Z" level=info msg="Forcibly stopping sandbox \"db9a312db111016371dc4a2331ad181bdaeb001a7e1c961a208835b8429f8f78\""
May 10 00:47:41.399348 env[1194]: time="2025-05-10T00:47:41.399332218Z" level=info msg="TearDown network for sandbox \"db9a312db111016371dc4a2331ad181bdaeb001a7e1c961a208835b8429f8f78\" successfully"
May 10 00:47:41.402127 env[1194]: time="2025-05-10T00:47:41.402083295Z" level=info msg="RemovePodSandbox \"db9a312db111016371dc4a2331ad181bdaeb001a7e1c961a208835b8429f8f78\" returns successfully"
May 10 00:47:41.688368 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
May 10 00:47:42.089528 kubelet[1968]: W0510 00:47:42.089462 1968 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fc7f11a_432d_4112_8385_2d257e83acad.slice/cri-containerd-b9252e57d81a8158530ec747a8805e8eb60672f879726056d2993e5da8d6ed06.scope WatchSource:0}: task b9252e57d81a8158530ec747a8805e8eb60672f879726056d2993e5da8d6ed06 not found: not found
May 10 00:47:42.108627 kubelet[1968]: I0510 00:47:42.108536 1968 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-hlq9t" podStartSLOduration=5.108500141 podStartE2EDuration="5.108500141s" podCreationTimestamp="2025-05-10 00:47:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:47:42.102093829 +0000 UTC m=+180.942376586" watchObservedRunningTime="2025-05-10 00:47:42.108500141 +0000 UTC m=+180.948782901"
May 10 00:47:43.799396 systemd[1]: run-containerd-runc-k8s.io-b1b89102b18fe1dc7a2b2e03f7619c2f45dbadef5d41437efc59bebb29beb2de-runc.jG3e8o.mount: Deactivated successfully.
May 10 00:47:44.939351 systemd-networkd[1031]: lxc_health: Link UP
May 10 00:47:44.974271 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
May 10 00:47:44.974264 systemd-networkd[1031]: lxc_health: Gained carrier
May 10 00:47:45.204792 kubelet[1968]: W0510 00:47:45.204551 1968 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fc7f11a_432d_4112_8385_2d257e83acad.slice/cri-containerd-81ca203736c27b30fe86b71b4aad61a88c13a5b9f6f49c2b97f63f92a8bc5af6.scope WatchSource:0}: task 81ca203736c27b30fe86b71b4aad61a88c13a5b9f6f49c2b97f63f92a8bc5af6 not found: not found
May 10 00:47:46.000257 systemd[1]: run-containerd-runc-k8s.io-b1b89102b18fe1dc7a2b2e03f7619c2f45dbadef5d41437efc59bebb29beb2de-runc.n7klhu.mount: Deactivated successfully.
May 10 00:47:46.230441 systemd-networkd[1031]: lxc_health: Gained IPv6LL
May 10 00:47:48.283812 systemd[1]: run-containerd-runc-k8s.io-b1b89102b18fe1dc7a2b2e03f7619c2f45dbadef5d41437efc59bebb29beb2de-runc.OzboAz.mount: Deactivated successfully.
May 10 00:47:48.329251 kubelet[1968]: W0510 00:47:48.326862 1968 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fc7f11a_432d_4112_8385_2d257e83acad.slice/cri-containerd-a8960cd846c7184b002fd5b4ceed091762ee7d5657702de54b59a4a929ffbde4.scope WatchSource:0}: task a8960cd846c7184b002fd5b4ceed091762ee7d5657702de54b59a4a929ffbde4 not found: not found
May 10 00:47:50.533808 kubelet[1968]: E0510 00:47:50.533572 1968 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:48324->127.0.0.1:45619: write tcp 127.0.0.1:48324->127.0.0.1:45619: write: broken pipe
May 10 00:47:51.436810 kubelet[1968]: W0510 00:47:51.436709 1968 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fc7f11a_432d_4112_8385_2d257e83acad.slice/cri-containerd-d8d9b8ee0cc059e2630c982a3341d047cf17edcf3e829298fdf22404cccbc515.scope WatchSource:0}: task d8d9b8ee0cc059e2630c982a3341d047cf17edcf3e829298fdf22404cccbc515 not found: not found
May 10 00:47:52.922978 sshd[3810]: pam_unix(sshd:session): session closed for user core
May 10 00:47:52.927803 systemd[1]: sshd@27-10.244.93.58:22-139.178.68.195:57058.service: Deactivated successfully.
May 10 00:47:52.928609 systemd[1]: session-27.scope: Deactivated successfully.
May 10 00:47:52.929677 systemd-logind[1190]: Session 27 logged out. Waiting for processes to exit.
May 10 00:47:52.930665 systemd-logind[1190]: Removed session 27.