Nov 1 03:49:48.892763 kernel: Linux version 5.15.192-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Oct 31 23:02:53 -00 2025
Nov 1 03:49:48.892786 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=c4c72a4f851a6da01cbc7150799371516ef8311ea786098908d8eb164df01ee2
Nov 1 03:49:48.892799 kernel: BIOS-provided physical RAM map:
Nov 1 03:49:48.892807 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 1 03:49:48.892813 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 1 03:49:48.892820 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 1 03:49:48.892829 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Nov 1 03:49:48.892836 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Nov 1 03:49:48.892843 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Nov 1 03:49:48.892850 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Nov 1 03:49:48.892860 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 1 03:49:48.892867 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 1 03:49:48.892874 kernel: NX (Execute Disable) protection: active
Nov 1 03:49:48.892881 kernel: SMBIOS 2.8 present.
Nov 1 03:49:48.892890 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
Nov 1 03:49:48.892898 kernel: Hypervisor detected: KVM
Nov 1 03:49:48.892908 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 1 03:49:48.892916 kernel: kvm-clock: cpu 0, msr 331a0001, primary cpu clock
Nov 1 03:49:48.892924 kernel: kvm-clock: using sched offset of 4137216331 cycles
Nov 1 03:49:48.892933 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 1 03:49:48.892941 kernel: tsc: Detected 2294.576 MHz processor
Nov 1 03:49:48.892949 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 1 03:49:48.892964 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 1 03:49:48.892972 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Nov 1 03:49:48.892980 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 1 03:49:48.892991 kernel: Using GB pages for direct mapping
Nov 1 03:49:48.892998 kernel: ACPI: Early table checksum verification disabled
Nov 1 03:49:48.893006 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Nov 1 03:49:48.893014 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 03:49:48.893022 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 03:49:48.893030 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 03:49:48.893038 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Nov 1 03:49:48.893045 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 03:49:48.893053 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 03:49:48.893063 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 03:49:48.893071 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 03:49:48.893079 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Nov 1 03:49:48.893087 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Nov 1 03:49:48.893094 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Nov 1 03:49:48.893102 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Nov 1 03:49:48.893114 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Nov 1 03:49:48.893125 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Nov 1 03:49:48.893134 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Nov 1 03:49:48.893142 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Nov 1 03:49:48.893150 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Nov 1 03:49:48.893170 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Nov 1 03:49:48.900176 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0
Nov 1 03:49:48.900187 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Nov 1 03:49:48.900200 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0
Nov 1 03:49:48.900208 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Nov 1 03:49:48.900216 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0
Nov 1 03:49:48.900224 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Nov 1 03:49:48.900233 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0
Nov 1 03:49:48.900241 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Nov 1 03:49:48.900250 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0
Nov 1 03:49:48.900258 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Nov 1 03:49:48.900266 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0
Nov 1 03:49:48.900275 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Nov 1 03:49:48.900285 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0
Nov 1 03:49:48.900294 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Nov 1 03:49:48.900303 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Nov 1 03:49:48.900311 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Nov 1 03:49:48.900320 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff]
Nov 1 03:49:48.900328 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff]
Nov 1 03:49:48.900337 kernel: Zone ranges:
Nov 1 03:49:48.900346 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 1 03:49:48.900355 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Nov 1 03:49:48.900365 kernel: Normal empty
Nov 1 03:49:48.900374 kernel: Movable zone start for each node
Nov 1 03:49:48.900383 kernel: Early memory node ranges
Nov 1 03:49:48.900391 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Nov 1 03:49:48.900400 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Nov 1 03:49:48.900408 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Nov 1 03:49:48.900417 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 1 03:49:48.900425 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 1 03:49:48.900434 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Nov 1 03:49:48.900445 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 1 03:49:48.900454 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 1 03:49:48.900462 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 1 03:49:48.900471 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 1 03:49:48.900479 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 1 03:49:48.900488 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 1 03:49:48.900496 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 1 03:49:48.900505 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 1 03:49:48.900513 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 1 03:49:48.900524 kernel: TSC deadline timer available
Nov 1 03:49:48.900532 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs
Nov 1 03:49:48.900541 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Nov 1 03:49:48.900550 kernel: Booting paravirtualized kernel on KVM
Nov 1 03:49:48.900559 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 1 03:49:48.900567 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:16 nr_node_ids:1
Nov 1 03:49:48.900576 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u262144
Nov 1 03:49:48.900585 kernel: pcpu-alloc: s188696 r8192 d32488 u262144 alloc=1*2097152
Nov 1 03:49:48.900593 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Nov 1 03:49:48.900603 kernel: kvm-guest: stealtime: cpu 0, msr 7da1c0c0
Nov 1 03:49:48.900612 kernel: kvm-guest: PV spinlocks enabled
Nov 1 03:49:48.900621 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Nov 1 03:49:48.900630 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804
Nov 1 03:49:48.900638 kernel: Policy zone: DMA32
Nov 1 03:49:48.900648 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=c4c72a4f851a6da01cbc7150799371516ef8311ea786098908d8eb164df01ee2
Nov 1 03:49:48.900657 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Nov 1 03:49:48.900666 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 1 03:49:48.900677 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 1 03:49:48.900685 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 1 03:49:48.900694 kernel: Memory: 1903832K/2096616K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47496K init, 4084K bss, 192524K reserved, 0K cma-reserved)
Nov 1 03:49:48.900703 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Nov 1 03:49:48.900711 kernel: ftrace: allocating 34614 entries in 136 pages
Nov 1 03:49:48.900720 kernel: ftrace: allocated 136 pages with 2 groups
Nov 1 03:49:48.900729 kernel: rcu: Hierarchical RCU implementation.
Nov 1 03:49:48.900738 kernel: rcu: RCU event tracing is enabled.
Nov 1 03:49:48.900747 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Nov 1 03:49:48.900758 kernel: Rude variant of Tasks RCU enabled.
Nov 1 03:49:48.900767 kernel: Tracing variant of Tasks RCU enabled.
Nov 1 03:49:48.900776 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 1 03:49:48.900784 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Nov 1 03:49:48.900793 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Nov 1 03:49:48.900801 kernel: random: crng init done
Nov 1 03:49:48.900811 kernel: Console: colour VGA+ 80x25
Nov 1 03:49:48.900829 kernel: printk: console [tty0] enabled
Nov 1 03:49:48.900838 kernel: printk: console [ttyS0] enabled
Nov 1 03:49:48.900847 kernel: ACPI: Core revision 20210730
Nov 1 03:49:48.900856 kernel: APIC: Switch to symmetric I/O mode setup
Nov 1 03:49:48.900865 kernel: x2apic enabled
Nov 1 03:49:48.900877 kernel: Switched APIC routing to physical x2apic.
Nov 1 03:49:48.900886 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2113312ac93, max_idle_ns: 440795244843 ns
Nov 1 03:49:48.900895 kernel: Calibrating delay loop (skipped) preset value.. 4589.15 BogoMIPS (lpj=2294576)
Nov 1 03:49:48.900904 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 1 03:49:48.900914 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Nov 1 03:49:48.900925 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Nov 1 03:49:48.900934 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 1 03:49:48.900943 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
Nov 1 03:49:48.900952 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Nov 1 03:49:48.900967 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Nov 1 03:49:48.900976 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Nov 1 03:49:48.900985 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Nov 1 03:49:48.900994 kernel: RETBleed: Mitigation: Enhanced IBRS
Nov 1 03:49:48.901003 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 1 03:49:48.901012 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Nov 1 03:49:48.901023 kernel: TAA: Mitigation: Clear CPU buffers
Nov 1 03:49:48.901032 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Nov 1 03:49:48.901041 kernel: GDS: Unknown: Dependent on hypervisor status
Nov 1 03:49:48.901050 kernel: active return thunk: its_return_thunk
Nov 1 03:49:48.901059 kernel: ITS: Mitigation: Aligned branch/return thunks
Nov 1 03:49:48.901068 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 1 03:49:48.901077 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 1 03:49:48.901086 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 1 03:49:48.901095 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Nov 1 03:49:48.901105 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Nov 1 03:49:48.901114 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Nov 1 03:49:48.901125 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Nov 1 03:49:48.901134 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 1 03:49:48.901143 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Nov 1 03:49:48.901152 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Nov 1 03:49:48.901172 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Nov 1 03:49:48.901181 kernel: x86/fpu: xstate_offset[9]: 2432, xstate_sizes[9]: 8
Nov 1 03:49:48.901191 kernel: x86/fpu: Enabled xstate features 0x2e7, context size is 2440 bytes, using 'compacted' format.
Nov 1 03:49:48.901200 kernel: Freeing SMP alternatives memory: 32K
Nov 1 03:49:48.901209 kernel: pid_max: default: 32768 minimum: 301
Nov 1 03:49:48.901218 kernel: LSM: Security Framework initializing
Nov 1 03:49:48.901226 kernel: SELinux: Initializing.
Nov 1 03:49:48.901236 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Nov 1 03:49:48.901248 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Nov 1 03:49:48.901257 kernel: smpboot: CPU0: Intel Xeon Processor (Cascadelake) (family: 0x6, model: 0x55, stepping: 0x6)
Nov 1 03:49:48.901266 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Nov 1 03:49:48.901275 kernel: signal: max sigframe size: 3632
Nov 1 03:49:48.901284 kernel: rcu: Hierarchical SRCU implementation.
Nov 1 03:49:48.901294 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Nov 1 03:49:48.901303 kernel: smp: Bringing up secondary CPUs ...
Nov 1 03:49:48.901312 kernel: x86: Booting SMP configuration:
Nov 1 03:49:48.901321 kernel: .... node #0, CPUs: #1
Nov 1 03:49:48.901330 kernel: kvm-clock: cpu 1, msr 331a0041, secondary cpu clock
Nov 1 03:49:48.901342 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Nov 1 03:49:48.901351 kernel: kvm-guest: stealtime: cpu 1, msr 7da5c0c0
Nov 1 03:49:48.901360 kernel: smp: Brought up 1 node, 2 CPUs
Nov 1 03:49:48.901369 kernel: smpboot: Max logical packages: 16
Nov 1 03:49:48.901378 kernel: smpboot: Total of 2 processors activated (9178.30 BogoMIPS)
Nov 1 03:49:48.901388 kernel: devtmpfs: initialized
Nov 1 03:49:48.901397 kernel: x86/mm: Memory block size: 128MB
Nov 1 03:49:48.901406 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 1 03:49:48.901416 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Nov 1 03:49:48.901427 kernel: pinctrl core: initialized pinctrl subsystem
Nov 1 03:49:48.901436 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 1 03:49:48.901446 kernel: audit: initializing netlink subsys (disabled)
Nov 1 03:49:48.901455 kernel: audit: type=2000 audit(1761968988.359:1): state=initialized audit_enabled=0 res=1
Nov 1 03:49:48.901464 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 1 03:49:48.901473 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 1 03:49:48.901482 kernel: cpuidle: using governor menu
Nov 1 03:49:48.901491 kernel: ACPI: bus type PCI registered
Nov 1 03:49:48.901500 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 1 03:49:48.901511 kernel: dca service started, version 1.12.1
Nov 1 03:49:48.901520 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Nov 1 03:49:48.901530 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820
Nov 1 03:49:48.901539 kernel: PCI: Using configuration type 1 for base access
Nov 1 03:49:48.901548 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 1 03:49:48.901557 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Nov 1 03:49:48.901566 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Nov 1 03:49:48.901575 kernel: ACPI: Added _OSI(Module Device)
Nov 1 03:49:48.901584 kernel: ACPI: Added _OSI(Processor Device)
Nov 1 03:49:48.901596 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 1 03:49:48.901605 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Nov 1 03:49:48.901614 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Nov 1 03:49:48.901623 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Nov 1 03:49:48.901632 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 1 03:49:48.901641 kernel: ACPI: Interpreter enabled
Nov 1 03:49:48.901650 kernel: ACPI: PM: (supports S0 S5)
Nov 1 03:49:48.901659 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 1 03:49:48.901668 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 1 03:49:48.901680 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Nov 1 03:49:48.901689 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 1 03:49:48.901839 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 1 03:49:48.901933 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Nov 1 03:49:48.902028 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Nov 1 03:49:48.902041 kernel: PCI host bridge to bus 0000:00
Nov 1 03:49:48.902231 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 1 03:49:48.902316 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 1 03:49:48.902393 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 1 03:49:48.902470 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Nov 1 03:49:48.902546 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 1 03:49:48.902621 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Nov 1 03:49:48.902720 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 1 03:49:48.909238 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Nov 1 03:49:48.909376 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000
Nov 1 03:49:48.909470 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref]
Nov 1 03:49:48.909559 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff]
Nov 1 03:49:48.909646 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref]
Nov 1 03:49:48.909733 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 1 03:49:48.909833 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Nov 1 03:49:48.909928 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff]
Nov 1 03:49:48.910040 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Nov 1 03:49:48.910135 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff]
Nov 1 03:49:48.910247 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Nov 1 03:49:48.910339 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff]
Nov 1 03:49:48.910431 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Nov 1 03:49:48.910525 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff]
Nov 1 03:49:48.910634 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Nov 1 03:49:48.910729 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff]
Nov 1 03:49:48.910822 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Nov 1 03:49:48.910915 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff]
Nov 1 03:49:48.911021 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Nov 1 03:49:48.911115 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff]
Nov 1 03:49:48.911219 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Nov 1 03:49:48.911314 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff]
Nov 1 03:49:48.911409 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Nov 1 03:49:48.911498 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df]
Nov 1 03:49:48.911586 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff]
Nov 1 03:49:48.911673 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Nov 1 03:49:48.911768 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref]
Nov 1 03:49:48.911865 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Nov 1 03:49:48.911953 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Nov 1 03:49:48.912085 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff]
Nov 1 03:49:48.912183 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref]
Nov 1 03:49:48.912318 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Nov 1 03:49:48.912416 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Nov 1 03:49:48.912520 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Nov 1 03:49:48.912610 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff]
Nov 1 03:49:48.912697 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff]
Nov 1 03:49:48.912795 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Nov 1 03:49:48.912888 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Nov 1 03:49:48.912994 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400
Nov 1 03:49:48.913091 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit]
Nov 1 03:49:48.913191 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Nov 1 03:49:48.913280 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Nov 1 03:49:48.913370 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Nov 1 03:49:48.913469 kernel: pci_bus 0000:02: extended config space not accessible
Nov 1 03:49:48.913574 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000
Nov 1 03:49:48.913676 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f]
Nov 1 03:49:48.913769 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Nov 1 03:49:48.913865 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Nov 1 03:49:48.913969 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330
Nov 1 03:49:48.914061 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit]
Nov 1 03:49:48.914153 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Nov 1 03:49:48.914284 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Nov 1 03:49:48.914375 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Nov 1 03:49:48.914474 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00
Nov 1 03:49:48.914566 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Nov 1 03:49:48.914677 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Nov 1 03:49:48.914765 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Nov 1 03:49:48.916215 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Nov 1 03:49:48.916328 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Nov 1 03:49:48.916424 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Nov 1 03:49:48.916525 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Nov 1 03:49:48.916623 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Nov 1 03:49:48.916717 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Nov 1 03:49:48.916812 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Nov 1 03:49:48.916907 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Nov 1 03:49:48.917013 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Nov 1 03:49:48.917107 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Nov 1 03:49:48.917319 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Nov 1 03:49:48.917413 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Nov 1 03:49:48.917499 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Nov 1 03:49:48.917587 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Nov 1 03:49:48.917672 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Nov 1 03:49:48.917758 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Nov 1 03:49:48.917771 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 1 03:49:48.917781 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 1 03:49:48.917790 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 1 03:49:48.917803 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 1 03:49:48.917812 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Nov 1 03:49:48.917821 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Nov 1 03:49:48.917830 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Nov 1 03:49:48.917839 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Nov 1 03:49:48.917848 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Nov 1 03:49:48.917857 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Nov 1 03:49:48.917866 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Nov 1 03:49:48.917875 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Nov 1 03:49:48.917887 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Nov 1 03:49:48.917896 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Nov 1 03:49:48.917906 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Nov 1 03:49:48.917915 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Nov 1 03:49:48.917924 kernel: iommu: Default domain type: Translated
Nov 1 03:49:48.917933 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 1 03:49:48.918027 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Nov 1 03:49:48.918115 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 1 03:49:48.918216 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Nov 1 03:49:48.918231 kernel: vgaarb: loaded
Nov 1 03:49:48.918241 kernel: pps_core: LinuxPPS API ver. 1 registered
Nov 1 03:49:48.918251 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Nov 1 03:49:48.918260 kernel: PTP clock support registered
Nov 1 03:49:48.918269 kernel: PCI: Using ACPI for IRQ routing
Nov 1 03:49:48.918278 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 1 03:49:48.918287 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Nov 1 03:49:48.918296 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Nov 1 03:49:48.918308 kernel: clocksource: Switched to clocksource kvm-clock
Nov 1 03:49:48.918317 kernel: VFS: Disk quotas dquot_6.6.0
Nov 1 03:49:48.918327 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 1 03:49:48.918336 kernel: pnp: PnP ACPI init
Nov 1 03:49:48.918434 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Nov 1 03:49:48.918447 kernel: pnp: PnP ACPI: found 5 devices
Nov 1 03:49:48.918457 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 1 03:49:48.918466 kernel: NET: Registered PF_INET protocol family
Nov 1 03:49:48.918475 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 1 03:49:48.918487 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Nov 1 03:49:48.918497 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 1 03:49:48.918506 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 1 03:49:48.918515 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
Nov 1 03:49:48.918524 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Nov 1 03:49:48.918533 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Nov 1 03:49:48.918543 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Nov 1 03:49:48.918552 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 1 03:49:48.918564 kernel: NET: Registered PF_XDP protocol family
Nov 1 03:49:48.918652 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000
Nov 1 03:49:48.918741 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Nov 1 03:49:48.918828 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Nov 1 03:49:48.918916 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Nov 1 03:49:48.919009 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Nov 1 03:49:48.919098 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Nov 1 03:49:48.921249 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Nov 1 03:49:48.921344 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Nov 1 03:49:48.921432 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Nov 1 03:49:48.921520 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Nov 1 03:49:48.921606 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Nov 1 03:49:48.921693 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Nov 1 03:49:48.921799 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Nov 1 03:49:48.921889 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Nov 1 03:49:48.921984 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Nov 1 03:49:48.922072 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Nov 1 03:49:48.922177 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Nov 1 03:49:48.922270 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Nov 1 03:49:48.922357 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Nov 1 03:49:48.922446 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Nov 1 03:49:48.922532 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Nov 1 03:49:48.922622 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Nov 1 03:49:48.922710 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Nov 1 03:49:48.922797 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Nov 1 03:49:48.922885 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Nov 1 03:49:48.922979 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Nov 1 03:49:48.923071 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Nov 1 03:49:48.923169 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Nov 1 03:49:48.923257 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Nov 1 03:49:48.923344 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Nov 1 03:49:48.923431 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Nov 1 03:49:48.923519 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Nov 1 03:49:48.923606 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Nov 1 03:49:48.923695 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Nov 1 03:49:48.923782 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Nov 1 03:49:48.923869 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Nov 1 03:49:48.923970 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Nov 1 03:49:48.924058 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Nov 1 03:49:48.924146 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Nov 1 03:49:48.924246 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Nov 1 03:49:48.924336 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Nov 1 03:49:48.924423 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Nov 1 03:49:48.924512 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Nov 1 03:49:48.924604 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Nov 1 03:49:48.924691 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Nov 1 03:49:48.924779 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Nov 1 03:49:48.924869 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Nov 1 03:49:48.924963 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Nov 1 03:49:48.925054 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Nov 1 03:49:48.925143 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Nov 1 03:49:48.933270 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 1 03:49:48.933354 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 1 03:49:48.933433 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 1 03:49:48.933510 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Nov 1 03:49:48.933588 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Nov 1 03:49:48.933665 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Nov 1 03:49:48.933757 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Nov 1 03:49:48.933846 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Nov 1 03:49:48.933929 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Nov 1 03:49:48.934029 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Nov 1 03:49:48.934120 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff]
Nov 1 03:49:48.934217 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Nov 1 03:49:48.934300 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Nov 1 03:49:48.934390 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff]
Nov 1 03:49:48.934476 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Nov 1 03:49:48.934560 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Nov 1 03:49:48.934650 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Nov 1 03:49:48.934732 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Nov 1 03:49:48.934816 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Nov 1 03:49:48.934904 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff]
Nov 1 03:49:48.934995 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Nov 1 03:49:48.935082 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Nov 1 03:49:48.935179 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff]
Nov 1 03:49:48.935263 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Nov 1 03:49:48.935349 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Nov 1 03:49:48.935445 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff]
Nov 1 03:49:48.935530 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Nov 1 03:49:48.935617 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Nov 1 03:49:48.935705 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff]
Nov 1 03:49:48.935788 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff]
Nov 1 03:49:48.935870 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Nov 1 03:49:48.935884 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Nov 1 03:49:48.935894 kernel: PCI: CLS 0 bytes, default 64
Nov 1 03:49:48.935905 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Nov 1 03:49:48.935915 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB)
Nov 1 03:49:48.935928 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Nov 1 03:49:48.935938 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2113312ac93, max_idle_ns: 440795244843 ns
Nov 1 03:49:48.935948 kernel: Initialise system trusted keyrings
Nov 1 03:49:48.935964 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Nov 1 03:49:48.935974 kernel: Key type asymmetric registered
Nov 1 03:49:48.935984 kernel: Asymmetric key parser 'x509' registered
Nov 1 03:49:48.935994 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Nov 1 03:49:48.936003 kernel: io scheduler mq-deadline registered
Nov 1 03:49:48.936013 kernel: io scheduler kyber registered
Nov 1 03:49:48.936025 kernel: io scheduler bfq registered
Nov 1 03:49:48.936117 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24
Nov 1 03:49:48.936219 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24
Nov 1 03:49:48.936309 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Nov 1 03:49:48.936398 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25
Nov 1 03:49:48.936486 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25
Nov 1 03:49:48.936574 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Nov 1 03:49:48.936665 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26
Nov 1 03:49:48.936754 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26
Nov 1 03:49:48.936842 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Nov 1 03:49:48.936931 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27
Nov 1 03:49:48.937024 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27
Nov 1 03:49:48.937110 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Nov 1 03:49:48.937236 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28
Nov 1 03:49:48.937325 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28
Nov 1 03:49:48.937411 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Nov 1 03:49:48.937499 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29
Nov 1 03:49:48.937585 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29
Nov 1 03:49:48.937671 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Nov 1 03:49:48.937763 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30
Nov 1 03:49:48.937849 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30
Nov 1 03:49:48.937936 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Nov 1 03:49:48.938031 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31
Nov 1 03:49:48.938118 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31
Nov 1 03:49:48.940247 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Nov 1 03:49:48.940269 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 1 03:49:48.940280 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Nov 1 03:49:48.940290 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Nov 1 03:49:48.940300 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 1 03:49:48.940310 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 1 03:49:48.940321 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 1 03:49:48.940331 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 1 03:49:48.940341 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 1 03:49:48.940353 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Nov 1 03:49:48.940445 kernel: rtc_cmos 00:03: RTC can wake from S4
Nov 1 03:49:48.940528 kernel: rtc_cmos 00:03: registered as rtc0
Nov 1 03:49:48.940608 kernel: rtc_cmos 00:03: setting system clock to 2025-11-01T03:49:48 UTC (1761968988)
Nov 1 03:49:48.940687 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Nov 1 03:49:48.940699 kernel: intel_pstate: CPU model not supported
Nov 1 03:49:48.940710 kernel: NET: Registered PF_INET6 protocol family
Nov 1 03:49:48.940720 kernel: Segment Routing with IPv6
Nov 1 03:49:48.940732 kernel: In-situ OAM (IOAM) with IPv6
Nov 1 03:49:48.940742 kernel: NET: Registered PF_PACKET protocol family
Nov 1 03:49:48.940752 kernel: Key type dns_resolver registered
Nov 1 03:49:48.940762 kernel: IPI shorthand broadcast: enabled
Nov 1 03:49:48.940771 kernel: sched_clock: Marking stable (772002277, 118570658)->(1143460789, -252887854)
Nov 1 03:49:48.940781 kernel: registered taskstats version 1
Nov 1 03:49:48.940794 kernel: Loading compiled-in X.509 certificates
Nov 1 03:49:48.940804 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.192-flatcar: f2055682e6899ad8548fd369019e7b47939b46a0'
Nov 1 03:49:48.940813 kernel: Key type .fscrypt registered
Nov 1 03:49:48.940826 kernel: Key type fscrypt-provisioning registered
Nov 1 03:49:48.940835 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 1 03:49:48.940846 kernel: ima: Allocated hash algorithm: sha1
Nov 1 03:49:48.940856 kernel: ima: No architecture policies found
Nov 1 03:49:48.940866 kernel: clk: Disabling unused clocks
Nov 1 03:49:48.940876 kernel: Freeing unused kernel image (initmem) memory: 47496K
Nov 1 03:49:48.940885 kernel: Write protecting the kernel read-only data: 28672k
Nov 1 03:49:48.940895 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Nov 1 03:49:48.940905 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K
Nov 1 03:49:48.940917 kernel: Run /init as init process
Nov 1 03:49:48.940927 kernel: with arguments:
Nov 1 03:49:48.940937 kernel: /init
Nov 1 03:49:48.940946 kernel: with environment:
Nov 1 03:49:48.940962 kernel: HOME=/
Nov 1 03:49:48.940972 kernel: TERM=linux
Nov 1 03:49:48.940982 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Nov 1 03:49:48.940995 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 1 03:49:48.941011 systemd[1]: Detected virtualization kvm.
Nov 1 03:49:48.941022 systemd[1]: Detected architecture x86-64.
Nov 1 03:49:48.941032 systemd[1]: Running in initrd.
Nov 1 03:49:48.941042 systemd[1]: No hostname configured, using default hostname.
Nov 1 03:49:48.941052 systemd[1]: Hostname set to .
Nov 1 03:49:48.941064 systemd[1]: Initializing machine ID from VM UUID.
Nov 1 03:49:48.941074 systemd[1]: Queued start job for default target initrd.target.
Nov 1 03:49:48.941084 systemd[1]: Started systemd-ask-password-console.path.
Nov 1 03:49:48.941097 systemd[1]: Reached target cryptsetup.target.
Nov 1 03:49:48.941107 systemd[1]: Reached target paths.target.
Nov 1 03:49:48.941118 systemd[1]: Reached target slices.target.
Nov 1 03:49:48.941128 systemd[1]: Reached target swap.target.
Nov 1 03:49:48.941138 systemd[1]: Reached target timers.target.
Nov 1 03:49:48.941149 systemd[1]: Listening on iscsid.socket.
Nov 1 03:49:48.941171 systemd[1]: Listening on iscsiuio.socket.
Nov 1 03:49:48.941185 systemd[1]: Listening on systemd-journald-audit.socket.
Nov 1 03:49:48.941196 systemd[1]: Listening on systemd-journald-dev-log.socket.
Nov 1 03:49:48.941206 systemd[1]: Listening on systemd-journald.socket.
Nov 1 03:49:48.941217 systemd[1]: Listening on systemd-networkd.socket.
Nov 1 03:49:48.941228 systemd[1]: Listening on systemd-udevd-control.socket.
Nov 1 03:49:48.941238 systemd[1]: Listening on systemd-udevd-kernel.socket.
Nov 1 03:49:48.941249 systemd[1]: Reached target sockets.target.
Nov 1 03:49:48.941260 systemd[1]: Starting kmod-static-nodes.service...
Nov 1 03:49:48.941270 systemd[1]: Finished network-cleanup.service.
Nov 1 03:49:48.941283 systemd[1]: Starting systemd-fsck-usr.service...
Nov 1 03:49:48.941294 systemd[1]: Starting systemd-journald.service...
Nov 1 03:49:48.941304 systemd[1]: Starting systemd-modules-load.service...
Nov 1 03:49:48.941315 systemd[1]: Starting systemd-resolved.service...
Nov 1 03:49:48.941325 systemd[1]: Starting systemd-vconsole-setup.service...
Nov 1 03:49:48.941336 systemd[1]: Finished kmod-static-nodes.service.
Nov 1 03:49:48.941346 systemd[1]: Finished systemd-fsck-usr.service.
Nov 1 03:49:48.941363 systemd-journald[201]: Journal started
Nov 1 03:49:48.941420 systemd-journald[201]: Runtime Journal (/run/log/journal/2ef918f7a0dd4942ab0ad535cc13f698) is 4.7M, max 38.1M, 33.3M free.
Nov 1 03:49:48.909337 systemd-modules-load[202]: Inserted module 'overlay'
Nov 1 03:49:48.958351 systemd[1]: Started systemd-resolved.service.
Nov 1 03:49:48.958381 kernel: audit: type=1130 audit(1761968988.943:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 03:49:48.958396 systemd[1]: Started systemd-journald.service.
Nov 1 03:49:48.958409 kernel: audit: type=1130 audit(1761968988.948:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 03:49:48.958422 kernel: audit: type=1130 audit(1761968988.949:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 03:49:48.958436 kernel: audit: type=1130 audit(1761968988.955:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 03:49:48.943000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 03:49:48.948000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 03:49:48.949000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 03:49:48.955000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 03:49:48.931593 systemd-resolved[203]: Positive Trust Anchors:
Nov 1 03:49:48.931608 systemd-resolved[203]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 1 03:49:48.931644 systemd-resolved[203]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Nov 1 03:49:48.964540 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 1 03:49:48.934898 systemd-resolved[203]: Defaulting to hostname 'linux'.
Nov 1 03:49:48.950130 systemd[1]: Finished systemd-vconsole-setup.service.
Nov 1 03:49:48.969334 kernel: Bridge firewalling registered
Nov 1 03:49:48.955733 systemd[1]: Reached target nss-lookup.target.
Nov 1 03:49:48.964000 systemd[1]: Starting dracut-cmdline-ask.service...
Nov 1 03:49:48.966365 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Nov 1 03:49:48.967283 systemd-modules-load[202]: Inserted module 'br_netfilter'
Nov 1 03:49:48.977000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 03:49:48.977457 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Nov 1 03:49:48.982000 kernel: audit: type=1130 audit(1761968988.977:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 03:49:48.987086 systemd[1]: Finished dracut-cmdline-ask.service.
Nov 1 03:49:48.988275 systemd[1]: Starting dracut-cmdline.service...
Nov 1 03:49:48.987000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 03:49:48.994175 kernel: audit: type=1130 audit(1761968988.987:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 03:49:48.997177 kernel: SCSI subsystem initialized
Nov 1 03:49:49.002251 dracut-cmdline[218]: dracut-dracut-053
Nov 1 03:49:49.006206 dracut-cmdline[218]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=c4c72a4f851a6da01cbc7150799371516ef8311ea786098908d8eb164df01ee2
Nov 1 03:49:49.024238 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 1 03:49:49.024262 kernel: device-mapper: uevent: version 1.0.3
Nov 1 03:49:49.024275 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Nov 1 03:49:49.024982 systemd-modules-load[202]: Inserted module 'dm_multipath'
Nov 1 03:49:49.025698 systemd[1]: Finished systemd-modules-load.service.
Nov 1 03:49:49.029168 kernel: audit: type=1130 audit(1761968989.025:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 03:49:49.025000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 03:49:49.026894 systemd[1]: Starting systemd-sysctl.service...
Nov 1 03:49:49.037035 systemd[1]: Finished systemd-sysctl.service.
Nov 1 03:49:49.040210 kernel: audit: type=1130 audit(1761968989.037:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 03:49:49.037000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 03:49:49.087196 kernel: Loading iSCSI transport class v2.0-870.
Nov 1 03:49:49.107223 kernel: iscsi: registered transport (tcp)
Nov 1 03:49:49.135308 kernel: iscsi: registered transport (qla4xxx)
Nov 1 03:49:49.135410 kernel: QLogic iSCSI HBA Driver
Nov 1 03:49:49.176640 systemd[1]: Finished dracut-cmdline.service.
Nov 1 03:49:49.176000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 03:49:49.178006 systemd[1]: Starting dracut-pre-udev.service...
Nov 1 03:49:49.181457 kernel: audit: type=1130 audit(1761968989.176:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 03:49:49.231286 kernel: raid6: avx512x4 gen() 17869 MB/s
Nov 1 03:49:49.248236 kernel: raid6: avx512x4 xor() 7836 MB/s
Nov 1 03:49:49.265237 kernel: raid6: avx512x2 gen() 17772 MB/s
Nov 1 03:49:49.282245 kernel: raid6: avx512x2 xor() 22224 MB/s
Nov 1 03:49:49.300307 kernel: raid6: avx512x1 gen() 17796 MB/s
Nov 1 03:49:49.317288 kernel: raid6: avx512x1 xor() 20020 MB/s
Nov 1 03:49:49.334228 kernel: raid6: avx2x4 gen() 17658 MB/s
Nov 1 03:49:49.351282 kernel: raid6: avx2x4 xor() 7267 MB/s
Nov 1 03:49:49.368249 kernel: raid6: avx2x2 gen() 17702 MB/s
Nov 1 03:49:49.385245 kernel: raid6: avx2x2 xor() 15861 MB/s
Nov 1 03:49:49.402238 kernel: raid6: avx2x1 gen() 13428 MB/s
Nov 1 03:49:49.419230 kernel: raid6: avx2x1 xor() 13786 MB/s
Nov 1 03:49:49.437265 kernel: raid6: sse2x4 gen() 5594 MB/s
Nov 1 03:49:49.454259 kernel: raid6: sse2x4 xor() 4717 MB/s
Nov 1 03:49:49.471243 kernel: raid6: sse2x2 gen() 8455 MB/s
Nov 1 03:49:49.488279 kernel: raid6: sse2x2 xor() 5218 MB/s
Nov 1 03:49:49.505257 kernel: raid6: sse2x1 gen() 7744 MB/s
Nov 1 03:49:49.522843 kernel: raid6: sse2x1 xor() 4100 MB/s
Nov 1 03:49:49.522972 kernel: raid6: using algorithm avx512x4 gen() 17869 MB/s
Nov 1 03:49:49.523011 kernel: raid6: .... xor() 7836 MB/s, rmw enabled
Nov 1 03:49:49.523540 kernel: raid6: using avx512x2 recovery algorithm
Nov 1 03:49:49.539228 kernel: xor: automatically using best checksumming function avx
Nov 1 03:49:49.650454 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
Nov 1 03:49:49.666709 systemd[1]: Finished dracut-pre-udev.service.
Nov 1 03:49:49.667000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 03:49:49.668000 audit: BPF prog-id=7 op=LOAD
Nov 1 03:49:49.669000 audit: BPF prog-id=8 op=LOAD
Nov 1 03:49:49.670293 systemd[1]: Starting systemd-udevd.service...
Nov 1 03:49:49.683828 systemd-udevd[402]: Using default interface naming scheme 'v252'.
Nov 1 03:49:49.689360 systemd[1]: Started systemd-udevd.service.
Nov 1 03:49:49.693000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 03:49:49.695866 systemd[1]: Starting dracut-pre-trigger.service...
Nov 1 03:49:49.711563 dracut-pre-trigger[417]: rd.md=0: removing MD RAID activation
Nov 1 03:49:49.759199 systemd[1]: Finished dracut-pre-trigger.service.
Nov 1 03:49:49.759000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 03:49:49.760739 systemd[1]: Starting systemd-udev-trigger.service...
Nov 1 03:49:49.817494 systemd[1]: Finished systemd-udev-trigger.service.
Nov 1 03:49:49.818000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 03:49:49.890233 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Nov 1 03:49:49.925025 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 1 03:49:49.925049 kernel: cryptd: max_cpu_qlen set to 1000
Nov 1 03:49:49.925063 kernel: GPT:17805311 != 125829119
Nov 1 03:49:49.925075 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 1 03:49:49.925086 kernel: GPT:17805311 != 125829119
Nov 1 03:49:49.925097 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 1 03:49:49.925109 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 1 03:49:49.929183 kernel: AVX2 version of gcm_enc/dec engaged.
Nov 1 03:49:49.937178 kernel: AES CTR mode by8 optimization enabled
Nov 1 03:49:49.940181 kernel: libata version 3.00 loaded.
Nov 1 03:49:49.949545 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Nov 1 03:49:49.950064 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Nov 1 03:49:49.954489 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Nov 1 03:49:49.956294 systemd[1]: Starting disk-uuid.service...
Nov 1 03:49:49.961591 disk-uuid[471]: Primary Header is updated.
Nov 1 03:49:49.961591 disk-uuid[471]: Secondary Entries is updated.
Nov 1 03:49:49.961591 disk-uuid[471]: Secondary Header is updated.
Nov 1 03:49:49.973184 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (459)
Nov 1 03:49:49.983526 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Nov 1 03:49:50.023785 kernel: ACPI: bus type USB registered
Nov 1 03:49:50.023810 kernel: usbcore: registered new interface driver usbfs
Nov 1 03:49:50.023823 kernel: usbcore: registered new interface driver hub
Nov 1 03:49:50.023835 kernel: usbcore: registered new device driver usb
Nov 1 03:49:50.026805 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Nov 1 03:49:50.029395 kernel: ahci 0000:00:1f.2: version 3.0 Nov 1 03:49:50.065620 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Nov 1 03:49:50.065641 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Nov 1 03:49:50.065760 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Nov 1 03:49:50.065857 kernel: scsi host0: ahci Nov 1 03:49:50.065983 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Nov 1 03:49:50.066085 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Nov 1 03:49:50.066202 kernel: scsi host1: ahci Nov 1 03:49:50.066314 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Nov 1 03:49:50.066417 kernel: scsi host2: ahci Nov 1 03:49:50.066522 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Nov 1 03:49:50.066620 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Nov 1 03:49:50.066719 kernel: scsi host3: ahci Nov 1 03:49:50.066821 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Nov 1 03:49:50.066930 kernel: scsi host4: ahci Nov 1 03:49:50.067040 kernel: hub 1-0:1.0: USB hub found Nov 1 03:49:50.067165 kernel: scsi host5: ahci Nov 1 03:49:50.067274 kernel: hub 1-0:1.0: 4 ports detected Nov 1 03:49:50.067383 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 38 Nov 1 03:49:50.067397 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Nov 1 03:49:50.067579 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 38 Nov 1 03:49:50.067596 kernel: hub 2-0:1.0: USB hub found Nov 1 03:49:50.067715 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 38 Nov 1 03:49:50.067728 kernel: hub 2-0:1.0: 4 ports detected Nov 1 03:49:50.067839 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 38 Nov 1 03:49:50.067851 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 38 Nov 1 03:49:50.067863 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 38 Nov 1 03:49:50.302205 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Nov 1 03:49:50.381460 kernel: ata5: SATA link down (SStatus 0 SControl 300) Nov 1 03:49:50.381583 kernel: ata2: SATA link down (SStatus 0 SControl 300) Nov 1 03:49:50.384206 kernel: ata6: SATA link down (SStatus 0 SControl 300) Nov 1 03:49:50.388820 kernel: ata3: SATA link down (SStatus 0 SControl 300) Nov 1 03:49:50.388846 kernel: ata1: SATA link down (SStatus 0 SControl 300) Nov 1 03:49:50.390096 kernel: ata4: SATA link down (SStatus 0 SControl 300) Nov 1 03:49:50.447186 kernel: hid: raw HID events driver (C) Jiri Kosina Nov 1 03:49:50.453176 kernel: usbcore: registered new interface driver usbhid Nov 1 03:49:50.453223 kernel: usbhid: USB HID core driver Nov 1 03:49:50.459094 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Nov 1 03:49:50.459142 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Nov 1 03:49:50.973221 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 1 03:49:50.974270 disk-uuid[472]: The operation has completed successfully. Nov 1 03:49:51.029969 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 1 03:49:51.030693 systemd[1]: Finished disk-uuid.service. 
Nov 1 03:49:51.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:49:51.031000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:49:51.032440 systemd[1]: Starting verity-setup.service... Nov 1 03:49:51.063256 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Nov 1 03:49:51.097630 systemd[1]: Found device dev-mapper-usr.device. Nov 1 03:49:51.098793 systemd[1]: Mounting sysusr-usr.mount... Nov 1 03:49:51.100401 systemd[1]: Finished verity-setup.service. Nov 1 03:49:51.100000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:49:51.190204 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Nov 1 03:49:51.190581 systemd[1]: Mounted sysusr-usr.mount. Nov 1 03:49:51.192037 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Nov 1 03:49:51.193561 systemd[1]: Starting ignition-setup.service... Nov 1 03:49:51.197269 systemd[1]: Starting parse-ip-for-networkd.service... Nov 1 03:49:51.214211 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 1 03:49:51.214250 kernel: BTRFS info (device vda6): using free space tree Nov 1 03:49:51.214263 kernel: BTRFS info (device vda6): has skinny extents Nov 1 03:49:51.233212 systemd[1]: mnt-oem.mount: Deactivated successfully. Nov 1 03:49:51.241487 systemd[1]: Finished ignition-setup.service. Nov 1 03:49:51.241000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:49:51.242820 systemd[1]: Starting ignition-fetch-offline.service... Nov 1 03:49:51.293215 systemd[1]: Finished parse-ip-for-networkd.service. Nov 1 03:49:51.293000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:49:51.294000 audit: BPF prog-id=9 op=LOAD Nov 1 03:49:51.295332 systemd[1]: Starting systemd-networkd.service... Nov 1 03:49:51.332820 systemd-networkd[710]: lo: Link UP Nov 1 03:49:51.332831 systemd-networkd[710]: lo: Gained carrier Nov 1 03:49:51.337000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:49:51.333399 systemd-networkd[710]: Enumeration completed Nov 1 03:49:51.333499 systemd[1]: Started systemd-networkd.service. Nov 1 03:49:51.333747 systemd-networkd[710]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 1 03:49:51.335407 systemd-networkd[710]: eth0: Link UP Nov 1 03:49:51.335412 systemd-networkd[710]: eth0: Gained carrier Nov 1 03:49:51.338041 systemd[1]: Reached target network.target. Nov 1 03:49:51.340544 systemd[1]: Starting iscsiuio.service... 
Nov 1 03:49:51.358000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:49:51.358548 systemd[1]: Started iscsiuio.service. Nov 1 03:49:51.360867 systemd[1]: Starting iscsid.service... Nov 1 03:49:51.365200 iscsid[716]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Nov 1 03:49:51.365200 iscsid[716]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Nov 1 03:49:51.365200 iscsid[716]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Nov 1 03:49:51.365200 iscsid[716]: If using hardware iscsi like qla4xxx this message can be ignored. Nov 1 03:49:51.365200 iscsid[716]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Nov 1 03:49:51.365200 iscsid[716]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Nov 1 03:49:51.368000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:49:51.367282 systemd[1]: Started iscsid.service. Nov 1 03:49:51.369572 systemd[1]: Starting dracut-initqueue.service... Nov 1 03:49:51.384098 systemd[1]: Finished dracut-initqueue.service. Nov 1 03:49:51.384000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:49:51.384590 systemd[1]: Reached target remote-fs-pre.target. Nov 1 03:49:51.385177 systemd[1]: Reached target remote-cryptsetup.target. Nov 1 03:49:51.385921 systemd[1]: Reached target remote-fs.target. Nov 1 03:49:51.387362 systemd[1]: Starting dracut-pre-mount.service... Nov 1 03:49:51.392045 ignition[657]: Ignition 2.14.0 Nov 1 03:49:51.392264 systemd-networkd[710]: eth0: DHCPv4 address 10.244.101.254/30, gateway 10.244.101.253 acquired from 10.244.101.253 Nov 1 03:49:51.392072 ignition[657]: Stage: fetch-offline Nov 1 03:49:51.400000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:49:51.398319 systemd[1]: Finished dracut-pre-mount.service.
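The iscsid warning above appears benign on this host (no iSCSI targets are logged into), and the file it asks for is a one-line configuration. A minimal /etc/iscsi/initiatorname.iscsi matching the format quoted in the warning could look like the sketch below; the IQN value is illustrative only, not taken from this host, and would normally be generated with open-iscsi's iscsi-iname tool.

# /etc/iscsi/initiatorname.iscsi -- illustrative IQN, not from this host
InitiatorName=iqn.2004-10.com.example:boot-node-01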
Nov 1 03:49:51.392275 ignition[657]: reading system config file "/usr/lib/ignition/base.d/base.ign" Nov 1 03:49:51.392366 ignition[657]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Nov 1 03:49:51.402894 ignition[657]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Nov 1 03:49:51.403074 ignition[657]: parsed url from cmdline: "" Nov 1 03:49:51.403081 ignition[657]: no config URL provided Nov 1 03:49:51.403090 ignition[657]: reading system config file "/usr/lib/ignition/user.ign" Nov 1 03:49:51.403104 ignition[657]: no config at "/usr/lib/ignition/user.ign" Nov 1 03:49:51.403113 ignition[657]: failed to fetch config: resource requires networking Nov 1 03:49:51.403504 ignition[657]: Ignition finished successfully Nov 1 03:49:51.406128 systemd[1]: Finished ignition-fetch-offline.service. Nov 1 03:49:51.407000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:49:51.408725 systemd[1]: Starting ignition-fetch.service... Nov 1 03:49:51.417277 ignition[730]: Ignition 2.14.0 Nov 1 03:49:51.417286 ignition[730]: Stage: fetch Nov 1 03:49:51.417396 ignition[730]: reading system config file "/usr/lib/ignition/base.d/base.ign" Nov 1 03:49:51.417413 ignition[730]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Nov 1 03:49:51.418427 ignition[730]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Nov 1 03:49:51.418525 ignition[730]: parsed url from cmdline: "" Nov 1 03:49:51.418529 ignition[730]: no config URL provided Nov 1 03:49:51.418535 ignition[730]: reading system config file "/usr/lib/ignition/user.ign" Nov 1 03:49:51.418543 ignition[730]: no config at "/usr/lib/ignition/user.ign" Nov 1 03:49:51.421967 ignition[730]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Nov 1 03:49:51.421997 ignition[730]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Nov 1 03:49:51.422798 ignition[730]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Nov 1 03:49:51.443991 ignition[730]: GET result: OK Nov 1 03:49:51.444200 ignition[730]: parsing config with SHA512: aba7198f265fead952e0db67df1aa7f2d543d2a58f2ccccfb14ce74c9a8cd323da260d4cef718464c862ceee6b41520442ff2780d4aa52f0f00282e74fc317c9 Nov 1 03:49:51.461370 unknown[730]: fetched base config from "system" Nov 1 03:49:51.461387 unknown[730]: fetched base config from "system" Nov 1 03:49:51.462241 ignition[730]: fetch: fetch complete Nov 1 03:49:51.461396 unknown[730]: fetched user config from "openstack" Nov 1 03:49:51.462249 ignition[730]: fetch: fetch passed Nov 1 03:49:51.465198 systemd[1]: Finished ignition-fetch.service. Nov 1 03:49:51.462302 ignition[730]: Ignition finished successfully Nov 1 03:49:51.468356 systemd[1]: Starting ignition-kargs.service... Nov 1 03:49:51.466000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 03:49:51.480241 ignition[736]: Ignition 2.14.0 Nov 1 03:49:51.480757 ignition[736]: Stage: kargs Nov 1 03:49:51.481278 ignition[736]: reading system config file "/usr/lib/ignition/base.d/base.ign" Nov 1 03:49:51.481758 ignition[736]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Nov 1 03:49:51.482864 ignition[736]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Nov 1 03:49:51.484639 ignition[736]: kargs: kargs passed Nov 1 03:49:51.485103 ignition[736]: Ignition finished successfully Nov 1 03:49:51.487000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:49:51.486759 systemd[1]: Finished ignition-kargs.service. Nov 1 03:49:51.490081 systemd[1]: Starting ignition-disks.service... Nov 1 03:49:51.499915 ignition[741]: Ignition 2.14.0 Nov 1 03:49:51.499926 ignition[741]: Stage: disks Nov 1 03:49:51.500031 ignition[741]: reading system config file "/usr/lib/ignition/base.d/base.ign" Nov 1 03:49:51.500047 ignition[741]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Nov 1 03:49:51.500946 ignition[741]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Nov 1 03:49:51.502041 ignition[741]: disks: disks passed Nov 1 03:49:51.502079 ignition[741]: Ignition finished successfully Nov 1 03:49:51.503000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:49:51.502722 systemd[1]: Finished ignition-disks.service. Nov 1 03:49:51.503306 systemd[1]: Reached target initrd-root-device.target. Nov 1 03:49:51.503982 systemd[1]: Reached target local-fs-pre.target. Nov 1 03:49:51.504740 systemd[1]: Reached target local-fs.target. Nov 1 03:49:51.505515 systemd[1]: Reached target sysinit.target. Nov 1 03:49:51.506234 systemd[1]: Reached target basic.target. Nov 1 03:49:51.507763 systemd[1]: Starting systemd-fsck-root.service... Nov 1 03:49:51.522223 systemd-fsck[749]: ROOT: clean, 637/1628000 files, 124069/1617920 blocks Nov 1 03:49:51.526189 systemd[1]: Finished systemd-fsck-root.service. Nov 1 03:49:51.526000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:49:51.527790 systemd[1]: Mounting sysroot.mount... Nov 1 03:49:51.538174 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Nov 1 03:49:51.538942 systemd[1]: Mounted sysroot.mount. Nov 1 03:49:51.539425 systemd[1]: Reached target initrd-root-fs.target. Nov 1 03:49:51.541565 systemd[1]: Mounting sysroot-usr.mount... Nov 1 03:49:51.542511 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Nov 1 03:49:51.543372 systemd[1]: Starting flatcar-openstack-hostname.service... Nov 1 03:49:51.543859 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 1 03:49:51.543919 systemd[1]: Reached target ignition-diskful.target. Nov 1 03:49:51.547765 systemd[1]: Mounted sysroot-usr.mount. 
Nov 1 03:49:51.549427 systemd[1]: Starting initrd-setup-root.service... Nov 1 03:49:51.555415 initrd-setup-root[760]: cut: /sysroot/etc/passwd: No such file or directory Nov 1 03:49:51.562820 initrd-setup-root[768]: cut: /sysroot/etc/group: No such file or directory Nov 1 03:49:51.574685 initrd-setup-root[776]: cut: /sysroot/etc/shadow: No such file or directory Nov 1 03:49:51.583013 initrd-setup-root[784]: cut: /sysroot/etc/gshadow: No such file or directory Nov 1 03:49:51.637000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:49:51.636837 systemd[1]: Finished initrd-setup-root.service. Nov 1 03:49:51.638705 systemd[1]: Starting ignition-mount.service... Nov 1 03:49:51.639880 systemd[1]: Starting sysroot-boot.service... Nov 1 03:49:51.652848 bash[804]: umount: /sysroot/usr/share/oem: not mounted. Nov 1 03:49:51.666028 coreos-metadata[755]: Nov 01 03:49:51.664 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Nov 1 03:49:51.674000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:49:51.674044 systemd[1]: Finished sysroot-boot.service. Nov 1 03:49:51.675334 ignition[805]: INFO : Ignition 2.14.0 Nov 1 03:49:51.675850 ignition[805]: INFO : Stage: mount Nov 1 03:49:51.675850 ignition[805]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Nov 1 03:49:51.675850 ignition[805]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Nov 1 03:49:51.677391 ignition[805]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Nov 1 03:49:51.678580 ignition[805]: INFO : mount: mount passed Nov 1 03:49:51.678966 ignition[805]: INFO : Ignition finished successfully Nov 1 03:49:51.679367 systemd[1]: Finished ignition-mount.service. Nov 1 03:49:51.679000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:49:51.683277 coreos-metadata[755]: Nov 01 03:49:51.683 INFO Fetch successful Nov 1 03:49:51.683844 coreos-metadata[755]: Nov 01 03:49:51.683 INFO wrote hostname srv-n2oyf.gb1.brightbox.com to /sysroot/etc/hostname Nov 1 03:49:51.686150 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Nov 1 03:49:51.686291 systemd[1]: Finished flatcar-openstack-hostname.service. Nov 1 03:49:51.687000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:49:51.687000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:49:52.119821 systemd[1]: Mounting sysroot-usr-share-oem.mount... 
Nov 1 03:49:52.132196 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (812) Nov 1 03:49:52.136186 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 1 03:49:52.136273 kernel: BTRFS info (device vda6): using free space tree Nov 1 03:49:52.136302 kernel: BTRFS info (device vda6): has skinny extents Nov 1 03:49:52.143130 systemd[1]: Mounted sysroot-usr-share-oem.mount. Nov 1 03:49:52.145833 systemd[1]: Starting ignition-files.service... Nov 1 03:49:52.173446 ignition[832]: INFO : Ignition 2.14.0 Nov 1 03:49:52.173446 ignition[832]: INFO : Stage: files Nov 1 03:49:52.174786 ignition[832]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Nov 1 03:49:52.174786 ignition[832]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Nov 1 03:49:52.176396 ignition[832]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Nov 1 03:49:52.177438 ignition[832]: DEBUG : files: compiled without relabeling support, skipping Nov 1 03:49:52.177947 ignition[832]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 1 03:49:52.177947 ignition[832]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 1 03:49:52.180370 ignition[832]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 1 03:49:52.181052 ignition[832]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 1 03:49:52.182271 unknown[832]: wrote ssh authorized keys file for user: core Nov 1 03:49:52.184156 ignition[832]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 1 03:49:52.184156 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Nov 1 03:49:52.184156 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Nov 1 03:49:52.367731 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 1 03:49:52.505546 systemd-networkd[710]: eth0: Gained IPv6LL Nov 1 03:49:52.602277 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Nov 1 03:49:52.604729 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Nov 1 03:49:52.606665 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Nov 1 03:49:52.831383 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Nov 1 03:49:53.701882 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Nov 1 03:49:53.704129 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Nov 1 03:49:53.704129 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Nov 1 03:49:53.704129 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 1 03:49:53.708738 
ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 1 03:49:53.708738 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 1 03:49:53.708738 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 1 03:49:53.708738 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 1 03:49:53.708738 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 1 03:49:53.708738 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 1 03:49:53.708738 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 1 03:49:53.708738 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 1 03:49:53.708738 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 1 03:49:53.708738 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 1 03:49:53.708738 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Nov 1 03:49:53.970975 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Nov 1 03:49:54.019088 systemd-networkd[710]: eth0: Ignoring DHCPv6 address 2a02:1348:17d:197f:24:19ff:fef4:65fe/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17d:197f:24:19ff:fef4:65fe/64 assigned by NDisc. Nov 1 03:49:54.019115 systemd-networkd[710]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. 
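The NDisc/DHCPv6 address conflict reported by systemd-networkd above is only a warning, and the daemon itself names the two possible knobs (IPv6Token= or UseAutonomousPrefix=no). A minimal sketch of the second option, assuming the usual eth0 match and a hypothetical file path, would be:

# /etc/systemd/network/10-eth0.network -- hypothetical override for this interface
[Match]
Name=eth0

[Network]
DHCP=yes

[IPv6AcceptRA]
# Keep processing router advertisements, but do not autoconfigure an address
# from the advertised prefix, avoiding the conflict logged above.
UseAutonomousPrefix=no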
Nov 1 03:49:57.148118 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 1 03:49:57.151028 ignition[832]: INFO : files: op(c): [started] processing unit "coreos-metadata-sshkeys@.service" Nov 1 03:49:57.151028 ignition[832]: INFO : files: op(c): [finished] processing unit "coreos-metadata-sshkeys@.service" Nov 1 03:49:57.151028 ignition[832]: INFO : files: op(d): [started] processing unit "prepare-helm.service" Nov 1 03:49:57.151028 ignition[832]: INFO : files: op(d): op(e): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 1 03:49:57.161364 ignition[832]: INFO : files: op(d): op(e): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 1 03:49:57.161364 ignition[832]: INFO : files: op(d): [finished] processing unit "prepare-helm.service" Nov 1 03:49:57.161364 ignition[832]: INFO : files: op(f): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Nov 1 03:49:57.161364 ignition[832]: INFO : files: op(f): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Nov 1 03:49:57.161364 ignition[832]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Nov 1 03:49:57.161364 ignition[832]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Nov 1 03:49:57.182862 kernel: kauditd_printk_skb: 27 callbacks suppressed Nov 1 03:49:57.182903 kernel: audit: type=1130 audit(1761968997.173:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:49:57.173000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:49:57.169533 systemd[1]: Finished ignition-files.service. Nov 1 03:49:57.183521 ignition[832]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 1 03:49:57.183521 ignition[832]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 1 03:49:57.183521 ignition[832]: INFO : files: files passed Nov 1 03:49:57.183521 ignition[832]: INFO : Ignition finished successfully Nov 1 03:49:57.180734 systemd[1]: Starting initrd-setup-root-after-ignition.service... Nov 1 03:49:57.191000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:49:57.185460 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Nov 1 03:49:57.198355 kernel: audit: type=1130 audit(1761968997.191:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:49:57.198396 kernel: audit: type=1131 audit(1761968997.191:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 03:49:57.191000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:49:57.186469 systemd[1]: Starting ignition-quench.service... Nov 1 03:49:57.201876 kernel: audit: type=1130 audit(1761968997.198:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:49:57.198000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:49:57.201952 initrd-setup-root-after-ignition[857]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 1 03:49:57.190924 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 1 03:49:57.191029 systemd[1]: Finished ignition-quench.service. Nov 1 03:49:57.195883 systemd[1]: Finished initrd-setup-root-after-ignition.service. Nov 1 03:49:57.198846 systemd[1]: Reached target ignition-complete.target. Nov 1 03:49:57.203101 systemd[1]: Starting initrd-parse-etc.service... Nov 1 03:49:57.225304 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 1 03:49:57.225931 systemd[1]: Finished initrd-parse-etc.service. Nov 1 03:49:57.226855 systemd[1]: Reached target initrd-fs.target. Nov 1 03:49:57.226000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:49:57.226000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:49:57.233185 kernel: audit: type=1130 audit(1761968997.226:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:49:57.233223 kernel: audit: type=1131 audit(1761968997.226:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:49:57.234015 systemd[1]: Reached target initrd.target. Nov 1 03:49:57.235625 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Nov 1 03:49:57.237946 systemd[1]: Starting dracut-pre-pivot.service... Nov 1 03:49:57.254758 systemd[1]: Finished dracut-pre-pivot.service. Nov 1 03:49:57.255000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:49:57.258269 systemd[1]: Starting initrd-cleanup.service... Nov 1 03:49:57.263187 kernel: audit: type=1130 audit(1761968997.255:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:49:57.273424 systemd[1]: Stopped target nss-lookup.target. Nov 1 03:49:57.274380 systemd[1]: Stopped target remote-cryptsetup.target. Nov 1 03:49:57.275282 systemd[1]: Stopped target timers.target. 
Nov 1 03:49:57.276087 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 1 03:49:57.276665 systemd[1]: Stopped dracut-pre-pivot.service. Nov 1 03:49:57.277000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:49:57.280456 systemd[1]: Stopped target initrd.target. Nov 1 03:49:57.282117 kernel: audit: type=1131 audit(1761968997.277:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:49:57.281061 systemd[1]: Stopped target basic.target. Nov 1 03:49:57.282452 systemd[1]: Stopped target ignition-complete.target. Nov 1 03:49:57.283798 systemd[1]: Stopped target ignition-diskful.target. Nov 1 03:49:57.285248 systemd[1]: Stopped target initrd-root-device.target. Nov 1 03:49:57.286622 systemd[1]: Stopped target remote-fs.target. Nov 1 03:49:57.287947 systemd[1]: Stopped target remote-fs-pre.target. Nov 1 03:49:57.289304 systemd[1]: Stopped target sysinit.target. Nov 1 03:49:57.290284 systemd[1]: Stopped target local-fs.target. Nov 1 03:49:57.291116 systemd[1]: Stopped target local-fs-pre.target. Nov 1 03:49:57.291954 systemd[1]: Stopped target swap.target. Nov 1 03:49:57.292782 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 1 03:49:57.298087 kernel: audit: type=1131 audit(1761968997.293:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:49:57.293000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:49:57.292898 systemd[1]: Stopped dracut-pre-mount.service. Nov 1 03:49:57.293738 systemd[1]: Stopped target cryptsetup.target. Nov 1 03:49:57.302390 kernel: audit: type=1131 audit(1761968997.299:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:49:57.299000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:49:57.298488 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 1 03:49:57.302000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:49:57.298592 systemd[1]: Stopped dracut-initqueue.service. Nov 1 03:49:57.303000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:49:57.299504 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 1 03:49:57.299610 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Nov 1 03:49:57.302886 systemd[1]: ignition-files.service: Deactivated successfully. Nov 1 03:49:57.303017 systemd[1]: Stopped ignition-files.service. 
Nov 1 03:49:57.310000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:49:57.314328 iscsid[716]: iscsid shutting down. Nov 1 03:49:57.304572 systemd[1]: Stopping ignition-mount.service... Nov 1 03:49:57.322421 ignition[870]: INFO : Ignition 2.14.0 Nov 1 03:49:57.322421 ignition[870]: INFO : Stage: umount Nov 1 03:49:57.322421 ignition[870]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Nov 1 03:49:57.322421 ignition[870]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Nov 1 03:49:57.322421 ignition[870]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Nov 1 03:49:57.322421 ignition[870]: INFO : umount: umount passed Nov 1 03:49:57.322421 ignition[870]: INFO : Ignition finished successfully Nov 1 03:49:57.319000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:49:57.319000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:49:57.324000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:49:57.305272 systemd[1]: Stopping iscsid.service... Nov 1 03:49:57.310035 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 1 03:49:57.310239 systemd[1]: Stopped kmod-static-nodes.service. Nov 1 03:49:57.311861 systemd[1]: Stopping sysroot-boot.service... Nov 1 03:49:57.330000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:49:57.312321 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 1 03:49:57.330000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:49:57.312545 systemd[1]: Stopped systemd-udev-trigger.service. Nov 1 03:49:57.319613 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 1 03:49:57.332000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:49:57.319711 systemd[1]: Stopped dracut-pre-trigger.service. Nov 1 03:49:57.333000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:49:57.321523 systemd[1]: iscsid.service: Deactivated successfully. Nov 1 03:49:57.333000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:49:57.324563 systemd[1]: Stopped iscsid.service. 
Nov 1 03:49:57.334000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:49:57.327579 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 1 03:49:57.329303 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 1 03:49:57.329417 systemd[1]: Stopped ignition-mount.service. Nov 1 03:49:57.330583 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 1 03:49:57.330666 systemd[1]: Stopped sysroot-boot.service. Nov 1 03:49:57.331481 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 1 03:49:57.331582 systemd[1]: Stopped ignition-disks.service. Nov 1 03:49:57.332704 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 1 03:49:57.332739 systemd[1]: Stopped ignition-kargs.service. Nov 1 03:49:57.333295 systemd[1]: ignition-fetch.service: Deactivated successfully. Nov 1 03:49:57.333327 systemd[1]: Stopped ignition-fetch.service. Nov 1 03:49:57.333949 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 1 03:49:57.333985 systemd[1]: Stopped ignition-fetch-offline.service. Nov 1 03:49:57.334629 systemd[1]: Stopped target paths.target. Nov 1 03:49:57.335265 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 1 03:49:57.339208 systemd[1]: Stopped systemd-ask-password-console.path. Nov 1 03:49:57.341000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:49:57.339566 systemd[1]: Stopped target slices.target. Nov 1 03:49:57.342000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:49:57.340258 systemd[1]: Stopped target sockets.target. Nov 1 03:49:57.340863 systemd[1]: iscsid.socket: Deactivated successfully. Nov 1 03:49:57.345000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:49:57.340892 systemd[1]: Closed iscsid.socket. Nov 1 03:49:57.346000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:49:57.346000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:49:57.341413 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 1 03:49:57.341456 systemd[1]: Stopped ignition-setup.service. Nov 1 03:49:57.342073 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 1 03:49:57.342104 systemd[1]: Stopped initrd-setup-root.service. Nov 1 03:49:57.342725 systemd[1]: Stopping iscsiuio.service... Nov 1 03:49:57.345572 systemd[1]: iscsiuio.service: Deactivated successfully. Nov 1 03:49:57.345663 systemd[1]: Stopped iscsiuio.service. Nov 1 03:49:57.346247 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 1 03:49:57.346328 systemd[1]: Finished initrd-cleanup.service. Nov 1 03:49:57.347482 systemd[1]: Stopped target network.target. 
Nov 1 03:49:57.348162 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 1 03:49:57.348204 systemd[1]: Closed iscsiuio.socket. Nov 1 03:49:57.348855 systemd[1]: Stopping systemd-networkd.service... Nov 1 03:49:57.349815 systemd[1]: Stopping systemd-resolved.service... Nov 1 03:49:57.353218 systemd-networkd[710]: eth0: DHCPv6 lease lost Nov 1 03:49:57.354000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:49:57.354261 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 1 03:49:57.354367 systemd[1]: Stopped systemd-networkd.service. Nov 1 03:49:57.355442 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 1 03:49:57.355476 systemd[1]: Closed systemd-networkd.socket. Nov 1 03:49:57.358000 audit: BPF prog-id=9 op=UNLOAD Nov 1 03:49:57.356921 systemd[1]: Stopping network-cleanup.service... Nov 1 03:49:57.358684 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 1 03:49:57.358748 systemd[1]: Stopped parse-ip-for-networkd.service. Nov 1 03:49:57.368000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:49:57.368558 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 1 03:49:57.368000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:49:57.368603 systemd[1]: Stopped systemd-sysctl.service. Nov 1 03:49:57.369000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:49:57.369424 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 1 03:49:57.369472 systemd[1]: Stopped systemd-modules-load.service. Nov 1 03:49:57.370067 systemd[1]: Stopping systemd-udevd.service... Nov 1 03:49:57.371957 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Nov 1 03:49:57.372585 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 1 03:49:57.372703 systemd[1]: Stopped systemd-resolved.service. Nov 1 03:49:57.374000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:49:57.375214 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 1 03:49:57.375387 systemd[1]: Stopped systemd-udevd.service. Nov 1 03:49:57.376000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:49:57.378000 audit: BPF prog-id=6 op=UNLOAD Nov 1 03:49:57.378537 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 1 03:49:57.378592 systemd[1]: Closed systemd-udevd-control.socket. Nov 1 03:49:57.380000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:49:57.379083 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
Nov 1 03:49:57.381000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:49:57.379135 systemd[1]: Closed systemd-udevd-kernel.socket. Nov 1 03:49:57.381000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:49:57.379855 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 1 03:49:57.379901 systemd[1]: Stopped dracut-pre-udev.service. Nov 1 03:49:57.380705 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 1 03:49:57.380740 systemd[1]: Stopped dracut-cmdline.service. Nov 1 03:49:57.381338 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 1 03:49:57.381379 systemd[1]: Stopped dracut-cmdline-ask.service. Nov 1 03:49:57.382883 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Nov 1 03:49:57.388000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:49:57.389000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:49:57.388542 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 1 03:49:57.390000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:49:57.390000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:49:57.388594 systemd[1]: Stopped systemd-vconsole-setup.service. Nov 1 03:49:57.389304 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 1 03:49:57.389397 systemd[1]: Stopped network-cleanup.service. Nov 1 03:49:57.389961 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 1 03:49:57.390038 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Nov 1 03:49:57.390579 systemd[1]: Reached target initrd-switch-root.target. Nov 1 03:49:57.391855 systemd[1]: Starting initrd-switch-root.service... Nov 1 03:49:57.403903 systemd[1]: Switching root. Nov 1 03:49:57.425329 systemd-journald[201]: Journal stopped Nov 1 03:50:00.565904 systemd-journald[201]: Received SIGTERM from PID 1 (systemd). Nov 1 03:50:00.565988 kernel: SELinux: Class mctp_socket not defined in policy. Nov 1 03:50:00.566006 kernel: SELinux: Class anon_inode not defined in policy. 
Nov 1 03:50:00.566020 kernel: SELinux: the above unknown classes and permissions will be allowed Nov 1 03:50:00.566033 kernel: SELinux: policy capability network_peer_controls=1 Nov 1 03:50:00.566046 kernel: SELinux: policy capability open_perms=1 Nov 1 03:50:00.566059 kernel: SELinux: policy capability extended_socket_class=1 Nov 1 03:50:00.566077 kernel: SELinux: policy capability always_check_network=0 Nov 1 03:50:00.566089 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 1 03:50:00.566108 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 1 03:50:00.566121 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 1 03:50:00.566144 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 1 03:50:00.566167 systemd[1]: Successfully loaded SELinux policy in 46.986ms. Nov 1 03:50:00.566192 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.456ms. Nov 1 03:50:00.566212 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Nov 1 03:50:00.566226 systemd[1]: Detected virtualization kvm. Nov 1 03:50:00.566241 systemd[1]: Detected architecture x86-64. Nov 1 03:50:00.566262 systemd[1]: Detected first boot. Nov 1 03:50:00.566277 systemd[1]: Hostname set to . Nov 1 03:50:00.566293 systemd[1]: Initializing machine ID from VM UUID. Nov 1 03:50:00.566307 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Nov 1 03:50:00.566321 systemd[1]: Populated /etc with preset unit settings. Nov 1 03:50:00.566336 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Nov 1 03:50:00.566355 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 1 03:50:00.566377 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 03:50:00.566393 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 1 03:50:00.566410 systemd[1]: Stopped initrd-switch-root.service. Nov 1 03:50:00.566424 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 1 03:50:00.566438 systemd[1]: Created slice system-addon\x2dconfig.slice. Nov 1 03:50:00.566453 systemd[1]: Created slice system-addon\x2drun.slice. Nov 1 03:50:00.566468 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Nov 1 03:50:00.566487 systemd[1]: Created slice system-getty.slice. Nov 1 03:50:00.566502 systemd[1]: Created slice system-modprobe.slice. Nov 1 03:50:00.566516 systemd[1]: Created slice system-serial\x2dgetty.slice. Nov 1 03:50:00.566530 systemd[1]: Created slice system-system\x2dcloudinit.slice. Nov 1 03:50:00.566545 systemd[1]: Created slice system-systemd\x2dfsck.slice. Nov 1 03:50:00.566562 systemd[1]: Created slice user.slice. Nov 1 03:50:00.566576 systemd[1]: Started systemd-ask-password-console.path. Nov 1 03:50:00.566590 systemd[1]: Started systemd-ask-password-wall.path. Nov 1 03:50:00.566604 systemd[1]: Set up automount boot.automount. 
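The locksmithd warnings above are deprecation notices: systemd asks that the unit's cgroup-v1 directives be replaced with their cgroup-v2 equivalents. In the unit's [Service] section the replacement would look roughly like the following sketch; the values are illustrative, since the original numbers are not shown in this log.

[Service]
# CPUWeight= replaces the deprecated CPUShares= (100 is the systemd default weight)
CPUWeight=100
# MemoryMax= replaces the deprecated MemoryLimit=
MemoryMax=128M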
Nov 1 03:50:00.566625 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Nov 1 03:50:00.566640 systemd[1]: Stopped target initrd-switch-root.target. Nov 1 03:50:00.566654 systemd[1]: Stopped target initrd-fs.target. Nov 1 03:50:00.566672 systemd[1]: Stopped target initrd-root-fs.target. Nov 1 03:50:00.566686 systemd[1]: Reached target integritysetup.target. Nov 1 03:50:00.566699 systemd[1]: Reached target remote-cryptsetup.target. Nov 1 03:50:00.566713 systemd[1]: Reached target remote-fs.target. Nov 1 03:50:00.566735 systemd[1]: Reached target slices.target. Nov 1 03:50:00.566749 systemd[1]: Reached target swap.target. Nov 1 03:50:00.566763 systemd[1]: Reached target torcx.target. Nov 1 03:50:00.566777 systemd[1]: Reached target veritysetup.target. Nov 1 03:50:00.566791 systemd[1]: Listening on systemd-coredump.socket. Nov 1 03:50:00.566805 systemd[1]: Listening on systemd-initctl.socket. Nov 1 03:50:00.566825 systemd[1]: Listening on systemd-networkd.socket. Nov 1 03:50:00.566839 systemd[1]: Listening on systemd-udevd-control.socket. Nov 1 03:50:00.566854 systemd[1]: Listening on systemd-udevd-kernel.socket. Nov 1 03:50:00.566874 systemd[1]: Listening on systemd-userdbd.socket. Nov 1 03:50:00.566887 systemd[1]: Mounting dev-hugepages.mount... Nov 1 03:50:00.566901 systemd[1]: Mounting dev-mqueue.mount... Nov 1 03:50:00.566915 systemd[1]: Mounting media.mount... Nov 1 03:50:00.566929 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 03:50:00.566943 systemd[1]: Mounting sys-kernel-debug.mount... Nov 1 03:50:00.566957 systemd[1]: Mounting sys-kernel-tracing.mount... Nov 1 03:50:00.566971 systemd[1]: Mounting tmp.mount... Nov 1 03:50:00.566984 systemd[1]: Starting flatcar-tmpfiles.service... Nov 1 03:50:00.567004 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Nov 1 03:50:00.567018 systemd[1]: Starting kmod-static-nodes.service... Nov 1 03:50:00.567032 systemd[1]: Starting modprobe@configfs.service... Nov 1 03:50:00.567048 systemd[1]: Starting modprobe@dm_mod.service... Nov 1 03:50:00.567061 systemd[1]: Starting modprobe@drm.service... Nov 1 03:50:00.567075 systemd[1]: Starting modprobe@efi_pstore.service... Nov 1 03:50:00.567088 systemd[1]: Starting modprobe@fuse.service... Nov 1 03:50:00.567103 systemd[1]: Starting modprobe@loop.service... Nov 1 03:50:00.567117 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 1 03:50:00.567146 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 1 03:50:00.567171 systemd[1]: Stopped systemd-fsck-root.service. Nov 1 03:50:00.567185 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 1 03:50:00.573742 systemd[1]: Stopped systemd-fsck-usr.service. Nov 1 03:50:00.573769 systemd[1]: Stopped systemd-journald.service. Nov 1 03:50:00.573794 systemd[1]: Starting systemd-journald.service... Nov 1 03:50:00.573815 systemd[1]: Starting systemd-modules-load.service... Nov 1 03:50:00.573832 systemd[1]: Starting systemd-network-generator.service... Nov 1 03:50:00.573851 systemd[1]: Starting systemd-remount-fs.service... Nov 1 03:50:00.573882 systemd[1]: Starting systemd-udev-trigger.service... Nov 1 03:50:00.573899 systemd[1]: verity-setup.service: Deactivated successfully. Nov 1 03:50:00.573917 systemd[1]: Stopped verity-setup.service. 
Nov 1 03:50:00.573933 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 03:50:00.573947 systemd[1]: Mounted dev-hugepages.mount. Nov 1 03:50:00.573961 systemd[1]: Mounted dev-mqueue.mount. Nov 1 03:50:00.573985 systemd[1]: Mounted media.mount. Nov 1 03:50:00.574004 systemd[1]: Mounted sys-kernel-debug.mount. Nov 1 03:50:00.574021 systemd[1]: Mounted sys-kernel-tracing.mount. Nov 1 03:50:00.574048 systemd[1]: Mounted tmp.mount. Nov 1 03:50:00.574065 systemd[1]: Finished kmod-static-nodes.service. Nov 1 03:50:00.574079 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 1 03:50:00.574094 systemd[1]: Finished modprobe@configfs.service. Nov 1 03:50:00.578565 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 03:50:00.578602 systemd[1]: Finished modprobe@dm_mod.service. Nov 1 03:50:00.578627 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 1 03:50:00.578641 systemd[1]: Finished modprobe@drm.service. Nov 1 03:50:00.578661 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 03:50:00.578675 systemd[1]: Finished modprobe@efi_pstore.service. Nov 1 03:50:00.578692 systemd[1]: Finished systemd-modules-load.service. Nov 1 03:50:00.578710 systemd[1]: Finished systemd-network-generator.service. Nov 1 03:50:00.578724 systemd[1]: Finished systemd-remount-fs.service. Nov 1 03:50:00.578738 kernel: fuse: init (API version 7.34) Nov 1 03:50:00.578764 systemd[1]: Reached target network-pre.target. Nov 1 03:50:00.578782 systemd[1]: Mounting sys-kernel-config.mount... Nov 1 03:50:00.578798 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 1 03:50:00.578826 systemd-journald[979]: Journal started Nov 1 03:50:00.578910 systemd-journald[979]: Runtime Journal (/run/log/journal/2ef918f7a0dd4942ab0ad535cc13f698) is 4.7M, max 38.1M, 33.3M free. 
Nov 1 03:49:57.571000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 1 03:49:57.634000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Nov 1 03:49:57.634000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Nov 1 03:49:57.634000 audit: BPF prog-id=10 op=LOAD Nov 1 03:49:57.634000 audit: BPF prog-id=10 op=UNLOAD Nov 1 03:49:57.635000 audit: BPF prog-id=11 op=LOAD Nov 1 03:49:57.635000 audit: BPF prog-id=11 op=UNLOAD Nov 1 03:49:57.728000 audit[902]: AVC avc: denied { associate } for pid=902 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Nov 1 03:49:57.728000 audit[902]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c00014d892 a1=c0000cede0 a2=c0000d70c0 a3=32 items=0 ppid=885 pid=902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 03:49:57.728000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Nov 1 03:49:57.729000 audit[902]: AVC avc: denied { associate } for pid=902 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Nov 1 03:49:57.729000 audit[902]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c00014d969 a2=1ed a3=0 items=2 ppid=885 pid=902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 03:49:57.729000 audit: CWD cwd="/" Nov 1 03:49:57.729000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:49:57.729000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:49:57.729000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Nov 1 03:50:00.372000 audit: BPF prog-id=12 op=LOAD Nov 1 03:50:00.372000 audit: BPF prog-id=3 op=UNLOAD Nov 1 03:50:00.372000 audit: BPF prog-id=13 op=LOAD Nov 1 03:50:00.373000 audit: BPF prog-id=14 op=LOAD Nov 1 03:50:00.373000 audit: BPF prog-id=4 op=UNLOAD Nov 1 03:50:00.373000 audit: BPF prog-id=5 op=UNLOAD Nov 1 03:50:00.374000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald 
comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:50:00.377000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:50:00.377000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:50:00.382000 audit: BPF prog-id=12 op=UNLOAD Nov 1 03:50:00.484000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:50:00.488000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:50:00.490000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:50:00.490000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:50:00.491000 audit: BPF prog-id=15 op=LOAD Nov 1 03:50:00.491000 audit: BPF prog-id=16 op=LOAD Nov 1 03:50:00.491000 audit: BPF prog-id=17 op=LOAD Nov 1 03:50:00.491000 audit: BPF prog-id=13 op=UNLOAD Nov 1 03:50:00.491000 audit: BPF prog-id=14 op=UNLOAD Nov 1 03:50:00.513000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:50:00.536000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:50:00.539000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:50:00.539000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:50:00.605728 kernel: loop: module loaded Nov 1 03:50:00.605787 systemd[1]: Starting systemd-hwdb-update.service... Nov 1 03:50:00.605816 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 03:50:00.605837 systemd[1]: Starting systemd-random-seed.service... Nov 1 03:50:00.605858 systemd[1]: Starting systemd-sysctl.service... Nov 1 03:50:00.605957 systemd[1]: Started systemd-journald.service. Nov 1 03:50:00.543000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 03:50:00.543000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:50:00.547000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:50:00.547000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:50:00.550000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:50:00.550000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:50:00.552000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:50:00.556000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:50:00.559000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:50:00.564000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Nov 1 03:50:00.564000 audit[979]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffca24fbe50 a2=4000 a3=7ffca24fbeec items=0 ppid=1 pid=979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 03:50:00.564000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Nov 1 03:50:00.603000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:49:57.725587 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-11-01T03:49:57Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Nov 1 03:50:00.369936 systemd[1]: Queued start job for default target multi-user.target. Nov 1 03:49:57.726026 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-11-01T03:49:57Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Nov 1 03:50:00.369951 systemd[1]: Unnecessary job was removed for dev-vda6.device. 
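The audit stream above interleaves SERVICE_START and SERVICE_STOP records for individual units (modprobe@dm_mod, modprobe@drm, systemd-modules-load, and so on). A small sketch, assuming the log is available as plain text in the wrapped form shown here, that tallies start/stop audit events per unit:

import re
from collections import Counter

# Matches audit records of the form "audit[1]: SERVICE_START ... msg='unit=<name> ..."
EVENT_RE = re.compile(r"audit\[\d+\]: (SERVICE_START|SERVICE_STOP) .*?unit=([\w@\\.-]+)")

def tally_service_events(text):
    """Count audit SERVICE_START/SERVICE_STOP records per systemd unit."""
    counts = Counter()
    for kind, unit in EVENT_RE.findall(text):
        counts[(unit, kind)] += 1
    return counts

sample = ("Nov 1 03:50:00.543000 audit[1]: SERVICE_START pid=1 uid=0 "
          "msg='unit=modprobe@dm_mod comm=\"systemd\"'")
print(tally_service_events(sample))
# Counter({('modprobe@dm_mod', 'SERVICE_START'): 1})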
Nov 1 03:49:57.726048 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-11-01T03:49:57Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Nov 1 03:50:00.374150 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 1 03:50:00.607000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:50:00.607000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:49:57.726085 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-11-01T03:49:57Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Nov 1 03:50:00.605284 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 1 03:49:57.726096 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-11-01T03:49:57Z" level=debug msg="skipped missing lower profile" missing profile=oem Nov 1 03:50:00.607686 systemd[1]: Finished modprobe@fuse.service. Nov 1 03:49:57.726137 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-11-01T03:49:57Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Nov 1 03:50:00.608373 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 03:50:00.608000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:50:00.608000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:49:57.726152 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-11-01T03:49:57Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Nov 1 03:50:00.608494 systemd[1]: Finished modprobe@loop.service. Nov 1 03:49:57.726407 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-11-01T03:49:57Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Nov 1 03:50:00.609054 systemd[1]: Mounted sys-kernel-config.mount. Nov 1 03:49:57.726461 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-11-01T03:49:57Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Nov 1 03:49:57.726477 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-11-01T03:49:57Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Nov 1 03:49:57.727759 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-11-01T03:49:57Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Nov 1 03:49:57.727797 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-11-01T03:49:57Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Nov 1 03:50:00.609705 systemd[1]: Finished systemd-random-seed.service. 
Nov 1 03:49:57.727819 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-11-01T03:49:57Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.8: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.8 Nov 1 03:49:57.727834 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-11-01T03:49:57Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Nov 1 03:50:00.609000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:49:57.727852 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-11-01T03:49:57Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.8: no such file or directory" path=/var/lib/torcx/store/3510.3.8 Nov 1 03:49:57.727867 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-11-01T03:49:57Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Nov 1 03:49:59.981433 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-11-01T03:49:59Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Nov 1 03:49:59.981829 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-11-01T03:49:59Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Nov 1 03:49:59.981976 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-11-01T03:49:59Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Nov 1 03:50:00.610554 systemd[1]: Reached target first-boot-complete.target. Nov 1 03:49:59.982225 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-11-01T03:49:59Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Nov 1 03:49:59.982296 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-11-01T03:49:59Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Nov 1 03:49:59.982383 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-11-01T03:49:59Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Nov 1 03:50:00.618965 systemd[1]: Mounting sys-fs-fuse-connections.mount... Nov 1 03:50:00.626237 systemd[1]: Starting systemd-journal-flush.service... Nov 1 03:50:00.626905 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Nov 1 03:50:00.629303 systemd[1]: Finished flatcar-tmpfiles.service. 
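The torcx-generator messages above end with the sealed system state written to /run/metadata/torcx; its content= field lists TORCX_* KEY="VALUE" pairs (TORCX_LOWER_PROFILES="vendor", TORCX_PROFILE_PATH="/run/torcx/profile.json", and so on). A minimal sketch that parses that logged string; treating the pairs as space-separated KEY="VALUE" tokens is taken from the content= field itself, and nothing here is meant to describe torcx's on-disk format authoritatively:

import re

PAIR_RE = re.compile(r'(\w+)="([^"]*)"')

def parse_torcx_seal(content):
    """Parse TORCX_* KEY="VALUE" pairs out of the sealed-state string logged above."""
    return dict(PAIR_RE.findall(content))

sealed = ('TORCX_LOWER_PROFILES="vendor" TORCX_UPPER_PROFILE="" '
          'TORCX_PROFILE_PATH="/run/torcx/profile.json" '
          'TORCX_BINDIR="/run/torcx/bin" TORCX_UNPACKDIR="/run/torcx/unpack"')
print(parse_torcx_seal(sealed)["TORCX_PROFILE_PATH"])  # /run/torcx/profile.json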
Nov 1 03:50:00.629000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:50:00.630043 systemd[1]: Mounted sys-fs-fuse-connections.mount. Nov 1 03:50:00.633089 systemd[1]: Starting systemd-sysusers.service... Nov 1 03:50:00.639336 systemd[1]: Finished systemd-sysctl.service. Nov 1 03:50:00.639000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:50:00.642401 systemd-journald[979]: Time spent on flushing to /var/log/journal/2ef918f7a0dd4942ab0ad535cc13f698 is 52.629ms for 1312 entries. Nov 1 03:50:00.642401 systemd-journald[979]: System Journal (/var/log/journal/2ef918f7a0dd4942ab0ad535cc13f698) is 8.0M, max 584.8M, 576.8M free. Nov 1 03:50:00.705416 systemd-journald[979]: Received client request to flush runtime journal. Nov 1 03:50:00.661000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:50:00.661334 systemd[1]: Finished systemd-sysusers.service. Nov 1 03:50:00.706468 systemd[1]: Finished systemd-journal-flush.service. Nov 1 03:50:00.706000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:50:00.732527 systemd[1]: Finished systemd-udev-trigger.service. Nov 1 03:50:00.732000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:50:00.734344 systemd[1]: Starting systemd-udev-settle.service... Nov 1 03:50:00.745043 udevadm[1015]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Nov 1 03:50:01.223522 systemd[1]: Finished systemd-hwdb-update.service. Nov 1 03:50:01.224000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:50:01.226000 audit: BPF prog-id=18 op=LOAD Nov 1 03:50:01.226000 audit: BPF prog-id=19 op=LOAD Nov 1 03:50:01.226000 audit: BPF prog-id=7 op=UNLOAD Nov 1 03:50:01.226000 audit: BPF prog-id=8 op=UNLOAD Nov 1 03:50:01.228322 systemd[1]: Starting systemd-udevd.service... Nov 1 03:50:01.248774 systemd-udevd[1016]: Using default interface naming scheme 'v252'. Nov 1 03:50:01.278192 systemd[1]: Started systemd-udevd.service. Nov 1 03:50:01.278000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:50:01.282000 audit: BPF prog-id=20 op=LOAD Nov 1 03:50:01.283831 systemd[1]: Starting systemd-networkd.service... 
Nov 1 03:50:01.291000 audit: BPF prog-id=21 op=LOAD Nov 1 03:50:01.291000 audit: BPF prog-id=22 op=LOAD Nov 1 03:50:01.291000 audit: BPF prog-id=23 op=LOAD Nov 1 03:50:01.292758 systemd[1]: Starting systemd-userdbd.service... Nov 1 03:50:01.328595 systemd[1]: Started systemd-userdbd.service. Nov 1 03:50:01.328000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:50:01.359493 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Nov 1 03:50:01.418426 systemd-networkd[1029]: lo: Link UP Nov 1 03:50:01.419414 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Nov 1 03:50:01.418435 systemd-networkd[1029]: lo: Gained carrier Nov 1 03:50:01.419481 systemd-networkd[1029]: Enumeration completed Nov 1 03:50:01.419603 systemd[1]: Started systemd-networkd.service. Nov 1 03:50:01.419000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:50:01.420220 systemd-networkd[1029]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 1 03:50:01.421889 systemd-networkd[1029]: eth0: Link UP Nov 1 03:50:01.421898 systemd-networkd[1029]: eth0: Gained carrier Nov 1 03:50:01.428216 kernel: ACPI: button: Power Button [PWRF] Nov 1 03:50:01.433388 systemd-networkd[1029]: eth0: DHCPv4 address 10.244.101.254/30, gateway 10.244.101.253 acquired from 10.244.101.253 Nov 1 03:50:01.450191 kernel: mousedev: PS/2 mouse device common for all mice Nov 1 03:50:01.464029 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. 
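systemd-networkd reports a DHCPv4 lease of 10.244.101.254/30 on eth0 with gateway 10.244.101.253, acquired from 10.244.101.253. A /30 carries exactly two usable host addresses, so the lease and the gateway account for the whole subnet; a quick check with Python's ipaddress module:

import ipaddress

# Address and prefix reported in the DHCPv4 lease above.
iface = ipaddress.ip_interface("10.244.101.254/30")
gateway = ipaddress.ip_address("10.244.101.253")

print(iface.network)                # 10.244.101.252/30
print(list(iface.network.hosts()))  # [IPv4Address('10.244.101.253'), IPv4Address('10.244.101.254')]
print(gateway in iface.network)     # True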
Nov 1 03:50:01.471000 audit[1023]: AVC avc: denied { confidentiality } for pid=1023 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Nov 1 03:50:01.471000 audit[1023]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=5563b8d38280 a1=338ec a2=7f2a78404bc5 a3=5 items=110 ppid=1016 pid=1023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 03:50:01.471000 audit: CWD cwd="/" Nov 1 03:50:01.471000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=1 name=(null) inode=15874 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=2 name=(null) inode=15874 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=3 name=(null) inode=15875 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=4 name=(null) inode=15874 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=5 name=(null) inode=15876 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=6 name=(null) inode=15874 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=7 name=(null) inode=15877 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=8 name=(null) inode=15877 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=9 name=(null) inode=15878 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=10 name=(null) inode=15877 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=11 name=(null) inode=15879 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=12 name=(null) inode=15877 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=13 name=(null) inode=15880 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=14 name=(null) inode=15877 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=15 name=(null) inode=15881 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=16 name=(null) inode=15877 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=17 name=(null) inode=15882 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=18 name=(null) inode=15874 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=19 name=(null) inode=15883 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=20 name=(null) inode=15883 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=21 name=(null) inode=15884 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=22 name=(null) inode=15883 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=23 name=(null) inode=15885 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=24 name=(null) inode=15883 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=25 name=(null) inode=15886 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=26 name=(null) inode=15883 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=27 name=(null) inode=15887 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=28 name=(null) inode=15883 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=29 name=(null) inode=15888 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=30 name=(null) inode=15874 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=31 name=(null) inode=15889 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=32 name=(null) inode=15889 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=33 name=(null) inode=15890 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=34 name=(null) inode=15889 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=35 name=(null) inode=15891 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=36 name=(null) inode=15889 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=37 name=(null) inode=15892 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=38 name=(null) inode=15889 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=39 name=(null) inode=15893 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=40 name=(null) inode=15889 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=41 name=(null) inode=15894 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=42 name=(null) inode=15874 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=43 name=(null) inode=15895 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=44 name=(null) inode=15895 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=45 name=(null) inode=15896 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=46 name=(null) inode=15895 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=47 name=(null) inode=15897 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=48 name=(null) inode=15895 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=49 name=(null) inode=15898 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=50 name=(null) inode=15895 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=51 name=(null) inode=15899 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=52 name=(null) inode=15895 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=53 name=(null) inode=15900 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=55 name=(null) inode=15901 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=56 name=(null) inode=15901 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=57 name=(null) inode=15902 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=58 name=(null) inode=15901 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=59 name=(null) inode=15903 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=60 name=(null) inode=15901 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=61 name=(null) inode=15904 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=62 name=(null) inode=15904 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=63 name=(null) inode=15905 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=64 name=(null) inode=15904 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=65 name=(null) inode=15906 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=66 name=(null) inode=15904 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=67 name=(null) inode=15907 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=68 name=(null) inode=15904 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=69 name=(null) inode=15908 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=70 name=(null) inode=15904 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=71 name=(null) inode=15909 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=72 name=(null) inode=15901 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=73 name=(null) inode=15910 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=74 name=(null) inode=15910 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=75 name=(null) inode=15911 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=76 name=(null) inode=15910 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=77 name=(null) inode=15912 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=78 name=(null) inode=15910 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH 
item=79 name=(null) inode=15913 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=80 name=(null) inode=15910 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=81 name=(null) inode=15914 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=82 name=(null) inode=15910 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=83 name=(null) inode=15915 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=84 name=(null) inode=15901 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=85 name=(null) inode=15916 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=86 name=(null) inode=15916 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=87 name=(null) inode=15917 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=88 name=(null) inode=15916 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=89 name=(null) inode=15918 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=90 name=(null) inode=15916 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=91 name=(null) inode=15919 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=92 name=(null) inode=15916 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=93 name=(null) inode=15920 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=94 name=(null) inode=15916 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=95 name=(null) inode=15921 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=96 name=(null) inode=15901 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=97 name=(null) inode=15922 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=98 name=(null) inode=15922 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=99 name=(null) inode=15923 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=100 name=(null) inode=15922 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=101 name=(null) inode=15924 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=102 name=(null) inode=15922 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=103 name=(null) inode=15925 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=104 name=(null) inode=15922 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=105 name=(null) inode=15926 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=106 name=(null) inode=15922 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=107 name=(null) inode=15927 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PATH item=109 name=(null) inode=15928 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 03:50:01.471000 audit: PROCTITLE proctitle="(udev-worker)" Nov 1 03:50:01.520185 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Nov 1 03:50:01.528107 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Nov 1 03:50:01.528752 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Nov 1 03:50:01.529184 kernel: input: ImExPS/2 Generic Explorer Mouse as 
/devices/platform/i8042/serio1/input/input4 Nov 1 03:50:01.674423 systemd[1]: Finished systemd-udev-settle.service. Nov 1 03:50:01.675000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:50:01.678469 systemd[1]: Starting lvm2-activation-early.service... Nov 1 03:50:01.703813 lvm[1045]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 1 03:50:01.730934 systemd[1]: Finished lvm2-activation-early.service. Nov 1 03:50:01.731000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:50:01.732125 systemd[1]: Reached target cryptsetup.target. Nov 1 03:50:01.735371 systemd[1]: Starting lvm2-activation.service... Nov 1 03:50:01.741391 lvm[1046]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 1 03:50:01.763990 systemd[1]: Finished lvm2-activation.service. Nov 1 03:50:01.764000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:50:01.765404 systemd[1]: Reached target local-fs-pre.target. Nov 1 03:50:01.766439 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 1 03:50:01.766507 systemd[1]: Reached target local-fs.target. Nov 1 03:50:01.767479 systemd[1]: Reached target machines.target. Nov 1 03:50:01.771014 systemd[1]: Starting ldconfig.service... Nov 1 03:50:01.772409 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Nov 1 03:50:01.772501 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 03:50:01.774667 systemd[1]: Starting systemd-boot-update.service... Nov 1 03:50:01.782233 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Nov 1 03:50:01.786520 systemd[1]: Starting systemd-machine-id-commit.service... Nov 1 03:50:01.788156 systemd[1]: Starting systemd-sysext.service... Nov 1 03:50:01.791724 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1048 (bootctl) Nov 1 03:50:01.792893 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Nov 1 03:50:01.805566 systemd[1]: Unmounting usr-share-oem.mount... Nov 1 03:50:01.810000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:50:01.809651 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Nov 1 03:50:01.814241 systemd[1]: usr-share-oem.mount: Deactivated successfully. Nov 1 03:50:01.814428 systemd[1]: Unmounted usr-share-oem.mount. Nov 1 03:50:01.838202 kernel: loop0: detected capacity change from 0 to 224512 Nov 1 03:50:01.871000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Nov 1 03:50:01.870386 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 1 03:50:01.871044 systemd[1]: Finished systemd-machine-id-commit.service. Nov 1 03:50:01.893437 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 1 03:50:01.900285 systemd-fsck[1057]: fsck.fat 4.2 (2021-01-31) Nov 1 03:50:01.900285 systemd-fsck[1057]: /dev/vda1: 790 files, 120773/258078 clusters Nov 1 03:50:01.905000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:50:01.905475 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Nov 1 03:50:01.907485 systemd[1]: Mounting boot.mount... Nov 1 03:50:01.912182 kernel: loop1: detected capacity change from 0 to 224512 Nov 1 03:50:01.915740 systemd[1]: Mounted boot.mount. Nov 1 03:50:01.925264 systemd[1]: Finished systemd-boot-update.service. Nov 1 03:50:01.925000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:50:01.930801 (sd-sysext)[1061]: Using extensions 'kubernetes'. Nov 1 03:50:01.933329 (sd-sysext)[1061]: Merged extensions into '/usr'. Nov 1 03:50:01.961266 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 03:50:01.963248 systemd[1]: Mounting usr-share-oem.mount... Nov 1 03:50:01.964601 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Nov 1 03:50:01.966105 systemd[1]: Starting modprobe@dm_mod.service... Nov 1 03:50:01.967920 systemd[1]: Starting modprobe@efi_pstore.service... Nov 1 03:50:01.970210 systemd[1]: Starting modprobe@loop.service... Nov 1 03:50:01.970688 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Nov 1 03:50:01.970849 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 03:50:01.971004 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 03:50:01.972091 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 03:50:01.972729 systemd[1]: Finished modprobe@dm_mod.service. Nov 1 03:50:01.973000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:50:01.974000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:50:01.980074 systemd[1]: Mounted usr-share-oem.mount. Nov 1 03:50:01.982848 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 03:50:01.983066 systemd[1]: Finished modprobe@efi_pstore.service. Nov 1 03:50:01.984000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 03:50:01.984000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:50:01.986120 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 03:50:01.986881 systemd[1]: Finished modprobe@loop.service. Nov 1 03:50:01.987000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:50:01.987000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:50:01.992000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:50:01.988912 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 03:50:01.989238 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Nov 1 03:50:01.991870 systemd[1]: Finished systemd-sysext.service. Nov 1 03:50:01.995809 systemd[1]: Starting ensure-sysext.service... Nov 1 03:50:02.001854 systemd[1]: Starting systemd-tmpfiles-setup.service... Nov 1 03:50:02.009958 systemd[1]: Reloading. Nov 1 03:50:02.029968 systemd-tmpfiles[1068]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Nov 1 03:50:02.039653 systemd-tmpfiles[1068]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 1 03:50:02.048443 systemd-tmpfiles[1068]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 1 03:50:02.091523 /usr/lib/systemd/system-generators/torcx-generator[1087]: time="2025-11-01T03:50:02Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Nov 1 03:50:02.093704 /usr/lib/systemd/system-generators/torcx-generator[1087]: time="2025-11-01T03:50:02Z" level=info msg="torcx already run" Nov 1 03:50:02.136227 ldconfig[1047]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 1 03:50:02.227416 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Nov 1 03:50:02.227439 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 1 03:50:02.246814 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
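During the reload, systemd-tmpfiles flags duplicate path entries across tmpfiles.d fragments (/run/lock, /root, /var/lib/systemd) and systemd warns about deprecated unit directives. A small sketch that groups the duplicate-path warnings by path; the pattern is an assumption keyed to the exact warning wording above:

import re
from collections import defaultdict

DUP_RE = re.compile(
    r'(?P<frag>/usr/lib/tmpfiles\.d/[\w.-]+):(?P<line>\d+): '
    r'Duplicate line for path "(?P<path>[^"]+)", ignoring\.'
)

def duplicate_paths(log_text):
    """Map each duplicated tmpfiles.d path to the fragments and lines that re-declare it."""
    dups = defaultdict(list)
    for m in DUP_RE.finditer(log_text):
        dups[m.group("path")].append((m.group("frag"), int(m.group("line"))))
    return dict(dups)

sample = ('/usr/lib/tmpfiles.d/legacy.conf:13: '
          'Duplicate line for path "/run/lock", ignoring.')
print(duplicate_paths(sample))
# {'/run/lock': [('/usr/lib/tmpfiles.d/legacy.conf', 13)]}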
Nov 1 03:50:02.315683 kernel: kauditd_printk_skb: 234 callbacks suppressed Nov 1 03:50:02.315840 kernel: audit: type=1334 audit(1761969002.306:160): prog-id=24 op=LOAD Nov 1 03:50:02.315874 kernel: audit: type=1334 audit(1761969002.306:161): prog-id=21 op=UNLOAD Nov 1 03:50:02.315902 kernel: audit: type=1334 audit(1761969002.306:162): prog-id=25 op=LOAD Nov 1 03:50:02.306000 audit: BPF prog-id=24 op=LOAD Nov 1 03:50:02.306000 audit: BPF prog-id=21 op=UNLOAD Nov 1 03:50:02.306000 audit: BPF prog-id=25 op=LOAD Nov 1 03:50:02.308000 audit: BPF prog-id=26 op=LOAD Nov 1 03:50:02.319287 kernel: audit: type=1334 audit(1761969002.308:163): prog-id=26 op=LOAD Nov 1 03:50:02.319357 kernel: audit: type=1334 audit(1761969002.308:164): prog-id=22 op=UNLOAD Nov 1 03:50:02.308000 audit: BPF prog-id=22 op=UNLOAD Nov 1 03:50:02.320916 kernel: audit: type=1334 audit(1761969002.308:165): prog-id=23 op=UNLOAD Nov 1 03:50:02.308000 audit: BPF prog-id=23 op=UNLOAD Nov 1 03:50:02.322486 kernel: audit: type=1334 audit(1761969002.311:166): prog-id=27 op=LOAD Nov 1 03:50:02.311000 audit: BPF prog-id=27 op=LOAD Nov 1 03:50:02.324060 kernel: audit: type=1334 audit(1761969002.311:167): prog-id=15 op=UNLOAD Nov 1 03:50:02.311000 audit: BPF prog-id=15 op=UNLOAD Nov 1 03:50:02.314000 audit: BPF prog-id=28 op=LOAD Nov 1 03:50:02.326257 kernel: audit: type=1334 audit(1761969002.314:168): prog-id=28 op=LOAD Nov 1 03:50:02.326292 kernel: audit: type=1334 audit(1761969002.315:169): prog-id=29 op=LOAD Nov 1 03:50:02.315000 audit: BPF prog-id=29 op=LOAD Nov 1 03:50:02.315000 audit: BPF prog-id=16 op=UNLOAD Nov 1 03:50:02.315000 audit: BPF prog-id=17 op=UNLOAD Nov 1 03:50:02.319000 audit: BPF prog-id=30 op=LOAD Nov 1 03:50:02.319000 audit: BPF prog-id=31 op=LOAD Nov 1 03:50:02.319000 audit: BPF prog-id=18 op=UNLOAD Nov 1 03:50:02.319000 audit: BPF prog-id=19 op=UNLOAD Nov 1 03:50:02.320000 audit: BPF prog-id=32 op=LOAD Nov 1 03:50:02.320000 audit: BPF prog-id=20 op=UNLOAD Nov 1 03:50:02.328168 systemd[1]: Finished ldconfig.service. Nov 1 03:50:02.328000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:50:02.329917 systemd[1]: Finished systemd-tmpfiles-setup.service. Nov 1 03:50:02.333499 systemd[1]: Starting audit-rules.service... Nov 1 03:50:02.330000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:50:02.341000 audit: BPF prog-id=33 op=LOAD Nov 1 03:50:02.343000 audit: BPF prog-id=34 op=LOAD Nov 1 03:50:02.335178 systemd[1]: Starting clean-ca-certificates.service... Nov 1 03:50:02.336942 systemd[1]: Starting systemd-journal-catalog-update.service... Nov 1 03:50:02.342157 systemd[1]: Starting systemd-resolved.service... Nov 1 03:50:02.344610 systemd[1]: Starting systemd-timesyncd.service... Nov 1 03:50:02.355000 audit[1141]: SYSTEM_BOOT pid=1141 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Nov 1 03:50:02.348938 systemd[1]: Starting systemd-update-utmp.service... 
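The systemd-tmpfiles warnings during setup above ('Duplicate line for path "/run/lock", ignoring', and likewise for /root and /var/lib/systemd) are harmless: when more than one tmpfiles.d fragment declares the same path, the entry already seen wins and later duplicates are skipped with exactly this message. A quick way to see every fragment that mentions a disputed path, assuming nothing beyond stock systemd tooling:

    # Dump the merged tmpfiles.d configuration (each source file shows up as a
    # comment header) and pick out the lines for the duplicated path
    systemd-tmpfiles --cat-config | grep '/run/lock'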
Nov 1 03:50:02.363000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:50:02.360455 systemd[1]: Finished clean-ca-certificates.service. Nov 1 03:50:02.369316 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Nov 1 03:50:02.381000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:50:02.373955 systemd[1]: Starting modprobe@dm_mod.service... Nov 1 03:50:02.375712 systemd[1]: Starting modprobe@efi_pstore.service... Nov 1 03:50:02.378029 systemd[1]: Starting modprobe@loop.service... Nov 1 03:50:02.380078 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Nov 1 03:50:02.380217 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 03:50:02.380333 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 1 03:50:02.381444 systemd[1]: Finished systemd-update-utmp.service. Nov 1 03:50:02.382246 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 03:50:02.382367 systemd[1]: Finished modprobe@dm_mod.service. Nov 1 03:50:02.383000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:50:02.383000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:50:02.384000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:50:02.384000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:50:02.385000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:50:02.385000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:50:02.384324 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 03:50:02.384440 systemd[1]: Finished modprobe@efi_pstore.service. Nov 1 03:50:02.385384 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 03:50:02.385507 systemd[1]: Finished modprobe@loop.service. Nov 1 03:50:02.390191 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. 
Nov 1 03:50:02.391652 systemd[1]: Starting modprobe@dm_mod.service... Nov 1 03:50:02.394921 systemd[1]: Starting modprobe@efi_pstore.service... Nov 1 03:50:02.397617 systemd[1]: Starting modprobe@loop.service... Nov 1 03:50:02.398073 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Nov 1 03:50:02.398283 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 03:50:02.398435 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 1 03:50:02.401000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:50:02.402000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:50:02.402000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:50:02.403000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:50:02.403000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:50:02.401181 systemd[1]: Finished systemd-journal-catalog-update.service. Nov 1 03:50:02.402032 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 03:50:02.402153 systemd[1]: Finished modprobe@dm_mod.service. Nov 1 03:50:02.403093 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 03:50:02.403372 systemd[1]: Finished modprobe@efi_pstore.service. Nov 1 03:50:02.404216 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 03:50:02.406778 systemd[1]: Starting systemd-update-done.service... Nov 1 03:50:02.412426 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Nov 1 03:50:02.413912 systemd[1]: Starting modprobe@dm_mod.service... Nov 1 03:50:02.416412 systemd[1]: Starting modprobe@drm.service... Nov 1 03:50:02.425000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:50:02.420980 systemd[1]: Starting modprobe@efi_pstore.service... Nov 1 03:50:02.421540 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Nov 1 03:50:02.421687 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 03:50:02.423282 systemd[1]: Starting systemd-networkd-wait-online.service... 
Nov 1 03:50:02.423779 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 1 03:50:02.424895 systemd[1]: Finished systemd-update-done.service. Nov 1 03:50:02.426330 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 03:50:02.426450 systemd[1]: Finished modprobe@efi_pstore.service. Nov 1 03:50:02.427000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:50:02.427000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:50:02.428413 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 03:50:02.430000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:50:02.430658 systemd[1]: Finished ensure-sysext.service. Nov 1 03:50:02.433407 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 03:50:02.433538 systemd[1]: Finished modprobe@dm_mod.service. Nov 1 03:50:02.438000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:50:02.438000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:50:02.448150 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 03:50:02.448317 systemd[1]: Finished modprobe@loop.service. Nov 1 03:50:02.451000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:50:02.451000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 03:50:02.451000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Nov 1 03:50:02.451000 audit[1166]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffec607f610 a2=420 a3=0 items=0 ppid=1135 pid=1166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 03:50:02.451000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Nov 1 03:50:02.451548 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 1 03:50:02.453405 augenrules[1166]: No rules Nov 1 03:50:02.451674 systemd[1]: Finished modprobe@drm.service. Nov 1 03:50:02.452388 systemd[1]: Finished audit-rules.service. 
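The audit records above (CONFIG_CHANGE, SYSCALL and PROCTITLE, alongside augenrules reporting 'No rules') come from the rule file being loaded as audit-rules.service finishes. The PROCTITLE field is the invoking command line, hex-encoded with NUL bytes separating arguments, so it decodes with a one-liner:

    # Decode the hex-encoded proctitle from the audit record above; NUL
    # argument separators are replaced with spaces for readability
    echo 2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 \
        | xxd -r -p | tr '\0' ' '; echo
    # prints: /sbin/auditctl -R /etc/audit/audit.rules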
Nov 1 03:50:02.452889 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Nov 1 03:50:02.468560 systemd[1]: Started systemd-timesyncd.service. Nov 1 03:50:02.469064 systemd[1]: Reached target time-set.target. Nov 1 03:50:02.487672 systemd-resolved[1138]: Positive Trust Anchors: Nov 1 03:50:02.487695 systemd-resolved[1138]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 1 03:50:02.487747 systemd-resolved[1138]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Nov 1 03:50:02.493801 systemd-resolved[1138]: Using system hostname 'srv-n2oyf.gb1.brightbox.com'. Nov 1 03:50:02.495775 systemd[1]: Started systemd-resolved.service. Nov 1 03:50:02.496984 systemd[1]: Reached target network.target. Nov 1 03:50:02.497949 systemd[1]: Reached target nss-lookup.target. Nov 1 03:50:02.498958 systemd[1]: Reached target sysinit.target. Nov 1 03:50:02.500094 systemd[1]: Started motdgen.path. Nov 1 03:50:02.501060 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Nov 1 03:50:02.502640 systemd[1]: Started logrotate.timer. Nov 1 03:50:02.503804 systemd[1]: Started mdadm.timer. Nov 1 03:50:02.504712 systemd[1]: Started systemd-tmpfiles-clean.timer. Nov 1 03:50:02.505723 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 1 03:50:02.505800 systemd[1]: Reached target paths.target. Nov 1 03:50:02.506495 systemd[1]: Reached target timers.target. Nov 1 03:50:02.507656 systemd[1]: Listening on dbus.socket. Nov 1 03:50:02.509504 systemd[1]: Starting docker.socket... Nov 1 03:50:02.514396 systemd[1]: Listening on sshd.socket. Nov 1 03:50:02.515067 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 03:50:02.515634 systemd[1]: Listening on docker.socket. Nov 1 03:50:02.516316 systemd[1]: Reached target sockets.target. Nov 1 03:50:02.516776 systemd[1]: Reached target basic.target. Nov 1 03:50:02.517266 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Nov 1 03:50:02.517376 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Nov 1 03:50:02.518644 systemd[1]: Starting containerd.service... Nov 1 03:50:02.520581 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Nov 1 03:50:02.523445 systemd[1]: Starting dbus.service... Nov 1 03:50:02.526132 systemd[1]: Starting enable-oem-cloudinit.service... Nov 1 03:50:02.529057 systemd[1]: Starting extend-filesystems.service... Nov 1 03:50:02.547148 jq[1178]: false Nov 1 03:50:02.529593 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Nov 1 03:50:02.532016 systemd[1]: Starting motdgen.service... Nov 1 03:50:02.533922 systemd[1]: Starting prepare-helm.service... Nov 1 03:50:02.537530 systemd[1]: Starting ssh-key-proc-cmdline.service... 
Nov 1 03:50:02.541477 systemd[1]: Starting sshd-keygen.service... Nov 1 03:50:02.547641 systemd[1]: Starting systemd-logind.service... Nov 1 03:50:02.549310 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 03:50:02.549382 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 1 03:50:02.549902 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 1 03:50:02.553842 systemd[1]: Starting update-engine.service... Nov 1 03:50:02.558348 systemd[1]: Starting update-ssh-keys-after-ignition.service... Nov 1 03:50:02.561364 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 1 03:50:02.561619 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Nov 1 03:50:02.563058 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 1 03:50:02.563341 systemd[1]: Finished ssh-key-proc-cmdline.service. Nov 1 03:50:02.576284 jq[1191]: true Nov 1 03:50:02.582253 tar[1195]: linux-amd64/LICENSE Nov 1 03:50:02.582253 tar[1195]: linux-amd64/helm Nov 1 03:50:02.603765 jq[1202]: true Nov 1 03:50:02.613613 dbus-daemon[1176]: [system] SELinux support is enabled Nov 1 03:50:02.618385 systemd[1]: Started dbus.service. Nov 1 03:50:02.623933 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 1 03:50:02.623998 systemd[1]: Reached target system-config.target. Nov 1 03:50:02.624492 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 1 03:50:02.624521 systemd[1]: Reached target user-config.target. Nov 1 03:50:02.627296 dbus-daemon[1176]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1029 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Nov 1 03:50:02.630830 dbus-daemon[1176]: [system] Successfully activated service 'org.freedesktop.systemd1' Nov 1 03:50:02.637182 extend-filesystems[1180]: Found loop1 Nov 1 03:50:02.639631 extend-filesystems[1180]: Found vda Nov 1 03:50:02.640270 extend-filesystems[1180]: Found vda1 Nov 1 03:50:02.641487 extend-filesystems[1180]: Found vda2 Nov 1 03:50:02.642094 extend-filesystems[1180]: Found vda3 Nov 1 03:50:02.642591 extend-filesystems[1180]: Found usr Nov 1 03:50:02.643144 extend-filesystems[1180]: Found vda4 Nov 1 03:50:02.643365 systemd[1]: Starting systemd-hostnamed.service... Nov 1 03:50:02.643798 extend-filesystems[1180]: Found vda6 Nov 1 03:50:02.645102 extend-filesystems[1180]: Found vda7 Nov 1 03:50:02.645585 extend-filesystems[1180]: Found vda9 Nov 1 03:50:02.646123 extend-filesystems[1180]: Checking size of /dev/vda9 Nov 1 03:50:02.651905 systemd[1]: motdgen.service: Deactivated successfully. Nov 1 03:50:02.652153 systemd[1]: Finished motdgen.service. Nov 1 03:50:02.676697 update_engine[1189]: I1101 03:50:02.676288 1189 main.cc:92] Flatcar Update Engine starting Nov 1 03:50:02.685168 update_engine[1189]: I1101 03:50:02.685130 1189 update_check_scheduler.cc:74] Next update check in 2m6s Nov 1 03:50:02.685325 systemd[1]: Started update-engine.service. 
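update-engine, started just above, is Flatcar's update client; its log already announces the next poll ('Next update check in 2m6s'). Assuming the stock Flatcar client tooling is present (it is not exercised anywhere in this log), its state is usually queried with:

    # Ask update_engine for its current operation and last/next check times
    update_engine_client -status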
Nov 1 03:50:02.687908 systemd[1]: Started locksmithd.service. Nov 1 03:50:02.690744 bash[1227]: Updated "/home/core/.ssh/authorized_keys" Nov 1 03:50:02.691626 systemd[1]: Finished update-ssh-keys-after-ignition.service. Nov 1 03:50:02.696459 extend-filesystems[1180]: Resized partition /dev/vda9 Nov 1 03:50:02.701035 extend-filesystems[1231]: resize2fs 1.46.5 (30-Dec-2021) Nov 1 03:50:03.618541 systemd-timesyncd[1140]: Contacted time server 139.143.5.32:123 (0.flatcar.pool.ntp.org). Nov 1 03:50:03.618603 systemd-timesyncd[1140]: Initial clock synchronization to Sat 2025-11-01 03:50:03.618376 UTC. Nov 1 03:50:03.618675 systemd-resolved[1138]: Clock change detected. Flushing caches. Nov 1 03:50:03.628351 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks Nov 1 03:50:03.661565 env[1204]: time="2025-11-01T03:50:03.661488618Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Nov 1 03:50:03.677825 systemd-logind[1186]: Watching system buttons on /dev/input/event2 (Power Button) Nov 1 03:50:03.677851 systemd-logind[1186]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 1 03:50:03.678059 systemd-logind[1186]: New seat seat0. Nov 1 03:50:03.680005 systemd[1]: Started systemd-logind.service. Nov 1 03:50:03.693348 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Nov 1 03:50:03.702888 extend-filesystems[1231]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Nov 1 03:50:03.702888 extend-filesystems[1231]: old_desc_blocks = 1, new_desc_blocks = 8 Nov 1 03:50:03.702888 extend-filesystems[1231]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Nov 1 03:50:03.705155 extend-filesystems[1180]: Resized filesystem in /dev/vda9 Nov 1 03:50:03.703210 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 1 03:50:03.703394 systemd[1]: Finished extend-filesystems.service. Nov 1 03:50:03.727018 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 03:50:03.727097 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 03:50:03.729945 dbus-daemon[1176]: [system] Successfully activated service 'org.freedesktop.hostname1' Nov 1 03:50:03.730305 systemd[1]: Started systemd-hostnamed.service. Nov 1 03:50:03.736283 env[1204]: time="2025-11-01T03:50:03.736235766Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Nov 1 03:50:03.736494 dbus-daemon[1176]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1218 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Nov 1 03:50:03.736773 env[1204]: time="2025-11-01T03:50:03.736749489Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Nov 1 03:50:03.739637 systemd[1]: Starting polkit.service... Nov 1 03:50:03.752676 env[1204]: time="2025-11-01T03:50:03.752200832Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.192-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Nov 1 03:50:03.752676 env[1204]: time="2025-11-01T03:50:03.752266901Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Nov 1 03:50:03.752676 env[1204]: time="2025-11-01T03:50:03.752572853Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 1 03:50:03.752676 env[1204]: time="2025-11-01T03:50:03.752606219Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Nov 1 03:50:03.752676 env[1204]: time="2025-11-01T03:50:03.752621649Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Nov 1 03:50:03.752676 env[1204]: time="2025-11-01T03:50:03.752633621Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Nov 1 03:50:03.753060 env[1204]: time="2025-11-01T03:50:03.753040742Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Nov 1 03:50:03.753465 env[1204]: time="2025-11-01T03:50:03.753446004Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Nov 1 03:50:03.753744 env[1204]: time="2025-11-01T03:50:03.753723612Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 1 03:50:03.753841 env[1204]: time="2025-11-01T03:50:03.753827008Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Nov 1 03:50:03.753978 env[1204]: time="2025-11-01T03:50:03.753953978Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Nov 1 03:50:03.754049 env[1204]: time="2025-11-01T03:50:03.754030639Z" level=info msg="metadata content store policy set" policy=shared Nov 1 03:50:03.757615 polkitd[1237]: Started polkitd version 121 Nov 1 03:50:03.759441 env[1204]: time="2025-11-01T03:50:03.759417133Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Nov 1 03:50:03.759559 env[1204]: time="2025-11-01T03:50:03.759542732Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Nov 1 03:50:03.759638 env[1204]: time="2025-11-01T03:50:03.759625476Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Nov 1 03:50:03.759749 env[1204]: time="2025-11-01T03:50:03.759735418Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Nov 1 03:50:03.759835 env[1204]: time="2025-11-01T03:50:03.759821966Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Nov 1 03:50:03.759908 env[1204]: time="2025-11-01T03:50:03.759896683Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Nov 1 03:50:03.760509 env[1204]: time="2025-11-01T03:50:03.759969645Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Nov 1 03:50:03.760591 env[1204]: time="2025-11-01T03:50:03.760579307Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Nov 1 03:50:03.760666 env[1204]: time="2025-11-01T03:50:03.760654233Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Nov 1 03:50:03.760752 env[1204]: time="2025-11-01T03:50:03.760739482Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Nov 1 03:50:03.760829 env[1204]: time="2025-11-01T03:50:03.760817393Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Nov 1 03:50:03.760906 env[1204]: time="2025-11-01T03:50:03.760893862Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Nov 1 03:50:03.761090 env[1204]: time="2025-11-01T03:50:03.761075638Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Nov 1 03:50:03.761640 env[1204]: time="2025-11-01T03:50:03.761624876Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Nov 1 03:50:03.763108 env[1204]: time="2025-11-01T03:50:03.763073795Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Nov 1 03:50:03.763184 env[1204]: time="2025-11-01T03:50:03.763124760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Nov 1 03:50:03.763184 env[1204]: time="2025-11-01T03:50:03.763152980Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 1 03:50:03.763299 env[1204]: time="2025-11-01T03:50:03.763219462Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Nov 1 03:50:03.763299 env[1204]: time="2025-11-01T03:50:03.763233486Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Nov 1 03:50:03.763299 env[1204]: time="2025-11-01T03:50:03.763247084Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Nov 1 03:50:03.763299 env[1204]: time="2025-11-01T03:50:03.763261193Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Nov 1 03:50:03.763299 env[1204]: time="2025-11-01T03:50:03.763273868Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 1 03:50:03.763299 env[1204]: time="2025-11-01T03:50:03.763286263Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Nov 1 03:50:03.763299 env[1204]: time="2025-11-01T03:50:03.763297415Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Nov 1 03:50:03.763506 env[1204]: time="2025-11-01T03:50:03.763309613Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Nov 1 03:50:03.763506 env[1204]: time="2025-11-01T03:50:03.763326551Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Nov 1 03:50:03.763567 env[1204]: time="2025-11-01T03:50:03.763551208Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Nov 1 03:50:03.763600 env[1204]: time="2025-11-01T03:50:03.763567432Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Nov 1 03:50:03.763600 env[1204]: time="2025-11-01T03:50:03.763581304Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Nov 1 03:50:03.763600 env[1204]: time="2025-11-01T03:50:03.763593512Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Nov 1 03:50:03.763705 env[1204]: time="2025-11-01T03:50:03.763616806Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Nov 1 03:50:03.763705 env[1204]: time="2025-11-01T03:50:03.763634067Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Nov 1 03:50:03.763705 env[1204]: time="2025-11-01T03:50:03.763662000Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Nov 1 03:50:03.763795 env[1204]: time="2025-11-01T03:50:03.763704553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Nov 1 03:50:03.763956 env[1204]: time="2025-11-01T03:50:03.763910820Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 1 03:50:03.766661 env[1204]: time="2025-11-01T03:50:03.763972656Z" level=info msg="Connect containerd service" Nov 1 03:50:03.766661 env[1204]: time="2025-11-01T03:50:03.764071498Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 1 
03:50:03.766661 env[1204]: time="2025-11-01T03:50:03.764802647Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 1 03:50:03.766661 env[1204]: time="2025-11-01T03:50:03.765077333Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 1 03:50:03.766661 env[1204]: time="2025-11-01T03:50:03.765119357Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 1 03:50:03.766661 env[1204]: time="2025-11-01T03:50:03.765258999Z" level=info msg="Start subscribing containerd event" Nov 1 03:50:03.766661 env[1204]: time="2025-11-01T03:50:03.765313129Z" level=info msg="Start recovering state" Nov 1 03:50:03.766661 env[1204]: time="2025-11-01T03:50:03.765394509Z" level=info msg="Start event monitor" Nov 1 03:50:03.766661 env[1204]: time="2025-11-01T03:50:03.765411529Z" level=info msg="Start snapshots syncer" Nov 1 03:50:03.766661 env[1204]: time="2025-11-01T03:50:03.766078454Z" level=info msg="containerd successfully booted in 0.108590s" Nov 1 03:50:03.765267 systemd[1]: Started containerd.service. Nov 1 03:50:03.776109 polkitd[1237]: Loading rules from directory /etc/polkit-1/rules.d Nov 1 03:50:03.776192 polkitd[1237]: Loading rules from directory /usr/share/polkit-1/rules.d Nov 1 03:50:03.779158 env[1204]: time="2025-11-01T03:50:03.779105980Z" level=info msg="Start cni network conf syncer for default" Nov 1 03:50:03.779158 env[1204]: time="2025-11-01T03:50:03.779159156Z" level=info msg="Start streaming server" Nov 1 03:50:03.780364 polkitd[1237]: Finished loading, compiling and executing 2 rules Nov 1 03:50:03.780792 dbus-daemon[1176]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Nov 1 03:50:03.780943 systemd[1]: Started polkit.service. Nov 1 03:50:03.781899 polkitd[1237]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Nov 1 03:50:03.800994 systemd-hostnamed[1218]: Hostname set to (static) Nov 1 03:50:04.234933 systemd-networkd[1029]: eth0: Gained IPv6LL Nov 1 03:50:04.238794 systemd[1]: Finished systemd-networkd-wait-online.service. Nov 1 03:50:04.239595 systemd[1]: Reached target network-online.target. Nov 1 03:50:04.241943 systemd[1]: Starting kubelet.service... Nov 1 03:50:04.263205 tar[1195]: linux-amd64/README.md Nov 1 03:50:04.273961 systemd[1]: Finished prepare-helm.service. Nov 1 03:50:04.353512 systemd-networkd[1029]: eth0: Ignoring DHCPv6 address 2a02:1348:17d:197f:24:19ff:fef4:65fe/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17d:197f:24:19ff:fef4:65fe/64 assigned by NDisc. Nov 1 03:50:04.353521 systemd-networkd[1029]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Nov 1 03:50:04.416815 locksmithd[1228]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 1 03:50:04.443643 sshd_keygen[1205]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 1 03:50:04.471456 systemd[1]: Finished sshd-keygen.service. Nov 1 03:50:04.473663 systemd[1]: Starting issuegen.service... Nov 1 03:50:04.480256 systemd[1]: issuegen.service: Deactivated successfully. Nov 1 03:50:04.480447 systemd[1]: Finished issuegen.service. Nov 1 03:50:04.482537 systemd[1]: Starting systemd-user-sessions.service... Nov 1 03:50:04.491174 systemd[1]: Finished systemd-user-sessions.service. 
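During the containerd startup above, the CRI plugin logs 'failed to load cni during init ... no network config found in /etc/cni/net.d'. That is expected on a node where no pod network add-on has been installed yet; the conf syncer keeps watching until a configuration appears. Purely as an illustration (the file name, network name, bridge and subnet below are invented, not taken from this host), a minimal conflist that would satisfy the loader looks roughly like this:

    # Hypothetical example only: a pod-network add-on normally writes this
    cat > /etc/cni/net.d/10-example-bridge.conflist <<'EOF'
    {
      "cniVersion": "0.4.0",
      "name": "example-bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.88.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF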
Nov 1 03:50:04.493151 systemd[1]: Started getty@tty1.service. Nov 1 03:50:04.495095 systemd[1]: Started serial-getty@ttyS0.service. Nov 1 03:50:04.495756 systemd[1]: Reached target getty.target. Nov 1 03:50:05.326166 systemd[1]: Started kubelet.service. Nov 1 03:50:05.914779 kubelet[1269]: E1101 03:50:05.914725 1269 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 03:50:05.916901 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 03:50:05.917053 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 03:50:05.917454 systemd[1]: kubelet.service: Consumed 1.269s CPU time. Nov 1 03:50:10.588099 coreos-metadata[1175]: Nov 01 03:50:10.587 WARN failed to locate config-drive, using the metadata service API instead Nov 1 03:50:10.634877 coreos-metadata[1175]: Nov 01 03:50:10.634 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Nov 1 03:50:10.655972 coreos-metadata[1175]: Nov 01 03:50:10.655 INFO Fetch successful Nov 1 03:50:10.656226 coreos-metadata[1175]: Nov 01 03:50:10.656 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Nov 1 03:50:10.692310 coreos-metadata[1175]: Nov 01 03:50:10.692 INFO Fetch successful Nov 1 03:50:10.694286 unknown[1175]: wrote ssh authorized keys file for user: core Nov 1 03:50:10.711142 update-ssh-keys[1278]: Updated "/home/core/.ssh/authorized_keys" Nov 1 03:50:10.712387 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Nov 1 03:50:10.712780 systemd[1]: Reached target multi-user.target. Nov 1 03:50:10.714445 systemd[1]: Starting systemd-update-utmp-runlevel.service... Nov 1 03:50:10.724178 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Nov 1 03:50:10.724340 systemd[1]: Finished systemd-update-utmp-runlevel.service. Nov 1 03:50:10.724494 systemd[1]: Startup finished in 964ms (kernel) + 8.816s (initrd) + 12.295s (userspace) = 22.076s. Nov 1 03:50:13.586056 systemd[1]: Created slice system-sshd.slice. Nov 1 03:50:13.588763 systemd[1]: Started sshd@0-10.244.101.254:22-139.178.89.65:33612.service. Nov 1 03:50:14.510432 sshd[1281]: Accepted publickey for core from 139.178.89.65 port 33612 ssh2: RSA SHA256:V0PERg6UVsbWZGsAZFbTY/baYEpLUh6zfqFi+pvc+oM Nov 1 03:50:14.515325 sshd[1281]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 03:50:14.534947 systemd[1]: Created slice user-500.slice. Nov 1 03:50:14.536878 systemd[1]: Starting user-runtime-dir@500.service... Nov 1 03:50:14.543161 systemd-logind[1186]: New session 1 of user core. Nov 1 03:50:14.549597 systemd[1]: Finished user-runtime-dir@500.service. Nov 1 03:50:14.552350 systemd[1]: Starting user@500.service... Nov 1 03:50:14.556418 (systemd)[1284]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 1 03:50:14.643157 systemd[1284]: Queued start job for default target default.target. Nov 1 03:50:14.644059 systemd[1284]: Reached target paths.target. Nov 1 03:50:14.644220 systemd[1284]: Reached target sockets.target. Nov 1 03:50:14.644323 systemd[1284]: Reached target timers.target. Nov 1 03:50:14.644441 systemd[1284]: Reached target basic.target. Nov 1 03:50:14.644580 systemd[1284]: Reached target default.target. 
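The kubelet error above ('failed to load Kubelet config file /var/lib/kubelet/config.yaml ... no such file or directory') recurs for the rest of this log with an increasing restart counter. If this node is meant to be bootstrapped with kubeadm, which the later pulls of registry.k8s.io control-plane images suggest but the log never states outright, that file only appears once the node is initialized or joined, so the crash loop before that point is expected. A hedged sketch of the usual sequence:

    # Assumption: kubeadm-based provisioning. Either command writes
    # /var/lib/kubelet/config.yaml and restarts the kubelet for real:
    kubeadm init                                  # first control-plane node
    # kubeadm join <endpoint> --token <token> \
    #     --discovery-token-ca-cert-hash sha256:<hash>    # additional nodes

    # Afterwards the unit should stop crash-looping:
    ls -l /var/lib/kubelet/config.yaml
    systemctl status kubelet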
Nov 1 03:50:14.644691 systemd[1284]: Startup finished in 80ms. Nov 1 03:50:14.645499 systemd[1]: Started user@500.service. Nov 1 03:50:14.650619 systemd[1]: Started session-1.scope. Nov 1 03:50:15.288222 systemd[1]: Started sshd@1-10.244.101.254:22-139.178.89.65:33614.service. Nov 1 03:50:16.049318 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 1 03:50:16.049998 systemd[1]: Stopped kubelet.service. Nov 1 03:50:16.050105 systemd[1]: kubelet.service: Consumed 1.269s CPU time. Nov 1 03:50:16.053981 systemd[1]: Starting kubelet.service... Nov 1 03:50:16.205971 sshd[1293]: Accepted publickey for core from 139.178.89.65 port 33614 ssh2: RSA SHA256:V0PERg6UVsbWZGsAZFbTY/baYEpLUh6zfqFi+pvc+oM Nov 1 03:50:16.211831 sshd[1293]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 03:50:16.222440 systemd-logind[1186]: New session 2 of user core. Nov 1 03:50:16.222582 systemd[1]: Started kubelet.service. Nov 1 03:50:16.227734 systemd[1]: Started session-2.scope. Nov 1 03:50:16.333277 kubelet[1299]: E1101 03:50:16.332962 1299 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 03:50:16.342472 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 03:50:16.342654 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 03:50:16.847023 sshd[1293]: pam_unix(sshd:session): session closed for user core Nov 1 03:50:16.854699 systemd-logind[1186]: Session 2 logged out. Waiting for processes to exit. Nov 1 03:50:16.856074 systemd[1]: sshd@1-10.244.101.254:22-139.178.89.65:33614.service: Deactivated successfully. Nov 1 03:50:16.858303 systemd[1]: session-2.scope: Deactivated successfully. Nov 1 03:50:16.860279 systemd-logind[1186]: Removed session 2. Nov 1 03:50:17.001060 systemd[1]: Started sshd@2-10.244.101.254:22-139.178.89.65:49816.service. Nov 1 03:50:17.921829 sshd[1309]: Accepted publickey for core from 139.178.89.65 port 49816 ssh2: RSA SHA256:V0PERg6UVsbWZGsAZFbTY/baYEpLUh6zfqFi+pvc+oM Nov 1 03:50:17.926190 sshd[1309]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 03:50:17.936526 systemd-logind[1186]: New session 3 of user core. Nov 1 03:50:17.937282 systemd[1]: Started session-3.scope. Nov 1 03:50:18.550404 sshd[1309]: pam_unix(sshd:session): session closed for user core Nov 1 03:50:18.556628 systemd[1]: sshd@2-10.244.101.254:22-139.178.89.65:49816.service: Deactivated successfully. Nov 1 03:50:18.558130 systemd[1]: session-3.scope: Deactivated successfully. Nov 1 03:50:18.559534 systemd-logind[1186]: Session 3 logged out. Waiting for processes to exit. Nov 1 03:50:18.561596 systemd-logind[1186]: Removed session 3. Nov 1 03:50:18.703671 systemd[1]: Started sshd@3-10.244.101.254:22-139.178.89.65:49830.service. Nov 1 03:50:19.622192 sshd[1315]: Accepted publickey for core from 139.178.89.65 port 49830 ssh2: RSA SHA256:V0PERg6UVsbWZGsAZFbTY/baYEpLUh6zfqFi+pvc+oM Nov 1 03:50:19.625803 sshd[1315]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 03:50:19.633436 systemd-logind[1186]: New session 4 of user core. Nov 1 03:50:19.634601 systemd[1]: Started session-4.scope. 
Nov 1 03:50:20.262386 sshd[1315]: pam_unix(sshd:session): session closed for user core Nov 1 03:50:20.267550 systemd-logind[1186]: Session 4 logged out. Waiting for processes to exit. Nov 1 03:50:20.268290 systemd[1]: sshd@3-10.244.101.254:22-139.178.89.65:49830.service: Deactivated successfully. Nov 1 03:50:20.269892 systemd[1]: session-4.scope: Deactivated successfully. Nov 1 03:50:20.271451 systemd-logind[1186]: Removed session 4. Nov 1 03:50:20.415102 systemd[1]: Started sshd@4-10.244.101.254:22-139.178.89.65:49838.service. Nov 1 03:50:21.324136 sshd[1321]: Accepted publickey for core from 139.178.89.65 port 49838 ssh2: RSA SHA256:V0PERg6UVsbWZGsAZFbTY/baYEpLUh6zfqFi+pvc+oM Nov 1 03:50:21.328390 sshd[1321]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 03:50:21.339110 systemd[1]: Started session-5.scope. Nov 1 03:50:21.339599 systemd-logind[1186]: New session 5 of user core. Nov 1 03:50:21.824098 sudo[1324]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 1 03:50:21.825192 sudo[1324]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Nov 1 03:50:21.871598 systemd[1]: Starting docker.service... Nov 1 03:50:21.929089 env[1334]: time="2025-11-01T03:50:21.929014652Z" level=info msg="Starting up" Nov 1 03:50:21.930737 env[1334]: time="2025-11-01T03:50:21.930703511Z" level=info msg="parsed scheme: \"unix\"" module=grpc Nov 1 03:50:21.930737 env[1334]: time="2025-11-01T03:50:21.930722417Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Nov 1 03:50:21.930909 env[1334]: time="2025-11-01T03:50:21.930741983Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Nov 1 03:50:21.930909 env[1334]: time="2025-11-01T03:50:21.930754965Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Nov 1 03:50:21.933885 env[1334]: time="2025-11-01T03:50:21.933862826Z" level=info msg="parsed scheme: \"unix\"" module=grpc Nov 1 03:50:21.933988 env[1334]: time="2025-11-01T03:50:21.933974389Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Nov 1 03:50:21.934060 env[1334]: time="2025-11-01T03:50:21.934045999Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Nov 1 03:50:21.934119 env[1334]: time="2025-11-01T03:50:21.934107830Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Nov 1 03:50:21.956954 env[1334]: time="2025-11-01T03:50:21.956908881Z" level=info msg="Loading containers: start." Nov 1 03:50:22.115367 kernel: Initializing XFRM netlink socket Nov 1 03:50:22.162769 env[1334]: time="2025-11-01T03:50:22.162698770Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Nov 1 03:50:22.242241 systemd-networkd[1029]: docker0: Link UP Nov 1 03:50:22.256077 env[1334]: time="2025-11-01T03:50:22.256012921Z" level=info msg="Loading containers: done." 
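dockerd's startup above notes that the default bridge docker0 is assigned 172.17.0.0/16 and that the --bip option can set a preferred address. The same change is usually made persistent through the daemon configuration file; the subnet below is only an illustration:

    # Illustrative only: pin the docker0 bridge to a different subnet
    cat > /etc/docker/daemon.json <<'EOF'
    {
      "bip": "10.200.0.1/24"
    }
    EOF
    systemctl restart docker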
Nov 1 03:50:22.275821 env[1334]: time="2025-11-01T03:50:22.274084271Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 1 03:50:22.275821 env[1334]: time="2025-11-01T03:50:22.274378460Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Nov 1 03:50:22.275821 env[1334]: time="2025-11-01T03:50:22.274552372Z" level=info msg="Daemon has completed initialization" Nov 1 03:50:22.287586 systemd[1]: Started docker.service. Nov 1 03:50:22.297213 env[1334]: time="2025-11-01T03:50:22.297164877Z" level=info msg="API listen on /run/docker.sock" Nov 1 03:50:23.478439 env[1204]: time="2025-11-01T03:50:23.478355445Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Nov 1 03:50:24.338470 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2207764448.mount: Deactivated successfully. Nov 1 03:50:26.199492 env[1204]: time="2025-11-01T03:50:26.199116469Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 03:50:26.202982 env[1204]: time="2025-11-01T03:50:26.202907485Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 03:50:26.206180 env[1204]: time="2025-11-01T03:50:26.206116053Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 03:50:26.209741 env[1204]: time="2025-11-01T03:50:26.209673729Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 03:50:26.213699 env[1204]: time="2025-11-01T03:50:26.213604782Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\"" Nov 1 03:50:26.215642 env[1204]: time="2025-11-01T03:50:26.215481438Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Nov 1 03:50:26.584636 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 1 03:50:26.585676 systemd[1]: Stopped kubelet.service. Nov 1 03:50:26.591052 systemd[1]: Starting kubelet.service... Nov 1 03:50:26.745847 systemd[1]: Started kubelet.service. Nov 1 03:50:26.806795 kubelet[1466]: E1101 03:50:26.806713 1466 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 03:50:26.810071 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 03:50:26.810350 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Nov 1 03:50:32.541172 env[1204]: time="2025-11-01T03:50:32.540891430Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 03:50:32.543187 env[1204]: time="2025-11-01T03:50:32.543145265Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 03:50:32.545631 env[1204]: time="2025-11-01T03:50:32.545600176Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 03:50:32.547699 env[1204]: time="2025-11-01T03:50:32.547668475Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 03:50:32.548927 env[1204]: time="2025-11-01T03:50:32.548862196Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\"" Nov 1 03:50:32.550574 env[1204]: time="2025-11-01T03:50:32.550538865Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Nov 1 03:50:34.366889 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Nov 1 03:50:36.835302 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Nov 1 03:50:36.837206 systemd[1]: Stopped kubelet.service. Nov 1 03:50:36.849270 systemd[1]: Starting kubelet.service... Nov 1 03:50:36.985154 systemd[1]: Started kubelet.service. Nov 1 03:50:37.057823 kubelet[1478]: E1101 03:50:37.057764 1478 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 03:50:37.059392 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 03:50:37.059541 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Nov 1 03:50:37.311860 env[1204]: time="2025-11-01T03:50:37.311500103Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 03:50:37.314594 env[1204]: time="2025-11-01T03:50:37.313661984Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 03:50:37.316487 env[1204]: time="2025-11-01T03:50:37.316440957Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 03:50:37.318493 env[1204]: time="2025-11-01T03:50:37.318448519Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 03:50:37.319358 env[1204]: time="2025-11-01T03:50:37.319279251Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\"" Nov 1 03:50:37.321629 env[1204]: time="2025-11-01T03:50:37.321567845Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Nov 1 03:50:39.975974 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount476067492.mount: Deactivated successfully. Nov 1 03:50:40.717376 env[1204]: time="2025-11-01T03:50:40.717279268Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 03:50:40.719096 env[1204]: time="2025-11-01T03:50:40.719002256Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 03:50:40.720486 env[1204]: time="2025-11-01T03:50:40.720448738Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 03:50:40.721855 env[1204]: time="2025-11-01T03:50:40.721763724Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 03:50:40.722479 env[1204]: time="2025-11-01T03:50:40.722227798Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\"" Nov 1 03:50:40.724591 env[1204]: time="2025-11-01T03:50:40.724436549Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Nov 1 03:50:41.489871 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3474046193.mount: Deactivated successfully. 
Nov 1 03:50:42.560186 env[1204]: time="2025-11-01T03:50:42.559127494Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 03:50:42.564534 env[1204]: time="2025-11-01T03:50:42.563206021Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 03:50:42.564534 env[1204]: time="2025-11-01T03:50:42.564012091Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 03:50:42.565235 env[1204]: time="2025-11-01T03:50:42.565183193Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 03:50:42.566862 env[1204]: time="2025-11-01T03:50:42.566022379Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Nov 1 03:50:42.569640 env[1204]: time="2025-11-01T03:50:42.569580186Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 1 03:50:43.268579 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3061412846.mount: Deactivated successfully. Nov 1 03:50:43.272420 env[1204]: time="2025-11-01T03:50:43.272373779Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 03:50:43.273281 env[1204]: time="2025-11-01T03:50:43.273252317Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 03:50:43.275481 env[1204]: time="2025-11-01T03:50:43.275449197Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 03:50:43.278847 env[1204]: time="2025-11-01T03:50:43.278818640Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 03:50:43.279146 env[1204]: time="2025-11-01T03:50:43.279114957Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Nov 1 03:50:43.280443 env[1204]: time="2025-11-01T03:50:43.280417630Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Nov 1 03:50:44.103025 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1576684264.mount: Deactivated successfully. 
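Each image pull in this log finishes with a containerd message of the form PullImage "<tag>" returns image reference "<sha256 id>". An illustrative Go sketch that extracts that tag-to-digest mapping, assuming the message format stays exactly as shown in the lines above:

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Message text copied (with quoting unescaped) from one of the log lines above.
	line := `PullImage "registry.k8s.io/coredns/coredns:v1.11.3" returns image reference "sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6"`

	// Assumes the message format shown in this log; adjust if containerd changes it.
	re := regexp.MustCompile(`PullImage "([^"]+)" returns image reference "([^"]+)"`)
	if m := re.FindStringSubmatch(line); m != nil {
		fmt.Printf("image %s resolved to %s\n", m[1], m[2])
	}
}
```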
Nov 1 03:50:46.998963 env[1204]: time="2025-11-01T03:50:46.998786807Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 03:50:47.000959 env[1204]: time="2025-11-01T03:50:47.000922051Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 03:50:47.005379 env[1204]: time="2025-11-01T03:50:47.005293201Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 03:50:47.008170 env[1204]: time="2025-11-01T03:50:47.008131431Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 03:50:47.009523 env[1204]: time="2025-11-01T03:50:47.009476026Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Nov 1 03:50:47.085687 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Nov 1 03:50:47.086309 systemd[1]: Stopped kubelet.service. Nov 1 03:50:47.092969 systemd[1]: Starting kubelet.service... Nov 1 03:50:47.558082 systemd[1]: Started kubelet.service. Nov 1 03:50:47.630974 kubelet[1494]: E1101 03:50:47.630907 1494 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 03:50:47.633691 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 03:50:47.633894 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 03:50:48.663461 update_engine[1189]: I1101 03:50:48.663142 1189 update_attempter.cc:509] Updating boot flags... Nov 1 03:50:50.313650 systemd[1]: Stopped kubelet.service. Nov 1 03:50:50.316654 systemd[1]: Starting kubelet.service... Nov 1 03:50:50.354516 systemd[1]: Reloading. Nov 1 03:50:50.462274 /usr/lib/systemd/system-generators/torcx-generator[1553]: time="2025-11-01T03:50:50Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Nov 1 03:50:50.466716 /usr/lib/systemd/system-generators/torcx-generator[1553]: time="2025-11-01T03:50:50Z" level=info msg="torcx already run" Nov 1 03:50:50.560146 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Nov 1 03:50:50.560442 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 1 03:50:50.580422 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Nov 1 03:50:50.679538 systemd[1]: Started kubelet.service. Nov 1 03:50:50.681977 systemd[1]: Stopping kubelet.service... Nov 1 03:50:50.682956 systemd[1]: kubelet.service: Deactivated successfully. Nov 1 03:50:50.683141 systemd[1]: Stopped kubelet.service. Nov 1 03:50:50.685561 systemd[1]: Starting kubelet.service... Nov 1 03:50:50.799659 systemd[1]: Started kubelet.service. Nov 1 03:50:50.887936 kubelet[1607]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 03:50:50.887936 kubelet[1607]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 1 03:50:50.887936 kubelet[1607]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 03:50:50.888447 kubelet[1607]: I1101 03:50:50.888025 1607 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 1 03:50:51.391174 kubelet[1607]: I1101 03:50:51.391093 1607 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 1 03:50:51.391174 kubelet[1607]: I1101 03:50:51.391161 1607 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 1 03:50:51.391867 kubelet[1607]: I1101 03:50:51.391815 1607 server.go:954] "Client rotation is on, will bootstrap in background" Nov 1 03:50:51.457787 kubelet[1607]: I1101 03:50:51.456302 1607 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 03:50:51.464717 kubelet[1607]: E1101 03:50:51.464206 1607 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.244.101.254:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.244.101.254:6443: connect: connection refused" logger="UnhandledError" Nov 1 03:50:51.474981 kubelet[1607]: E1101 03:50:51.474924 1607 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 1 03:50:51.475166 kubelet[1607]: I1101 03:50:51.475153 1607 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 1 03:50:51.478534 kubelet[1607]: I1101 03:50:51.478511 1607 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 1 03:50:51.479696 kubelet[1607]: I1101 03:50:51.479658 1607 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 1 03:50:51.479985 kubelet[1607]: I1101 03:50:51.479794 1607 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-n2oyf.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 1 03:50:51.480245 kubelet[1607]: I1101 03:50:51.480231 1607 topology_manager.go:138] "Creating topology manager with none policy" Nov 1 03:50:51.480313 kubelet[1607]: I1101 03:50:51.480304 1607 container_manager_linux.go:304] "Creating device plugin manager" Nov 1 03:50:51.480558 kubelet[1607]: I1101 03:50:51.480543 1607 state_mem.go:36] "Initialized new in-memory state store" Nov 1 03:50:51.483735 kubelet[1607]: I1101 03:50:51.483717 1607 kubelet.go:446] "Attempting to sync node with API server" Nov 1 03:50:51.483887 kubelet[1607]: I1101 03:50:51.483872 1607 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 1 03:50:51.483982 kubelet[1607]: I1101 03:50:51.483972 1607 kubelet.go:352] "Adding apiserver pod source" Nov 1 03:50:51.484108 kubelet[1607]: I1101 03:50:51.484097 1607 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 1 03:50:51.493757 kubelet[1607]: I1101 03:50:51.493731 1607 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Nov 1 03:50:51.494323 kubelet[1607]: I1101 03:50:51.494303 1607 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 1 03:50:51.495054 kubelet[1607]: W1101 03:50:51.495028 1607 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
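The nodeConfig blob logged above is plain JSON, so the eviction thresholds buried in it can be decoded directly. A hedged Go sketch that unmarshals a trimmed excerpt (field names taken from the log line; the structs are illustrative, not the kubelet's internal types):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Trimmed illustration of fields visible in the logged nodeConfig;
// not the kubelet's real types.
type threshold struct {
	Signal   string `json:"Signal"`
	Operator string `json:"Operator"`
	Value    struct {
		Quantity   *string `json:"Quantity"`
		Percentage float64 `json:"Percentage"`
	} `json:"Value"`
}

type nodeConfig struct {
	NodeName               string      `json:"NodeName"`
	CgroupDriver           string      `json:"CgroupDriver"`
	HardEvictionThresholds []threshold `json:"HardEvictionThresholds"`
}

func main() {
	// Excerpt of the JSON printed in the log above.
	raw := `{"NodeName":"srv-n2oyf.gb1.brightbox.com","CgroupDriver":"systemd",
	  "HardEvictionThresholds":[
	    {"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0}},
	    {"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1}}]}`

	var cfg nodeConfig
	if err := json.Unmarshal([]byte(raw), &cfg); err != nil {
		panic(err)
	}
	for _, t := range cfg.HardEvictionThresholds {
		if t.Value.Quantity != nil {
			fmt.Printf("%s %s %s\n", t.Signal, t.Operator, *t.Value.Quantity)
		} else {
			fmt.Printf("%s %s %.0f%%\n", t.Signal, t.Operator, t.Value.Percentage*100)
		}
	}
}
```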
Nov 1 03:50:51.497378 kubelet[1607]: I1101 03:50:51.497346 1607 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 1 03:50:51.497496 kubelet[1607]: I1101 03:50:51.497389 1607 server.go:1287] "Started kubelet" Nov 1 03:50:51.498362 kubelet[1607]: W1101 03:50:51.497541 1607 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.244.101.254:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-n2oyf.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.244.101.254:6443: connect: connection refused Nov 1 03:50:51.498362 kubelet[1607]: E1101 03:50:51.497618 1607 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.244.101.254:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-n2oyf.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.244.101.254:6443: connect: connection refused" logger="UnhandledError" Nov 1 03:50:51.509501 kubelet[1607]: W1101 03:50:51.509399 1607 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.244.101.254:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.244.101.254:6443: connect: connection refused Nov 1 03:50:51.509501 kubelet[1607]: E1101 03:50:51.509461 1607 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.244.101.254:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.244.101.254:6443: connect: connection refused" logger="UnhandledError" Nov 1 03:50:51.509666 kubelet[1607]: I1101 03:50:51.509511 1607 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 1 03:50:51.510112 kubelet[1607]: I1101 03:50:51.510063 1607 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 1 03:50:51.510522 kubelet[1607]: I1101 03:50:51.510469 1607 server.go:479] "Adding debug handlers to kubelet server" Nov 1 03:50:51.510656 kubelet[1607]: I1101 03:50:51.510643 1607 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 1 03:50:51.512268 kubelet[1607]: E1101 03:50:51.510990 1607 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.244.101.254:6443/api/v1/namespaces/default/events\": dial tcp 10.244.101.254:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-n2oyf.gb1.brightbox.com.1873c5845bbb8596 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-n2oyf.gb1.brightbox.com,UID:srv-n2oyf.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-n2oyf.gb1.brightbox.com,},FirstTimestamp:2025-11-01 03:50:51.497366934 +0000 UTC m=+0.687759039,LastTimestamp:2025-11-01 03:50:51.497366934 +0000 UTC m=+0.687759039,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-n2oyf.gb1.brightbox.com,}" Nov 1 03:50:51.516992 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
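The reflector, certificate and event errors above all share one cause: nothing is listening on 10.244.101.254:6443 because the kube-apiserver static pod has not started yet. An illustrative Go reachability probe of that endpoint (address copied from the errors above; this is a diagnostic sketch, not part of the kubelet):

```go
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Control-plane endpoint as it appears in the errors above.
	const endpoint = "10.244.101.254:6443"

	conn, err := net.DialTimeout("tcp", endpoint, 3*time.Second)
	if err != nil {
		// Before the kube-apiserver static pod is up this fails with
		// "connection refused", matching the reflector errors in the log.
		fmt.Fprintln(os.Stderr, "apiserver not reachable:", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections:", endpoint)
}
```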
Nov 1 03:50:51.517139 kubelet[1607]: I1101 03:50:51.516895 1607 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 1 03:50:51.517139 kubelet[1607]: I1101 03:50:51.516929 1607 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 1 03:50:51.518826 kubelet[1607]: E1101 03:50:51.518799 1607 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 1 03:50:51.521072 kubelet[1607]: I1101 03:50:51.521046 1607 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 1 03:50:51.521194 kubelet[1607]: I1101 03:50:51.521146 1607 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 1 03:50:51.521264 kubelet[1607]: I1101 03:50:51.521198 1607 reconciler.go:26] "Reconciler: start to sync state" Nov 1 03:50:51.529398 kubelet[1607]: I1101 03:50:51.528035 1607 factory.go:221] Registration of the systemd container factory successfully Nov 1 03:50:51.529398 kubelet[1607]: I1101 03:50:51.528118 1607 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 1 03:50:51.529398 kubelet[1607]: W1101 03:50:51.528323 1607 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.244.101.254:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.244.101.254:6443: connect: connection refused Nov 1 03:50:51.529398 kubelet[1607]: E1101 03:50:51.528397 1607 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.244.101.254:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.244.101.254:6443: connect: connection refused" logger="UnhandledError" Nov 1 03:50:51.529398 kubelet[1607]: E1101 03:50:51.528855 1607 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-n2oyf.gb1.brightbox.com\" not found" Nov 1 03:50:51.529398 kubelet[1607]: E1101 03:50:51.528935 1607 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.101.254:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-n2oyf.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.101.254:6443: connect: connection refused" interval="200ms" Nov 1 03:50:51.530247 kubelet[1607]: I1101 03:50:51.530218 1607 factory.go:221] Registration of the containerd container factory successfully Nov 1 03:50:51.548600 kubelet[1607]: I1101 03:50:51.548564 1607 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 1 03:50:51.549815 kubelet[1607]: I1101 03:50:51.549797 1607 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 1 03:50:51.549928 kubelet[1607]: I1101 03:50:51.549918 1607 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 1 03:50:51.550013 kubelet[1607]: I1101 03:50:51.550003 1607 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
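The lease controller above retries with interval="200ms"; later entries in this log show the interval doubling to 400ms, 800ms and 1.6s. A generic Go illustration of that doubling progression (not the kubelet's actual retry code):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// The retry intervals logged for the node lease double on each failure:
	// 200ms, 400ms, 800ms, 1.6s. Generic illustration of that progression.
	interval := 200 * time.Millisecond
	for attempt := 1; attempt <= 4; attempt++ {
		fmt.Printf("attempt %d: retry in %v\n", attempt, interval)
		interval *= 2
	}
}
```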
Nov 1 03:50:51.550083 kubelet[1607]: I1101 03:50:51.550074 1607 kubelet.go:2382] "Starting kubelet main sync loop" Nov 1 03:50:51.550217 kubelet[1607]: E1101 03:50:51.550190 1607 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 1 03:50:51.563337 kubelet[1607]: W1101 03:50:51.563282 1607 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.244.101.254:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.244.101.254:6443: connect: connection refused Nov 1 03:50:51.565178 kubelet[1607]: E1101 03:50:51.565144 1607 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.244.101.254:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.244.101.254:6443: connect: connection refused" logger="UnhandledError" Nov 1 03:50:51.566383 kubelet[1607]: I1101 03:50:51.566365 1607 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 1 03:50:51.566383 kubelet[1607]: I1101 03:50:51.566378 1607 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 1 03:50:51.566516 kubelet[1607]: I1101 03:50:51.566395 1607 state_mem.go:36] "Initialized new in-memory state store" Nov 1 03:50:51.568687 kubelet[1607]: I1101 03:50:51.568669 1607 policy_none.go:49] "None policy: Start" Nov 1 03:50:51.570171 kubelet[1607]: I1101 03:50:51.570153 1607 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 1 03:50:51.570284 kubelet[1607]: I1101 03:50:51.570273 1607 state_mem.go:35] "Initializing new in-memory state store" Nov 1 03:50:51.576015 systemd[1]: Created slice kubepods.slice. Nov 1 03:50:51.580873 systemd[1]: Created slice kubepods-burstable.slice. Nov 1 03:50:51.583511 systemd[1]: Created slice kubepods-besteffort.slice. Nov 1 03:50:51.590208 kubelet[1607]: I1101 03:50:51.590187 1607 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 1 03:50:51.590523 kubelet[1607]: I1101 03:50:51.590510 1607 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 1 03:50:51.590657 kubelet[1607]: I1101 03:50:51.590618 1607 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 1 03:50:51.591390 kubelet[1607]: I1101 03:50:51.591374 1607 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 1 03:50:51.592576 kubelet[1607]: E1101 03:50:51.592560 1607 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 1 03:50:51.592964 kubelet[1607]: E1101 03:50:51.592951 1607 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-n2oyf.gb1.brightbox.com\" not found" Nov 1 03:50:51.669840 systemd[1]: Created slice kubepods-burstable-podd4dd67e4e9587f6c35369d71106eb4bd.slice. Nov 1 03:50:51.681955 kubelet[1607]: E1101 03:50:51.681895 1607 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-n2oyf.gb1.brightbox.com\" not found" node="srv-n2oyf.gb1.brightbox.com" Nov 1 03:50:51.686058 systemd[1]: Created slice kubepods-burstable-podbdfa6e050b9f2c953ebee4aafbb42bd3.slice. 
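With CgroupDriver "systemd" and CgroupVersion 2 (from the nodeConfig above), the kubepods slices just created map to directories in the cgroup hierarchy. A small sketch that walks the standard cgroup v2 mount point and prints them (the /sys/fs/cgroup path is an assumption, not taken from the log):

```go
package main

import (
	"fmt"
	"io/fs"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	// Standard cgroup v2 mount point; an assumption, not taken from the log.
	const root = "/sys/fs/cgroup"

	// The "Created slice kubepods*.slice" lines above become nested
	// directories under this hierarchy when the systemd driver is used.
	err := filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
		if err != nil {
			return nil // skip unreadable entries
		}
		if d.IsDir() && strings.HasPrefix(d.Name(), "kubepods") {
			fmt.Println(path)
		}
		return nil
	})
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```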
Nov 1 03:50:51.692799 kubelet[1607]: E1101 03:50:51.692773 1607 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-n2oyf.gb1.brightbox.com\" not found" node="srv-n2oyf.gb1.brightbox.com" Nov 1 03:50:51.693948 kubelet[1607]: I1101 03:50:51.693861 1607 kubelet_node_status.go:75] "Attempting to register node" node="srv-n2oyf.gb1.brightbox.com" Nov 1 03:50:51.695572 systemd[1]: Created slice kubepods-burstable-podec34529b847490784fcde37e752c9f3e.slice. Nov 1 03:50:51.696080 kubelet[1607]: E1101 03:50:51.695977 1607 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.244.101.254:6443/api/v1/nodes\": dial tcp 10.244.101.254:6443: connect: connection refused" node="srv-n2oyf.gb1.brightbox.com" Nov 1 03:50:51.697109 kubelet[1607]: E1101 03:50:51.697089 1607 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-n2oyf.gb1.brightbox.com\" not found" node="srv-n2oyf.gb1.brightbox.com" Nov 1 03:50:51.730985 kubelet[1607]: E1101 03:50:51.730919 1607 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.101.254:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-n2oyf.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.101.254:6443: connect: connection refused" interval="400ms" Nov 1 03:50:51.822256 kubelet[1607]: I1101 03:50:51.822142 1607 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4dd67e4e9587f6c35369d71106eb4bd-usr-share-ca-certificates\") pod \"kube-apiserver-srv-n2oyf.gb1.brightbox.com\" (UID: \"d4dd67e4e9587f6c35369d71106eb4bd\") " pod="kube-system/kube-apiserver-srv-n2oyf.gb1.brightbox.com" Nov 1 03:50:51.822752 kubelet[1607]: I1101 03:50:51.822711 1607 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/bdfa6e050b9f2c953ebee4aafbb42bd3-flexvolume-dir\") pod \"kube-controller-manager-srv-n2oyf.gb1.brightbox.com\" (UID: \"bdfa6e050b9f2c953ebee4aafbb42bd3\") " pod="kube-system/kube-controller-manager-srv-n2oyf.gb1.brightbox.com" Nov 1 03:50:51.823020 kubelet[1607]: I1101 03:50:51.822986 1607 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bdfa6e050b9f2c953ebee4aafbb42bd3-k8s-certs\") pod \"kube-controller-manager-srv-n2oyf.gb1.brightbox.com\" (UID: \"bdfa6e050b9f2c953ebee4aafbb42bd3\") " pod="kube-system/kube-controller-manager-srv-n2oyf.gb1.brightbox.com" Nov 1 03:50:51.823308 kubelet[1607]: I1101 03:50:51.823275 1607 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bdfa6e050b9f2c953ebee4aafbb42bd3-kubeconfig\") pod \"kube-controller-manager-srv-n2oyf.gb1.brightbox.com\" (UID: \"bdfa6e050b9f2c953ebee4aafbb42bd3\") " pod="kube-system/kube-controller-manager-srv-n2oyf.gb1.brightbox.com" Nov 1 03:50:51.823620 kubelet[1607]: I1101 03:50:51.823584 1607 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bdfa6e050b9f2c953ebee4aafbb42bd3-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-n2oyf.gb1.brightbox.com\" (UID: \"bdfa6e050b9f2c953ebee4aafbb42bd3\") " 
pod="kube-system/kube-controller-manager-srv-n2oyf.gb1.brightbox.com" Nov 1 03:50:51.823863 kubelet[1607]: I1101 03:50:51.823829 1607 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4dd67e4e9587f6c35369d71106eb4bd-ca-certs\") pod \"kube-apiserver-srv-n2oyf.gb1.brightbox.com\" (UID: \"d4dd67e4e9587f6c35369d71106eb4bd\") " pod="kube-system/kube-apiserver-srv-n2oyf.gb1.brightbox.com" Nov 1 03:50:51.824114 kubelet[1607]: I1101 03:50:51.824077 1607 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4dd67e4e9587f6c35369d71106eb4bd-k8s-certs\") pod \"kube-apiserver-srv-n2oyf.gb1.brightbox.com\" (UID: \"d4dd67e4e9587f6c35369d71106eb4bd\") " pod="kube-system/kube-apiserver-srv-n2oyf.gb1.brightbox.com" Nov 1 03:50:51.824387 kubelet[1607]: I1101 03:50:51.824321 1607 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bdfa6e050b9f2c953ebee4aafbb42bd3-ca-certs\") pod \"kube-controller-manager-srv-n2oyf.gb1.brightbox.com\" (UID: \"bdfa6e050b9f2c953ebee4aafbb42bd3\") " pod="kube-system/kube-controller-manager-srv-n2oyf.gb1.brightbox.com" Nov 1 03:50:51.824658 kubelet[1607]: I1101 03:50:51.824623 1607 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ec34529b847490784fcde37e752c9f3e-kubeconfig\") pod \"kube-scheduler-srv-n2oyf.gb1.brightbox.com\" (UID: \"ec34529b847490784fcde37e752c9f3e\") " pod="kube-system/kube-scheduler-srv-n2oyf.gb1.brightbox.com" Nov 1 03:50:51.900569 kubelet[1607]: I1101 03:50:51.900512 1607 kubelet_node_status.go:75] "Attempting to register node" node="srv-n2oyf.gb1.brightbox.com" Nov 1 03:50:51.901285 kubelet[1607]: E1101 03:50:51.901052 1607 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.244.101.254:6443/api/v1/nodes\": dial tcp 10.244.101.254:6443: connect: connection refused" node="srv-n2oyf.gb1.brightbox.com" Nov 1 03:50:51.986579 env[1204]: time="2025-11-01T03:50:51.985272625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-n2oyf.gb1.brightbox.com,Uid:d4dd67e4e9587f6c35369d71106eb4bd,Namespace:kube-system,Attempt:0,}" Nov 1 03:50:51.996779 env[1204]: time="2025-11-01T03:50:51.996673929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-n2oyf.gb1.brightbox.com,Uid:bdfa6e050b9f2c953ebee4aafbb42bd3,Namespace:kube-system,Attempt:0,}" Nov 1 03:50:52.000409 env[1204]: time="2025-11-01T03:50:51.999906894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-n2oyf.gb1.brightbox.com,Uid:ec34529b847490784fcde37e752c9f3e,Namespace:kube-system,Attempt:0,}" Nov 1 03:50:52.133022 kubelet[1607]: E1101 03:50:52.132898 1607 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.101.254:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-n2oyf.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.101.254:6443: connect: connection refused" interval="800ms" Nov 1 03:50:52.310628 kubelet[1607]: I1101 03:50:52.310100 1607 kubelet_node_status.go:75] "Attempting to register node" node="srv-n2oyf.gb1.brightbox.com" Nov 1 03:50:52.311839 kubelet[1607]: E1101 03:50:52.311780 1607 kubelet_node_status.go:107] "Unable to register node 
with API server" err="Post \"https://10.244.101.254:6443/api/v1/nodes\": dial tcp 10.244.101.254:6443: connect: connection refused" node="srv-n2oyf.gb1.brightbox.com" Nov 1 03:50:52.482001 kubelet[1607]: W1101 03:50:52.481863 1607 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.244.101.254:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.244.101.254:6443: connect: connection refused Nov 1 03:50:52.482297 kubelet[1607]: E1101 03:50:52.482010 1607 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.244.101.254:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.244.101.254:6443: connect: connection refused" logger="UnhandledError" Nov 1 03:50:52.614743 kubelet[1607]: W1101 03:50:52.614649 1607 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.244.101.254:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.244.101.254:6443: connect: connection refused Nov 1 03:50:52.614743 kubelet[1607]: E1101 03:50:52.614743 1607 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.244.101.254:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.244.101.254:6443: connect: connection refused" logger="UnhandledError" Nov 1 03:50:52.674431 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1649992452.mount: Deactivated successfully. Nov 1 03:50:52.679324 env[1204]: time="2025-11-01T03:50:52.679215995Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 03:50:52.681749 env[1204]: time="2025-11-01T03:50:52.681699183Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 03:50:52.686555 env[1204]: time="2025-11-01T03:50:52.686502449Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 03:50:52.687383 env[1204]: time="2025-11-01T03:50:52.687325201Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 03:50:52.692653 env[1204]: time="2025-11-01T03:50:52.692605294Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 03:50:52.693928 env[1204]: time="2025-11-01T03:50:52.693890551Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 03:50:52.697785 env[1204]: time="2025-11-01T03:50:52.697744754Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 03:50:52.698627 env[1204]: 
time="2025-11-01T03:50:52.698570796Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 03:50:52.699402 env[1204]: time="2025-11-01T03:50:52.699379096Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 03:50:52.700172 env[1204]: time="2025-11-01T03:50:52.700151727Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 03:50:52.705646 env[1204]: time="2025-11-01T03:50:52.705621202Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 03:50:52.716958 env[1204]: time="2025-11-01T03:50:52.716929231Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 03:50:52.724091 env[1204]: time="2025-11-01T03:50:52.723983467Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 03:50:52.724265 env[1204]: time="2025-11-01T03:50:52.724065034Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 03:50:52.724265 env[1204]: time="2025-11-01T03:50:52.724246431Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 03:50:52.728762 env[1204]: time="2025-11-01T03:50:52.728729208Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9cdd120b150a5a587696bc859f65bf8d108aa45753862113c6aefc2b6d30003e pid=1653 runtime=io.containerd.runc.v2 Nov 1 03:50:52.729565 env[1204]: time="2025-11-01T03:50:52.729485513Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 03:50:52.729664 env[1204]: time="2025-11-01T03:50:52.729555694Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 03:50:52.729664 env[1204]: time="2025-11-01T03:50:52.729568571Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 03:50:52.729803 env[1204]: time="2025-11-01T03:50:52.729769704Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/123269f2df7ad496ffe849b3968e64d7b3e53de4c4dd82e54e89632a2b19051b pid=1661 runtime=io.containerd.runc.v2 Nov 1 03:50:52.739087 kubelet[1607]: W1101 03:50:52.738804 1607 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.244.101.254:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.244.101.254:6443: connect: connection refused Nov 1 03:50:52.739087 kubelet[1607]: E1101 03:50:52.738854 1607 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.244.101.254:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.244.101.254:6443: connect: connection refused" logger="UnhandledError" Nov 1 03:50:52.745765 env[1204]: time="2025-11-01T03:50:52.745677608Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 03:50:52.745939 env[1204]: time="2025-11-01T03:50:52.745738954Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 03:50:52.745939 env[1204]: time="2025-11-01T03:50:52.745768404Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 03:50:52.747094 env[1204]: time="2025-11-01T03:50:52.747043435Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3ac8d018015cf38a7fe2323845beeb9f21d4e09c7251e8d6967ada9a53088c1e pid=1686 runtime=io.containerd.runc.v2 Nov 1 03:50:52.760488 systemd[1]: Started cri-containerd-123269f2df7ad496ffe849b3968e64d7b3e53de4c4dd82e54e89632a2b19051b.scope. Nov 1 03:50:52.779167 systemd[1]: Started cri-containerd-3ac8d018015cf38a7fe2323845beeb9f21d4e09c7251e8d6967ada9a53088c1e.scope. Nov 1 03:50:52.791044 systemd[1]: Started cri-containerd-9cdd120b150a5a587696bc859f65bf8d108aa45753862113c6aefc2b6d30003e.scope. 
Nov 1 03:50:52.848865 env[1204]: time="2025-11-01T03:50:52.848815075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-n2oyf.gb1.brightbox.com,Uid:bdfa6e050b9f2c953ebee4aafbb42bd3,Namespace:kube-system,Attempt:0,} returns sandbox id \"9cdd120b150a5a587696bc859f65bf8d108aa45753862113c6aefc2b6d30003e\"" Nov 1 03:50:52.852363 env[1204]: time="2025-11-01T03:50:52.852058942Z" level=info msg="CreateContainer within sandbox \"9cdd120b150a5a587696bc859f65bf8d108aa45753862113c6aefc2b6d30003e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 1 03:50:52.868451 env[1204]: time="2025-11-01T03:50:52.868321361Z" level=info msg="CreateContainer within sandbox \"9cdd120b150a5a587696bc859f65bf8d108aa45753862113c6aefc2b6d30003e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"10ef7ba99d3540d3737d2892f9cbe1b2f18f3e86bd5e8cc7cf1eb370c8719f55\"" Nov 1 03:50:52.869741 env[1204]: time="2025-11-01T03:50:52.869720811Z" level=info msg="StartContainer for \"10ef7ba99d3540d3737d2892f9cbe1b2f18f3e86bd5e8cc7cf1eb370c8719f55\"" Nov 1 03:50:52.873137 env[1204]: time="2025-11-01T03:50:52.873111438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-n2oyf.gb1.brightbox.com,Uid:d4dd67e4e9587f6c35369d71106eb4bd,Namespace:kube-system,Attempt:0,} returns sandbox id \"123269f2df7ad496ffe849b3968e64d7b3e53de4c4dd82e54e89632a2b19051b\"" Nov 1 03:50:52.876855 env[1204]: time="2025-11-01T03:50:52.876830612Z" level=info msg="CreateContainer within sandbox \"123269f2df7ad496ffe849b3968e64d7b3e53de4c4dd82e54e89632a2b19051b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 1 03:50:52.887490 env[1204]: time="2025-11-01T03:50:52.887452147Z" level=info msg="CreateContainer within sandbox \"123269f2df7ad496ffe849b3968e64d7b3e53de4c4dd82e54e89632a2b19051b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7e167a36cd77a748e4268045997d1a00208ccbf76d40e6675999a648094797ff\"" Nov 1 03:50:52.888083 env[1204]: time="2025-11-01T03:50:52.888061070Z" level=info msg="StartContainer for \"7e167a36cd77a748e4268045997d1a00208ccbf76d40e6675999a648094797ff\"" Nov 1 03:50:52.899371 env[1204]: time="2025-11-01T03:50:52.899317912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-n2oyf.gb1.brightbox.com,Uid:ec34529b847490784fcde37e752c9f3e,Namespace:kube-system,Attempt:0,} returns sandbox id \"3ac8d018015cf38a7fe2323845beeb9f21d4e09c7251e8d6967ada9a53088c1e\"" Nov 1 03:50:52.901686 env[1204]: time="2025-11-01T03:50:52.901659664Z" level=info msg="CreateContainer within sandbox \"3ac8d018015cf38a7fe2323845beeb9f21d4e09c7251e8d6967ada9a53088c1e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 1 03:50:52.908500 systemd[1]: Started cri-containerd-10ef7ba99d3540d3737d2892f9cbe1b2f18f3e86bd5e8cc7cf1eb370c8719f55.scope. Nov 1 03:50:52.929136 systemd[1]: Started cri-containerd-7e167a36cd77a748e4268045997d1a00208ccbf76d40e6675999a648094797ff.scope. 
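The containerd timestamps above are RFC 3339 with nanosecond precision, so the gap between a sandbox becoming ready and the corresponding StartContainer request can be computed directly. An illustrative Go sketch using two timestamps copied from this log (the kube-controller-manager pod):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the log: sandbox id returned vs. StartContainer
	// request for the kube-controller-manager static pod.
	sandboxReady, err := time.Parse(time.RFC3339Nano, "2025-11-01T03:50:52.848815075Z")
	if err != nil {
		panic(err)
	}
	startContainer, err := time.Parse(time.RFC3339Nano, "2025-11-01T03:50:52.869720811Z")
	if err != nil {
		panic(err)
	}
	fmt.Println("sandbox -> StartContainer:", startContainer.Sub(sandboxReady))
}
```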
Nov 1 03:50:52.933429 env[1204]: time="2025-11-01T03:50:52.932502086Z" level=info msg="CreateContainer within sandbox \"3ac8d018015cf38a7fe2323845beeb9f21d4e09c7251e8d6967ada9a53088c1e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9d51810101b73055ed3151ddd579a4add690852981d68799133ed6eee11a8ee2\"" Nov 1 03:50:52.933982 env[1204]: time="2025-11-01T03:50:52.933958462Z" level=info msg="StartContainer for \"9d51810101b73055ed3151ddd579a4add690852981d68799133ed6eee11a8ee2\"" Nov 1 03:50:52.934168 kubelet[1607]: E1101 03:50:52.934134 1607 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.101.254:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-n2oyf.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.101.254:6443: connect: connection refused" interval="1.6s" Nov 1 03:50:52.974435 systemd[1]: Started cri-containerd-9d51810101b73055ed3151ddd579a4add690852981d68799133ed6eee11a8ee2.scope. Nov 1 03:50:53.010782 env[1204]: time="2025-11-01T03:50:53.010739219Z" level=info msg="StartContainer for \"7e167a36cd77a748e4268045997d1a00208ccbf76d40e6675999a648094797ff\" returns successfully" Nov 1 03:50:53.019390 env[1204]: time="2025-11-01T03:50:53.019326963Z" level=info msg="StartContainer for \"10ef7ba99d3540d3737d2892f9cbe1b2f18f3e86bd5e8cc7cf1eb370c8719f55\" returns successfully" Nov 1 03:50:53.030236 kubelet[1607]: W1101 03:50:53.030175 1607 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.244.101.254:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-n2oyf.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.244.101.254:6443: connect: connection refused Nov 1 03:50:53.030414 kubelet[1607]: E1101 03:50:53.030245 1607 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.244.101.254:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-n2oyf.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.244.101.254:6443: connect: connection refused" logger="UnhandledError" Nov 1 03:50:53.062036 env[1204]: time="2025-11-01T03:50:53.061978598Z" level=info msg="StartContainer for \"9d51810101b73055ed3151ddd579a4add690852981d68799133ed6eee11a8ee2\" returns successfully" Nov 1 03:50:53.116146 kubelet[1607]: I1101 03:50:53.115604 1607 kubelet_node_status.go:75] "Attempting to register node" node="srv-n2oyf.gb1.brightbox.com" Nov 1 03:50:53.116368 kubelet[1607]: E1101 03:50:53.116278 1607 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.244.101.254:6443/api/v1/nodes\": dial tcp 10.244.101.254:6443: connect: connection refused" node="srv-n2oyf.gb1.brightbox.com" Nov 1 03:50:53.479370 kubelet[1607]: E1101 03:50:53.479308 1607 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.244.101.254:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.244.101.254:6443: connect: connection refused" logger="UnhandledError" Nov 1 03:50:53.576803 kubelet[1607]: E1101 03:50:53.576765 1607 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-n2oyf.gb1.brightbox.com\" not found" node="srv-n2oyf.gb1.brightbox.com" Nov 1 03:50:53.578385 kubelet[1607]: E1101 03:50:53.578184 1607 kubelet.go:3190] 
"No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-n2oyf.gb1.brightbox.com\" not found" node="srv-n2oyf.gb1.brightbox.com" Nov 1 03:50:53.580887 kubelet[1607]: E1101 03:50:53.580863 1607 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-n2oyf.gb1.brightbox.com\" not found" node="srv-n2oyf.gb1.brightbox.com" Nov 1 03:50:54.583927 kubelet[1607]: E1101 03:50:54.583892 1607 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-n2oyf.gb1.brightbox.com\" not found" node="srv-n2oyf.gb1.brightbox.com" Nov 1 03:50:54.585852 kubelet[1607]: E1101 03:50:54.585810 1607 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-n2oyf.gb1.brightbox.com\" not found" node="srv-n2oyf.gb1.brightbox.com" Nov 1 03:50:54.719244 kubelet[1607]: I1101 03:50:54.718957 1607 kubelet_node_status.go:75] "Attempting to register node" node="srv-n2oyf.gb1.brightbox.com" Nov 1 03:50:55.313128 kubelet[1607]: E1101 03:50:55.313077 1607 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-n2oyf.gb1.brightbox.com\" not found" node="srv-n2oyf.gb1.brightbox.com" Nov 1 03:50:55.369049 kubelet[1607]: I1101 03:50:55.369014 1607 kubelet_node_status.go:78] "Successfully registered node" node="srv-n2oyf.gb1.brightbox.com" Nov 1 03:50:55.369244 kubelet[1607]: E1101 03:50:55.369225 1607 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"srv-n2oyf.gb1.brightbox.com\": node \"srv-n2oyf.gb1.brightbox.com\" not found" Nov 1 03:50:55.444063 kubelet[1607]: E1101 03:50:55.444032 1607 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-n2oyf.gb1.brightbox.com\" not found" Nov 1 03:50:55.545350 kubelet[1607]: E1101 03:50:55.545303 1607 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-n2oyf.gb1.brightbox.com\" not found" Nov 1 03:50:55.646668 kubelet[1607]: E1101 03:50:55.646585 1607 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-n2oyf.gb1.brightbox.com\" not found" Nov 1 03:50:55.747134 kubelet[1607]: E1101 03:50:55.747020 1607 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-n2oyf.gb1.brightbox.com\" not found" Nov 1 03:50:55.929890 kubelet[1607]: I1101 03:50:55.929679 1607 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-n2oyf.gb1.brightbox.com" Nov 1 03:50:55.944511 kubelet[1607]: E1101 03:50:55.944458 1607 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-n2oyf.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-srv-n2oyf.gb1.brightbox.com" Nov 1 03:50:55.944850 kubelet[1607]: I1101 03:50:55.944814 1607 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-n2oyf.gb1.brightbox.com" Nov 1 03:50:55.947251 kubelet[1607]: E1101 03:50:55.947185 1607 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-n2oyf.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-n2oyf.gb1.brightbox.com" Nov 1 03:50:55.947613 kubelet[1607]: I1101 03:50:55.947581 1607 kubelet.go:3194] "Creating 
a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-n2oyf.gb1.brightbox.com" Nov 1 03:50:55.950052 kubelet[1607]: E1101 03:50:55.950008 1607 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-n2oyf.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-n2oyf.gb1.brightbox.com" Nov 1 03:50:56.511041 kubelet[1607]: I1101 03:50:56.510949 1607 apiserver.go:52] "Watching apiserver" Nov 1 03:50:56.521769 kubelet[1607]: I1101 03:50:56.521585 1607 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 1 03:50:57.819544 systemd[1]: Reloading. Nov 1 03:50:57.933021 /usr/lib/systemd/system-generators/torcx-generator[1894]: time="2025-11-01T03:50:57Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Nov 1 03:50:57.936409 /usr/lib/systemd/system-generators/torcx-generator[1894]: time="2025-11-01T03:50:57Z" level=info msg="torcx already run" Nov 1 03:50:57.994562 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Nov 1 03:50:57.994791 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 1 03:50:58.014890 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 03:50:58.144172 systemd[1]: Stopping kubelet.service... Nov 1 03:50:58.172550 systemd[1]: kubelet.service: Deactivated successfully. Nov 1 03:50:58.173441 systemd[1]: Stopped kubelet.service. Nov 1 03:50:58.173939 systemd[1]: kubelet.service: Consumed 1.162s CPU time. Nov 1 03:50:58.180818 systemd[1]: Starting kubelet.service... Nov 1 03:50:59.297678 systemd[1]: Started kubelet.service. Nov 1 03:50:59.396825 sudo[1956]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Nov 1 03:50:59.397088 sudo[1956]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Nov 1 03:50:59.425287 kubelet[1945]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 03:50:59.425287 kubelet[1945]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 1 03:50:59.425287 kubelet[1945]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 1 03:50:59.425766 kubelet[1945]: I1101 03:50:59.425408 1945 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 1 03:50:59.446897 kubelet[1945]: I1101 03:50:59.446860 1945 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 1 03:50:59.448119 kubelet[1945]: I1101 03:50:59.447071 1945 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 1 03:50:59.448119 kubelet[1945]: I1101 03:50:59.447447 1945 server.go:954] "Client rotation is on, will bootstrap in background" Nov 1 03:50:59.449373 kubelet[1945]: I1101 03:50:59.448956 1945 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Nov 1 03:50:59.453440 kubelet[1945]: I1101 03:50:59.453419 1945 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 03:50:59.460377 kubelet[1945]: E1101 03:50:59.460344 1945 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 1 03:50:59.460495 kubelet[1945]: I1101 03:50:59.460484 1945 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 1 03:50:59.463585 kubelet[1945]: I1101 03:50:59.463568 1945 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 1 03:50:59.463955 kubelet[1945]: I1101 03:50:59.463922 1945 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 1 03:50:59.464201 kubelet[1945]: I1101 03:50:59.464029 1945 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-n2oyf.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 1 03:50:59.464447 kubelet[1945]: I1101 03:50:59.464434 1945 topology_manager.go:138] 
"Creating topology manager with none policy" Nov 1 03:50:59.464518 kubelet[1945]: I1101 03:50:59.464509 1945 container_manager_linux.go:304] "Creating device plugin manager" Nov 1 03:50:59.464647 kubelet[1945]: I1101 03:50:59.464637 1945 state_mem.go:36] "Initialized new in-memory state store" Nov 1 03:50:59.473094 kubelet[1945]: I1101 03:50:59.473074 1945 kubelet.go:446] "Attempting to sync node with API server" Nov 1 03:50:59.473254 kubelet[1945]: I1101 03:50:59.473233 1945 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 1 03:50:59.473385 kubelet[1945]: I1101 03:50:59.473375 1945 kubelet.go:352] "Adding apiserver pod source" Nov 1 03:50:59.473452 kubelet[1945]: I1101 03:50:59.473443 1945 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 1 03:50:59.496641 kubelet[1945]: I1101 03:50:59.496619 1945 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Nov 1 03:50:59.497269 kubelet[1945]: I1101 03:50:59.497255 1945 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 1 03:50:59.499052 kubelet[1945]: I1101 03:50:59.499022 1945 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 1 03:50:59.499240 kubelet[1945]: I1101 03:50:59.499230 1945 server.go:1287] "Started kubelet" Nov 1 03:50:59.508378 kubelet[1945]: E1101 03:50:59.508356 1945 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 1 03:50:59.510289 kubelet[1945]: I1101 03:50:59.510235 1945 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 1 03:50:59.510870 kubelet[1945]: I1101 03:50:59.510855 1945 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 1 03:50:59.511049 kubelet[1945]: I1101 03:50:59.511028 1945 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 1 03:50:59.512618 kubelet[1945]: I1101 03:50:59.512602 1945 server.go:479] "Adding debug handlers to kubelet server" Nov 1 03:50:59.514532 kubelet[1945]: I1101 03:50:59.514518 1945 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 1 03:50:59.519585 kubelet[1945]: I1101 03:50:59.515447 1945 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 1 03:50:59.520236 kubelet[1945]: I1101 03:50:59.520224 1945 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 1 03:50:59.521531 kubelet[1945]: I1101 03:50:59.521518 1945 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 1 03:50:59.521744 kubelet[1945]: I1101 03:50:59.521735 1945 reconciler.go:26] "Reconciler: start to sync state" Nov 1 03:50:59.523760 kubelet[1945]: I1101 03:50:59.523743 1945 factory.go:221] Registration of the systemd container factory successfully Nov 1 03:50:59.523974 kubelet[1945]: I1101 03:50:59.523952 1945 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 1 03:50:59.525626 kubelet[1945]: I1101 03:50:59.525611 1945 factory.go:221] Registration of the containerd container factory successfully Nov 1 03:50:59.591464 kubelet[1945]: I1101 03:50:59.591426 1945 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Nov 1 03:50:59.596108 kubelet[1945]: I1101 03:50:59.596078 1945 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 1 03:50:59.596315 kubelet[1945]: I1101 03:50:59.596301 1945 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 1 03:50:59.596433 kubelet[1945]: I1101 03:50:59.596421 1945 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 1 03:50:59.596543 kubelet[1945]: I1101 03:50:59.596533 1945 kubelet.go:2382] "Starting kubelet main sync loop" Nov 1 03:50:59.596662 kubelet[1945]: E1101 03:50:59.596642 1945 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 1 03:50:59.599652 kubelet[1945]: I1101 03:50:59.599625 1945 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 1 03:50:59.599804 kubelet[1945]: I1101 03:50:59.599792 1945 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 1 03:50:59.599889 kubelet[1945]: I1101 03:50:59.599880 1945 state_mem.go:36] "Initialized new in-memory state store" Nov 1 03:50:59.600157 kubelet[1945]: I1101 03:50:59.600144 1945 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 1 03:50:59.600283 kubelet[1945]: I1101 03:50:59.600259 1945 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 1 03:50:59.600392 kubelet[1945]: I1101 03:50:59.600382 1945 policy_none.go:49] "None policy: Start" Nov 1 03:50:59.600481 kubelet[1945]: I1101 03:50:59.600472 1945 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 1 03:50:59.600570 kubelet[1945]: I1101 03:50:59.600561 1945 state_mem.go:35] "Initializing new in-memory state store" Nov 1 03:50:59.600779 kubelet[1945]: I1101 03:50:59.600763 1945 state_mem.go:75] "Updated machine memory state" Nov 1 03:50:59.624675 kubelet[1945]: I1101 03:50:59.624652 1945 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 1 03:50:59.625326 kubelet[1945]: I1101 03:50:59.625311 1945 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 1 03:50:59.625821 kubelet[1945]: I1101 03:50:59.625706 1945 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 1 03:50:59.627442 kubelet[1945]: I1101 03:50:59.627409 1945 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 1 03:50:59.629475 kubelet[1945]: E1101 03:50:59.629458 1945 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 1 03:50:59.702973 kubelet[1945]: I1101 03:50:59.702919 1945 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-n2oyf.gb1.brightbox.com" Nov 1 03:50:59.711859 kubelet[1945]: I1101 03:50:59.711834 1945 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-n2oyf.gb1.brightbox.com" Nov 1 03:50:59.712503 kubelet[1945]: I1101 03:50:59.712480 1945 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-n2oyf.gb1.brightbox.com" Nov 1 03:50:59.717850 kubelet[1945]: W1101 03:50:59.717822 1945 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 1 03:50:59.719453 kubelet[1945]: W1101 03:50:59.719435 1945 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 1 03:50:59.723191 kubelet[1945]: I1101 03:50:59.723169 1945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bdfa6e050b9f2c953ebee4aafbb42bd3-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-n2oyf.gb1.brightbox.com\" (UID: \"bdfa6e050b9f2c953ebee4aafbb42bd3\") " pod="kube-system/kube-controller-manager-srv-n2oyf.gb1.brightbox.com" Nov 1 03:50:59.723360 kubelet[1945]: I1101 03:50:59.723330 1945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ec34529b847490784fcde37e752c9f3e-kubeconfig\") pod \"kube-scheduler-srv-n2oyf.gb1.brightbox.com\" (UID: \"ec34529b847490784fcde37e752c9f3e\") " pod="kube-system/kube-scheduler-srv-n2oyf.gb1.brightbox.com" Nov 1 03:50:59.723461 kubelet[1945]: I1101 03:50:59.723445 1945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4dd67e4e9587f6c35369d71106eb4bd-ca-certs\") pod \"kube-apiserver-srv-n2oyf.gb1.brightbox.com\" (UID: \"d4dd67e4e9587f6c35369d71106eb4bd\") " pod="kube-system/kube-apiserver-srv-n2oyf.gb1.brightbox.com" Nov 1 03:50:59.723556 kubelet[1945]: I1101 03:50:59.723544 1945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4dd67e4e9587f6c35369d71106eb4bd-k8s-certs\") pod \"kube-apiserver-srv-n2oyf.gb1.brightbox.com\" (UID: \"d4dd67e4e9587f6c35369d71106eb4bd\") " pod="kube-system/kube-apiserver-srv-n2oyf.gb1.brightbox.com" Nov 1 03:50:59.723650 kubelet[1945]: I1101 03:50:59.723638 1945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4dd67e4e9587f6c35369d71106eb4bd-usr-share-ca-certificates\") pod \"kube-apiserver-srv-n2oyf.gb1.brightbox.com\" (UID: \"d4dd67e4e9587f6c35369d71106eb4bd\") " pod="kube-system/kube-apiserver-srv-n2oyf.gb1.brightbox.com" Nov 1 03:50:59.723770 kubelet[1945]: I1101 03:50:59.723758 1945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/bdfa6e050b9f2c953ebee4aafbb42bd3-flexvolume-dir\") pod \"kube-controller-manager-srv-n2oyf.gb1.brightbox.com\" (UID: 
\"bdfa6e050b9f2c953ebee4aafbb42bd3\") " pod="kube-system/kube-controller-manager-srv-n2oyf.gb1.brightbox.com" Nov 1 03:50:59.723867 kubelet[1945]: I1101 03:50:59.723854 1945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bdfa6e050b9f2c953ebee4aafbb42bd3-ca-certs\") pod \"kube-controller-manager-srv-n2oyf.gb1.brightbox.com\" (UID: \"bdfa6e050b9f2c953ebee4aafbb42bd3\") " pod="kube-system/kube-controller-manager-srv-n2oyf.gb1.brightbox.com" Nov 1 03:50:59.723955 kubelet[1945]: I1101 03:50:59.723942 1945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bdfa6e050b9f2c953ebee4aafbb42bd3-k8s-certs\") pod \"kube-controller-manager-srv-n2oyf.gb1.brightbox.com\" (UID: \"bdfa6e050b9f2c953ebee4aafbb42bd3\") " pod="kube-system/kube-controller-manager-srv-n2oyf.gb1.brightbox.com" Nov 1 03:50:59.724053 kubelet[1945]: I1101 03:50:59.724040 1945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bdfa6e050b9f2c953ebee4aafbb42bd3-kubeconfig\") pod \"kube-controller-manager-srv-n2oyf.gb1.brightbox.com\" (UID: \"bdfa6e050b9f2c953ebee4aafbb42bd3\") " pod="kube-system/kube-controller-manager-srv-n2oyf.gb1.brightbox.com" Nov 1 03:50:59.724721 kubelet[1945]: W1101 03:50:59.724706 1945 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 1 03:50:59.749906 kubelet[1945]: I1101 03:50:59.749877 1945 kubelet_node_status.go:75] "Attempting to register node" node="srv-n2oyf.gb1.brightbox.com" Nov 1 03:50:59.763233 kubelet[1945]: I1101 03:50:59.763209 1945 kubelet_node_status.go:124] "Node was previously registered" node="srv-n2oyf.gb1.brightbox.com" Nov 1 03:50:59.763454 kubelet[1945]: I1101 03:50:59.763443 1945 kubelet_node_status.go:78] "Successfully registered node" node="srv-n2oyf.gb1.brightbox.com" Nov 1 03:51:00.141761 sudo[1956]: pam_unix(sudo:session): session closed for user root Nov 1 03:51:00.484682 kubelet[1945]: I1101 03:51:00.484270 1945 apiserver.go:52] "Watching apiserver" Nov 1 03:51:00.522117 kubelet[1945]: I1101 03:51:00.522019 1945 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 1 03:51:00.622304 kubelet[1945]: I1101 03:51:00.622243 1945 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-n2oyf.gb1.brightbox.com" Nov 1 03:51:00.643005 kubelet[1945]: W1101 03:51:00.642975 1945 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 1 03:51:00.643285 kubelet[1945]: E1101 03:51:00.643260 1945 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-n2oyf.gb1.brightbox.com\" already exists" pod="kube-system/kube-scheduler-srv-n2oyf.gb1.brightbox.com" Nov 1 03:51:00.708758 kubelet[1945]: I1101 03:51:00.708676 1945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-n2oyf.gb1.brightbox.com" podStartSLOduration=1.708647912 podStartE2EDuration="1.708647912s" podCreationTimestamp="2025-11-01 03:50:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-11-01 03:51:00.683506145 +0000 UTC m=+1.365977990" watchObservedRunningTime="2025-11-01 03:51:00.708647912 +0000 UTC m=+1.391119734" Nov 1 03:51:00.737288 kubelet[1945]: I1101 03:51:00.737153 1945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-n2oyf.gb1.brightbox.com" podStartSLOduration=1.7371267110000002 podStartE2EDuration="1.737126711s" podCreationTimestamp="2025-11-01 03:50:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 03:51:00.735917855 +0000 UTC m=+1.418389697" watchObservedRunningTime="2025-11-01 03:51:00.737126711 +0000 UTC m=+1.419598534" Nov 1 03:51:00.737604 kubelet[1945]: I1101 03:51:00.737575 1945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-n2oyf.gb1.brightbox.com" podStartSLOduration=1.7375549810000002 podStartE2EDuration="1.737554981s" podCreationTimestamp="2025-11-01 03:50:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 03:51:00.709856521 +0000 UTC m=+1.392328366" watchObservedRunningTime="2025-11-01 03:51:00.737554981 +0000 UTC m=+1.420026828" Nov 1 03:51:02.247313 sudo[1324]: pam_unix(sudo:session): session closed for user root Nov 1 03:51:02.393707 sshd[1321]: pam_unix(sshd:session): session closed for user core Nov 1 03:51:02.405427 systemd-logind[1186]: Session 5 logged out. Waiting for processes to exit. Nov 1 03:51:02.406501 systemd[1]: sshd@4-10.244.101.254:22-139.178.89.65:49838.service: Deactivated successfully. Nov 1 03:51:02.409135 systemd[1]: session-5.scope: Deactivated successfully. Nov 1 03:51:02.409359 systemd[1]: session-5.scope: Consumed 5.577s CPU time. Nov 1 03:51:02.413349 systemd-logind[1186]: Removed session 5. Nov 1 03:51:03.048517 kubelet[1945]: I1101 03:51:03.048410 1945 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 1 03:51:03.050951 env[1204]: time="2025-11-01T03:51:03.050553133Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 1 03:51:03.052279 kubelet[1945]: I1101 03:51:03.052205 1945 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 1 03:51:03.915215 systemd[1]: Created slice kubepods-besteffort-poda6fc3a1d_ad6b_462d_9cbd_836d792da239.slice. Nov 1 03:51:03.929095 systemd[1]: Created slice kubepods-burstable-pod7d5709e6_fa43_4c18_93bf_cfe4733c46ce.slice. 
Nov 1 03:51:03.950844 kubelet[1945]: I1101 03:51:03.950789 1945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7d5709e6-fa43-4c18-93bf-cfe4733c46ce-cilium-run\") pod \"cilium-b4zkl\" (UID: \"7d5709e6-fa43-4c18-93bf-cfe4733c46ce\") " pod="kube-system/cilium-b4zkl" Nov 1 03:51:03.951097 kubelet[1945]: I1101 03:51:03.951069 1945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7d5709e6-fa43-4c18-93bf-cfe4733c46ce-bpf-maps\") pod \"cilium-b4zkl\" (UID: \"7d5709e6-fa43-4c18-93bf-cfe4733c46ce\") " pod="kube-system/cilium-b4zkl" Nov 1 03:51:03.951193 kubelet[1945]: I1101 03:51:03.951177 1945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ls56\" (UniqueName: \"kubernetes.io/projected/7d5709e6-fa43-4c18-93bf-cfe4733c46ce-kube-api-access-5ls56\") pod \"cilium-b4zkl\" (UID: \"7d5709e6-fa43-4c18-93bf-cfe4733c46ce\") " pod="kube-system/cilium-b4zkl" Nov 1 03:51:03.951285 kubelet[1945]: I1101 03:51:03.951273 1945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7d5709e6-fa43-4c18-93bf-cfe4733c46ce-cni-path\") pod \"cilium-b4zkl\" (UID: \"7d5709e6-fa43-4c18-93bf-cfe4733c46ce\") " pod="kube-system/cilium-b4zkl" Nov 1 03:51:03.951380 kubelet[1945]: I1101 03:51:03.951367 1945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7d5709e6-fa43-4c18-93bf-cfe4733c46ce-clustermesh-secrets\") pod \"cilium-b4zkl\" (UID: \"7d5709e6-fa43-4c18-93bf-cfe4733c46ce\") " pod="kube-system/cilium-b4zkl" Nov 1 03:51:03.951489 kubelet[1945]: I1101 03:51:03.951472 1945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a6fc3a1d-ad6b-462d-9cbd-836d792da239-kube-proxy\") pod \"kube-proxy-mn6ln\" (UID: \"a6fc3a1d-ad6b-462d-9cbd-836d792da239\") " pod="kube-system/kube-proxy-mn6ln" Nov 1 03:51:03.951588 kubelet[1945]: I1101 03:51:03.951559 1945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7d5709e6-fa43-4c18-93bf-cfe4733c46ce-etc-cni-netd\") pod \"cilium-b4zkl\" (UID: \"7d5709e6-fa43-4c18-93bf-cfe4733c46ce\") " pod="kube-system/cilium-b4zkl" Nov 1 03:51:03.951668 kubelet[1945]: I1101 03:51:03.951654 1945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7d5709e6-fa43-4c18-93bf-cfe4733c46ce-lib-modules\") pod \"cilium-b4zkl\" (UID: \"7d5709e6-fa43-4c18-93bf-cfe4733c46ce\") " pod="kube-system/cilium-b4zkl" Nov 1 03:51:03.951752 kubelet[1945]: I1101 03:51:03.951739 1945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7d5709e6-fa43-4c18-93bf-cfe4733c46ce-hubble-tls\") pod \"cilium-b4zkl\" (UID: \"7d5709e6-fa43-4c18-93bf-cfe4733c46ce\") " pod="kube-system/cilium-b4zkl" Nov 1 03:51:03.951830 kubelet[1945]: I1101 03:51:03.951818 1945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvdkd\" (UniqueName: 
\"kubernetes.io/projected/a6fc3a1d-ad6b-462d-9cbd-836d792da239-kube-api-access-zvdkd\") pod \"kube-proxy-mn6ln\" (UID: \"a6fc3a1d-ad6b-462d-9cbd-836d792da239\") " pod="kube-system/kube-proxy-mn6ln" Nov 1 03:51:03.951923 kubelet[1945]: I1101 03:51:03.951903 1945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7d5709e6-fa43-4c18-93bf-cfe4733c46ce-cilium-config-path\") pod \"cilium-b4zkl\" (UID: \"7d5709e6-fa43-4c18-93bf-cfe4733c46ce\") " pod="kube-system/cilium-b4zkl" Nov 1 03:51:03.952008 kubelet[1945]: I1101 03:51:03.951997 1945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a6fc3a1d-ad6b-462d-9cbd-836d792da239-xtables-lock\") pod \"kube-proxy-mn6ln\" (UID: \"a6fc3a1d-ad6b-462d-9cbd-836d792da239\") " pod="kube-system/kube-proxy-mn6ln" Nov 1 03:51:03.952098 kubelet[1945]: I1101 03:51:03.952086 1945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7d5709e6-fa43-4c18-93bf-cfe4733c46ce-xtables-lock\") pod \"cilium-b4zkl\" (UID: \"7d5709e6-fa43-4c18-93bf-cfe4733c46ce\") " pod="kube-system/cilium-b4zkl" Nov 1 03:51:03.952184 kubelet[1945]: I1101 03:51:03.952172 1945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7d5709e6-fa43-4c18-93bf-cfe4733c46ce-hostproc\") pod \"cilium-b4zkl\" (UID: \"7d5709e6-fa43-4c18-93bf-cfe4733c46ce\") " pod="kube-system/cilium-b4zkl" Nov 1 03:51:03.952450 kubelet[1945]: I1101 03:51:03.952435 1945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7d5709e6-fa43-4c18-93bf-cfe4733c46ce-host-proc-sys-kernel\") pod \"cilium-b4zkl\" (UID: \"7d5709e6-fa43-4c18-93bf-cfe4733c46ce\") " pod="kube-system/cilium-b4zkl" Nov 1 03:51:03.952597 kubelet[1945]: I1101 03:51:03.952585 1945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a6fc3a1d-ad6b-462d-9cbd-836d792da239-lib-modules\") pod \"kube-proxy-mn6ln\" (UID: \"a6fc3a1d-ad6b-462d-9cbd-836d792da239\") " pod="kube-system/kube-proxy-mn6ln" Nov 1 03:51:03.952705 kubelet[1945]: I1101 03:51:03.952694 1945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7d5709e6-fa43-4c18-93bf-cfe4733c46ce-cilium-cgroup\") pod \"cilium-b4zkl\" (UID: \"7d5709e6-fa43-4c18-93bf-cfe4733c46ce\") " pod="kube-system/cilium-b4zkl" Nov 1 03:51:03.952806 kubelet[1945]: I1101 03:51:03.952795 1945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7d5709e6-fa43-4c18-93bf-cfe4733c46ce-host-proc-sys-net\") pod \"cilium-b4zkl\" (UID: \"7d5709e6-fa43-4c18-93bf-cfe4733c46ce\") " pod="kube-system/cilium-b4zkl" Nov 1 03:51:04.058492 kubelet[1945]: I1101 03:51:04.058282 1945 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Nov 1 03:51:04.119916 systemd[1]: Created slice kubepods-besteffort-pod29a6b27a_ef91_4446_829f_a75ce7239cc5.slice. Nov 1 03:51:04.155672 kubelet[1945]: I1101 03:51:04.155619 1945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fck48\" (UniqueName: \"kubernetes.io/projected/29a6b27a-ef91-4446-829f-a75ce7239cc5-kube-api-access-fck48\") pod \"cilium-operator-6c4d7847fc-dzgrq\" (UID: \"29a6b27a-ef91-4446-829f-a75ce7239cc5\") " pod="kube-system/cilium-operator-6c4d7847fc-dzgrq" Nov 1 03:51:04.155909 kubelet[1945]: I1101 03:51:04.155888 1945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/29a6b27a-ef91-4446-829f-a75ce7239cc5-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-dzgrq\" (UID: \"29a6b27a-ef91-4446-829f-a75ce7239cc5\") " pod="kube-system/cilium-operator-6c4d7847fc-dzgrq" Nov 1 03:51:04.224758 env[1204]: time="2025-11-01T03:51:04.224008671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mn6ln,Uid:a6fc3a1d-ad6b-462d-9cbd-836d792da239,Namespace:kube-system,Attempt:0,}" Nov 1 03:51:04.239459 env[1204]: time="2025-11-01T03:51:04.239419857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-b4zkl,Uid:7d5709e6-fa43-4c18-93bf-cfe4733c46ce,Namespace:kube-system,Attempt:0,}" Nov 1 03:51:04.244467 env[1204]: time="2025-11-01T03:51:04.243020645Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 03:51:04.244467 env[1204]: time="2025-11-01T03:51:04.243083821Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 03:51:04.244467 env[1204]: time="2025-11-01T03:51:04.243096124Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 03:51:04.244913 env[1204]: time="2025-11-01T03:51:04.244802157Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/55e1e4d9576f2740993af0ec019faa949ae7016dd6807de04ed53f5242266bd8 pid=2027 runtime=io.containerd.runc.v2 Nov 1 03:51:04.262885 env[1204]: time="2025-11-01T03:51:04.257826966Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 03:51:04.262885 env[1204]: time="2025-11-01T03:51:04.257897460Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 03:51:04.262885 env[1204]: time="2025-11-01T03:51:04.257910073Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 03:51:04.262885 env[1204]: time="2025-11-01T03:51:04.258410568Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/eb0175173881ce97f346cec8b19e71987495f37dbc3cb38a7d185911d4b89eb3 pid=2049 runtime=io.containerd.runc.v2 Nov 1 03:51:04.273095 systemd[1]: Started cri-containerd-55e1e4d9576f2740993af0ec019faa949ae7016dd6807de04ed53f5242266bd8.scope. 
Nov 1 03:51:04.283861 systemd[1]: Started cri-containerd-eb0175173881ce97f346cec8b19e71987495f37dbc3cb38a7d185911d4b89eb3.scope. Nov 1 03:51:04.335514 env[1204]: time="2025-11-01T03:51:04.335475742Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mn6ln,Uid:a6fc3a1d-ad6b-462d-9cbd-836d792da239,Namespace:kube-system,Attempt:0,} returns sandbox id \"55e1e4d9576f2740993af0ec019faa949ae7016dd6807de04ed53f5242266bd8\"" Nov 1 03:51:04.340680 env[1204]: time="2025-11-01T03:51:04.340644307Z" level=info msg="CreateContainer within sandbox \"55e1e4d9576f2740993af0ec019faa949ae7016dd6807de04ed53f5242266bd8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 1 03:51:04.348196 env[1204]: time="2025-11-01T03:51:04.348144646Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-b4zkl,Uid:7d5709e6-fa43-4c18-93bf-cfe4733c46ce,Namespace:kube-system,Attempt:0,} returns sandbox id \"eb0175173881ce97f346cec8b19e71987495f37dbc3cb38a7d185911d4b89eb3\"" Nov 1 03:51:04.353857 env[1204]: time="2025-11-01T03:51:04.353820565Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Nov 1 03:51:04.360259 env[1204]: time="2025-11-01T03:51:04.360226951Z" level=info msg="CreateContainer within sandbox \"55e1e4d9576f2740993af0ec019faa949ae7016dd6807de04ed53f5242266bd8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"37329c76a28ce108453e58fb8a058c7b291d90d891fd11d1b83ddeb4744fdfc3\"" Nov 1 03:51:04.365745 env[1204]: time="2025-11-01T03:51:04.365716398Z" level=info msg="StartContainer for \"37329c76a28ce108453e58fb8a058c7b291d90d891fd11d1b83ddeb4744fdfc3\"" Nov 1 03:51:04.386177 systemd[1]: Started cri-containerd-37329c76a28ce108453e58fb8a058c7b291d90d891fd11d1b83ddeb4744fdfc3.scope. Nov 1 03:51:04.429882 env[1204]: time="2025-11-01T03:51:04.429843797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-dzgrq,Uid:29a6b27a-ef91-4446-829f-a75ce7239cc5,Namespace:kube-system,Attempt:0,}" Nov 1 03:51:04.440924 env[1204]: time="2025-11-01T03:51:04.440881521Z" level=info msg="StartContainer for \"37329c76a28ce108453e58fb8a058c7b291d90d891fd11d1b83ddeb4744fdfc3\" returns successfully" Nov 1 03:51:04.449733 env[1204]: time="2025-11-01T03:51:04.449519864Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 03:51:04.449733 env[1204]: time="2025-11-01T03:51:04.449568434Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 03:51:04.449733 env[1204]: time="2025-11-01T03:51:04.449579896Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 03:51:04.450100 env[1204]: time="2025-11-01T03:51:04.450036898Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/df19635cc4cde627ee520c30fe15c184e7c9544411e501592bd695cd968c8c82 pid=2143 runtime=io.containerd.runc.v2 Nov 1 03:51:04.463551 systemd[1]: Started cri-containerd-df19635cc4cde627ee520c30fe15c184e7c9544411e501592bd695cd968c8c82.scope. 
Nov 1 03:51:04.523508 env[1204]: time="2025-11-01T03:51:04.523397534Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-dzgrq,Uid:29a6b27a-ef91-4446-829f-a75ce7239cc5,Namespace:kube-system,Attempt:0,} returns sandbox id \"df19635cc4cde627ee520c30fe15c184e7c9544411e501592bd695cd968c8c82\"" Nov 1 03:51:06.052274 kubelet[1945]: I1101 03:51:06.052075 1945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-mn6ln" podStartSLOduration=3.052026265 podStartE2EDuration="3.052026265s" podCreationTimestamp="2025-11-01 03:51:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 03:51:04.65246505 +0000 UTC m=+5.334936894" watchObservedRunningTime="2025-11-01 03:51:06.052026265 +0000 UTC m=+6.734498113" Nov 1 03:51:12.743141 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2936778755.mount: Deactivated successfully. Nov 1 03:51:16.098884 env[1204]: time="2025-11-01T03:51:16.098783756Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 03:51:16.101393 env[1204]: time="2025-11-01T03:51:16.101361503Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 03:51:16.105180 env[1204]: time="2025-11-01T03:51:16.105117259Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 03:51:16.106598 env[1204]: time="2025-11-01T03:51:16.106532324Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Nov 1 03:51:16.110515 env[1204]: time="2025-11-01T03:51:16.110471445Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Nov 1 03:51:16.119345 env[1204]: time="2025-11-01T03:51:16.119276359Z" level=info msg="CreateContainer within sandbox \"eb0175173881ce97f346cec8b19e71987495f37dbc3cb38a7d185911d4b89eb3\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 1 03:51:16.132091 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1861956515.mount: Deactivated successfully. Nov 1 03:51:16.138205 env[1204]: time="2025-11-01T03:51:16.138164833Z" level=info msg="CreateContainer within sandbox \"eb0175173881ce97f346cec8b19e71987495f37dbc3cb38a7d185911d4b89eb3\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"43efdcbdeafd3a3dcd2fe59cd31aad1f6e033ac03e1b056bc7fea2663e9a7b82\"" Nov 1 03:51:16.138586 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3963341638.mount: Deactivated successfully. Nov 1 03:51:16.139229 env[1204]: time="2025-11-01T03:51:16.139204817Z" level=info msg="StartContainer for \"43efdcbdeafd3a3dcd2fe59cd31aad1f6e033ac03e1b056bc7fea2663e9a7b82\"" Nov 1 03:51:16.170622 systemd[1]: Started cri-containerd-43efdcbdeafd3a3dcd2fe59cd31aad1f6e033ac03e1b056bc7fea2663e9a7b82.scope. 
Nov 1 03:51:16.213296 env[1204]: time="2025-11-01T03:51:16.213248943Z" level=info msg="StartContainer for \"43efdcbdeafd3a3dcd2fe59cd31aad1f6e033ac03e1b056bc7fea2663e9a7b82\" returns successfully" Nov 1 03:51:16.222718 systemd[1]: cri-containerd-43efdcbdeafd3a3dcd2fe59cd31aad1f6e033ac03e1b056bc7fea2663e9a7b82.scope: Deactivated successfully. Nov 1 03:51:16.282710 env[1204]: time="2025-11-01T03:51:16.282649518Z" level=info msg="shim disconnected" id=43efdcbdeafd3a3dcd2fe59cd31aad1f6e033ac03e1b056bc7fea2663e9a7b82 Nov 1 03:51:16.283048 env[1204]: time="2025-11-01T03:51:16.283020465Z" level=warning msg="cleaning up after shim disconnected" id=43efdcbdeafd3a3dcd2fe59cd31aad1f6e033ac03e1b056bc7fea2663e9a7b82 namespace=k8s.io Nov 1 03:51:16.283144 env[1204]: time="2025-11-01T03:51:16.283131007Z" level=info msg="cleaning up dead shim" Nov 1 03:51:16.296268 env[1204]: time="2025-11-01T03:51:16.296227272Z" level=warning msg="cleanup warnings time=\"2025-11-01T03:51:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2355 runtime=io.containerd.runc.v2\n" Nov 1 03:51:16.689636 env[1204]: time="2025-11-01T03:51:16.689575279Z" level=info msg="CreateContainer within sandbox \"eb0175173881ce97f346cec8b19e71987495f37dbc3cb38a7d185911d4b89eb3\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Nov 1 03:51:16.700000 env[1204]: time="2025-11-01T03:51:16.699950876Z" level=info msg="CreateContainer within sandbox \"eb0175173881ce97f346cec8b19e71987495f37dbc3cb38a7d185911d4b89eb3\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"86c919e0e7ae1085714340e928823cbba4ca8c07930435b04f32ebb8579c6178\"" Nov 1 03:51:16.700813 env[1204]: time="2025-11-01T03:51:16.700781089Z" level=info msg="StartContainer for \"86c919e0e7ae1085714340e928823cbba4ca8c07930435b04f32ebb8579c6178\"" Nov 1 03:51:16.733807 systemd[1]: Started cri-containerd-86c919e0e7ae1085714340e928823cbba4ca8c07930435b04f32ebb8579c6178.scope. Nov 1 03:51:16.768605 env[1204]: time="2025-11-01T03:51:16.768562270Z" level=info msg="StartContainer for \"86c919e0e7ae1085714340e928823cbba4ca8c07930435b04f32ebb8579c6178\" returns successfully" Nov 1 03:51:16.787281 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 1 03:51:16.787598 systemd[1]: Stopped systemd-sysctl.service. Nov 1 03:51:16.787882 systemd[1]: Stopping systemd-sysctl.service... Nov 1 03:51:16.790069 systemd[1]: Starting systemd-sysctl.service... Nov 1 03:51:16.795644 systemd[1]: cri-containerd-86c919e0e7ae1085714340e928823cbba4ca8c07930435b04f32ebb8579c6178.scope: Deactivated successfully. Nov 1 03:51:16.808974 systemd[1]: Finished systemd-sysctl.service. 
Nov 1 03:51:16.825831 env[1204]: time="2025-11-01T03:51:16.825786700Z" level=info msg="shim disconnected" id=86c919e0e7ae1085714340e928823cbba4ca8c07930435b04f32ebb8579c6178 Nov 1 03:51:16.826125 env[1204]: time="2025-11-01T03:51:16.826103651Z" level=warning msg="cleaning up after shim disconnected" id=86c919e0e7ae1085714340e928823cbba4ca8c07930435b04f32ebb8579c6178 namespace=k8s.io Nov 1 03:51:16.826237 env[1204]: time="2025-11-01T03:51:16.826222608Z" level=info msg="cleaning up dead shim" Nov 1 03:51:16.835631 env[1204]: time="2025-11-01T03:51:16.835587432Z" level=warning msg="cleanup warnings time=\"2025-11-01T03:51:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2420 runtime=io.containerd.runc.v2\n" Nov 1 03:51:17.134216 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-43efdcbdeafd3a3dcd2fe59cd31aad1f6e033ac03e1b056bc7fea2663e9a7b82-rootfs.mount: Deactivated successfully. Nov 1 03:51:17.658904 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2179046835.mount: Deactivated successfully. Nov 1 03:51:17.707517 env[1204]: time="2025-11-01T03:51:17.707162807Z" level=info msg="CreateContainer within sandbox \"eb0175173881ce97f346cec8b19e71987495f37dbc3cb38a7d185911d4b89eb3\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Nov 1 03:51:17.728783 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3096376227.mount: Deactivated successfully. Nov 1 03:51:17.734386 env[1204]: time="2025-11-01T03:51:17.734272579Z" level=info msg="CreateContainer within sandbox \"eb0175173881ce97f346cec8b19e71987495f37dbc3cb38a7d185911d4b89eb3\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a89df5c626d5a8ad556d4f5ea399669db1215cc325f5de102ba81f4bad517f3e\"" Nov 1 03:51:17.735677 env[1204]: time="2025-11-01T03:51:17.735635538Z" level=info msg="StartContainer for \"a89df5c626d5a8ad556d4f5ea399669db1215cc325f5de102ba81f4bad517f3e\"" Nov 1 03:51:17.758865 systemd[1]: Started cri-containerd-a89df5c626d5a8ad556d4f5ea399669db1215cc325f5de102ba81f4bad517f3e.scope. Nov 1 03:51:17.820900 env[1204]: time="2025-11-01T03:51:17.820826095Z" level=info msg="StartContainer for \"a89df5c626d5a8ad556d4f5ea399669db1215cc325f5de102ba81f4bad517f3e\" returns successfully" Nov 1 03:51:17.826501 systemd[1]: cri-containerd-a89df5c626d5a8ad556d4f5ea399669db1215cc325f5de102ba81f4bad517f3e.scope: Deactivated successfully. Nov 1 03:51:17.859100 env[1204]: time="2025-11-01T03:51:17.859013578Z" level=info msg="shim disconnected" id=a89df5c626d5a8ad556d4f5ea399669db1215cc325f5de102ba81f4bad517f3e Nov 1 03:51:17.859100 env[1204]: time="2025-11-01T03:51:17.859103957Z" level=warning msg="cleaning up after shim disconnected" id=a89df5c626d5a8ad556d4f5ea399669db1215cc325f5de102ba81f4bad517f3e namespace=k8s.io Nov 1 03:51:17.859493 env[1204]: time="2025-11-01T03:51:17.859133564Z" level=info msg="cleaning up dead shim" Nov 1 03:51:17.877656 env[1204]: time="2025-11-01T03:51:17.877594782Z" level=warning msg="cleanup warnings time=\"2025-11-01T03:51:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2480 runtime=io.containerd.runc.v2\n" Nov 1 03:51:18.715052 env[1204]: time="2025-11-01T03:51:18.714506592Z" level=info msg="CreateContainer within sandbox \"eb0175173881ce97f346cec8b19e71987495f37dbc3cb38a7d185911d4b89eb3\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Nov 1 03:51:18.731153 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1822023640.mount: Deactivated successfully. 
Nov 1 03:51:18.739672 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1446316660.mount: Deactivated successfully. Nov 1 03:51:18.743415 env[1204]: time="2025-11-01T03:51:18.743363935Z" level=info msg="CreateContainer within sandbox \"eb0175173881ce97f346cec8b19e71987495f37dbc3cb38a7d185911d4b89eb3\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e9fbcec8970256e718ae7c1805129bc65767f09b6d49c3c0448399e9a1c0431f\"" Nov 1 03:51:18.745221 env[1204]: time="2025-11-01T03:51:18.745181347Z" level=info msg="StartContainer for \"e9fbcec8970256e718ae7c1805129bc65767f09b6d49c3c0448399e9a1c0431f\"" Nov 1 03:51:18.776080 systemd[1]: Started cri-containerd-e9fbcec8970256e718ae7c1805129bc65767f09b6d49c3c0448399e9a1c0431f.scope. Nov 1 03:51:18.826653 systemd[1]: cri-containerd-e9fbcec8970256e718ae7c1805129bc65767f09b6d49c3c0448399e9a1c0431f.scope: Deactivated successfully. Nov 1 03:51:18.829006 env[1204]: time="2025-11-01T03:51:18.828714753Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d5709e6_fa43_4c18_93bf_cfe4733c46ce.slice/cri-containerd-e9fbcec8970256e718ae7c1805129bc65767f09b6d49c3c0448399e9a1c0431f.scope/memory.events\": no such file or directory" Nov 1 03:51:18.831516 env[1204]: time="2025-11-01T03:51:18.831474520Z" level=info msg="StartContainer for \"e9fbcec8970256e718ae7c1805129bc65767f09b6d49c3c0448399e9a1c0431f\" returns successfully" Nov 1 03:51:18.856384 env[1204]: time="2025-11-01T03:51:18.856065560Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 03:51:18.859697 env[1204]: time="2025-11-01T03:51:18.859637715Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 03:51:18.861836 env[1204]: time="2025-11-01T03:51:18.861799802Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 03:51:18.862975 env[1204]: time="2025-11-01T03:51:18.862925308Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Nov 1 03:51:18.868412 env[1204]: time="2025-11-01T03:51:18.867415697Z" level=info msg="CreateContainer within sandbox \"df19635cc4cde627ee520c30fe15c184e7c9544411e501592bd695cd968c8c82\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Nov 1 03:51:18.902239 env[1204]: time="2025-11-01T03:51:18.902181676Z" level=info msg="shim disconnected" id=e9fbcec8970256e718ae7c1805129bc65767f09b6d49c3c0448399e9a1c0431f Nov 1 03:51:18.902571 env[1204]: time="2025-11-01T03:51:18.902551016Z" level=warning msg="cleaning up after shim disconnected" id=e9fbcec8970256e718ae7c1805129bc65767f09b6d49c3c0448399e9a1c0431f namespace=k8s.io Nov 1 03:51:18.902658 env[1204]: time="2025-11-01T03:51:18.902644412Z" level=info msg="cleaning up dead shim" Nov 1 03:51:18.909436 env[1204]: 
time="2025-11-01T03:51:18.909321734Z" level=info msg="CreateContainer within sandbox \"df19635cc4cde627ee520c30fe15c184e7c9544411e501592bd695cd968c8c82\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"9e7a7961134e9272bd0bd799af7e02ca97ea518b4aaea5d3a6217d0b8f5d86c6\"" Nov 1 03:51:18.912203 env[1204]: time="2025-11-01T03:51:18.911061265Z" level=info msg="StartContainer for \"9e7a7961134e9272bd0bd799af7e02ca97ea518b4aaea5d3a6217d0b8f5d86c6\"" Nov 1 03:51:18.925968 env[1204]: time="2025-11-01T03:51:18.925915946Z" level=warning msg="cleanup warnings time=\"2025-11-01T03:51:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2534 runtime=io.containerd.runc.v2\n" Nov 1 03:51:18.944452 systemd[1]: Started cri-containerd-9e7a7961134e9272bd0bd799af7e02ca97ea518b4aaea5d3a6217d0b8f5d86c6.scope. Nov 1 03:51:18.984516 env[1204]: time="2025-11-01T03:51:18.984414468Z" level=info msg="StartContainer for \"9e7a7961134e9272bd0bd799af7e02ca97ea518b4aaea5d3a6217d0b8f5d86c6\" returns successfully" Nov 1 03:51:19.724091 env[1204]: time="2025-11-01T03:51:19.722712727Z" level=info msg="CreateContainer within sandbox \"eb0175173881ce97f346cec8b19e71987495f37dbc3cb38a7d185911d4b89eb3\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Nov 1 03:51:19.736092 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3872796158.mount: Deactivated successfully. Nov 1 03:51:19.747549 env[1204]: time="2025-11-01T03:51:19.747482261Z" level=info msg="CreateContainer within sandbox \"eb0175173881ce97f346cec8b19e71987495f37dbc3cb38a7d185911d4b89eb3\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"87e3668f19e69ab0e52334c8e6357d6205c819062852eec71626890d64f2faf8\"" Nov 1 03:51:19.748049 env[1204]: time="2025-11-01T03:51:19.748018278Z" level=info msg="StartContainer for \"87e3668f19e69ab0e52334c8e6357d6205c819062852eec71626890d64f2faf8\"" Nov 1 03:51:19.819623 systemd[1]: Started cri-containerd-87e3668f19e69ab0e52334c8e6357d6205c819062852eec71626890d64f2faf8.scope. Nov 1 03:51:19.922744 kubelet[1945]: I1101 03:51:19.922656 1945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-dzgrq" podStartSLOduration=1.583755965 podStartE2EDuration="15.92259695s" podCreationTimestamp="2025-11-01 03:51:04 +0000 UTC" firstStartedPulling="2025-11-01 03:51:04.526090021 +0000 UTC m=+5.208561857" lastFinishedPulling="2025-11-01 03:51:18.864931006 +0000 UTC m=+19.547402842" observedRunningTime="2025-11-01 03:51:19.846661261 +0000 UTC m=+20.529133100" watchObservedRunningTime="2025-11-01 03:51:19.92259695 +0000 UTC m=+20.605068785" Nov 1 03:51:19.925090 env[1204]: time="2025-11-01T03:51:19.925040656Z" level=info msg="StartContainer for \"87e3668f19e69ab0e52334c8e6357d6205c819062852eec71626890d64f2faf8\" returns successfully" Nov 1 03:51:20.179198 kubelet[1945]: I1101 03:51:20.179159 1945 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 1 03:51:20.232187 systemd[1]: Created slice kubepods-burstable-pod17c5ed3d_6c26_4407_a5c6_b2e389f185d3.slice. Nov 1 03:51:20.238176 systemd[1]: Created slice kubepods-burstable-pod70a5a370_f825_46a8_96e7_d3b8897c58b5.slice. Nov 1 03:51:20.301378 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! 
Nov 1 03:51:20.302788 kubelet[1945]: I1101 03:51:20.302728 1945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/70a5a370-f825-46a8-96e7-d3b8897c58b5-config-volume\") pod \"coredns-668d6bf9bc-fgdzc\" (UID: \"70a5a370-f825-46a8-96e7-d3b8897c58b5\") " pod="kube-system/coredns-668d6bf9bc-fgdzc" Nov 1 03:51:20.302788 kubelet[1945]: I1101 03:51:20.302774 1945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9pbx\" (UniqueName: \"kubernetes.io/projected/17c5ed3d-6c26-4407-a5c6-b2e389f185d3-kube-api-access-h9pbx\") pod \"coredns-668d6bf9bc-dclsr\" (UID: \"17c5ed3d-6c26-4407-a5c6-b2e389f185d3\") " pod="kube-system/coredns-668d6bf9bc-dclsr" Nov 1 03:51:20.302788 kubelet[1945]: I1101 03:51:20.302799 1945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ghzb\" (UniqueName: \"kubernetes.io/projected/70a5a370-f825-46a8-96e7-d3b8897c58b5-kube-api-access-6ghzb\") pod \"coredns-668d6bf9bc-fgdzc\" (UID: \"70a5a370-f825-46a8-96e7-d3b8897c58b5\") " pod="kube-system/coredns-668d6bf9bc-fgdzc" Nov 1 03:51:20.303311 kubelet[1945]: I1101 03:51:20.302824 1945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/17c5ed3d-6c26-4407-a5c6-b2e389f185d3-config-volume\") pod \"coredns-668d6bf9bc-dclsr\" (UID: \"17c5ed3d-6c26-4407-a5c6-b2e389f185d3\") " pod="kube-system/coredns-668d6bf9bc-dclsr" Nov 1 03:51:20.540615 env[1204]: time="2025-11-01T03:51:20.539990223Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dclsr,Uid:17c5ed3d-6c26-4407-a5c6-b2e389f185d3,Namespace:kube-system,Attempt:0,}" Nov 1 03:51:20.542773 env[1204]: time="2025-11-01T03:51:20.541475717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fgdzc,Uid:70a5a370-f825-46a8-96e7-d3b8897c58b5,Namespace:kube-system,Attempt:0,}" Nov 1 03:51:20.745801 kubelet[1945]: I1101 03:51:20.745735 1945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-b4zkl" podStartSLOduration=5.987465831 podStartE2EDuration="17.745712516s" podCreationTimestamp="2025-11-01 03:51:03 +0000 UTC" firstStartedPulling="2025-11-01 03:51:04.351629722 +0000 UTC m=+5.034101544" lastFinishedPulling="2025-11-01 03:51:16.109876403 +0000 UTC m=+16.792348229" observedRunningTime="2025-11-01 03:51:20.743797669 +0000 UTC m=+21.426269514" watchObservedRunningTime="2025-11-01 03:51:20.745712516 +0000 UTC m=+21.428184366" Nov 1 03:51:20.766379 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! 
Nov 1 03:51:22.491033 systemd-networkd[1029]: cilium_host: Link UP Nov 1 03:51:22.491856 systemd-networkd[1029]: cilium_net: Link UP Nov 1 03:51:22.491860 systemd-networkd[1029]: cilium_net: Gained carrier Nov 1 03:51:22.492041 systemd-networkd[1029]: cilium_host: Gained carrier Nov 1 03:51:22.505086 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Nov 1 03:51:22.515099 systemd-networkd[1029]: cilium_host: Gained IPv6LL Nov 1 03:51:22.698451 systemd-networkd[1029]: cilium_vxlan: Link UP Nov 1 03:51:22.698461 systemd-networkd[1029]: cilium_vxlan: Gained carrier Nov 1 03:51:22.738974 systemd-networkd[1029]: cilium_net: Gained IPv6LL Nov 1 03:51:23.203580 kernel: NET: Registered PF_ALG protocol family Nov 1 03:51:23.726453 systemd-networkd[1029]: cilium_vxlan: Gained IPv6LL Nov 1 03:51:24.160543 systemd-networkd[1029]: lxc_health: Link UP Nov 1 03:51:24.188423 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Nov 1 03:51:24.188089 systemd-networkd[1029]: lxc_health: Gained carrier Nov 1 03:51:24.628659 systemd-networkd[1029]: lxcc924adde24ae: Link UP Nov 1 03:51:24.644366 kernel: eth0: renamed from tmpb0be5 Nov 1 03:51:24.655629 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcc924adde24ae: link becomes ready Nov 1 03:51:24.653379 systemd-networkd[1029]: lxcc924adde24ae: Gained carrier Nov 1 03:51:24.654050 systemd-networkd[1029]: lxca4c1fc87ddb6: Link UP Nov 1 03:51:24.666200 kernel: eth0: renamed from tmp087da Nov 1 03:51:24.672363 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxca4c1fc87ddb6: link becomes ready Nov 1 03:51:24.673519 systemd-networkd[1029]: lxca4c1fc87ddb6: Gained carrier Nov 1 03:51:25.641596 systemd-networkd[1029]: lxc_health: Gained IPv6LL Nov 1 03:51:25.769637 systemd-networkd[1029]: lxca4c1fc87ddb6: Gained IPv6LL Nov 1 03:51:26.155811 systemd-networkd[1029]: lxcc924adde24ae: Gained IPv6LL Nov 1 03:51:28.963296 env[1204]: time="2025-11-01T03:51:28.963185271Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 03:51:28.963296 env[1204]: time="2025-11-01T03:51:28.963241443Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 03:51:28.964292 env[1204]: time="2025-11-01T03:51:28.963251611Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 03:51:28.964292 env[1204]: time="2025-11-01T03:51:28.963409897Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/087da80d52bdc1bdf6cb2ea028c44cd24e300c635d96d90f75784223229a9794 pid=3118 runtime=io.containerd.runc.v2 Nov 1 03:51:28.998592 env[1204]: time="2025-11-01T03:51:28.993354881Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 03:51:28.998592 env[1204]: time="2025-11-01T03:51:28.993431979Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 03:51:28.998592 env[1204]: time="2025-11-01T03:51:28.993444391Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 03:51:28.998592 env[1204]: time="2025-11-01T03:51:28.993606493Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b0be59508cf9da656acf825e1b48fb125bcb55c4fc388ba51c45b9554e85ab9f pid=3134 runtime=io.containerd.runc.v2 Nov 1 03:51:29.024583 systemd[1]: run-containerd-runc-k8s.io-087da80d52bdc1bdf6cb2ea028c44cd24e300c635d96d90f75784223229a9794-runc.NrX7vJ.mount: Deactivated successfully. Nov 1 03:51:29.036117 systemd[1]: Started cri-containerd-087da80d52bdc1bdf6cb2ea028c44cd24e300c635d96d90f75784223229a9794.scope. Nov 1 03:51:29.069429 systemd[1]: Started cri-containerd-b0be59508cf9da656acf825e1b48fb125bcb55c4fc388ba51c45b9554e85ab9f.scope. Nov 1 03:51:29.164246 env[1204]: time="2025-11-01T03:51:29.164173278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fgdzc,Uid:70a5a370-f825-46a8-96e7-d3b8897c58b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"087da80d52bdc1bdf6cb2ea028c44cd24e300c635d96d90f75784223229a9794\"" Nov 1 03:51:29.173989 env[1204]: time="2025-11-01T03:51:29.173941746Z" level=info msg="CreateContainer within sandbox \"087da80d52bdc1bdf6cb2ea028c44cd24e300c635d96d90f75784223229a9794\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 1 03:51:29.184305 env[1204]: time="2025-11-01T03:51:29.184249595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dclsr,Uid:17c5ed3d-6c26-4407-a5c6-b2e389f185d3,Namespace:kube-system,Attempt:0,} returns sandbox id \"b0be59508cf9da656acf825e1b48fb125bcb55c4fc388ba51c45b9554e85ab9f\"" Nov 1 03:51:29.190665 env[1204]: time="2025-11-01T03:51:29.190152807Z" level=info msg="CreateContainer within sandbox \"b0be59508cf9da656acf825e1b48fb125bcb55c4fc388ba51c45b9554e85ab9f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 1 03:51:29.199265 env[1204]: time="2025-11-01T03:51:29.199222710Z" level=info msg="CreateContainer within sandbox \"087da80d52bdc1bdf6cb2ea028c44cd24e300c635d96d90f75784223229a9794\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"41ad6dd8a5c5429af667cf6f1ddacb37c9e3c4b0ed3513e344e97a1144587c5a\"" Nov 1 03:51:29.200283 env[1204]: time="2025-11-01T03:51:29.200252328Z" level=info msg="StartContainer for \"41ad6dd8a5c5429af667cf6f1ddacb37c9e3c4b0ed3513e344e97a1144587c5a\"" Nov 1 03:51:29.204775 env[1204]: time="2025-11-01T03:51:29.204742118Z" level=info msg="CreateContainer within sandbox \"b0be59508cf9da656acf825e1b48fb125bcb55c4fc388ba51c45b9554e85ab9f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bc1f4de872f6e2ba8257e4e73d1d2b1fcaccc7ae7159a868777ca9fda6982497\"" Nov 1 03:51:29.205374 env[1204]: time="2025-11-01T03:51:29.205322866Z" level=info msg="StartContainer for \"bc1f4de872f6e2ba8257e4e73d1d2b1fcaccc7ae7159a868777ca9fda6982497\"" Nov 1 03:51:29.230824 systemd[1]: Started cri-containerd-bc1f4de872f6e2ba8257e4e73d1d2b1fcaccc7ae7159a868777ca9fda6982497.scope. Nov 1 03:51:29.236890 systemd[1]: Started cri-containerd-41ad6dd8a5c5429af667cf6f1ddacb37c9e3c4b0ed3513e344e97a1144587c5a.scope. 
Nov 1 03:51:29.287240 env[1204]: time="2025-11-01T03:51:29.287177527Z" level=info msg="StartContainer for \"41ad6dd8a5c5429af667cf6f1ddacb37c9e3c4b0ed3513e344e97a1144587c5a\" returns successfully" Nov 1 03:51:29.294577 env[1204]: time="2025-11-01T03:51:29.294527753Z" level=info msg="StartContainer for \"bc1f4de872f6e2ba8257e4e73d1d2b1fcaccc7ae7159a868777ca9fda6982497\" returns successfully" Nov 1 03:51:29.796297 kubelet[1945]: I1101 03:51:29.796200 1945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-dclsr" podStartSLOduration=25.796144774 podStartE2EDuration="25.796144774s" podCreationTimestamp="2025-11-01 03:51:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 03:51:29.791509727 +0000 UTC m=+30.473981549" watchObservedRunningTime="2025-11-01 03:51:29.796144774 +0000 UTC m=+30.478616611" Nov 1 03:51:53.920551 systemd[1]: Started sshd@5-10.244.101.254:22-182.230.214.138:33478.service. Nov 1 03:51:55.193258 sshd[3275]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=182.230.214.138 user=root Nov 1 03:51:57.046762 sshd[3275]: Failed password for root from 182.230.214.138 port 33478 ssh2 Nov 1 03:51:58.572267 sshd[3275]: Connection closed by authenticating user root 182.230.214.138 port 33478 [preauth] Nov 1 03:51:58.575683 systemd[1]: sshd@5-10.244.101.254:22-182.230.214.138:33478.service: Deactivated successfully. Nov 1 03:52:03.863724 systemd[1]: Started sshd@6-10.244.101.254:22-182.230.214.138:51694.service. Nov 1 03:52:04.944449 sshd[3281]: Invalid user test from 182.230.214.138 port 51694 Nov 1 03:52:05.217543 sshd[3281]: pam_faillock(sshd:auth): User unknown Nov 1 03:52:05.219111 sshd[3281]: pam_unix(sshd:auth): check pass; user unknown Nov 1 03:52:05.219225 sshd[3281]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=182.230.214.138 Nov 1 03:52:05.220431 sshd[3281]: pam_faillock(sshd:auth): User unknown Nov 1 03:52:07.779061 sshd[3281]: Failed password for invalid user test from 182.230.214.138 port 51694 ssh2 Nov 1 03:52:09.696263 update_engine[1189]: I1101 03:52:09.695829 1189 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Nov 1 03:52:09.696263 update_engine[1189]: I1101 03:52:09.696074 1189 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Nov 1 03:52:09.701443 update_engine[1189]: I1101 03:52:09.701083 1189 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Nov 1 03:52:09.702608 update_engine[1189]: I1101 03:52:09.702519 1189 omaha_request_params.cc:62] Current group set to lts Nov 1 03:52:09.705923 update_engine[1189]: I1101 03:52:09.705591 1189 update_attempter.cc:499] Already updated boot flags. Skipping. Nov 1 03:52:09.705923 update_engine[1189]: I1101 03:52:09.705631 1189 update_attempter.cc:643] Scheduling an action processor start. 
Nov 1 03:52:09.705923 update_engine[1189]: I1101 03:52:09.705749 1189 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Nov 1 03:52:09.709315 update_engine[1189]: I1101 03:52:09.708549 1189 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Nov 1 03:52:09.709315 update_engine[1189]: I1101 03:52:09.708763 1189 omaha_request_action.cc:270] Posting an Omaha request to disabled Nov 1 03:52:09.709315 update_engine[1189]: I1101 03:52:09.708779 1189 omaha_request_action.cc:271] Request: Nov 1 03:52:09.709315 update_engine[1189]: Nov 1 03:52:09.709315 update_engine[1189]: Nov 1 03:52:09.709315 update_engine[1189]: Nov 1 03:52:09.709315 update_engine[1189]: Nov 1 03:52:09.709315 update_engine[1189]: Nov 1 03:52:09.709315 update_engine[1189]: Nov 1 03:52:09.709315 update_engine[1189]: Nov 1 03:52:09.709315 update_engine[1189]: Nov 1 03:52:09.709315 update_engine[1189]: I1101 03:52:09.708791 1189 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 1 03:52:09.719043 update_engine[1189]: I1101 03:52:09.718648 1189 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 1 03:52:09.719043 update_engine[1189]: I1101 03:52:09.718968 1189 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Nov 1 03:52:09.725629 locksmithd[1228]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Nov 1 03:52:09.725908 update_engine[1189]: E1101 03:52:09.725495 1189 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Nov 1 03:52:09.725908 update_engine[1189]: I1101 03:52:09.725602 1189 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Nov 1 03:52:10.027782 sshd[3281]: Connection closed by invalid user test 182.230.214.138 port 51694 [preauth] Nov 1 03:52:10.031131 systemd[1]: sshd@6-10.244.101.254:22-182.230.214.138:51694.service: Deactivated successfully. Nov 1 03:52:10.276913 systemd[1]: Started sshd@7-10.244.101.254:22-182.230.214.138:42394.service. Nov 1 03:52:11.321827 sshd[3289]: Invalid user ansadmin from 182.230.214.138 port 42394 Nov 1 03:52:11.581637 sshd[3289]: pam_faillock(sshd:auth): User unknown Nov 1 03:52:11.582564 sshd[3289]: pam_unix(sshd:auth): check pass; user unknown Nov 1 03:52:11.582620 sshd[3289]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=182.230.214.138 Nov 1 03:52:11.583205 sshd[3289]: pam_faillock(sshd:auth): User unknown Nov 1 03:52:13.631861 sshd[3289]: Failed password for invalid user ansadmin from 182.230.214.138 port 42394 ssh2 Nov 1 03:52:14.517547 sshd[3289]: Connection closed by invalid user ansadmin 182.230.214.138 port 42394 [preauth] Nov 1 03:52:14.518717 systemd[1]: sshd@7-10.244.101.254:22-182.230.214.138:42394.service: Deactivated successfully. Nov 1 03:52:14.763386 systemd[1]: Started sshd@8-10.244.101.254:22-182.230.214.138:42410.service. 
Nov 1 03:52:15.725430 sshd[3293]: Invalid user appserver from 182.230.214.138 port 42410 Nov 1 03:52:15.963814 sshd[3293]: pam_faillock(sshd:auth): User unknown Nov 1 03:52:15.966181 sshd[3293]: pam_unix(sshd:auth): check pass; user unknown Nov 1 03:52:15.966286 sshd[3293]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=182.230.214.138 Nov 1 03:52:15.967409 sshd[3293]: pam_faillock(sshd:auth): User unknown Nov 1 03:52:17.564530 sshd[3293]: Failed password for invalid user appserver from 182.230.214.138 port 42410 ssh2 Nov 1 03:52:19.306123 sshd[3293]: Connection closed by invalid user appserver 182.230.214.138 port 42410 [preauth] Nov 1 03:52:19.307875 systemd[1]: sshd@8-10.244.101.254:22-182.230.214.138:42410.service: Deactivated successfully. Nov 1 03:52:19.554224 systemd[1]: Started sshd@9-10.244.101.254:22-182.230.214.138:37196.service. Nov 1 03:52:19.659700 update_engine[1189]: I1101 03:52:19.658744 1189 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 1 03:52:19.659700 update_engine[1189]: I1101 03:52:19.659379 1189 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 1 03:52:19.659700 update_engine[1189]: I1101 03:52:19.659644 1189 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Nov 1 03:52:19.661153 update_engine[1189]: E1101 03:52:19.661005 1189 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Nov 1 03:52:19.661153 update_engine[1189]: I1101 03:52:19.661123 1189 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Nov 1 03:52:20.772813 sshd[3297]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=182.230.214.138 user=root Nov 1 03:52:22.389698 sshd[3297]: Failed password for root from 182.230.214.138 port 37196 ssh2 Nov 1 03:52:23.707881 systemd[1]: Started sshd@10-10.244.101.254:22-139.178.89.65:52948.service. Nov 1 03:52:24.143561 sshd[3297]: Connection closed by authenticating user root 182.230.214.138 port 37196 [preauth] Nov 1 03:52:24.147521 systemd[1]: sshd@9-10.244.101.254:22-182.230.214.138:37196.service: Deactivated successfully. Nov 1 03:52:24.426872 systemd[1]: Started sshd@11-10.244.101.254:22-182.230.214.138:37208.service. Nov 1 03:52:24.619582 sshd[3301]: Accepted publickey for core from 139.178.89.65 port 52948 ssh2: RSA SHA256:V0PERg6UVsbWZGsAZFbTY/baYEpLUh6zfqFi+pvc+oM Nov 1 03:52:24.623974 sshd[3301]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 03:52:24.637538 systemd-logind[1186]: New session 6 of user core. Nov 1 03:52:24.638320 systemd[1]: Started session-6.scope. Nov 1 03:52:25.450900 sshd[3301]: pam_unix(sshd:session): session closed for user core Nov 1 03:52:25.460883 systemd[1]: sshd@10-10.244.101.254:22-139.178.89.65:52948.service: Deactivated successfully. Nov 1 03:52:25.462125 systemd[1]: session-6.scope: Deactivated successfully. Nov 1 03:52:25.462996 systemd-logind[1186]: Session 6 logged out. Waiting for processes to exit. Nov 1 03:52:25.465423 systemd-logind[1186]: Removed session 6. 
Nov 1 03:52:25.482110 sshd[3305]: Invalid user jenkins from 182.230.214.138 port 37208 Nov 1 03:52:25.740407 sshd[3305]: pam_faillock(sshd:auth): User unknown Nov 1 03:52:25.742389 sshd[3305]: pam_unix(sshd:auth): check pass; user unknown Nov 1 03:52:25.742487 sshd[3305]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=182.230.214.138 Nov 1 03:52:25.743788 sshd[3305]: pam_faillock(sshd:auth): User unknown Nov 1 03:52:27.380904 sshd[3305]: Failed password for invalid user jenkins from 182.230.214.138 port 37208 ssh2 Nov 1 03:52:29.276314 sshd[3305]: Connection closed by invalid user jenkins 182.230.214.138 port 37208 [preauth] Nov 1 03:52:29.279531 systemd[1]: sshd@11-10.244.101.254:22-182.230.214.138:37208.service: Deactivated successfully. Nov 1 03:52:29.522693 systemd[1]: Started sshd@12-10.244.101.254:22-182.230.214.138:36694.service. Nov 1 03:52:29.652737 update_engine[1189]: I1101 03:52:29.652539 1189 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 1 03:52:29.653555 update_engine[1189]: I1101 03:52:29.653151 1189 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 1 03:52:29.653555 update_engine[1189]: I1101 03:52:29.653518 1189 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Nov 1 03:52:29.654110 update_engine[1189]: E1101 03:52:29.654062 1189 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Nov 1 03:52:29.654284 update_engine[1189]: I1101 03:52:29.654250 1189 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Nov 1 03:52:30.491948 sshd[3319]: Invalid user deploy from 182.230.214.138 port 36694 Nov 1 03:52:30.611934 systemd[1]: Started sshd@13-10.244.101.254:22-139.178.89.65:58526.service. Nov 1 03:52:30.732970 sshd[3319]: pam_faillock(sshd:auth): User unknown Nov 1 03:52:30.735611 sshd[3319]: pam_unix(sshd:auth): check pass; user unknown Nov 1 03:52:30.735995 sshd[3319]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=182.230.214.138 Nov 1 03:52:30.737800 sshd[3319]: pam_faillock(sshd:auth): User unknown Nov 1 03:52:31.538053 sshd[3322]: Accepted publickey for core from 139.178.89.65 port 58526 ssh2: RSA SHA256:V0PERg6UVsbWZGsAZFbTY/baYEpLUh6zfqFi+pvc+oM Nov 1 03:52:31.542020 sshd[3322]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 03:52:31.555425 systemd-logind[1186]: New session 7 of user core. Nov 1 03:52:31.557124 systemd[1]: Started session-7.scope. Nov 1 03:52:32.293111 sshd[3322]: pam_unix(sshd:session): session closed for user core Nov 1 03:52:32.299251 systemd[1]: sshd@13-10.244.101.254:22-139.178.89.65:58526.service: Deactivated successfully. Nov 1 03:52:32.301041 systemd[1]: session-7.scope: Deactivated successfully. Nov 1 03:52:32.302485 systemd-logind[1186]: Session 7 logged out. Waiting for processes to exit. Nov 1 03:52:32.303464 systemd-logind[1186]: Removed session 7. Nov 1 03:52:32.395266 sshd[3319]: Failed password for invalid user deploy from 182.230.214.138 port 36694 ssh2 Nov 1 03:52:34.218136 sshd[3319]: Connection closed by invalid user deploy 182.230.214.138 port 36694 [preauth] Nov 1 03:52:34.221576 systemd[1]: sshd@12-10.244.101.254:22-182.230.214.138:36694.service: Deactivated successfully. Nov 1 03:52:34.464944 systemd[1]: Started sshd@14-10.244.101.254:22-182.230.214.138:36700.service. 
Nov 1 03:52:35.687864 sshd[3337]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=182.230.214.138 user=root Nov 1 03:52:37.448797 systemd[1]: Started sshd@15-10.244.101.254:22-139.178.89.65:46928.service. Nov 1 03:52:37.696611 sshd[3337]: Failed password for root from 182.230.214.138 port 36700 ssh2 Nov 1 03:52:38.372878 sshd[3342]: Accepted publickey for core from 139.178.89.65 port 46928 ssh2: RSA SHA256:V0PERg6UVsbWZGsAZFbTY/baYEpLUh6zfqFi+pvc+oM Nov 1 03:52:38.377122 sshd[3342]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 03:52:38.390412 systemd-logind[1186]: New session 8 of user core. Nov 1 03:52:38.390800 systemd[1]: Started session-8.scope. Nov 1 03:52:39.068791 sshd[3337]: Connection closed by authenticating user root 182.230.214.138 port 36700 [preauth] Nov 1 03:52:39.072223 systemd[1]: sshd@14-10.244.101.254:22-182.230.214.138:36700.service: Deactivated successfully. Nov 1 03:52:39.108911 sshd[3342]: pam_unix(sshd:session): session closed for user core Nov 1 03:52:39.115684 systemd[1]: sshd@15-10.244.101.254:22-139.178.89.65:46928.service: Deactivated successfully. Nov 1 03:52:39.117362 systemd[1]: session-8.scope: Deactivated successfully. Nov 1 03:52:39.118480 systemd-logind[1186]: Session 8 logged out. Waiting for processes to exit. Nov 1 03:52:39.119681 systemd-logind[1186]: Removed session 8. Nov 1 03:52:39.329815 systemd[1]: Started sshd@16-10.244.101.254:22-182.230.214.138:51644.service. Nov 1 03:52:39.649777 update_engine[1189]: I1101 03:52:39.649600 1189 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 1 03:52:39.650863 update_engine[1189]: I1101 03:52:39.650630 1189 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 1 03:52:39.651175 update_engine[1189]: I1101 03:52:39.651136 1189 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Nov 1 03:52:39.651943 update_engine[1189]: E1101 03:52:39.651810 1189 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Nov 1 03:52:39.654354 update_engine[1189]: I1101 03:52:39.652010 1189 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Nov 1 03:52:39.654354 update_engine[1189]: I1101 03:52:39.652046 1189 omaha_request_action.cc:621] Omaha request response: Nov 1 03:52:39.654354 update_engine[1189]: E1101 03:52:39.652297 1189 omaha_request_action.cc:640] Omaha request network transfer failed. Nov 1 03:52:39.654354 update_engine[1189]: I1101 03:52:39.652375 1189 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Nov 1 03:52:39.654354 update_engine[1189]: I1101 03:52:39.652385 1189 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Nov 1 03:52:39.654354 update_engine[1189]: I1101 03:52:39.652395 1189 update_attempter.cc:306] Processing Done. Nov 1 03:52:39.654354 update_engine[1189]: E1101 03:52:39.652477 1189 update_attempter.cc:619] Update failed. Nov 1 03:52:39.654354 update_engine[1189]: I1101 03:52:39.652510 1189 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Nov 1 03:52:39.654354 update_engine[1189]: I1101 03:52:39.652520 1189 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Nov 1 03:52:39.654354 update_engine[1189]: I1101 03:52:39.652531 1189 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Nov 1 03:52:39.654354 update_engine[1189]: I1101 03:52:39.652727 1189 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Nov 1 03:52:39.654354 update_engine[1189]: I1101 03:52:39.652806 1189 omaha_request_action.cc:270] Posting an Omaha request to disabled Nov 1 03:52:39.654354 update_engine[1189]: I1101 03:52:39.652818 1189 omaha_request_action.cc:271] Request: Nov 1 03:52:39.654354 update_engine[1189]: Nov 1 03:52:39.654354 update_engine[1189]: Nov 1 03:52:39.654354 update_engine[1189]: Nov 1 03:52:39.654354 update_engine[1189]: Nov 1 03:52:39.654354 update_engine[1189]: Nov 1 03:52:39.654354 update_engine[1189]: Nov 1 03:52:39.656654 update_engine[1189]: I1101 03:52:39.652827 1189 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 1 03:52:39.656654 update_engine[1189]: I1101 03:52:39.653312 1189 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 1 03:52:39.656654 update_engine[1189]: I1101 03:52:39.653843 1189 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Nov 1 03:52:39.656654 update_engine[1189]: E1101 03:52:39.654743 1189 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Nov 1 03:52:39.656654 update_engine[1189]: I1101 03:52:39.655069 1189 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Nov 1 03:52:39.656654 update_engine[1189]: I1101 03:52:39.655083 1189 omaha_request_action.cc:621] Omaha request response: Nov 1 03:52:39.656654 update_engine[1189]: I1101 03:52:39.655090 1189 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Nov 1 03:52:39.656654 update_engine[1189]: I1101 03:52:39.655097 1189 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Nov 1 03:52:39.656654 update_engine[1189]: I1101 03:52:39.655104 1189 update_attempter.cc:306] Processing Done. Nov 1 03:52:39.656654 update_engine[1189]: I1101 03:52:39.655112 1189 update_attempter.cc:310] Error event sent. Nov 1 03:52:39.656654 update_engine[1189]: I1101 03:52:39.655123 1189 update_check_scheduler.cc:74] Next update check in 47m7s Nov 1 03:52:39.664061 locksmithd[1228]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Nov 1 03:52:39.664061 locksmithd[1228]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Nov 1 03:52:40.311323 sshd[3356]: Invalid user test from 182.230.214.138 port 51644 Nov 1 03:52:40.562316 sshd[3356]: pam_faillock(sshd:auth): User unknown Nov 1 03:52:40.564704 sshd[3356]: pam_unix(sshd:auth): check pass; user unknown Nov 1 03:52:40.565073 sshd[3356]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=182.230.214.138 Nov 1 03:52:40.566225 sshd[3356]: pam_faillock(sshd:auth): User unknown Nov 1 03:52:42.930659 sshd[3356]: Failed password for invalid user test from 182.230.214.138 port 51644 ssh2 Nov 1 03:52:44.265237 systemd[1]: Started sshd@17-10.244.101.254:22-139.178.89.65:46940.service. Nov 1 03:52:45.181742 sshd[3359]: Accepted publickey for core from 139.178.89.65 port 46940 ssh2: RSA SHA256:V0PERg6UVsbWZGsAZFbTY/baYEpLUh6zfqFi+pvc+oM Nov 1 03:52:45.186014 sshd[3359]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 03:52:45.196116 systemd[1]: Started session-9.scope. Nov 1 03:52:45.196873 systemd-logind[1186]: New session 9 of user core. 
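Illustrative aside, not part of the captured log: after the last retry fails, update_engine converts the transfer failure to kActionCodeOmahaErrorInHTTPResponse and locksmithd reports the result as flat key=value status lines (LastCheckedTime, Progress, CurrentOperation, NewVersion, NewSize). A small Python sketch that parses one such status line into a dict; the quoting matches the locksmithd lines shown above, and the function name is mine.

    import shlex

    def parse_locksmithd_status(line):
        """Parse a locksmithd status line such as
        LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
        into a dict; shlex handles the quoted CurrentOperation value."""
        fields = {}
        for token in shlex.split(line):
            if "=" in token:
                key, _, value = token.partition("=")
                fields[key] = value
        return fields

    sample = 'LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0'
    status = parse_locksmithd_status(sample)
    assert status["CurrentOperation"] == "UPDATE_STATUS_IDLE"
    print(status)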
Nov 1 03:52:45.344271 sshd[3356]: Connection closed by invalid user test 182.230.214.138 port 51644 [preauth] Nov 1 03:52:45.346063 systemd[1]: sshd@16-10.244.101.254:22-182.230.214.138:51644.service: Deactivated successfully. Nov 1 03:52:45.627606 systemd[1]: Started sshd@18-10.244.101.254:22-182.230.214.138:51658.service. Nov 1 03:52:45.943121 sshd[3359]: pam_unix(sshd:session): session closed for user core Nov 1 03:52:45.950470 systemd[1]: sshd@17-10.244.101.254:22-139.178.89.65:46940.service: Deactivated successfully. Nov 1 03:52:45.952740 systemd[1]: session-9.scope: Deactivated successfully. Nov 1 03:52:45.954229 systemd-logind[1186]: Session 9 logged out. Waiting for processes to exit. Nov 1 03:52:45.957176 systemd-logind[1186]: Removed session 9. Nov 1 03:52:46.682816 sshd[3364]: Invalid user ubnt from 182.230.214.138 port 51658 Nov 1 03:52:46.942810 sshd[3364]: pam_faillock(sshd:auth): User unknown Nov 1 03:52:46.944640 sshd[3364]: pam_unix(sshd:auth): check pass; user unknown Nov 1 03:52:46.944975 sshd[3364]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=182.230.214.138 Nov 1 03:52:46.946956 sshd[3364]: pam_faillock(sshd:auth): User unknown Nov 1 03:52:48.800224 sshd[3364]: Failed password for invalid user ubnt from 182.230.214.138 port 51658 ssh2 Nov 1 03:52:50.779106 sshd[3364]: Connection closed by invalid user ubnt 182.230.214.138 port 51658 [preauth] Nov 1 03:52:50.782580 systemd[1]: sshd@18-10.244.101.254:22-182.230.214.138:51658.service: Deactivated successfully. Nov 1 03:52:50.989469 systemd[1]: Started sshd@19-10.244.101.254:22-182.230.214.138:39158.service. Nov 1 03:52:51.108836 systemd[1]: Started sshd@20-10.244.101.254:22-139.178.89.65:49770.service. Nov 1 03:52:51.928193 sshd[3376]: Invalid user odoo from 182.230.214.138 port 39158 Nov 1 03:52:52.025287 sshd[3379]: Accepted publickey for core from 139.178.89.65 port 49770 ssh2: RSA SHA256:V0PERg6UVsbWZGsAZFbTY/baYEpLUh6zfqFi+pvc+oM Nov 1 03:52:52.030085 sshd[3379]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 03:52:52.039981 systemd[1]: Started session-10.scope. Nov 1 03:52:52.040297 systemd-logind[1186]: New session 10 of user core. Nov 1 03:52:52.161808 sshd[3376]: pam_faillock(sshd:auth): User unknown Nov 1 03:52:52.164095 sshd[3376]: pam_unix(sshd:auth): check pass; user unknown Nov 1 03:52:52.164208 sshd[3376]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=182.230.214.138 Nov 1 03:52:52.165450 sshd[3376]: pam_faillock(sshd:auth): User unknown Nov 1 03:52:52.752540 sshd[3379]: pam_unix(sshd:session): session closed for user core Nov 1 03:52:52.759189 systemd[1]: sshd@20-10.244.101.254:22-139.178.89.65:49770.service: Deactivated successfully. Nov 1 03:52:52.760569 systemd[1]: session-10.scope: Deactivated successfully. Nov 1 03:52:52.762126 systemd-logind[1186]: Session 10 logged out. Waiting for processes to exit. Nov 1 03:52:52.763763 systemd-logind[1186]: Removed session 10. Nov 1 03:52:52.904104 systemd[1]: Started sshd@21-10.244.101.254:22-139.178.89.65:49774.service. Nov 1 03:52:53.811635 sshd[3392]: Accepted publickey for core from 139.178.89.65 port 49774 ssh2: RSA SHA256:V0PERg6UVsbWZGsAZFbTY/baYEpLUh6zfqFi+pvc+oM Nov 1 03:52:53.815049 sshd[3392]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 03:52:53.827218 systemd-logind[1186]: New session 11 of user core. Nov 1 03:52:53.828595 systemd[1]: Started session-11.scope. 
Nov 1 03:52:54.509853 sshd[3376]: Failed password for invalid user odoo from 182.230.214.138 port 39158 ssh2 Nov 1 03:52:54.595908 sshd[3392]: pam_unix(sshd:session): session closed for user core Nov 1 03:52:54.603085 systemd[1]: sshd@21-10.244.101.254:22-139.178.89.65:49774.service: Deactivated successfully. Nov 1 03:52:54.604379 systemd[1]: session-11.scope: Deactivated successfully. Nov 1 03:52:54.605657 systemd-logind[1186]: Session 11 logged out. Waiting for processes to exit. Nov 1 03:52:54.606981 systemd-logind[1186]: Removed session 11. Nov 1 03:52:54.754178 systemd[1]: Started sshd@22-10.244.101.254:22-139.178.89.65:49784.service. Nov 1 03:52:55.676411 sshd[3402]: Accepted publickey for core from 139.178.89.65 port 49784 ssh2: RSA SHA256:V0PERg6UVsbWZGsAZFbTY/baYEpLUh6zfqFi+pvc+oM Nov 1 03:52:55.679274 sshd[3402]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 03:52:55.687845 systemd[1]: Started session-12.scope. Nov 1 03:52:55.688859 systemd-logind[1186]: New session 12 of user core. Nov 1 03:52:56.372812 sshd[3376]: Connection closed by invalid user odoo 182.230.214.138 port 39158 [preauth] Nov 1 03:52:56.376556 systemd[1]: sshd@19-10.244.101.254:22-182.230.214.138:39158.service: Deactivated successfully. Nov 1 03:52:56.396836 sshd[3402]: pam_unix(sshd:session): session closed for user core Nov 1 03:52:56.404028 systemd[1]: sshd@22-10.244.101.254:22-139.178.89.65:49784.service: Deactivated successfully. Nov 1 03:52:56.405233 systemd[1]: session-12.scope: Deactivated successfully. Nov 1 03:52:56.406278 systemd-logind[1186]: Session 12 logged out. Waiting for processes to exit. Nov 1 03:52:56.407742 systemd-logind[1186]: Removed session 12. Nov 1 03:52:56.661565 systemd[1]: Started sshd@23-10.244.101.254:22-182.230.214.138:39174.service. Nov 1 03:52:57.697649 sshd[3415]: Invalid user steam from 182.230.214.138 port 39174 Nov 1 03:52:57.954000 sshd[3415]: pam_faillock(sshd:auth): User unknown Nov 1 03:52:57.956577 sshd[3415]: pam_unix(sshd:auth): check pass; user unknown Nov 1 03:52:57.956947 sshd[3415]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=182.230.214.138 Nov 1 03:52:57.961635 sshd[3415]: pam_faillock(sshd:auth): User unknown Nov 1 03:52:59.988753 sshd[3415]: Failed password for invalid user steam from 182.230.214.138 port 39174 ssh2 Nov 1 03:53:01.366486 sshd[3415]: Connection closed by invalid user steam 182.230.214.138 port 39174 [preauth] Nov 1 03:53:01.369398 systemd[1]: sshd@23-10.244.101.254:22-182.230.214.138:39174.service: Deactivated successfully. Nov 1 03:53:01.563358 systemd[1]: Started sshd@24-10.244.101.254:22-139.178.89.65:33436.service. Nov 1 03:53:01.593996 systemd[1]: Started sshd@25-10.244.101.254:22-182.230.214.138:45574.service. Nov 1 03:53:02.476114 sshd[3421]: Accepted publickey for core from 139.178.89.65 port 33436 ssh2: RSA SHA256:V0PERg6UVsbWZGsAZFbTY/baYEpLUh6zfqFi+pvc+oM Nov 1 03:53:02.479306 sshd[3421]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 03:53:02.486930 systemd-logind[1186]: New session 13 of user core. Nov 1 03:53:02.490501 systemd[1]: Started session-13.scope. 
Nov 1 03:53:02.576243 sshd[3424]: Invalid user mysql from 182.230.214.138 port 45574 Nov 1 03:53:02.824256 sshd[3424]: pam_faillock(sshd:auth): User unknown Nov 1 03:53:02.826493 sshd[3424]: pam_unix(sshd:auth): check pass; user unknown Nov 1 03:53:02.826816 sshd[3424]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=182.230.214.138 Nov 1 03:53:02.828258 sshd[3424]: pam_faillock(sshd:auth): User unknown Nov 1 03:53:03.227056 sshd[3421]: pam_unix(sshd:session): session closed for user core Nov 1 03:53:03.234039 systemd[1]: sshd@24-10.244.101.254:22-139.178.89.65:33436.service: Deactivated successfully. Nov 1 03:53:03.235970 systemd[1]: session-13.scope: Deactivated successfully. Nov 1 03:53:03.237158 systemd-logind[1186]: Session 13 logged out. Waiting for processes to exit. Nov 1 03:53:03.239727 systemd-logind[1186]: Removed session 13. Nov 1 03:53:04.545874 sshd[3424]: Failed password for invalid user mysql from 182.230.214.138 port 45574 ssh2 Nov 1 03:53:04.852047 sshd[3424]: Connection closed by invalid user mysql 182.230.214.138 port 45574 [preauth] Nov 1 03:53:04.855096 systemd[1]: sshd@25-10.244.101.254:22-182.230.214.138:45574.service: Deactivated successfully. Nov 1 03:53:05.088316 systemd[1]: Started sshd@26-10.244.101.254:22-182.230.214.138:45586.service. Nov 1 03:53:06.063505 sshd[3438]: Invalid user ftpuser from 182.230.214.138 port 45586 Nov 1 03:53:06.300493 sshd[3438]: pam_faillock(sshd:auth): User unknown Nov 1 03:53:06.302949 sshd[3438]: pam_unix(sshd:auth): check pass; user unknown Nov 1 03:53:06.303137 sshd[3438]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=182.230.214.138 Nov 1 03:53:06.304244 sshd[3438]: pam_faillock(sshd:auth): User unknown Nov 1 03:53:07.570321 sshd[3438]: Failed password for invalid user ftpuser from 182.230.214.138 port 45586 ssh2 Nov 1 03:53:08.374665 systemd[1]: Started sshd@27-10.244.101.254:22-139.178.89.65:53028.service. Nov 1 03:53:08.825538 sshd[3438]: Connection closed by invalid user ftpuser 182.230.214.138 port 45586 [preauth] Nov 1 03:53:08.828476 systemd[1]: sshd@26-10.244.101.254:22-182.230.214.138:45586.service: Deactivated successfully. Nov 1 03:53:09.077114 systemd[1]: Started sshd@28-10.244.101.254:22-182.230.214.138:46098.service. Nov 1 03:53:09.275419 sshd[3441]: Accepted publickey for core from 139.178.89.65 port 53028 ssh2: RSA SHA256:V0PERg6UVsbWZGsAZFbTY/baYEpLUh6zfqFi+pvc+oM Nov 1 03:53:09.278455 sshd[3441]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 03:53:09.288685 systemd[1]: Started session-14.scope. Nov 1 03:53:09.289724 systemd-logind[1186]: New session 14 of user core. Nov 1 03:53:10.005621 sshd[3441]: pam_unix(sshd:session): session closed for user core Nov 1 03:53:10.012036 systemd[1]: sshd@27-10.244.101.254:22-139.178.89.65:53028.service: Deactivated successfully. Nov 1 03:53:10.013072 systemd[1]: session-14.scope: Deactivated successfully. Nov 1 03:53:10.013806 systemd-logind[1186]: Session 14 logged out. Waiting for processes to exit. Nov 1 03:53:10.014764 systemd-logind[1186]: Removed session 14. Nov 1 03:53:10.053905 sshd[3445]: Invalid user ubuntu from 182.230.214.138 port 46098 Nov 1 03:53:10.155274 systemd[1]: Started sshd@29-10.244.101.254:22-139.178.89.65:53030.service. 
Nov 1 03:53:10.300704 sshd[3445]: pam_faillock(sshd:auth): User unknown Nov 1 03:53:10.302743 sshd[3445]: pam_unix(sshd:auth): check pass; user unknown Nov 1 03:53:10.302993 sshd[3445]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=182.230.214.138 Nov 1 03:53:10.304529 sshd[3445]: pam_faillock(sshd:auth): User unknown Nov 1 03:53:11.067923 sshd[3457]: Accepted publickey for core from 139.178.89.65 port 53030 ssh2: RSA SHA256:V0PERg6UVsbWZGsAZFbTY/baYEpLUh6zfqFi+pvc+oM Nov 1 03:53:11.072036 sshd[3457]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 03:53:11.083049 systemd-logind[1186]: New session 15 of user core. Nov 1 03:53:11.085180 systemd[1]: Started session-15.scope. Nov 1 03:53:11.993857 sshd[3457]: pam_unix(sshd:session): session closed for user core Nov 1 03:53:12.000395 systemd[1]: sshd@29-10.244.101.254:22-139.178.89.65:53030.service: Deactivated successfully. Nov 1 03:53:12.002574 systemd[1]: session-15.scope: Deactivated successfully. Nov 1 03:53:12.004084 systemd-logind[1186]: Session 15 logged out. Waiting for processes to exit. Nov 1 03:53:12.005806 systemd-logind[1186]: Removed session 15. Nov 1 03:53:12.117003 sshd[3445]: Failed password for invalid user ubuntu from 182.230.214.138 port 46098 ssh2 Nov 1 03:53:12.148369 systemd[1]: Started sshd@30-10.244.101.254:22-139.178.89.65:53038.service. Nov 1 03:53:12.786631 sshd[3445]: Connection closed by invalid user ubuntu 182.230.214.138 port 46098 [preauth] Nov 1 03:53:12.788028 systemd[1]: sshd@28-10.244.101.254:22-182.230.214.138:46098.service: Deactivated successfully. Nov 1 03:53:13.072601 sshd[3467]: Accepted publickey for core from 139.178.89.65 port 53038 ssh2: RSA SHA256:V0PERg6UVsbWZGsAZFbTY/baYEpLUh6zfqFi+pvc+oM Nov 1 03:53:13.075594 sshd[3467]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 03:53:13.086707 systemd-logind[1186]: New session 16 of user core. Nov 1 03:53:13.091599 systemd[1]: Started session-16.scope. Nov 1 03:53:14.403083 sshd[3467]: pam_unix(sshd:session): session closed for user core Nov 1 03:53:14.409672 systemd[1]: sshd@30-10.244.101.254:22-139.178.89.65:53038.service: Deactivated successfully. Nov 1 03:53:14.412120 systemd[1]: session-16.scope: Deactivated successfully. Nov 1 03:53:14.413419 systemd-logind[1186]: Session 16 logged out. Waiting for processes to exit. Nov 1 03:53:14.415148 systemd-logind[1186]: Removed session 16. Nov 1 03:53:14.553844 systemd[1]: Started sshd@31-10.244.101.254:22-139.178.89.65:53046.service. Nov 1 03:53:15.480533 sshd[3488]: Accepted publickey for core from 139.178.89.65 port 53046 ssh2: RSA SHA256:V0PERg6UVsbWZGsAZFbTY/baYEpLUh6zfqFi+pvc+oM Nov 1 03:53:15.484700 sshd[3488]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 03:53:15.494847 systemd[1]: Started session-17.scope. Nov 1 03:53:15.495577 systemd-logind[1186]: New session 17 of user core. Nov 1 03:53:16.396825 sshd[3488]: pam_unix(sshd:session): session closed for user core Nov 1 03:53:16.409750 systemd[1]: sshd@31-10.244.101.254:22-139.178.89.65:53046.service: Deactivated successfully. Nov 1 03:53:16.411622 systemd[1]: session-17.scope: Deactivated successfully. Nov 1 03:53:16.413388 systemd-logind[1186]: Session 17 logged out. Waiting for processes to exit. Nov 1 03:53:16.414741 systemd-logind[1186]: Removed session 17. Nov 1 03:53:16.555309 systemd[1]: Started sshd@32-10.244.101.254:22-139.178.89.65:41102.service. 
Nov 1 03:53:17.494727 sshd[3498]: Accepted publickey for core from 139.178.89.65 port 41102 ssh2: RSA SHA256:V0PERg6UVsbWZGsAZFbTY/baYEpLUh6zfqFi+pvc+oM Nov 1 03:53:17.497749 sshd[3498]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 03:53:17.505609 systemd[1]: Started session-18.scope. Nov 1 03:53:17.506167 systemd-logind[1186]: New session 18 of user core. Nov 1 03:53:18.258000 sshd[3498]: pam_unix(sshd:session): session closed for user core Nov 1 03:53:18.264879 systemd[1]: sshd@32-10.244.101.254:22-139.178.89.65:41102.service: Deactivated successfully. Nov 1 03:53:18.265832 systemd[1]: session-18.scope: Deactivated successfully. Nov 1 03:53:18.266793 systemd-logind[1186]: Session 18 logged out. Waiting for processes to exit. Nov 1 03:53:18.267880 systemd-logind[1186]: Removed session 18. Nov 1 03:53:23.409314 systemd[1]: Started sshd@33-10.244.101.254:22-139.178.89.65:41114.service. Nov 1 03:53:24.326542 sshd[3510]: Accepted publickey for core from 139.178.89.65 port 41114 ssh2: RSA SHA256:V0PERg6UVsbWZGsAZFbTY/baYEpLUh6zfqFi+pvc+oM Nov 1 03:53:24.329667 sshd[3510]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 03:53:24.342606 systemd[1]: Started session-19.scope. Nov 1 03:53:24.343476 systemd-logind[1186]: New session 19 of user core. Nov 1 03:53:25.046970 sshd[3510]: pam_unix(sshd:session): session closed for user core Nov 1 03:53:25.052562 systemd[1]: sshd@33-10.244.101.254:22-139.178.89.65:41114.service: Deactivated successfully. Nov 1 03:53:25.053903 systemd[1]: session-19.scope: Deactivated successfully. Nov 1 03:53:25.054960 systemd-logind[1186]: Session 19 logged out. Waiting for processes to exit. Nov 1 03:53:25.056927 systemd-logind[1186]: Removed session 19. Nov 1 03:53:30.195905 systemd[1]: Started sshd@34-10.244.101.254:22-139.178.89.65:49826.service. Nov 1 03:53:31.101687 sshd[3525]: Accepted publickey for core from 139.178.89.65 port 49826 ssh2: RSA SHA256:V0PERg6UVsbWZGsAZFbTY/baYEpLUh6zfqFi+pvc+oM Nov 1 03:53:31.103974 sshd[3525]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 03:53:31.112144 systemd[1]: Started session-20.scope. Nov 1 03:53:31.112894 systemd-logind[1186]: New session 20 of user core. Nov 1 03:53:31.855080 sshd[3525]: pam_unix(sshd:session): session closed for user core Nov 1 03:53:31.861500 systemd[1]: sshd@34-10.244.101.254:22-139.178.89.65:49826.service: Deactivated successfully. Nov 1 03:53:31.863568 systemd[1]: session-20.scope: Deactivated successfully. Nov 1 03:53:31.865366 systemd-logind[1186]: Session 20 logged out. Waiting for processes to exit. Nov 1 03:53:31.867813 systemd-logind[1186]: Removed session 20. Nov 1 03:53:33.032872 systemd[1]: Started sshd@35-10.244.101.254:22-182.230.214.138:36790.service. Nov 1 03:53:33.991261 sshd[3538]: Invalid user ubuntu from 182.230.214.138 port 36790 Nov 1 03:53:34.226412 sshd[3538]: pam_faillock(sshd:auth): User unknown Nov 1 03:53:34.227403 sshd[3538]: pam_unix(sshd:auth): check pass; user unknown Nov 1 03:53:34.227489 sshd[3538]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=182.230.214.138 Nov 1 03:53:34.228420 sshd[3538]: pam_faillock(sshd:auth): User unknown Nov 1 03:53:36.670725 sshd[3538]: Failed password for invalid user ubuntu from 182.230.214.138 port 36790 ssh2 Nov 1 03:53:37.014531 systemd[1]: Started sshd@36-10.244.101.254:22-139.178.89.65:47324.service. 
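Illustrative aside, not part of the captured log: the interactive logins for user core follow a clean systemd-logind lifecycle ("New session N of user core." paired with "Removed session N."), distinct from the failing attacker connections. A minimal Python sketch that pairs those two messages and reports sessions still open at the end of a capture; the phrasing in the regexes is taken from the lines above, the helper name is mine.

    import re

    NEW = re.compile(r"New session (\d+) of user (\S+)\.")
    REMOVED = re.compile(r"Removed session (\d+)\.")

    def open_sessions(lines):
        """Return sessions that were opened but never removed in the capture,
        keyed by session number with the owning user as value."""
        active = {}
        for line in lines:
            m = NEW.search(line)
            if m:
                active[m.group(1)] = m.group(2)
                continue
            m = REMOVED.search(line)
            if m:
                active.pop(m.group(1), None)
        return active

    sample = [
        "systemd-logind[1186]: New session 19 of user core.",
        "systemd-logind[1186]: Removed session 19.",
        "systemd-logind[1186]: New session 20 of user core.",
    ]
    print(open_sessions(sample))  # {'20': 'core'}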
Nov 1 03:53:37.942991 sshd[3543]: Accepted publickey for core from 139.178.89.65 port 47324 ssh2: RSA SHA256:V0PERg6UVsbWZGsAZFbTY/baYEpLUh6zfqFi+pvc+oM Nov 1 03:53:37.947555 sshd[3543]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 03:53:37.958314 systemd-logind[1186]: New session 21 of user core. Nov 1 03:53:37.959132 systemd[1]: Started session-21.scope. Nov 1 03:53:38.669790 sshd[3543]: pam_unix(sshd:session): session closed for user core Nov 1 03:53:38.674385 systemd-logind[1186]: Session 21 logged out. Waiting for processes to exit. Nov 1 03:53:38.674711 systemd[1]: sshd@36-10.244.101.254:22-139.178.89.65:47324.service: Deactivated successfully. Nov 1 03:53:38.675843 systemd[1]: session-21.scope: Deactivated successfully. Nov 1 03:53:38.676932 systemd-logind[1186]: Removed session 21. Nov 1 03:53:38.821313 systemd[1]: Started sshd@37-10.244.101.254:22-139.178.89.65:47334.service. Nov 1 03:53:38.959993 sshd[3538]: Connection closed by invalid user ubuntu 182.230.214.138 port 36790 [preauth] Nov 1 03:53:38.961750 systemd[1]: sshd@35-10.244.101.254:22-182.230.214.138:36790.service: Deactivated successfully. Nov 1 03:53:39.258253 systemd[1]: Started sshd@38-10.244.101.254:22-182.230.214.138:46042.service. Nov 1 03:53:39.731643 sshd[3555]: Accepted publickey for core from 139.178.89.65 port 47334 ssh2: RSA SHA256:V0PERg6UVsbWZGsAZFbTY/baYEpLUh6zfqFi+pvc+oM Nov 1 03:53:39.734877 sshd[3555]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 03:53:39.742403 systemd[1]: Started session-22.scope. Nov 1 03:53:39.742716 systemd-logind[1186]: New session 22 of user core. Nov 1 03:53:40.386213 sshd[3559]: Invalid user vps from 182.230.214.138 port 46042 Nov 1 03:53:40.656040 sshd[3559]: pam_faillock(sshd:auth): User unknown Nov 1 03:53:40.657237 sshd[3559]: pam_unix(sshd:auth): check pass; user unknown Nov 1 03:53:40.657290 sshd[3559]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=182.230.214.138 Nov 1 03:53:40.658959 sshd[3559]: pam_faillock(sshd:auth): User unknown Nov 1 03:53:41.636530 kubelet[1945]: I1101 03:53:41.636306 1945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-fgdzc" podStartSLOduration=157.63621754 podStartE2EDuration="2m37.63621754s" podCreationTimestamp="2025-11-01 03:51:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 03:51:29.838516105 +0000 UTC m=+30.520987950" watchObservedRunningTime="2025-11-01 03:53:41.63621754 +0000 UTC m=+162.318689382" Nov 1 03:53:41.675914 systemd[1]: run-containerd-runc-k8s.io-87e3668f19e69ab0e52334c8e6357d6205c819062852eec71626890d64f2faf8-runc.E13DtD.mount: Deactivated successfully. Nov 1 03:53:41.692788 env[1204]: time="2025-11-01T03:53:41.692701170Z" level=info msg="StopContainer for \"9e7a7961134e9272bd0bd799af7e02ca97ea518b4aaea5d3a6217d0b8f5d86c6\" with timeout 30 (s)" Nov 1 03:53:41.694088 env[1204]: time="2025-11-01T03:53:41.694055088Z" level=info msg="Stop container \"9e7a7961134e9272bd0bd799af7e02ca97ea518b4aaea5d3a6217d0b8f5d86c6\" with signal terminated" Nov 1 03:53:41.710023 systemd[1]: cri-containerd-9e7a7961134e9272bd0bd799af7e02ca97ea518b4aaea5d3a6217d0b8f5d86c6.scope: Deactivated successfully. 
Nov 1 03:53:41.715599 env[1204]: time="2025-11-01T03:53:41.715530473Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 1 03:53:41.721279 env[1204]: time="2025-11-01T03:53:41.721243708Z" level=info msg="StopContainer for \"87e3668f19e69ab0e52334c8e6357d6205c819062852eec71626890d64f2faf8\" with timeout 2 (s)" Nov 1 03:53:41.721730 env[1204]: time="2025-11-01T03:53:41.721689572Z" level=info msg="Stop container \"87e3668f19e69ab0e52334c8e6357d6205c819062852eec71626890d64f2faf8\" with signal terminated" Nov 1 03:53:41.735913 systemd-networkd[1029]: lxc_health: Link DOWN Nov 1 03:53:41.735926 systemd-networkd[1029]: lxc_health: Lost carrier Nov 1 03:53:41.740472 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9e7a7961134e9272bd0bd799af7e02ca97ea518b4aaea5d3a6217d0b8f5d86c6-rootfs.mount: Deactivated successfully. Nov 1 03:53:41.758941 env[1204]: time="2025-11-01T03:53:41.758864812Z" level=info msg="shim disconnected" id=9e7a7961134e9272bd0bd799af7e02ca97ea518b4aaea5d3a6217d0b8f5d86c6 Nov 1 03:53:41.760385 env[1204]: time="2025-11-01T03:53:41.760357812Z" level=warning msg="cleaning up after shim disconnected" id=9e7a7961134e9272bd0bd799af7e02ca97ea518b4aaea5d3a6217d0b8f5d86c6 namespace=k8s.io Nov 1 03:53:41.760512 env[1204]: time="2025-11-01T03:53:41.760494487Z" level=info msg="cleaning up dead shim" Nov 1 03:53:41.772862 systemd[1]: cri-containerd-87e3668f19e69ab0e52334c8e6357d6205c819062852eec71626890d64f2faf8.scope: Deactivated successfully. Nov 1 03:53:41.773134 systemd[1]: cri-containerd-87e3668f19e69ab0e52334c8e6357d6205c819062852eec71626890d64f2faf8.scope: Consumed 8.553s CPU time. Nov 1 03:53:41.782139 env[1204]: time="2025-11-01T03:53:41.782092162Z" level=warning msg="cleanup warnings time=\"2025-11-01T03:53:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3613 runtime=io.containerd.runc.v2\n" Nov 1 03:53:41.783570 env[1204]: time="2025-11-01T03:53:41.783534867Z" level=info msg="StopContainer for \"9e7a7961134e9272bd0bd799af7e02ca97ea518b4aaea5d3a6217d0b8f5d86c6\" returns successfully" Nov 1 03:53:41.785943 env[1204]: time="2025-11-01T03:53:41.785906543Z" level=info msg="StopPodSandbox for \"df19635cc4cde627ee520c30fe15c184e7c9544411e501592bd695cd968c8c82\"" Nov 1 03:53:41.786877 env[1204]: time="2025-11-01T03:53:41.786099126Z" level=info msg="Container to stop \"9e7a7961134e9272bd0bd799af7e02ca97ea518b4aaea5d3a6217d0b8f5d86c6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 1 03:53:41.789115 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-df19635cc4cde627ee520c30fe15c184e7c9544411e501592bd695cd968c8c82-shm.mount: Deactivated successfully. Nov 1 03:53:41.799307 systemd[1]: cri-containerd-df19635cc4cde627ee520c30fe15c184e7c9544411e501592bd695cd968c8c82.scope: Deactivated successfully. 
Nov 1 03:53:41.811809 env[1204]: time="2025-11-01T03:53:41.811762366Z" level=info msg="shim disconnected" id=87e3668f19e69ab0e52334c8e6357d6205c819062852eec71626890d64f2faf8 Nov 1 03:53:41.811809 env[1204]: time="2025-11-01T03:53:41.811806881Z" level=warning msg="cleaning up after shim disconnected" id=87e3668f19e69ab0e52334c8e6357d6205c819062852eec71626890d64f2faf8 namespace=k8s.io Nov 1 03:53:41.812068 env[1204]: time="2025-11-01T03:53:41.811816730Z" level=info msg="cleaning up dead shim" Nov 1 03:53:41.823709 env[1204]: time="2025-11-01T03:53:41.823668115Z" level=warning msg="cleanup warnings time=\"2025-11-01T03:53:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3651 runtime=io.containerd.runc.v2\n" Nov 1 03:53:41.825991 env[1204]: time="2025-11-01T03:53:41.825955228Z" level=info msg="StopContainer for \"87e3668f19e69ab0e52334c8e6357d6205c819062852eec71626890d64f2faf8\" returns successfully" Nov 1 03:53:41.827324 env[1204]: time="2025-11-01T03:53:41.827296696Z" level=info msg="StopPodSandbox for \"eb0175173881ce97f346cec8b19e71987495f37dbc3cb38a7d185911d4b89eb3\"" Nov 1 03:53:41.827324 env[1204]: time="2025-11-01T03:53:41.827395716Z" level=info msg="Container to stop \"a89df5c626d5a8ad556d4f5ea399669db1215cc325f5de102ba81f4bad517f3e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 1 03:53:41.827324 env[1204]: time="2025-11-01T03:53:41.827410978Z" level=info msg="Container to stop \"86c919e0e7ae1085714340e928823cbba4ca8c07930435b04f32ebb8579c6178\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 1 03:53:41.827324 env[1204]: time="2025-11-01T03:53:41.827422885Z" level=info msg="Container to stop \"e9fbcec8970256e718ae7c1805129bc65767f09b6d49c3c0448399e9a1c0431f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 1 03:53:41.827324 env[1204]: time="2025-11-01T03:53:41.827434050Z" level=info msg="Container to stop \"87e3668f19e69ab0e52334c8e6357d6205c819062852eec71626890d64f2faf8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 1 03:53:41.827324 env[1204]: time="2025-11-01T03:53:41.827445922Z" level=info msg="Container to stop \"43efdcbdeafd3a3dcd2fe59cd31aad1f6e033ac03e1b056bc7fea2663e9a7b82\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 1 03:53:41.828379 env[1204]: time="2025-11-01T03:53:41.828287924Z" level=info msg="shim disconnected" id=df19635cc4cde627ee520c30fe15c184e7c9544411e501592bd695cd968c8c82 Nov 1 03:53:41.828379 env[1204]: time="2025-11-01T03:53:41.828324411Z" level=warning msg="cleaning up after shim disconnected" id=df19635cc4cde627ee520c30fe15c184e7c9544411e501592bd695cd968c8c82 namespace=k8s.io Nov 1 03:53:41.828379 env[1204]: time="2025-11-01T03:53:41.828357032Z" level=info msg="cleaning up dead shim" Nov 1 03:53:41.838275 systemd[1]: cri-containerd-eb0175173881ce97f346cec8b19e71987495f37dbc3cb38a7d185911d4b89eb3.scope: Deactivated successfully. 
Nov 1 03:53:41.844239 env[1204]: time="2025-11-01T03:53:41.844200835Z" level=warning msg="cleanup warnings time=\"2025-11-01T03:53:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3672 runtime=io.containerd.runc.v2\n" Nov 1 03:53:41.845133 env[1204]: time="2025-11-01T03:53:41.845100607Z" level=info msg="TearDown network for sandbox \"df19635cc4cde627ee520c30fe15c184e7c9544411e501592bd695cd968c8c82\" successfully" Nov 1 03:53:41.845276 env[1204]: time="2025-11-01T03:53:41.845254193Z" level=info msg="StopPodSandbox for \"df19635cc4cde627ee520c30fe15c184e7c9544411e501592bd695cd968c8c82\" returns successfully" Nov 1 03:53:41.883296 env[1204]: time="2025-11-01T03:53:41.883244701Z" level=info msg="shim disconnected" id=eb0175173881ce97f346cec8b19e71987495f37dbc3cb38a7d185911d4b89eb3 Nov 1 03:53:41.884140 env[1204]: time="2025-11-01T03:53:41.884110486Z" level=warning msg="cleaning up after shim disconnected" id=eb0175173881ce97f346cec8b19e71987495f37dbc3cb38a7d185911d4b89eb3 namespace=k8s.io Nov 1 03:53:41.884295 env[1204]: time="2025-11-01T03:53:41.884277107Z" level=info msg="cleaning up dead shim" Nov 1 03:53:41.897109 env[1204]: time="2025-11-01T03:53:41.896991066Z" level=warning msg="cleanup warnings time=\"2025-11-01T03:53:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3706 runtime=io.containerd.runc.v2\n" Nov 1 03:53:41.898844 env[1204]: time="2025-11-01T03:53:41.898810726Z" level=info msg="TearDown network for sandbox \"eb0175173881ce97f346cec8b19e71987495f37dbc3cb38a7d185911d4b89eb3\" successfully" Nov 1 03:53:41.898844 env[1204]: time="2025-11-01T03:53:41.898841942Z" level=info msg="StopPodSandbox for \"eb0175173881ce97f346cec8b19e71987495f37dbc3cb38a7d185911d4b89eb3\" returns successfully" Nov 1 03:53:42.040614 kubelet[1945]: I1101 03:53:42.040138 1945 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7d5709e6-fa43-4c18-93bf-cfe4733c46ce-cni-path\") pod \"7d5709e6-fa43-4c18-93bf-cfe4733c46ce\" (UID: \"7d5709e6-fa43-4c18-93bf-cfe4733c46ce\") " Nov 1 03:53:42.040614 kubelet[1945]: I1101 03:53:42.040433 1945 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7d5709e6-fa43-4c18-93bf-cfe4733c46ce-etc-cni-netd\") pod \"7d5709e6-fa43-4c18-93bf-cfe4733c46ce\" (UID: \"7d5709e6-fa43-4c18-93bf-cfe4733c46ce\") " Nov 1 03:53:42.040614 kubelet[1945]: I1101 03:53:42.040583 1945 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5ls56\" (UniqueName: \"kubernetes.io/projected/7d5709e6-fa43-4c18-93bf-cfe4733c46ce-kube-api-access-5ls56\") pod \"7d5709e6-fa43-4c18-93bf-cfe4733c46ce\" (UID: \"7d5709e6-fa43-4c18-93bf-cfe4733c46ce\") " Nov 1 03:53:42.041660 kubelet[1945]: I1101 03:53:42.040638 1945 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7d5709e6-fa43-4c18-93bf-cfe4733c46ce-lib-modules\") pod \"7d5709e6-fa43-4c18-93bf-cfe4733c46ce\" (UID: \"7d5709e6-fa43-4c18-93bf-cfe4733c46ce\") " Nov 1 03:53:42.041660 kubelet[1945]: I1101 03:53:42.040708 1945 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7d5709e6-fa43-4c18-93bf-cfe4733c46ce-cilium-config-path\") pod \"7d5709e6-fa43-4c18-93bf-cfe4733c46ce\" (UID: \"7d5709e6-fa43-4c18-93bf-cfe4733c46ce\") " Nov 1 03:53:42.041660 kubelet[1945]: I1101 
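Illustrative aside, not part of the captured log: the containerd entries above walk each container and sandbox through StopContainer, "shim disconnected", and StopPodSandbox/TearDown, all keyed by the same 64-character hex ID. A small Python sketch that groups such lines per ID so the sequence can be read per object; the regexes match the escaped quoting exactly as it appears in this journald capture, and names are mine.

    import re
    from collections import defaultdict

    # Event patterns matching the containerd messages quoted in this capture,
    # including the backslash-escaped quotes around the IDs.
    PATTERNS = {
        "StopContainer": re.compile(r'msg="StopContainer for \\"([0-9a-f]{64})\\"'),
        "shim disconnected": re.compile(r'msg="shim disconnected" id=([0-9a-f]{64})'),
        "StopPodSandbox": re.compile(r'msg="StopPodSandbox for \\"([0-9a-f]{64})\\"'),
    }

    def lifecycle(lines):
        """Group containerd log lines by container/sandbox ID so the
        StopContainer -> shim disconnected -> StopPodSandbox order is visible."""
        events = defaultdict(list)
        for line in lines:
            for name, pat in PATTERNS.items():
                m = pat.search(line)
                if m:
                    events[m.group(1)].append(name)
        return dict(events)

For the teardown above, the two sandboxes (df19635c... and eb017517...) and the cilium containers would each come out with their own ordered event list.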
03:53:42.040789 1945 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7d5709e6-fa43-4c18-93bf-cfe4733c46ce-cilium-run\") pod \"7d5709e6-fa43-4c18-93bf-cfe4733c46ce\" (UID: \"7d5709e6-fa43-4c18-93bf-cfe4733c46ce\") " Nov 1 03:53:42.041660 kubelet[1945]: I1101 03:53:42.040828 1945 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7d5709e6-fa43-4c18-93bf-cfe4733c46ce-host-proc-sys-net\") pod \"7d5709e6-fa43-4c18-93bf-cfe4733c46ce\" (UID: \"7d5709e6-fa43-4c18-93bf-cfe4733c46ce\") " Nov 1 03:53:42.041660 kubelet[1945]: I1101 03:53:42.040871 1945 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fck48\" (UniqueName: \"kubernetes.io/projected/29a6b27a-ef91-4446-829f-a75ce7239cc5-kube-api-access-fck48\") pod \"29a6b27a-ef91-4446-829f-a75ce7239cc5\" (UID: \"29a6b27a-ef91-4446-829f-a75ce7239cc5\") " Nov 1 03:53:42.041660 kubelet[1945]: I1101 03:53:42.041081 1945 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7d5709e6-fa43-4c18-93bf-cfe4733c46ce-clustermesh-secrets\") pod \"7d5709e6-fa43-4c18-93bf-cfe4733c46ce\" (UID: \"7d5709e6-fa43-4c18-93bf-cfe4733c46ce\") " Nov 1 03:53:42.042228 kubelet[1945]: I1101 03:53:42.041140 1945 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7d5709e6-fa43-4c18-93bf-cfe4733c46ce-host-proc-sys-kernel\") pod \"7d5709e6-fa43-4c18-93bf-cfe4733c46ce\" (UID: \"7d5709e6-fa43-4c18-93bf-cfe4733c46ce\") " Nov 1 03:53:42.042228 kubelet[1945]: I1101 03:53:42.041205 1945 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7d5709e6-fa43-4c18-93bf-cfe4733c46ce-hubble-tls\") pod \"7d5709e6-fa43-4c18-93bf-cfe4733c46ce\" (UID: \"7d5709e6-fa43-4c18-93bf-cfe4733c46ce\") " Nov 1 03:53:42.042228 kubelet[1945]: I1101 03:53:42.041245 1945 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7d5709e6-fa43-4c18-93bf-cfe4733c46ce-bpf-maps\") pod \"7d5709e6-fa43-4c18-93bf-cfe4733c46ce\" (UID: \"7d5709e6-fa43-4c18-93bf-cfe4733c46ce\") " Nov 1 03:53:42.042228 kubelet[1945]: I1101 03:53:42.041285 1945 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7d5709e6-fa43-4c18-93bf-cfe4733c46ce-xtables-lock\") pod \"7d5709e6-fa43-4c18-93bf-cfe4733c46ce\" (UID: \"7d5709e6-fa43-4c18-93bf-cfe4733c46ce\") " Nov 1 03:53:42.042228 kubelet[1945]: I1101 03:53:42.041323 1945 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7d5709e6-fa43-4c18-93bf-cfe4733c46ce-hostproc\") pod \"7d5709e6-fa43-4c18-93bf-cfe4733c46ce\" (UID: \"7d5709e6-fa43-4c18-93bf-cfe4733c46ce\") " Nov 1 03:53:42.042228 kubelet[1945]: I1101 03:53:42.041381 1945 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7d5709e6-fa43-4c18-93bf-cfe4733c46ce-cilium-cgroup\") pod \"7d5709e6-fa43-4c18-93bf-cfe4733c46ce\" (UID: \"7d5709e6-fa43-4c18-93bf-cfe4733c46ce\") " Nov 1 03:53:42.042783 kubelet[1945]: I1101 03:53:42.041425 1945 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/29a6b27a-ef91-4446-829f-a75ce7239cc5-cilium-config-path\") pod \"29a6b27a-ef91-4446-829f-a75ce7239cc5\" (UID: \"29a6b27a-ef91-4446-829f-a75ce7239cc5\") " Nov 1 03:53:42.057917 kubelet[1945]: I1101 03:53:42.047509 1945 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d5709e6-fa43-4c18-93bf-cfe4733c46ce-cni-path" (OuterVolumeSpecName: "cni-path") pod "7d5709e6-fa43-4c18-93bf-cfe4733c46ce" (UID: "7d5709e6-fa43-4c18-93bf-cfe4733c46ce"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 03:53:42.058257 kubelet[1945]: I1101 03:53:42.058214 1945 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d5709e6-fa43-4c18-93bf-cfe4733c46ce-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "7d5709e6-fa43-4c18-93bf-cfe4733c46ce" (UID: "7d5709e6-fa43-4c18-93bf-cfe4733c46ce"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 03:53:42.062769 kubelet[1945]: I1101 03:53:42.062704 1945 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29a6b27a-ef91-4446-829f-a75ce7239cc5-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "29a6b27a-ef91-4446-829f-a75ce7239cc5" (UID: "29a6b27a-ef91-4446-829f-a75ce7239cc5"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 1 03:53:42.065017 kubelet[1945]: I1101 03:53:42.064971 1945 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d5709e6-fa43-4c18-93bf-cfe4733c46ce-kube-api-access-5ls56" (OuterVolumeSpecName: "kube-api-access-5ls56") pod "7d5709e6-fa43-4c18-93bf-cfe4733c46ce" (UID: "7d5709e6-fa43-4c18-93bf-cfe4733c46ce"). InnerVolumeSpecName "kube-api-access-5ls56". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 1 03:53:42.065124 kubelet[1945]: I1101 03:53:42.065050 1945 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d5709e6-fa43-4c18-93bf-cfe4733c46ce-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "7d5709e6-fa43-4c18-93bf-cfe4733c46ce" (UID: "7d5709e6-fa43-4c18-93bf-cfe4733c46ce"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 03:53:42.067380 kubelet[1945]: I1101 03:53:42.067353 1945 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d5709e6-fa43-4c18-93bf-cfe4733c46ce-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "7d5709e6-fa43-4c18-93bf-cfe4733c46ce" (UID: "7d5709e6-fa43-4c18-93bf-cfe4733c46ce"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 1 03:53:42.067526 kubelet[1945]: I1101 03:53:42.067511 1945 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d5709e6-fa43-4c18-93bf-cfe4733c46ce-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "7d5709e6-fa43-4c18-93bf-cfe4733c46ce" (UID: "7d5709e6-fa43-4c18-93bf-cfe4733c46ce"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 03:53:42.069480 kubelet[1945]: I1101 03:53:42.069444 1945 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d5709e6-fa43-4c18-93bf-cfe4733c46ce-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "7d5709e6-fa43-4c18-93bf-cfe4733c46ce" (UID: "7d5709e6-fa43-4c18-93bf-cfe4733c46ce"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 1 03:53:42.069563 kubelet[1945]: I1101 03:53:42.069504 1945 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d5709e6-fa43-4c18-93bf-cfe4733c46ce-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "7d5709e6-fa43-4c18-93bf-cfe4733c46ce" (UID: "7d5709e6-fa43-4c18-93bf-cfe4733c46ce"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 03:53:42.069563 kubelet[1945]: I1101 03:53:42.069531 1945 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d5709e6-fa43-4c18-93bf-cfe4733c46ce-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "7d5709e6-fa43-4c18-93bf-cfe4733c46ce" (UID: "7d5709e6-fa43-4c18-93bf-cfe4733c46ce"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 03:53:42.069563 kubelet[1945]: I1101 03:53:42.069555 1945 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d5709e6-fa43-4c18-93bf-cfe4733c46ce-hostproc" (OuterVolumeSpecName: "hostproc") pod "7d5709e6-fa43-4c18-93bf-cfe4733c46ce" (UID: "7d5709e6-fa43-4c18-93bf-cfe4733c46ce"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 03:53:42.069692 kubelet[1945]: I1101 03:53:42.069595 1945 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d5709e6-fa43-4c18-93bf-cfe4733c46ce-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "7d5709e6-fa43-4c18-93bf-cfe4733c46ce" (UID: "7d5709e6-fa43-4c18-93bf-cfe4733c46ce"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 03:53:42.069692 kubelet[1945]: I1101 03:53:42.069623 1945 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d5709e6-fa43-4c18-93bf-cfe4733c46ce-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "7d5709e6-fa43-4c18-93bf-cfe4733c46ce" (UID: "7d5709e6-fa43-4c18-93bf-cfe4733c46ce"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 03:53:42.069692 kubelet[1945]: I1101 03:53:42.069647 1945 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d5709e6-fa43-4c18-93bf-cfe4733c46ce-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "7d5709e6-fa43-4c18-93bf-cfe4733c46ce" (UID: "7d5709e6-fa43-4c18-93bf-cfe4733c46ce"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 03:53:42.070041 kubelet[1945]: I1101 03:53:42.070016 1945 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d5709e6-fa43-4c18-93bf-cfe4733c46ce-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7d5709e6-fa43-4c18-93bf-cfe4733c46ce" (UID: "7d5709e6-fa43-4c18-93bf-cfe4733c46ce"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 1 03:53:42.073807 kubelet[1945]: I1101 03:53:42.073764 1945 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29a6b27a-ef91-4446-829f-a75ce7239cc5-kube-api-access-fck48" (OuterVolumeSpecName: "kube-api-access-fck48") pod "29a6b27a-ef91-4446-829f-a75ce7239cc5" (UID: "29a6b27a-ef91-4446-829f-a75ce7239cc5"). InnerVolumeSpecName "kube-api-access-fck48". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 1 03:53:42.145758 kubelet[1945]: I1101 03:53:42.145605 1945 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7d5709e6-fa43-4c18-93bf-cfe4733c46ce-clustermesh-secrets\") on node \"srv-n2oyf.gb1.brightbox.com\" DevicePath \"\"" Nov 1 03:53:42.152590 kubelet[1945]: I1101 03:53:42.146366 1945 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7d5709e6-fa43-4c18-93bf-cfe4733c46ce-host-proc-sys-kernel\") on node \"srv-n2oyf.gb1.brightbox.com\" DevicePath \"\"" Nov 1 03:53:42.152590 kubelet[1945]: I1101 03:53:42.146420 1945 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7d5709e6-fa43-4c18-93bf-cfe4733c46ce-hubble-tls\") on node \"srv-n2oyf.gb1.brightbox.com\" DevicePath \"\"" Nov 1 03:53:42.152590 kubelet[1945]: I1101 03:53:42.146505 1945 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7d5709e6-fa43-4c18-93bf-cfe4733c46ce-cilium-cgroup\") on node \"srv-n2oyf.gb1.brightbox.com\" DevicePath \"\"" Nov 1 03:53:42.152590 kubelet[1945]: I1101 03:53:42.146560 1945 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7d5709e6-fa43-4c18-93bf-cfe4733c46ce-bpf-maps\") on node \"srv-n2oyf.gb1.brightbox.com\" DevicePath \"\"" Nov 1 03:53:42.152590 kubelet[1945]: I1101 03:53:42.146592 1945 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7d5709e6-fa43-4c18-93bf-cfe4733c46ce-xtables-lock\") on node \"srv-n2oyf.gb1.brightbox.com\" DevicePath \"\"" Nov 1 03:53:42.152590 kubelet[1945]: I1101 03:53:42.146646 1945 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7d5709e6-fa43-4c18-93bf-cfe4733c46ce-hostproc\") on node \"srv-n2oyf.gb1.brightbox.com\" DevicePath \"\"" Nov 1 03:53:42.152590 kubelet[1945]: I1101 03:53:42.146690 1945 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/29a6b27a-ef91-4446-829f-a75ce7239cc5-cilium-config-path\") on node \"srv-n2oyf.gb1.brightbox.com\" DevicePath \"\"" Nov 1 03:53:42.152590 kubelet[1945]: I1101 03:53:42.146761 1945 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7d5709e6-fa43-4c18-93bf-cfe4733c46ce-cni-path\") on node \"srv-n2oyf.gb1.brightbox.com\" DevicePath \"\"" Nov 1 03:53:42.154591 kubelet[1945]: I1101 03:53:42.146824 1945 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7d5709e6-fa43-4c18-93bf-cfe4733c46ce-etc-cni-netd\") on node \"srv-n2oyf.gb1.brightbox.com\" DevicePath \"\"" Nov 1 03:53:42.154591 kubelet[1945]: I1101 03:53:42.146849 1945 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5ls56\" (UniqueName: 
\"kubernetes.io/projected/7d5709e6-fa43-4c18-93bf-cfe4733c46ce-kube-api-access-5ls56\") on node \"srv-n2oyf.gb1.brightbox.com\" DevicePath \"\"" Nov 1 03:53:42.154591 kubelet[1945]: I1101 03:53:42.146913 1945 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7d5709e6-fa43-4c18-93bf-cfe4733c46ce-lib-modules\") on node \"srv-n2oyf.gb1.brightbox.com\" DevicePath \"\"" Nov 1 03:53:42.154591 kubelet[1945]: I1101 03:53:42.146939 1945 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7d5709e6-fa43-4c18-93bf-cfe4733c46ce-cilium-config-path\") on node \"srv-n2oyf.gb1.brightbox.com\" DevicePath \"\"" Nov 1 03:53:42.154591 kubelet[1945]: I1101 03:53:42.146958 1945 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7d5709e6-fa43-4c18-93bf-cfe4733c46ce-cilium-run\") on node \"srv-n2oyf.gb1.brightbox.com\" DevicePath \"\"" Nov 1 03:53:42.154591 kubelet[1945]: I1101 03:53:42.147012 1945 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7d5709e6-fa43-4c18-93bf-cfe4733c46ce-host-proc-sys-net\") on node \"srv-n2oyf.gb1.brightbox.com\" DevicePath \"\"" Nov 1 03:53:42.154591 kubelet[1945]: I1101 03:53:42.147035 1945 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fck48\" (UniqueName: \"kubernetes.io/projected/29a6b27a-ef91-4446-829f-a75ce7239cc5-kube-api-access-fck48\") on node \"srv-n2oyf.gb1.brightbox.com\" DevicePath \"\"" Nov 1 03:53:42.254007 kubelet[1945]: I1101 03:53:42.253967 1945 scope.go:117] "RemoveContainer" containerID="9e7a7961134e9272bd0bd799af7e02ca97ea518b4aaea5d3a6217d0b8f5d86c6" Nov 1 03:53:42.259262 systemd[1]: Removed slice kubepods-besteffort-pod29a6b27a_ef91_4446_829f_a75ce7239cc5.slice. Nov 1 03:53:42.263710 env[1204]: time="2025-11-01T03:53:42.263250734Z" level=info msg="RemoveContainer for \"9e7a7961134e9272bd0bd799af7e02ca97ea518b4aaea5d3a6217d0b8f5d86c6\"" Nov 1 03:53:42.266468 env[1204]: time="2025-11-01T03:53:42.266314640Z" level=info msg="RemoveContainer for \"9e7a7961134e9272bd0bd799af7e02ca97ea518b4aaea5d3a6217d0b8f5d86c6\" returns successfully" Nov 1 03:53:42.267217 kubelet[1945]: I1101 03:53:42.267197 1945 scope.go:117] "RemoveContainer" containerID="9e7a7961134e9272bd0bd799af7e02ca97ea518b4aaea5d3a6217d0b8f5d86c6" Nov 1 03:53:42.270982 systemd[1]: Removed slice kubepods-burstable-pod7d5709e6_fa43_4c18_93bf_cfe4733c46ce.slice. Nov 1 03:53:42.271070 systemd[1]: kubepods-burstable-pod7d5709e6_fa43_4c18_93bf_cfe4733c46ce.slice: Consumed 8.692s CPU time. 
Nov 1 03:53:42.272552 env[1204]: time="2025-11-01T03:53:42.272421436Z" level=error msg="ContainerStatus for \"9e7a7961134e9272bd0bd799af7e02ca97ea518b4aaea5d3a6217d0b8f5d86c6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9e7a7961134e9272bd0bd799af7e02ca97ea518b4aaea5d3a6217d0b8f5d86c6\": not found" Nov 1 03:53:42.275830 kubelet[1945]: E1101 03:53:42.275806 1945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9e7a7961134e9272bd0bd799af7e02ca97ea518b4aaea5d3a6217d0b8f5d86c6\": not found" containerID="9e7a7961134e9272bd0bd799af7e02ca97ea518b4aaea5d3a6217d0b8f5d86c6" Nov 1 03:53:42.277564 kubelet[1945]: I1101 03:53:42.277446 1945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9e7a7961134e9272bd0bd799af7e02ca97ea518b4aaea5d3a6217d0b8f5d86c6"} err="failed to get container status \"9e7a7961134e9272bd0bd799af7e02ca97ea518b4aaea5d3a6217d0b8f5d86c6\": rpc error: code = NotFound desc = an error occurred when try to find container \"9e7a7961134e9272bd0bd799af7e02ca97ea518b4aaea5d3a6217d0b8f5d86c6\": not found" Nov 1 03:53:42.277704 kubelet[1945]: I1101 03:53:42.277691 1945 scope.go:117] "RemoveContainer" containerID="87e3668f19e69ab0e52334c8e6357d6205c819062852eec71626890d64f2faf8" Nov 1 03:53:42.280094 env[1204]: time="2025-11-01T03:53:42.279801291Z" level=info msg="RemoveContainer for \"87e3668f19e69ab0e52334c8e6357d6205c819062852eec71626890d64f2faf8\"" Nov 1 03:53:42.282395 env[1204]: time="2025-11-01T03:53:42.282198177Z" level=info msg="RemoveContainer for \"87e3668f19e69ab0e52334c8e6357d6205c819062852eec71626890d64f2faf8\" returns successfully" Nov 1 03:53:42.284469 kubelet[1945]: I1101 03:53:42.284450 1945 scope.go:117] "RemoveContainer" containerID="e9fbcec8970256e718ae7c1805129bc65767f09b6d49c3c0448399e9a1c0431f" Nov 1 03:53:42.286207 env[1204]: time="2025-11-01T03:53:42.285840690Z" level=info msg="RemoveContainer for \"e9fbcec8970256e718ae7c1805129bc65767f09b6d49c3c0448399e9a1c0431f\"" Nov 1 03:53:42.288000 env[1204]: time="2025-11-01T03:53:42.287975667Z" level=info msg="RemoveContainer for \"e9fbcec8970256e718ae7c1805129bc65767f09b6d49c3c0448399e9a1c0431f\" returns successfully" Nov 1 03:53:42.288362 kubelet[1945]: I1101 03:53:42.288318 1945 scope.go:117] "RemoveContainer" containerID="a89df5c626d5a8ad556d4f5ea399669db1215cc325f5de102ba81f4bad517f3e" Nov 1 03:53:42.289691 env[1204]: time="2025-11-01T03:53:42.289668493Z" level=info msg="RemoveContainer for \"a89df5c626d5a8ad556d4f5ea399669db1215cc325f5de102ba81f4bad517f3e\"" Nov 1 03:53:42.291691 env[1204]: time="2025-11-01T03:53:42.291657387Z" level=info msg="RemoveContainer for \"a89df5c626d5a8ad556d4f5ea399669db1215cc325f5de102ba81f4bad517f3e\" returns successfully" Nov 1 03:53:42.291954 kubelet[1945]: I1101 03:53:42.291935 1945 scope.go:117] "RemoveContainer" containerID="86c919e0e7ae1085714340e928823cbba4ca8c07930435b04f32ebb8579c6178" Nov 1 03:53:42.293186 env[1204]: time="2025-11-01T03:53:42.293162935Z" level=info msg="RemoveContainer for \"86c919e0e7ae1085714340e928823cbba4ca8c07930435b04f32ebb8579c6178\"" Nov 1 03:53:42.296313 env[1204]: time="2025-11-01T03:53:42.296287165Z" level=info msg="RemoveContainer for \"86c919e0e7ae1085714340e928823cbba4ca8c07930435b04f32ebb8579c6178\" returns successfully" Nov 1 03:53:42.296555 kubelet[1945]: I1101 03:53:42.296537 1945 scope.go:117] "RemoveContainer" 
containerID="43efdcbdeafd3a3dcd2fe59cd31aad1f6e033ac03e1b056bc7fea2663e9a7b82" Nov 1 03:53:42.298841 env[1204]: time="2025-11-01T03:53:42.298818290Z" level=info msg="RemoveContainer for \"43efdcbdeafd3a3dcd2fe59cd31aad1f6e033ac03e1b056bc7fea2663e9a7b82\"" Nov 1 03:53:42.300926 env[1204]: time="2025-11-01T03:53:42.300881707Z" level=info msg="RemoveContainer for \"43efdcbdeafd3a3dcd2fe59cd31aad1f6e033ac03e1b056bc7fea2663e9a7b82\" returns successfully" Nov 1 03:53:42.301161 kubelet[1945]: I1101 03:53:42.301147 1945 scope.go:117] "RemoveContainer" containerID="87e3668f19e69ab0e52334c8e6357d6205c819062852eec71626890d64f2faf8" Nov 1 03:53:42.301497 env[1204]: time="2025-11-01T03:53:42.301446168Z" level=error msg="ContainerStatus for \"87e3668f19e69ab0e52334c8e6357d6205c819062852eec71626890d64f2faf8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"87e3668f19e69ab0e52334c8e6357d6205c819062852eec71626890d64f2faf8\": not found" Nov 1 03:53:42.301779 kubelet[1945]: E1101 03:53:42.301761 1945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"87e3668f19e69ab0e52334c8e6357d6205c819062852eec71626890d64f2faf8\": not found" containerID="87e3668f19e69ab0e52334c8e6357d6205c819062852eec71626890d64f2faf8" Nov 1 03:53:42.301900 kubelet[1945]: I1101 03:53:42.301880 1945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"87e3668f19e69ab0e52334c8e6357d6205c819062852eec71626890d64f2faf8"} err="failed to get container status \"87e3668f19e69ab0e52334c8e6357d6205c819062852eec71626890d64f2faf8\": rpc error: code = NotFound desc = an error occurred when try to find container \"87e3668f19e69ab0e52334c8e6357d6205c819062852eec71626890d64f2faf8\": not found" Nov 1 03:53:42.301989 kubelet[1945]: I1101 03:53:42.301978 1945 scope.go:117] "RemoveContainer" containerID="e9fbcec8970256e718ae7c1805129bc65767f09b6d49c3c0448399e9a1c0431f" Nov 1 03:53:42.302371 env[1204]: time="2025-11-01T03:53:42.302311431Z" level=error msg="ContainerStatus for \"e9fbcec8970256e718ae7c1805129bc65767f09b6d49c3c0448399e9a1c0431f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e9fbcec8970256e718ae7c1805129bc65767f09b6d49c3c0448399e9a1c0431f\": not found" Nov 1 03:53:42.302658 kubelet[1945]: E1101 03:53:42.302642 1945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e9fbcec8970256e718ae7c1805129bc65767f09b6d49c3c0448399e9a1c0431f\": not found" containerID="e9fbcec8970256e718ae7c1805129bc65767f09b6d49c3c0448399e9a1c0431f" Nov 1 03:53:42.302771 kubelet[1945]: I1101 03:53:42.302752 1945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e9fbcec8970256e718ae7c1805129bc65767f09b6d49c3c0448399e9a1c0431f"} err="failed to get container status \"e9fbcec8970256e718ae7c1805129bc65767f09b6d49c3c0448399e9a1c0431f\": rpc error: code = NotFound desc = an error occurred when try to find container \"e9fbcec8970256e718ae7c1805129bc65767f09b6d49c3c0448399e9a1c0431f\": not found" Nov 1 03:53:42.302857 kubelet[1945]: I1101 03:53:42.302846 1945 scope.go:117] "RemoveContainer" containerID="a89df5c626d5a8ad556d4f5ea399669db1215cc325f5de102ba81f4bad517f3e" Nov 1 03:53:42.303172 env[1204]: time="2025-11-01T03:53:42.303119838Z" level=error msg="ContainerStatus for 
\"a89df5c626d5a8ad556d4f5ea399669db1215cc325f5de102ba81f4bad517f3e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a89df5c626d5a8ad556d4f5ea399669db1215cc325f5de102ba81f4bad517f3e\": not found" Nov 1 03:53:42.303430 kubelet[1945]: E1101 03:53:42.303411 1945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a89df5c626d5a8ad556d4f5ea399669db1215cc325f5de102ba81f4bad517f3e\": not found" containerID="a89df5c626d5a8ad556d4f5ea399669db1215cc325f5de102ba81f4bad517f3e" Nov 1 03:53:42.303548 kubelet[1945]: I1101 03:53:42.303442 1945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a89df5c626d5a8ad556d4f5ea399669db1215cc325f5de102ba81f4bad517f3e"} err="failed to get container status \"a89df5c626d5a8ad556d4f5ea399669db1215cc325f5de102ba81f4bad517f3e\": rpc error: code = NotFound desc = an error occurred when try to find container \"a89df5c626d5a8ad556d4f5ea399669db1215cc325f5de102ba81f4bad517f3e\": not found" Nov 1 03:53:42.303548 kubelet[1945]: I1101 03:53:42.303459 1945 scope.go:117] "RemoveContainer" containerID="86c919e0e7ae1085714340e928823cbba4ca8c07930435b04f32ebb8579c6178" Nov 1 03:53:42.303925 env[1204]: time="2025-11-01T03:53:42.303882802Z" level=error msg="ContainerStatus for \"86c919e0e7ae1085714340e928823cbba4ca8c07930435b04f32ebb8579c6178\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"86c919e0e7ae1085714340e928823cbba4ca8c07930435b04f32ebb8579c6178\": not found" Nov 1 03:53:42.304144 kubelet[1945]: E1101 03:53:42.304128 1945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"86c919e0e7ae1085714340e928823cbba4ca8c07930435b04f32ebb8579c6178\": not found" containerID="86c919e0e7ae1085714340e928823cbba4ca8c07930435b04f32ebb8579c6178" Nov 1 03:53:42.304234 kubelet[1945]: I1101 03:53:42.304216 1945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"86c919e0e7ae1085714340e928823cbba4ca8c07930435b04f32ebb8579c6178"} err="failed to get container status \"86c919e0e7ae1085714340e928823cbba4ca8c07930435b04f32ebb8579c6178\": rpc error: code = NotFound desc = an error occurred when try to find container \"86c919e0e7ae1085714340e928823cbba4ca8c07930435b04f32ebb8579c6178\": not found" Nov 1 03:53:42.304313 kubelet[1945]: I1101 03:53:42.304302 1945 scope.go:117] "RemoveContainer" containerID="43efdcbdeafd3a3dcd2fe59cd31aad1f6e033ac03e1b056bc7fea2663e9a7b82" Nov 1 03:53:42.304609 env[1204]: time="2025-11-01T03:53:42.304561056Z" level=error msg="ContainerStatus for \"43efdcbdeafd3a3dcd2fe59cd31aad1f6e033ac03e1b056bc7fea2663e9a7b82\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"43efdcbdeafd3a3dcd2fe59cd31aad1f6e033ac03e1b056bc7fea2663e9a7b82\": not found" Nov 1 03:53:42.304897 kubelet[1945]: E1101 03:53:42.304870 1945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"43efdcbdeafd3a3dcd2fe59cd31aad1f6e033ac03e1b056bc7fea2663e9a7b82\": not found" containerID="43efdcbdeafd3a3dcd2fe59cd31aad1f6e033ac03e1b056bc7fea2663e9a7b82" Nov 1 03:53:42.304976 kubelet[1945]: I1101 03:53:42.304902 1945 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"43efdcbdeafd3a3dcd2fe59cd31aad1f6e033ac03e1b056bc7fea2663e9a7b82"} err="failed to get container status \"43efdcbdeafd3a3dcd2fe59cd31aad1f6e033ac03e1b056bc7fea2663e9a7b82\": rpc error: code = NotFound desc = an error occurred when try to find container \"43efdcbdeafd3a3dcd2fe59cd31aad1f6e033ac03e1b056bc7fea2663e9a7b82\": not found" Nov 1 03:53:42.591511 sshd[3559]: Failed password for invalid user vps from 182.230.214.138 port 46042 ssh2 Nov 1 03:53:42.667768 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-87e3668f19e69ab0e52334c8e6357d6205c819062852eec71626890d64f2faf8-rootfs.mount: Deactivated successfully. Nov 1 03:53:42.667937 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-df19635cc4cde627ee520c30fe15c184e7c9544411e501592bd695cd968c8c82-rootfs.mount: Deactivated successfully. Nov 1 03:53:42.668012 systemd[1]: var-lib-kubelet-pods-29a6b27a\x2def91\x2d4446\x2d829f\x2da75ce7239cc5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfck48.mount: Deactivated successfully. Nov 1 03:53:42.668103 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eb0175173881ce97f346cec8b19e71987495f37dbc3cb38a7d185911d4b89eb3-rootfs.mount: Deactivated successfully. Nov 1 03:53:42.668164 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-eb0175173881ce97f346cec8b19e71987495f37dbc3cb38a7d185911d4b89eb3-shm.mount: Deactivated successfully. Nov 1 03:53:42.668246 systemd[1]: var-lib-kubelet-pods-7d5709e6\x2dfa43\x2d4c18\x2d93bf\x2dcfe4733c46ce-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5ls56.mount: Deactivated successfully. Nov 1 03:53:42.668351 systemd[1]: var-lib-kubelet-pods-7d5709e6\x2dfa43\x2d4c18\x2d93bf\x2dcfe4733c46ce-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Nov 1 03:53:42.668420 systemd[1]: var-lib-kubelet-pods-7d5709e6\x2dfa43\x2d4c18\x2d93bf\x2dcfe4733c46ce-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Nov 1 03:53:42.860765 sshd[3559]: Connection closed by invalid user vps 182.230.214.138 port 46042 [preauth] Nov 1 03:53:42.863428 systemd[1]: sshd@38-10.244.101.254:22-182.230.214.138:46042.service: Deactivated successfully. Nov 1 03:53:43.077465 systemd[1]: Started sshd@39-10.244.101.254:22-182.230.214.138:46054.service. Nov 1 03:53:43.600584 kubelet[1945]: I1101 03:53:43.600513 1945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29a6b27a-ef91-4446-829f-a75ce7239cc5" path="/var/lib/kubelet/pods/29a6b27a-ef91-4446-829f-a75ce7239cc5/volumes" Nov 1 03:53:43.601975 kubelet[1945]: I1101 03:53:43.601942 1945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d5709e6-fa43-4c18-93bf-cfe4733c46ce" path="/var/lib/kubelet/pods/7d5709e6-fa43-4c18-93bf-cfe4733c46ce/volumes" Nov 1 03:53:43.708411 sshd[3555]: pam_unix(sshd:session): session closed for user core Nov 1 03:53:43.716072 systemd[1]: sshd@37-10.244.101.254:22-139.178.89.65:47334.service: Deactivated successfully. Nov 1 03:53:43.717828 systemd[1]: session-22.scope: Deactivated successfully. Nov 1 03:53:43.718932 systemd-logind[1186]: Session 22 logged out. Waiting for processes to exit. Nov 1 03:53:43.721136 systemd-logind[1186]: Removed session 22. Nov 1 03:53:43.864217 systemd[1]: Started sshd@40-10.244.101.254:22-139.178.89.65:47340.service. 
Nov 1 03:53:44.292642 sshd[3725]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=182.230.214.138 user=root Nov 1 03:53:44.688257 kubelet[1945]: E1101 03:53:44.688130 1945 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Nov 1 03:53:44.773958 sshd[3729]: Accepted publickey for core from 139.178.89.65 port 47340 ssh2: RSA SHA256:V0PERg6UVsbWZGsAZFbTY/baYEpLUh6zfqFi+pvc+oM Nov 1 03:53:44.777900 sshd[3729]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 03:53:44.791055 systemd-logind[1186]: New session 23 of user core. Nov 1 03:53:44.791283 systemd[1]: Started session-23.scope. Nov 1 03:53:45.773697 sshd[3725]: Failed password for root from 182.230.214.138 port 46054 ssh2 Nov 1 03:53:45.819370 kubelet[1945]: I1101 03:53:45.819312 1945 memory_manager.go:355] "RemoveStaleState removing state" podUID="29a6b27a-ef91-4446-829f-a75ce7239cc5" containerName="cilium-operator" Nov 1 03:53:45.819370 kubelet[1945]: I1101 03:53:45.819359 1945 memory_manager.go:355] "RemoveStaleState removing state" podUID="7d5709e6-fa43-4c18-93bf-cfe4733c46ce" containerName="cilium-agent" Nov 1 03:53:45.831018 systemd[1]: Created slice kubepods-burstable-podae1d4312_6c20_49c5_83d9_bcb34557cf62.slice. Nov 1 03:53:45.971008 sshd[3729]: pam_unix(sshd:session): session closed for user core Nov 1 03:53:45.983539 kubelet[1945]: I1101 03:53:45.983482 1945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ae1d4312-6c20-49c5-83d9-bcb34557cf62-hostproc\") pod \"cilium-b4ntc\" (UID: \"ae1d4312-6c20-49c5-83d9-bcb34557cf62\") " pod="kube-system/cilium-b4ntc" Nov 1 03:53:45.983718 kubelet[1945]: I1101 03:53:45.983566 1945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ae1d4312-6c20-49c5-83d9-bcb34557cf62-lib-modules\") pod \"cilium-b4ntc\" (UID: \"ae1d4312-6c20-49c5-83d9-bcb34557cf62\") " pod="kube-system/cilium-b4ntc" Nov 1 03:53:45.983718 kubelet[1945]: I1101 03:53:45.983620 1945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ae1d4312-6c20-49c5-83d9-bcb34557cf62-cilium-run\") pod \"cilium-b4ntc\" (UID: \"ae1d4312-6c20-49c5-83d9-bcb34557cf62\") " pod="kube-system/cilium-b4ntc" Nov 1 03:53:45.983718 kubelet[1945]: I1101 03:53:45.983649 1945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ae1d4312-6c20-49c5-83d9-bcb34557cf62-etc-cni-netd\") pod \"cilium-b4ntc\" (UID: \"ae1d4312-6c20-49c5-83d9-bcb34557cf62\") " pod="kube-system/cilium-b4ntc" Nov 1 03:53:45.983718 kubelet[1945]: I1101 03:53:45.983675 1945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ae1d4312-6c20-49c5-83d9-bcb34557cf62-host-proc-sys-kernel\") pod \"cilium-b4ntc\" (UID: \"ae1d4312-6c20-49c5-83d9-bcb34557cf62\") " pod="kube-system/cilium-b4ntc" Nov 1 03:53:45.983718 kubelet[1945]: I1101 03:53:45.983705 1945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/ae1d4312-6c20-49c5-83d9-bcb34557cf62-cilium-cgroup\") pod \"cilium-b4ntc\" (UID: \"ae1d4312-6c20-49c5-83d9-bcb34557cf62\") " pod="kube-system/cilium-b4ntc" Nov 1 03:53:45.983906 kubelet[1945]: I1101 03:53:45.983730 1945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ae1d4312-6c20-49c5-83d9-bcb34557cf62-clustermesh-secrets\") pod \"cilium-b4ntc\" (UID: \"ae1d4312-6c20-49c5-83d9-bcb34557cf62\") " pod="kube-system/cilium-b4ntc" Nov 1 03:53:45.983906 kubelet[1945]: I1101 03:53:45.983754 1945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ae1d4312-6c20-49c5-83d9-bcb34557cf62-host-proc-sys-net\") pod \"cilium-b4ntc\" (UID: \"ae1d4312-6c20-49c5-83d9-bcb34557cf62\") " pod="kube-system/cilium-b4ntc" Nov 1 03:53:45.983906 kubelet[1945]: I1101 03:53:45.983785 1945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ae1d4312-6c20-49c5-83d9-bcb34557cf62-bpf-maps\") pod \"cilium-b4ntc\" (UID: \"ae1d4312-6c20-49c5-83d9-bcb34557cf62\") " pod="kube-system/cilium-b4ntc" Nov 1 03:53:45.983906 kubelet[1945]: I1101 03:53:45.983814 1945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ae1d4312-6c20-49c5-83d9-bcb34557cf62-xtables-lock\") pod \"cilium-b4ntc\" (UID: \"ae1d4312-6c20-49c5-83d9-bcb34557cf62\") " pod="kube-system/cilium-b4ntc" Nov 1 03:53:45.983906 kubelet[1945]: I1101 03:53:45.983839 1945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ae1d4312-6c20-49c5-83d9-bcb34557cf62-cilium-config-path\") pod \"cilium-b4ntc\" (UID: \"ae1d4312-6c20-49c5-83d9-bcb34557cf62\") " pod="kube-system/cilium-b4ntc" Nov 1 03:53:45.983906 kubelet[1945]: I1101 03:53:45.983886 1945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ae1d4312-6c20-49c5-83d9-bcb34557cf62-cni-path\") pod \"cilium-b4ntc\" (UID: \"ae1d4312-6c20-49c5-83d9-bcb34557cf62\") " pod="kube-system/cilium-b4ntc" Nov 1 03:53:45.984191 kubelet[1945]: I1101 03:53:45.983909 1945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ae1d4312-6c20-49c5-83d9-bcb34557cf62-cilium-ipsec-secrets\") pod \"cilium-b4ntc\" (UID: \"ae1d4312-6c20-49c5-83d9-bcb34557cf62\") " pod="kube-system/cilium-b4ntc" Nov 1 03:53:45.984191 kubelet[1945]: I1101 03:53:45.983934 1945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s79zj\" (UniqueName: \"kubernetes.io/projected/ae1d4312-6c20-49c5-83d9-bcb34557cf62-kube-api-access-s79zj\") pod \"cilium-b4ntc\" (UID: \"ae1d4312-6c20-49c5-83d9-bcb34557cf62\") " pod="kube-system/cilium-b4ntc" Nov 1 03:53:45.984191 kubelet[1945]: I1101 03:53:45.984007 1945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ae1d4312-6c20-49c5-83d9-bcb34557cf62-hubble-tls\") pod \"cilium-b4ntc\" (UID: \"ae1d4312-6c20-49c5-83d9-bcb34557cf62\") " 
pod="kube-system/cilium-b4ntc" Nov 1 03:53:45.984632 systemd-logind[1186]: Session 23 logged out. Waiting for processes to exit. Nov 1 03:53:45.985154 systemd[1]: sshd@40-10.244.101.254:22-139.178.89.65:47340.service: Deactivated successfully. Nov 1 03:53:45.986585 systemd[1]: session-23.scope: Deactivated successfully. Nov 1 03:53:45.987570 systemd-logind[1186]: Removed session 23. Nov 1 03:53:46.097577 sshd[3725]: Connection closed by authenticating user root 182.230.214.138 port 46054 [preauth] Nov 1 03:53:46.097837 systemd[1]: sshd@39-10.244.101.254:22-182.230.214.138:46054.service: Deactivated successfully. Nov 1 03:53:46.125512 systemd[1]: Started sshd@41-10.244.101.254:22-139.178.89.65:47356.service. Nov 1 03:53:46.135977 env[1204]: time="2025-11-01T03:53:46.135613550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-b4ntc,Uid:ae1d4312-6c20-49c5-83d9-bcb34557cf62,Namespace:kube-system,Attempt:0,}" Nov 1 03:53:46.162415 env[1204]: time="2025-11-01T03:53:46.162279429Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 03:53:46.162415 env[1204]: time="2025-11-01T03:53:46.162357070Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 03:53:46.162415 env[1204]: time="2025-11-01T03:53:46.162370419Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 03:53:46.163375 env[1204]: time="2025-11-01T03:53:46.162719020Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8ce30c579fd7ac86a214b98143897eca0c4cb2a7c8f303052b2d066324e5c14d pid=3753 runtime=io.containerd.runc.v2 Nov 1 03:53:46.177878 systemd[1]: Started cri-containerd-8ce30c579fd7ac86a214b98143897eca0c4cb2a7c8f303052b2d066324e5c14d.scope. Nov 1 03:53:46.212645 env[1204]: time="2025-11-01T03:53:46.212597226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-b4ntc,Uid:ae1d4312-6c20-49c5-83d9-bcb34557cf62,Namespace:kube-system,Attempt:0,} returns sandbox id \"8ce30c579fd7ac86a214b98143897eca0c4cb2a7c8f303052b2d066324e5c14d\"" Nov 1 03:53:46.217443 env[1204]: time="2025-11-01T03:53:46.217398404Z" level=info msg="CreateContainer within sandbox \"8ce30c579fd7ac86a214b98143897eca0c4cb2a7c8f303052b2d066324e5c14d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 1 03:53:46.223858 env[1204]: time="2025-11-01T03:53:46.223819730Z" level=info msg="CreateContainer within sandbox \"8ce30c579fd7ac86a214b98143897eca0c4cb2a7c8f303052b2d066324e5c14d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d90fcc1278895b59b98e2a527ee758f0370e6cd41fad152b5287f040b5b6b7fe\"" Nov 1 03:53:46.224610 env[1204]: time="2025-11-01T03:53:46.224579781Z" level=info msg="StartContainer for \"d90fcc1278895b59b98e2a527ee758f0370e6cd41fad152b5287f040b5b6b7fe\"" Nov 1 03:53:46.241807 systemd[1]: Started cri-containerd-d90fcc1278895b59b98e2a527ee758f0370e6cd41fad152b5287f040b5b6b7fe.scope. Nov 1 03:53:46.258376 systemd[1]: cri-containerd-d90fcc1278895b59b98e2a527ee758f0370e6cd41fad152b5287f040b5b6b7fe.scope: Deactivated successfully. 
Nov 1 03:53:46.270268 env[1204]: time="2025-11-01T03:53:46.270218462Z" level=info msg="shim disconnected" id=d90fcc1278895b59b98e2a527ee758f0370e6cd41fad152b5287f040b5b6b7fe Nov 1 03:53:46.270599 env[1204]: time="2025-11-01T03:53:46.270578981Z" level=warning msg="cleaning up after shim disconnected" id=d90fcc1278895b59b98e2a527ee758f0370e6cd41fad152b5287f040b5b6b7fe namespace=k8s.io Nov 1 03:53:46.270705 env[1204]: time="2025-11-01T03:53:46.270691215Z" level=info msg="cleaning up dead shim" Nov 1 03:53:46.282202 env[1204]: time="2025-11-01T03:53:46.282112982Z" level=warning msg="cleanup warnings time=\"2025-11-01T03:53:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3810 runtime=io.containerd.runc.v2\ntime=\"2025-11-01T03:53:46Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/d90fcc1278895b59b98e2a527ee758f0370e6cd41fad152b5287f040b5b6b7fe/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Nov 1 03:53:46.283119 env[1204]: time="2025-11-01T03:53:46.282830035Z" level=error msg="copy shim log" error="read /proc/self/fd/37: file already closed" Nov 1 03:53:46.283683 env[1204]: time="2025-11-01T03:53:46.283627584Z" level=error msg="Failed to pipe stderr of container \"d90fcc1278895b59b98e2a527ee758f0370e6cd41fad152b5287f040b5b6b7fe\"" error="reading from a closed fifo" Nov 1 03:53:46.283822 env[1204]: time="2025-11-01T03:53:46.283615607Z" level=error msg="Failed to pipe stdout of container \"d90fcc1278895b59b98e2a527ee758f0370e6cd41fad152b5287f040b5b6b7fe\"" error="reading from a closed fifo" Nov 1 03:53:46.284930 env[1204]: time="2025-11-01T03:53:46.284825753Z" level=error msg="StartContainer for \"d90fcc1278895b59b98e2a527ee758f0370e6cd41fad152b5287f040b5b6b7fe\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Nov 1 03:53:46.285384 kubelet[1945]: E1101 03:53:46.285251 1945 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="d90fcc1278895b59b98e2a527ee758f0370e6cd41fad152b5287f040b5b6b7fe" Nov 1 03:53:46.290224 kubelet[1945]: E1101 03:53:46.290187 1945 kuberuntime_manager.go:1341] "Unhandled Error" err=< Nov 1 03:53:46.290224 kubelet[1945]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Nov 1 03:53:46.290224 kubelet[1945]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Nov 1 03:53:46.290224 kubelet[1945]: rm /hostbin/cilium-mount Nov 1 03:53:46.290555 kubelet[1945]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s79zj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-b4ntc_kube-system(ae1d4312-6c20-49c5-83d9-bcb34557cf62): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Nov 1 03:53:46.290555 kubelet[1945]: > logger="UnhandledError" Nov 1 03:53:46.292039 kubelet[1945]: E1101 03:53:46.292005 1945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-b4ntc" podUID="ae1d4312-6c20-49c5-83d9-bcb34557cf62" Nov 1 03:53:46.363811 systemd[1]: Started sshd@42-10.244.101.254:22-182.230.214.138:46062.service. Nov 1 03:53:47.040898 sshd[3745]: Accepted publickey for core from 139.178.89.65 port 47356 ssh2: RSA SHA256:V0PERg6UVsbWZGsAZFbTY/baYEpLUh6zfqFi+pvc+oM Nov 1 03:53:47.043198 sshd[3745]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 03:53:47.051122 systemd[1]: Started session-24.scope. Nov 1 03:53:47.051530 systemd-logind[1186]: New session 24 of user core. Nov 1 03:53:47.303121 env[1204]: time="2025-11-01T03:53:47.302901665Z" level=info msg="CreateContainer within sandbox \"8ce30c579fd7ac86a214b98143897eca0c4cb2a7c8f303052b2d066324e5c14d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Nov 1 03:53:47.324751 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2825207355.mount: Deactivated successfully. 
Nov 1 03:53:47.334717 env[1204]: time="2025-11-01T03:53:47.334281532Z" level=info msg="CreateContainer within sandbox \"8ce30c579fd7ac86a214b98143897eca0c4cb2a7c8f303052b2d066324e5c14d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"91a674500faeebb76e9fe51a57151b22b29bfbaa3483b15884db1345d27b47c6\"" Nov 1 03:53:47.336669 env[1204]: time="2025-11-01T03:53:47.336584935Z" level=info msg="StartContainer for \"91a674500faeebb76e9fe51a57151b22b29bfbaa3483b15884db1345d27b47c6\"" Nov 1 03:53:47.372090 systemd[1]: Started cri-containerd-91a674500faeebb76e9fe51a57151b22b29bfbaa3483b15884db1345d27b47c6.scope. Nov 1 03:53:47.399298 systemd[1]: cri-containerd-91a674500faeebb76e9fe51a57151b22b29bfbaa3483b15884db1345d27b47c6.scope: Deactivated successfully. Nov 1 03:53:47.407070 env[1204]: time="2025-11-01T03:53:47.406999970Z" level=info msg="shim disconnected" id=91a674500faeebb76e9fe51a57151b22b29bfbaa3483b15884db1345d27b47c6 Nov 1 03:53:47.407308 env[1204]: time="2025-11-01T03:53:47.407290697Z" level=warning msg="cleaning up after shim disconnected" id=91a674500faeebb76e9fe51a57151b22b29bfbaa3483b15884db1345d27b47c6 namespace=k8s.io Nov 1 03:53:47.407519 env[1204]: time="2025-11-01T03:53:47.407502637Z" level=info msg="cleaning up dead shim" Nov 1 03:53:47.418306 env[1204]: time="2025-11-01T03:53:47.418258679Z" level=warning msg="cleanup warnings time=\"2025-11-01T03:53:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3850 runtime=io.containerd.runc.v2\ntime=\"2025-11-01T03:53:47Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/91a674500faeebb76e9fe51a57151b22b29bfbaa3483b15884db1345d27b47c6/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Nov 1 03:53:47.418808 env[1204]: time="2025-11-01T03:53:47.418747389Z" level=error msg="copy shim log" error="read /proc/self/fd/37: file already closed" Nov 1 03:53:47.419556 env[1204]: time="2025-11-01T03:53:47.419432776Z" level=error msg="Failed to pipe stderr of container \"91a674500faeebb76e9fe51a57151b22b29bfbaa3483b15884db1345d27b47c6\"" error="reading from a closed fifo" Nov 1 03:53:47.419630 env[1204]: time="2025-11-01T03:53:47.419606211Z" level=error msg="Failed to pipe stdout of container \"91a674500faeebb76e9fe51a57151b22b29bfbaa3483b15884db1345d27b47c6\"" error="reading from a closed fifo" Nov 1 03:53:47.420552 env[1204]: time="2025-11-01T03:53:47.420496445Z" level=error msg="StartContainer for \"91a674500faeebb76e9fe51a57151b22b29bfbaa3483b15884db1345d27b47c6\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Nov 1 03:53:47.421401 kubelet[1945]: E1101 03:53:47.420813 1945 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="91a674500faeebb76e9fe51a57151b22b29bfbaa3483b15884db1345d27b47c6" Nov 1 03:53:47.421401 kubelet[1945]: E1101 03:53:47.421016 1945 kuberuntime_manager.go:1341] "Unhandled Error" err=< Nov 1 03:53:47.421401 kubelet[1945]: init container 
&Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Nov 1 03:53:47.421401 kubelet[1945]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Nov 1 03:53:47.421401 kubelet[1945]: rm /hostbin/cilium-mount Nov 1 03:53:47.421401 kubelet[1945]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s79zj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-b4ntc_kube-system(ae1d4312-6c20-49c5-83d9-bcb34557cf62): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Nov 1 03:53:47.421401 kubelet[1945]: > logger="UnhandledError" Nov 1 03:53:47.422546 kubelet[1945]: E1101 03:53:47.422490 1945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-b4ntc" podUID="ae1d4312-6c20-49c5-83d9-bcb34557cf62" Nov 1 03:53:47.470790 sshd[3825]: Invalid user devuser from 182.230.214.138 port 46062 Nov 1 03:53:47.598805 kubelet[1945]: E1101 03:53:47.598748 1945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-fgdzc" podUID="70a5a370-f825-46a8-96e7-d3b8897c58b5" Nov 1 03:53:47.735597 sshd[3825]: pam_faillock(sshd:auth): User unknown Nov 1 03:53:47.736138 sshd[3825]: pam_unix(sshd:auth): check pass; user unknown Nov 1 03:53:47.736186 sshd[3825]: pam_unix(sshd:auth): authentication 
failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=182.230.214.138 Nov 1 03:53:47.736664 sshd[3825]: pam_faillock(sshd:auth): User unknown Nov 1 03:53:47.832263 sshd[3745]: pam_unix(sshd:session): session closed for user core Nov 1 03:53:47.835747 systemd[1]: sshd@41-10.244.101.254:22-139.178.89.65:47356.service: Deactivated successfully. Nov 1 03:53:47.837584 systemd[1]: session-24.scope: Deactivated successfully. Nov 1 03:53:47.839169 systemd-logind[1186]: Session 24 logged out. Waiting for processes to exit. Nov 1 03:53:47.841094 systemd-logind[1186]: Removed session 24. Nov 1 03:53:47.988161 systemd[1]: Started sshd@43-10.244.101.254:22-139.178.89.65:33258.service. Nov 1 03:53:48.097631 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-91a674500faeebb76e9fe51a57151b22b29bfbaa3483b15884db1345d27b47c6-rootfs.mount: Deactivated successfully. Nov 1 03:53:48.301812 kubelet[1945]: I1101 03:53:48.301723 1945 scope.go:117] "RemoveContainer" containerID="d90fcc1278895b59b98e2a527ee758f0370e6cd41fad152b5287f040b5b6b7fe" Nov 1 03:53:48.302824 env[1204]: time="2025-11-01T03:53:48.302776699Z" level=info msg="StopPodSandbox for \"8ce30c579fd7ac86a214b98143897eca0c4cb2a7c8f303052b2d066324e5c14d\"" Nov 1 03:53:48.303067 env[1204]: time="2025-11-01T03:53:48.303040697Z" level=info msg="Container to stop \"d90fcc1278895b59b98e2a527ee758f0370e6cd41fad152b5287f040b5b6b7fe\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 1 03:53:48.303159 env[1204]: time="2025-11-01T03:53:48.303143005Z" level=info msg="Container to stop \"91a674500faeebb76e9fe51a57151b22b29bfbaa3483b15884db1345d27b47c6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 1 03:53:48.306034 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8ce30c579fd7ac86a214b98143897eca0c4cb2a7c8f303052b2d066324e5c14d-shm.mount: Deactivated successfully. Nov 1 03:53:48.308096 env[1204]: time="2025-11-01T03:53:48.308068024Z" level=info msg="RemoveContainer for \"d90fcc1278895b59b98e2a527ee758f0370e6cd41fad152b5287f040b5b6b7fe\"" Nov 1 03:53:48.310624 env[1204]: time="2025-11-01T03:53:48.310596654Z" level=info msg="RemoveContainer for \"d90fcc1278895b59b98e2a527ee758f0370e6cd41fad152b5287f040b5b6b7fe\" returns successfully" Nov 1 03:53:48.316253 systemd[1]: cri-containerd-8ce30c579fd7ac86a214b98143897eca0c4cb2a7c8f303052b2d066324e5c14d.scope: Deactivated successfully. Nov 1 03:53:48.340740 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8ce30c579fd7ac86a214b98143897eca0c4cb2a7c8f303052b2d066324e5c14d-rootfs.mount: Deactivated successfully. 
Nov 1 03:53:48.346800 env[1204]: time="2025-11-01T03:53:48.346745249Z" level=info msg="shim disconnected" id=8ce30c579fd7ac86a214b98143897eca0c4cb2a7c8f303052b2d066324e5c14d Nov 1 03:53:48.346961 env[1204]: time="2025-11-01T03:53:48.346803294Z" level=warning msg="cleaning up after shim disconnected" id=8ce30c579fd7ac86a214b98143897eca0c4cb2a7c8f303052b2d066324e5c14d namespace=k8s.io Nov 1 03:53:48.346961 env[1204]: time="2025-11-01T03:53:48.346816049Z" level=info msg="cleaning up dead shim" Nov 1 03:53:48.355972 env[1204]: time="2025-11-01T03:53:48.355903718Z" level=warning msg="cleanup warnings time=\"2025-11-01T03:53:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3891 runtime=io.containerd.runc.v2\n" Nov 1 03:53:48.356579 env[1204]: time="2025-11-01T03:53:48.356539959Z" level=info msg="TearDown network for sandbox \"8ce30c579fd7ac86a214b98143897eca0c4cb2a7c8f303052b2d066324e5c14d\" successfully" Nov 1 03:53:48.356643 env[1204]: time="2025-11-01T03:53:48.356586820Z" level=info msg="StopPodSandbox for \"8ce30c579fd7ac86a214b98143897eca0c4cb2a7c8f303052b2d066324e5c14d\" returns successfully" Nov 1 03:53:48.503270 kubelet[1945]: I1101 03:53:48.503195 1945 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s79zj\" (UniqueName: \"kubernetes.io/projected/ae1d4312-6c20-49c5-83d9-bcb34557cf62-kube-api-access-s79zj\") pod \"ae1d4312-6c20-49c5-83d9-bcb34557cf62\" (UID: \"ae1d4312-6c20-49c5-83d9-bcb34557cf62\") " Nov 1 03:53:48.504590 kubelet[1945]: I1101 03:53:48.504473 1945 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ae1d4312-6c20-49c5-83d9-bcb34557cf62-etc-cni-netd\") pod \"ae1d4312-6c20-49c5-83d9-bcb34557cf62\" (UID: \"ae1d4312-6c20-49c5-83d9-bcb34557cf62\") " Nov 1 03:53:48.504590 kubelet[1945]: I1101 03:53:48.504556 1945 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ae1d4312-6c20-49c5-83d9-bcb34557cf62-bpf-maps\") pod \"ae1d4312-6c20-49c5-83d9-bcb34557cf62\" (UID: \"ae1d4312-6c20-49c5-83d9-bcb34557cf62\") " Nov 1 03:53:48.504873 kubelet[1945]: I1101 03:53:48.504630 1945 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ae1d4312-6c20-49c5-83d9-bcb34557cf62-cni-path\") pod \"ae1d4312-6c20-49c5-83d9-bcb34557cf62\" (UID: \"ae1d4312-6c20-49c5-83d9-bcb34557cf62\") " Nov 1 03:53:48.504873 kubelet[1945]: I1101 03:53:48.504699 1945 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ae1d4312-6c20-49c5-83d9-bcb34557cf62-cilium-config-path\") pod \"ae1d4312-6c20-49c5-83d9-bcb34557cf62\" (UID: \"ae1d4312-6c20-49c5-83d9-bcb34557cf62\") " Nov 1 03:53:48.504873 kubelet[1945]: I1101 03:53:48.504753 1945 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ae1d4312-6c20-49c5-83d9-bcb34557cf62-hostproc\") pod \"ae1d4312-6c20-49c5-83d9-bcb34557cf62\" (UID: \"ae1d4312-6c20-49c5-83d9-bcb34557cf62\") " Nov 1 03:53:48.504873 kubelet[1945]: I1101 03:53:48.504791 1945 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ae1d4312-6c20-49c5-83d9-bcb34557cf62-host-proc-sys-kernel\") pod \"ae1d4312-6c20-49c5-83d9-bcb34557cf62\" (UID: 
\"ae1d4312-6c20-49c5-83d9-bcb34557cf62\") " Nov 1 03:53:48.504873 kubelet[1945]: I1101 03:53:48.504839 1945 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ae1d4312-6c20-49c5-83d9-bcb34557cf62-cilium-run\") pod \"ae1d4312-6c20-49c5-83d9-bcb34557cf62\" (UID: \"ae1d4312-6c20-49c5-83d9-bcb34557cf62\") " Nov 1 03:53:48.504873 kubelet[1945]: I1101 03:53:48.504876 1945 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ae1d4312-6c20-49c5-83d9-bcb34557cf62-lib-modules\") pod \"ae1d4312-6c20-49c5-83d9-bcb34557cf62\" (UID: \"ae1d4312-6c20-49c5-83d9-bcb34557cf62\") " Nov 1 03:53:48.505491 kubelet[1945]: I1101 03:53:48.504923 1945 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ae1d4312-6c20-49c5-83d9-bcb34557cf62-xtables-lock\") pod \"ae1d4312-6c20-49c5-83d9-bcb34557cf62\" (UID: \"ae1d4312-6c20-49c5-83d9-bcb34557cf62\") " Nov 1 03:53:48.505491 kubelet[1945]: I1101 03:53:48.504969 1945 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ae1d4312-6c20-49c5-83d9-bcb34557cf62-cilium-cgroup\") pod \"ae1d4312-6c20-49c5-83d9-bcb34557cf62\" (UID: \"ae1d4312-6c20-49c5-83d9-bcb34557cf62\") " Nov 1 03:53:48.505491 kubelet[1945]: I1101 03:53:48.505019 1945 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ae1d4312-6c20-49c5-83d9-bcb34557cf62-cilium-ipsec-secrets\") pod \"ae1d4312-6c20-49c5-83d9-bcb34557cf62\" (UID: \"ae1d4312-6c20-49c5-83d9-bcb34557cf62\") " Nov 1 03:53:48.505491 kubelet[1945]: I1101 03:53:48.505065 1945 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ae1d4312-6c20-49c5-83d9-bcb34557cf62-clustermesh-secrets\") pod \"ae1d4312-6c20-49c5-83d9-bcb34557cf62\" (UID: \"ae1d4312-6c20-49c5-83d9-bcb34557cf62\") " Nov 1 03:53:48.505491 kubelet[1945]: I1101 03:53:48.505104 1945 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ae1d4312-6c20-49c5-83d9-bcb34557cf62-host-proc-sys-net\") pod \"ae1d4312-6c20-49c5-83d9-bcb34557cf62\" (UID: \"ae1d4312-6c20-49c5-83d9-bcb34557cf62\") " Nov 1 03:53:48.505491 kubelet[1945]: I1101 03:53:48.505154 1945 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ae1d4312-6c20-49c5-83d9-bcb34557cf62-hubble-tls\") pod \"ae1d4312-6c20-49c5-83d9-bcb34557cf62\" (UID: \"ae1d4312-6c20-49c5-83d9-bcb34557cf62\") " Nov 1 03:53:48.507099 kubelet[1945]: I1101 03:53:48.506596 1945 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae1d4312-6c20-49c5-83d9-bcb34557cf62-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "ae1d4312-6c20-49c5-83d9-bcb34557cf62" (UID: "ae1d4312-6c20-49c5-83d9-bcb34557cf62"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 03:53:48.507099 kubelet[1945]: I1101 03:53:48.506749 1945 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae1d4312-6c20-49c5-83d9-bcb34557cf62-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "ae1d4312-6c20-49c5-83d9-bcb34557cf62" (UID: "ae1d4312-6c20-49c5-83d9-bcb34557cf62"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 03:53:48.507099 kubelet[1945]: I1101 03:53:48.506792 1945 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae1d4312-6c20-49c5-83d9-bcb34557cf62-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "ae1d4312-6c20-49c5-83d9-bcb34557cf62" (UID: "ae1d4312-6c20-49c5-83d9-bcb34557cf62"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 03:53:48.507099 kubelet[1945]: I1101 03:53:48.506860 1945 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae1d4312-6c20-49c5-83d9-bcb34557cf62-cni-path" (OuterVolumeSpecName: "cni-path") pod "ae1d4312-6c20-49c5-83d9-bcb34557cf62" (UID: "ae1d4312-6c20-49c5-83d9-bcb34557cf62"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 03:53:48.508517 kubelet[1945]: I1101 03:53:48.508047 1945 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae1d4312-6c20-49c5-83d9-bcb34557cf62-hostproc" (OuterVolumeSpecName: "hostproc") pod "ae1d4312-6c20-49c5-83d9-bcb34557cf62" (UID: "ae1d4312-6c20-49c5-83d9-bcb34557cf62"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 03:53:48.509240 kubelet[1945]: I1101 03:53:48.508790 1945 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae1d4312-6c20-49c5-83d9-bcb34557cf62-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "ae1d4312-6c20-49c5-83d9-bcb34557cf62" (UID: "ae1d4312-6c20-49c5-83d9-bcb34557cf62"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 03:53:48.509240 kubelet[1945]: I1101 03:53:48.508911 1945 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae1d4312-6c20-49c5-83d9-bcb34557cf62-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "ae1d4312-6c20-49c5-83d9-bcb34557cf62" (UID: "ae1d4312-6c20-49c5-83d9-bcb34557cf62"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 03:53:48.509240 kubelet[1945]: I1101 03:53:48.508952 1945 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae1d4312-6c20-49c5-83d9-bcb34557cf62-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "ae1d4312-6c20-49c5-83d9-bcb34557cf62" (UID: "ae1d4312-6c20-49c5-83d9-bcb34557cf62"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 03:53:48.509240 kubelet[1945]: I1101 03:53:48.508993 1945 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae1d4312-6c20-49c5-83d9-bcb34557cf62-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "ae1d4312-6c20-49c5-83d9-bcb34557cf62" (UID: "ae1d4312-6c20-49c5-83d9-bcb34557cf62"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 03:53:48.510766 kubelet[1945]: I1101 03:53:48.510694 1945 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae1d4312-6c20-49c5-83d9-bcb34557cf62-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "ae1d4312-6c20-49c5-83d9-bcb34557cf62" (UID: "ae1d4312-6c20-49c5-83d9-bcb34557cf62"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 03:53:48.514754 kubelet[1945]: I1101 03:53:48.514725 1945 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae1d4312-6c20-49c5-83d9-bcb34557cf62-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ae1d4312-6c20-49c5-83d9-bcb34557cf62" (UID: "ae1d4312-6c20-49c5-83d9-bcb34557cf62"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 1 03:53:48.519090 systemd[1]: var-lib-kubelet-pods-ae1d4312\x2d6c20\x2d49c5\x2d83d9\x2dbcb34557cf62-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ds79zj.mount: Deactivated successfully. Nov 1 03:53:48.521949 systemd[1]: var-lib-kubelet-pods-ae1d4312\x2d6c20\x2d49c5\x2d83d9\x2dbcb34557cf62-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Nov 1 03:53:48.523906 kubelet[1945]: I1101 03:53:48.523865 1945 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae1d4312-6c20-49c5-83d9-bcb34557cf62-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "ae1d4312-6c20-49c5-83d9-bcb34557cf62" (UID: "ae1d4312-6c20-49c5-83d9-bcb34557cf62"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 1 03:53:48.526181 kubelet[1945]: I1101 03:53:48.526150 1945 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae1d4312-6c20-49c5-83d9-bcb34557cf62-kube-api-access-s79zj" (OuterVolumeSpecName: "kube-api-access-s79zj") pod "ae1d4312-6c20-49c5-83d9-bcb34557cf62" (UID: "ae1d4312-6c20-49c5-83d9-bcb34557cf62"). InnerVolumeSpecName "kube-api-access-s79zj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 1 03:53:48.526479 kubelet[1945]: I1101 03:53:48.526448 1945 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae1d4312-6c20-49c5-83d9-bcb34557cf62-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "ae1d4312-6c20-49c5-83d9-bcb34557cf62" (UID: "ae1d4312-6c20-49c5-83d9-bcb34557cf62"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 1 03:53:48.527497 kubelet[1945]: I1101 03:53:48.527470 1945 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae1d4312-6c20-49c5-83d9-bcb34557cf62-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "ae1d4312-6c20-49c5-83d9-bcb34557cf62" (UID: "ae1d4312-6c20-49c5-83d9-bcb34557cf62"). InnerVolumeSpecName "cilium-ipsec-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 1 03:53:48.606446 kubelet[1945]: I1101 03:53:48.606319 1945 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ae1d4312-6c20-49c5-83d9-bcb34557cf62-lib-modules\") on node \"srv-n2oyf.gb1.brightbox.com\" DevicePath \"\"" Nov 1 03:53:48.606446 kubelet[1945]: I1101 03:53:48.606436 1945 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ae1d4312-6c20-49c5-83d9-bcb34557cf62-xtables-lock\") on node \"srv-n2oyf.gb1.brightbox.com\" DevicePath \"\"" Nov 1 03:53:48.606446 kubelet[1945]: I1101 03:53:48.606463 1945 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ae1d4312-6c20-49c5-83d9-bcb34557cf62-cilium-cgroup\") on node \"srv-n2oyf.gb1.brightbox.com\" DevicePath \"\"" Nov 1 03:53:48.606851 kubelet[1945]: I1101 03:53:48.606487 1945 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ae1d4312-6c20-49c5-83d9-bcb34557cf62-cilium-ipsec-secrets\") on node \"srv-n2oyf.gb1.brightbox.com\" DevicePath \"\"" Nov 1 03:53:48.606851 kubelet[1945]: I1101 03:53:48.606517 1945 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ae1d4312-6c20-49c5-83d9-bcb34557cf62-hubble-tls\") on node \"srv-n2oyf.gb1.brightbox.com\" DevicePath \"\"" Nov 1 03:53:48.606851 kubelet[1945]: I1101 03:53:48.606547 1945 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ae1d4312-6c20-49c5-83d9-bcb34557cf62-clustermesh-secrets\") on node \"srv-n2oyf.gb1.brightbox.com\" DevicePath \"\"" Nov 1 03:53:48.606851 kubelet[1945]: I1101 03:53:48.606566 1945 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ae1d4312-6c20-49c5-83d9-bcb34557cf62-host-proc-sys-net\") on node \"srv-n2oyf.gb1.brightbox.com\" DevicePath \"\"" Nov 1 03:53:48.606851 kubelet[1945]: I1101 03:53:48.606586 1945 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-s79zj\" (UniqueName: \"kubernetes.io/projected/ae1d4312-6c20-49c5-83d9-bcb34557cf62-kube-api-access-s79zj\") on node \"srv-n2oyf.gb1.brightbox.com\" DevicePath \"\"" Nov 1 03:53:48.606851 kubelet[1945]: I1101 03:53:48.606607 1945 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ae1d4312-6c20-49c5-83d9-bcb34557cf62-etc-cni-netd\") on node \"srv-n2oyf.gb1.brightbox.com\" DevicePath \"\"" Nov 1 03:53:48.606851 kubelet[1945]: I1101 03:53:48.606626 1945 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ae1d4312-6c20-49c5-83d9-bcb34557cf62-bpf-maps\") on node \"srv-n2oyf.gb1.brightbox.com\" DevicePath \"\"" Nov 1 03:53:48.606851 kubelet[1945]: I1101 03:53:48.606643 1945 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ae1d4312-6c20-49c5-83d9-bcb34557cf62-cni-path\") on node \"srv-n2oyf.gb1.brightbox.com\" DevicePath \"\"" Nov 1 03:53:48.606851 kubelet[1945]: I1101 03:53:48.606663 1945 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ae1d4312-6c20-49c5-83d9-bcb34557cf62-hostproc\") on node \"srv-n2oyf.gb1.brightbox.com\" DevicePath \"\"" Nov 1 03:53:48.606851 kubelet[1945]: I1101 03:53:48.606681 1945 
reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ae1d4312-6c20-49c5-83d9-bcb34557cf62-host-proc-sys-kernel\") on node \"srv-n2oyf.gb1.brightbox.com\" DevicePath \"\"" Nov 1 03:53:48.606851 kubelet[1945]: I1101 03:53:48.606700 1945 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ae1d4312-6c20-49c5-83d9-bcb34557cf62-cilium-config-path\") on node \"srv-n2oyf.gb1.brightbox.com\" DevicePath \"\"" Nov 1 03:53:48.606851 kubelet[1945]: I1101 03:53:48.606718 1945 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ae1d4312-6c20-49c5-83d9-bcb34557cf62-cilium-run\") on node \"srv-n2oyf.gb1.brightbox.com\" DevicePath \"\"" Nov 1 03:53:48.890209 sshd[3870]: Accepted publickey for core from 139.178.89.65 port 33258 ssh2: RSA SHA256:V0PERg6UVsbWZGsAZFbTY/baYEpLUh6zfqFi+pvc+oM Nov 1 03:53:48.892055 sshd[3870]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 03:53:48.900402 systemd-logind[1186]: New session 25 of user core. Nov 1 03:53:48.903637 systemd[1]: Started session-25.scope. Nov 1 03:53:49.097615 systemd[1]: var-lib-kubelet-pods-ae1d4312\x2d6c20\x2d49c5\x2d83d9\x2dbcb34557cf62-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Nov 1 03:53:49.097891 systemd[1]: var-lib-kubelet-pods-ae1d4312\x2d6c20\x2d49c5\x2d83d9\x2dbcb34557cf62-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Nov 1 03:53:49.304912 kubelet[1945]: I1101 03:53:49.304814 1945 scope.go:117] "RemoveContainer" containerID="91a674500faeebb76e9fe51a57151b22b29bfbaa3483b15884db1345d27b47c6" Nov 1 03:53:49.308778 systemd[1]: Removed slice kubepods-burstable-podae1d4312_6c20_49c5_83d9_bcb34557cf62.slice. Nov 1 03:53:49.311138 env[1204]: time="2025-11-01T03:53:49.310820262Z" level=info msg="RemoveContainer for \"91a674500faeebb76e9fe51a57151b22b29bfbaa3483b15884db1345d27b47c6\"" Nov 1 03:53:49.314019 env[1204]: time="2025-11-01T03:53:49.313945620Z" level=info msg="RemoveContainer for \"91a674500faeebb76e9fe51a57151b22b29bfbaa3483b15884db1345d27b47c6\" returns successfully" Nov 1 03:53:49.379886 kubelet[1945]: W1101 03:53:49.379833 1945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae1d4312_6c20_49c5_83d9_bcb34557cf62.slice/cri-containerd-d90fcc1278895b59b98e2a527ee758f0370e6cd41fad152b5287f040b5b6b7fe.scope WatchSource:0}: container "d90fcc1278895b59b98e2a527ee758f0370e6cd41fad152b5287f040b5b6b7fe" in namespace "k8s.io": not found Nov 1 03:53:49.394600 kubelet[1945]: I1101 03:53:49.394557 1945 memory_manager.go:355] "RemoveStaleState removing state" podUID="ae1d4312-6c20-49c5-83d9-bcb34557cf62" containerName="mount-cgroup" Nov 1 03:53:49.394799 kubelet[1945]: I1101 03:53:49.394787 1945 memory_manager.go:355] "RemoveStaleState removing state" podUID="ae1d4312-6c20-49c5-83d9-bcb34557cf62" containerName="mount-cgroup" Nov 1 03:53:49.404923 systemd[1]: Created slice kubepods-burstable-pod32757b63_93bc_43b2_9440_59867e39056e.slice. 
Nov 1 03:53:49.411486 kubelet[1945]: I1101 03:53:49.411454 1945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/32757b63-93bc-43b2-9440-59867e39056e-etc-cni-netd\") pod \"cilium-sk99s\" (UID: \"32757b63-93bc-43b2-9440-59867e39056e\") " pod="kube-system/cilium-sk99s" Nov 1 03:53:49.411603 kubelet[1945]: I1101 03:53:49.411488 1945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbqsl\" (UniqueName: \"kubernetes.io/projected/32757b63-93bc-43b2-9440-59867e39056e-kube-api-access-kbqsl\") pod \"cilium-sk99s\" (UID: \"32757b63-93bc-43b2-9440-59867e39056e\") " pod="kube-system/cilium-sk99s" Nov 1 03:53:49.411603 kubelet[1945]: I1101 03:53:49.411512 1945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/32757b63-93bc-43b2-9440-59867e39056e-bpf-maps\") pod \"cilium-sk99s\" (UID: \"32757b63-93bc-43b2-9440-59867e39056e\") " pod="kube-system/cilium-sk99s" Nov 1 03:53:49.411603 kubelet[1945]: I1101 03:53:49.411532 1945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/32757b63-93bc-43b2-9440-59867e39056e-hubble-tls\") pod \"cilium-sk99s\" (UID: \"32757b63-93bc-43b2-9440-59867e39056e\") " pod="kube-system/cilium-sk99s" Nov 1 03:53:49.411603 kubelet[1945]: I1101 03:53:49.411547 1945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/32757b63-93bc-43b2-9440-59867e39056e-cni-path\") pod \"cilium-sk99s\" (UID: \"32757b63-93bc-43b2-9440-59867e39056e\") " pod="kube-system/cilium-sk99s" Nov 1 03:53:49.411603 kubelet[1945]: I1101 03:53:49.411564 1945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/32757b63-93bc-43b2-9440-59867e39056e-host-proc-sys-kernel\") pod \"cilium-sk99s\" (UID: \"32757b63-93bc-43b2-9440-59867e39056e\") " pod="kube-system/cilium-sk99s" Nov 1 03:53:49.411603 kubelet[1945]: I1101 03:53:49.411580 1945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/32757b63-93bc-43b2-9440-59867e39056e-lib-modules\") pod \"cilium-sk99s\" (UID: \"32757b63-93bc-43b2-9440-59867e39056e\") " pod="kube-system/cilium-sk99s" Nov 1 03:53:49.411603 kubelet[1945]: I1101 03:53:49.411597 1945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/32757b63-93bc-43b2-9440-59867e39056e-clustermesh-secrets\") pod \"cilium-sk99s\" (UID: \"32757b63-93bc-43b2-9440-59867e39056e\") " pod="kube-system/cilium-sk99s" Nov 1 03:53:49.411852 kubelet[1945]: I1101 03:53:49.411614 1945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/32757b63-93bc-43b2-9440-59867e39056e-cilium-run\") pod \"cilium-sk99s\" (UID: \"32757b63-93bc-43b2-9440-59867e39056e\") " pod="kube-system/cilium-sk99s" Nov 1 03:53:49.411852 kubelet[1945]: I1101 03:53:49.411630 1945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/32757b63-93bc-43b2-9440-59867e39056e-xtables-lock\") pod \"cilium-sk99s\" (UID: \"32757b63-93bc-43b2-9440-59867e39056e\") " pod="kube-system/cilium-sk99s" Nov 1 03:53:49.411852 kubelet[1945]: I1101 03:53:49.411646 1945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/32757b63-93bc-43b2-9440-59867e39056e-cilium-ipsec-secrets\") pod \"cilium-sk99s\" (UID: \"32757b63-93bc-43b2-9440-59867e39056e\") " pod="kube-system/cilium-sk99s" Nov 1 03:53:49.411852 kubelet[1945]: I1101 03:53:49.411663 1945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/32757b63-93bc-43b2-9440-59867e39056e-host-proc-sys-net\") pod \"cilium-sk99s\" (UID: \"32757b63-93bc-43b2-9440-59867e39056e\") " pod="kube-system/cilium-sk99s" Nov 1 03:53:49.411852 kubelet[1945]: I1101 03:53:49.411679 1945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/32757b63-93bc-43b2-9440-59867e39056e-hostproc\") pod \"cilium-sk99s\" (UID: \"32757b63-93bc-43b2-9440-59867e39056e\") " pod="kube-system/cilium-sk99s" Nov 1 03:53:49.411852 kubelet[1945]: I1101 03:53:49.411698 1945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/32757b63-93bc-43b2-9440-59867e39056e-cilium-cgroup\") pod \"cilium-sk99s\" (UID: \"32757b63-93bc-43b2-9440-59867e39056e\") " pod="kube-system/cilium-sk99s" Nov 1 03:53:49.411852 kubelet[1945]: I1101 03:53:49.411715 1945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/32757b63-93bc-43b2-9440-59867e39056e-cilium-config-path\") pod \"cilium-sk99s\" (UID: \"32757b63-93bc-43b2-9440-59867e39056e\") " pod="kube-system/cilium-sk99s" Nov 1 03:53:49.599310 kubelet[1945]: E1101 03:53:49.599174 1945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-fgdzc" podUID="70a5a370-f825-46a8-96e7-d3b8897c58b5" Nov 1 03:53:49.607025 kubelet[1945]: I1101 03:53:49.606986 1945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae1d4312-6c20-49c5-83d9-bcb34557cf62" path="/var/lib/kubelet/pods/ae1d4312-6c20-49c5-83d9-bcb34557cf62/volumes" Nov 1 03:53:49.689040 kubelet[1945]: E1101 03:53:49.688970 1945 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Nov 1 03:53:49.709824 env[1204]: time="2025-11-01T03:53:49.709692753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-sk99s,Uid:32757b63-93bc-43b2-9440-59867e39056e,Namespace:kube-system,Attempt:0,}" Nov 1 03:53:49.729815 env[1204]: time="2025-11-01T03:53:49.729727707Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 03:53:49.729815 env[1204]: time="2025-11-01T03:53:49.729771070Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 03:53:49.730159 env[1204]: time="2025-11-01T03:53:49.730088033Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 03:53:49.730453 env[1204]: time="2025-11-01T03:53:49.730373012Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/22bd641301bfa43fb611d9e89d5412c0ab8cce8c751e8aad674ebf3c73d35a56 pid=3927 runtime=io.containerd.runc.v2 Nov 1 03:53:49.748086 systemd[1]: Started cri-containerd-22bd641301bfa43fb611d9e89d5412c0ab8cce8c751e8aad674ebf3c73d35a56.scope. Nov 1 03:53:49.791158 env[1204]: time="2025-11-01T03:53:49.791111418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-sk99s,Uid:32757b63-93bc-43b2-9440-59867e39056e,Namespace:kube-system,Attempt:0,} returns sandbox id \"22bd641301bfa43fb611d9e89d5412c0ab8cce8c751e8aad674ebf3c73d35a56\"" Nov 1 03:53:49.794295 env[1204]: time="2025-11-01T03:53:49.794247062Z" level=info msg="CreateContainer within sandbox \"22bd641301bfa43fb611d9e89d5412c0ab8cce8c751e8aad674ebf3c73d35a56\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 1 03:53:49.801987 env[1204]: time="2025-11-01T03:53:49.801939314Z" level=info msg="CreateContainer within sandbox \"22bd641301bfa43fb611d9e89d5412c0ab8cce8c751e8aad674ebf3c73d35a56\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d92bbc6b522f3ed1d03ab31e2726f1551d83b98b85b3d055f835131a47a5f11a\"" Nov 1 03:53:49.805531 env[1204]: time="2025-11-01T03:53:49.805495101Z" level=info msg="StartContainer for \"d92bbc6b522f3ed1d03ab31e2726f1551d83b98b85b3d055f835131a47a5f11a\"" Nov 1 03:53:49.864806 systemd[1]: Started cri-containerd-d92bbc6b522f3ed1d03ab31e2726f1551d83b98b85b3d055f835131a47a5f11a.scope. Nov 1 03:53:49.966062 sshd[3825]: Failed password for invalid user devuser from 182.230.214.138 port 46062 ssh2 Nov 1 03:53:49.967372 env[1204]: time="2025-11-01T03:53:49.967262221Z" level=info msg="StartContainer for \"d92bbc6b522f3ed1d03ab31e2726f1551d83b98b85b3d055f835131a47a5f11a\" returns successfully" Nov 1 03:53:49.989694 systemd[1]: cri-containerd-d92bbc6b522f3ed1d03ab31e2726f1551d83b98b85b3d055f835131a47a5f11a.scope: Deactivated successfully. Nov 1 03:53:50.026643 env[1204]: time="2025-11-01T03:53:50.026580499Z" level=info msg="shim disconnected" id=d92bbc6b522f3ed1d03ab31e2726f1551d83b98b85b3d055f835131a47a5f11a Nov 1 03:53:50.027087 env[1204]: time="2025-11-01T03:53:50.027063942Z" level=warning msg="cleaning up after shim disconnected" id=d92bbc6b522f3ed1d03ab31e2726f1551d83b98b85b3d055f835131a47a5f11a namespace=k8s.io Nov 1 03:53:50.027199 env[1204]: time="2025-11-01T03:53:50.027184001Z" level=info msg="cleaning up dead shim" Nov 1 03:53:50.044192 env[1204]: time="2025-11-01T03:53:50.044119735Z" level=warning msg="cleanup warnings time=\"2025-11-01T03:53:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4009 runtime=io.containerd.runc.v2\n" Nov 1 03:53:50.318348 env[1204]: time="2025-11-01T03:53:50.318175175Z" level=info msg="CreateContainer within sandbox \"22bd641301bfa43fb611d9e89d5412c0ab8cce8c751e8aad674ebf3c73d35a56\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Nov 1 03:53:50.331780 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1452355562.mount: Deactivated successfully. 
Nov 1 03:53:50.345186 env[1204]: time="2025-11-01T03:53:50.339483229Z" level=info msg="CreateContainer within sandbox \"22bd641301bfa43fb611d9e89d5412c0ab8cce8c751e8aad674ebf3c73d35a56\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"53165da655c41c834e843513e193de2e28e9a2f62aadff0cd31e6145e1451698\"" Nov 1 03:53:50.345186 env[1204]: time="2025-11-01T03:53:50.340235947Z" level=info msg="StartContainer for \"53165da655c41c834e843513e193de2e28e9a2f62aadff0cd31e6145e1451698\"" Nov 1 03:53:50.340812 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount600011057.mount: Deactivated successfully. Nov 1 03:53:50.365377 systemd[1]: Started cri-containerd-53165da655c41c834e843513e193de2e28e9a2f62aadff0cd31e6145e1451698.scope. Nov 1 03:53:50.422074 env[1204]: time="2025-11-01T03:53:50.421931472Z" level=info msg="StartContainer for \"53165da655c41c834e843513e193de2e28e9a2f62aadff0cd31e6145e1451698\" returns successfully" Nov 1 03:53:50.434033 systemd[1]: cri-containerd-53165da655c41c834e843513e193de2e28e9a2f62aadff0cd31e6145e1451698.scope: Deactivated successfully. Nov 1 03:53:50.460378 env[1204]: time="2025-11-01T03:53:50.460241655Z" level=info msg="shim disconnected" id=53165da655c41c834e843513e193de2e28e9a2f62aadff0cd31e6145e1451698 Nov 1 03:53:50.461110 env[1204]: time="2025-11-01T03:53:50.461057931Z" level=warning msg="cleaning up after shim disconnected" id=53165da655c41c834e843513e193de2e28e9a2f62aadff0cd31e6145e1451698 namespace=k8s.io Nov 1 03:53:50.461312 env[1204]: time="2025-11-01T03:53:50.461274837Z" level=info msg="cleaning up dead shim" Nov 1 03:53:50.481518 env[1204]: time="2025-11-01T03:53:50.481438501Z" level=warning msg="cleanup warnings time=\"2025-11-01T03:53:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4069 runtime=io.containerd.runc.v2\n" Nov 1 03:53:51.263669 sshd[3825]: Connection closed by invalid user devuser 182.230.214.138 port 46062 [preauth] Nov 1 03:53:51.265289 systemd[1]: sshd@42-10.244.101.254:22-182.230.214.138:46062.service: Deactivated successfully. Nov 1 03:53:51.326138 env[1204]: time="2025-11-01T03:53:51.326074887Z" level=info msg="CreateContainer within sandbox \"22bd641301bfa43fb611d9e89d5412c0ab8cce8c751e8aad674ebf3c73d35a56\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Nov 1 03:53:51.345248 env[1204]: time="2025-11-01T03:53:51.344655663Z" level=info msg="CreateContainer within sandbox \"22bd641301bfa43fb611d9e89d5412c0ab8cce8c751e8aad674ebf3c73d35a56\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"22857e2b9adcbbadc4efe716abfd60daf7b459e29eedc4778b5f73cf021452c5\"" Nov 1 03:53:51.345748 env[1204]: time="2025-11-01T03:53:51.345678121Z" level=info msg="StartContainer for \"22857e2b9adcbbadc4efe716abfd60daf7b459e29eedc4778b5f73cf021452c5\"" Nov 1 03:53:51.387202 systemd[1]: Started cri-containerd-22857e2b9adcbbadc4efe716abfd60daf7b459e29eedc4778b5f73cf021452c5.scope. Nov 1 03:53:51.450433 env[1204]: time="2025-11-01T03:53:51.450366652Z" level=info msg="StartContainer for \"22857e2b9adcbbadc4efe716abfd60daf7b459e29eedc4778b5f73cf021452c5\" returns successfully" Nov 1 03:53:51.452736 systemd[1]: cri-containerd-22857e2b9adcbbadc4efe716abfd60daf7b459e29eedc4778b5f73cf021452c5.scope: Deactivated successfully. 
Nov 1 03:53:51.475605 env[1204]: time="2025-11-01T03:53:51.475560228Z" level=info msg="shim disconnected" id=22857e2b9adcbbadc4efe716abfd60daf7b459e29eedc4778b5f73cf021452c5 Nov 1 03:53:51.475869 env[1204]: time="2025-11-01T03:53:51.475850643Z" level=warning msg="cleaning up after shim disconnected" id=22857e2b9adcbbadc4efe716abfd60daf7b459e29eedc4778b5f73cf021452c5 namespace=k8s.io Nov 1 03:53:51.475967 env[1204]: time="2025-11-01T03:53:51.475953471Z" level=info msg="cleaning up dead shim" Nov 1 03:53:51.485005 env[1204]: time="2025-11-01T03:53:51.484961529Z" level=warning msg="cleanup warnings time=\"2025-11-01T03:53:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4127 runtime=io.containerd.runc.v2\n" Nov 1 03:53:51.518558 systemd[1]: Started sshd@44-10.244.101.254:22-182.230.214.138:36124.service. Nov 1 03:53:51.598503 kubelet[1945]: E1101 03:53:51.598325 1945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-fgdzc" podUID="70a5a370-f825-46a8-96e7-d3b8897c58b5" Nov 1 03:53:52.098469 systemd[1]: run-containerd-runc-k8s.io-22857e2b9adcbbadc4efe716abfd60daf7b459e29eedc4778b5f73cf021452c5-runc.If9vw8.mount: Deactivated successfully. Nov 1 03:53:52.098764 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-22857e2b9adcbbadc4efe716abfd60daf7b459e29eedc4778b5f73cf021452c5-rootfs.mount: Deactivated successfully. Nov 1 03:53:52.331305 env[1204]: time="2025-11-01T03:53:52.331258087Z" level=info msg="CreateContainer within sandbox \"22bd641301bfa43fb611d9e89d5412c0ab8cce8c751e8aad674ebf3c73d35a56\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Nov 1 03:53:52.351573 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3662415550.mount: Deactivated successfully. Nov 1 03:53:52.358363 env[1204]: time="2025-11-01T03:53:52.358259427Z" level=info msg="CreateContainer within sandbox \"22bd641301bfa43fb611d9e89d5412c0ab8cce8c751e8aad674ebf3c73d35a56\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"21e3e161cb214de942c4cb53a8a5d19a492fe6cadaaf9b605d1469be23e01e76\"" Nov 1 03:53:52.360042 env[1204]: time="2025-11-01T03:53:52.359997042Z" level=info msg="StartContainer for \"21e3e161cb214de942c4cb53a8a5d19a492fe6cadaaf9b605d1469be23e01e76\"" Nov 1 03:53:52.390898 systemd[1]: Started cri-containerd-21e3e161cb214de942c4cb53a8a5d19a492fe6cadaaf9b605d1469be23e01e76.scope. Nov 1 03:53:52.422042 systemd[1]: cri-containerd-21e3e161cb214de942c4cb53a8a5d19a492fe6cadaaf9b605d1469be23e01e76.scope: Deactivated successfully. 
Nov 1 03:53:52.423640 env[1204]: time="2025-11-01T03:53:52.423518850Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod32757b63_93bc_43b2_9440_59867e39056e.slice/cri-containerd-21e3e161cb214de942c4cb53a8a5d19a492fe6cadaaf9b605d1469be23e01e76.scope/memory.events\": no such file or directory" Nov 1 03:53:52.425202 env[1204]: time="2025-11-01T03:53:52.425159570Z" level=info msg="StartContainer for \"21e3e161cb214de942c4cb53a8a5d19a492fe6cadaaf9b605d1469be23e01e76\" returns successfully" Nov 1 03:53:52.450116 env[1204]: time="2025-11-01T03:53:52.450070738Z" level=info msg="shim disconnected" id=21e3e161cb214de942c4cb53a8a5d19a492fe6cadaaf9b605d1469be23e01e76 Nov 1 03:53:52.450116 env[1204]: time="2025-11-01T03:53:52.450113025Z" level=warning msg="cleaning up after shim disconnected" id=21e3e161cb214de942c4cb53a8a5d19a492fe6cadaaf9b605d1469be23e01e76 namespace=k8s.io Nov 1 03:53:52.450116 env[1204]: time="2025-11-01T03:53:52.450122226Z" level=info msg="cleaning up dead shim" Nov 1 03:53:52.462848 env[1204]: time="2025-11-01T03:53:52.462742512Z" level=warning msg="cleanup warnings time=\"2025-11-01T03:53:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4183 runtime=io.containerd.runc.v2\n" Nov 1 03:53:52.748120 sshd[4140]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=182.230.214.138 user=root Nov 1 03:53:52.748327 sshd[4140]: pam_faillock(sshd:auth): Consecutive login failures for user root account temporarily locked Nov 1 03:53:53.098117 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-21e3e161cb214de942c4cb53a8a5d19a492fe6cadaaf9b605d1469be23e01e76-rootfs.mount: Deactivated successfully. Nov 1 03:53:53.152675 kubelet[1945]: I1101 03:53:53.152592 1945 setters.go:602] "Node became not ready" node="srv-n2oyf.gb1.brightbox.com" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-01T03:53:53Z","lastTransitionTime":"2025-11-01T03:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Nov 1 03:53:53.336486 env[1204]: time="2025-11-01T03:53:53.336433075Z" level=info msg="CreateContainer within sandbox \"22bd641301bfa43fb611d9e89d5412c0ab8cce8c751e8aad674ebf3c73d35a56\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Nov 1 03:53:53.350111 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount420203567.mount: Deactivated successfully. Nov 1 03:53:53.357642 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1443026932.mount: Deactivated successfully. Nov 1 03:53:53.362009 env[1204]: time="2025-11-01T03:53:53.361957860Z" level=info msg="CreateContainer within sandbox \"22bd641301bfa43fb611d9e89d5412c0ab8cce8c751e8aad674ebf3c73d35a56\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6267c7eb3498247b35f08d8e575f73ffb9405dd4174d27872b550a634e853b65\"" Nov 1 03:53:53.365595 env[1204]: time="2025-11-01T03:53:53.365562596Z" level=info msg="StartContainer for \"6267c7eb3498247b35f08d8e575f73ffb9405dd4174d27872b550a634e853b65\"" Nov 1 03:53:53.399134 systemd[1]: Started cri-containerd-6267c7eb3498247b35f08d8e575f73ffb9405dd4174d27872b550a634e853b65.scope. 
Nov 1 03:53:53.446056 env[1204]: time="2025-11-01T03:53:53.446013713Z" level=info msg="StartContainer for \"6267c7eb3498247b35f08d8e575f73ffb9405dd4174d27872b550a634e853b65\" returns successfully" Nov 1 03:53:53.599763 kubelet[1945]: E1101 03:53:53.598471 1945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-fgdzc" podUID="70a5a370-f825-46a8-96e7-d3b8897c58b5" Nov 1 03:53:53.874362 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Nov 1 03:53:54.661318 sshd[4140]: Failed password for root from 182.230.214.138 port 36124 ssh2 Nov 1 03:53:55.740747 systemd[1]: run-containerd-runc-k8s.io-6267c7eb3498247b35f08d8e575f73ffb9405dd4174d27872b550a634e853b65-runc.URd8Cx.mount: Deactivated successfully. Nov 1 03:53:56.120074 sshd[4140]: Connection closed by authenticating user root 182.230.214.138 port 36124 [preauth] Nov 1 03:53:56.124464 systemd[1]: sshd@44-10.244.101.254:22-182.230.214.138:36124.service: Deactivated successfully. Nov 1 03:53:56.364089 systemd[1]: Started sshd@45-10.244.101.254:22-182.230.214.138:36134.service. Nov 1 03:53:57.229066 systemd-networkd[1029]: lxc_health: Link UP Nov 1 03:53:57.236487 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Nov 1 03:53:57.237084 systemd-networkd[1029]: lxc_health: Gained carrier Nov 1 03:53:57.327023 sshd[4557]: Invalid user pi from 182.230.214.138 port 36134 Nov 1 03:53:57.561653 sshd[4557]: pam_faillock(sshd:auth): User unknown Nov 1 03:53:57.563303 sshd[4557]: pam_unix(sshd:auth): check pass; user unknown Nov 1 03:53:57.563511 sshd[4557]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=182.230.214.138 Nov 1 03:53:57.564249 sshd[4557]: pam_faillock(sshd:auth): User unknown Nov 1 03:53:57.750601 kubelet[1945]: I1101 03:53:57.750508 1945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-sk99s" podStartSLOduration=8.750442164 podStartE2EDuration="8.750442164s" podCreationTimestamp="2025-11-01 03:53:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 03:53:54.36367771 +0000 UTC m=+175.046149556" watchObservedRunningTime="2025-11-01 03:53:57.750442164 +0000 UTC m=+178.432914010" Nov 1 03:53:57.963573 systemd[1]: run-containerd-runc-k8s.io-6267c7eb3498247b35f08d8e575f73ffb9405dd4174d27872b550a634e853b65-runc.DJ3NGm.mount: Deactivated successfully. 
Nov 1 03:53:58.985909 systemd-networkd[1029]: lxc_health: Gained IPv6LL Nov 1 03:53:59.527185 env[1204]: time="2025-11-01T03:53:59.527104489Z" level=info msg="StopPodSandbox for \"df19635cc4cde627ee520c30fe15c184e7c9544411e501592bd695cd968c8c82\"" Nov 1 03:53:59.527949 env[1204]: time="2025-11-01T03:53:59.527259311Z" level=info msg="TearDown network for sandbox \"df19635cc4cde627ee520c30fe15c184e7c9544411e501592bd695cd968c8c82\" successfully" Nov 1 03:53:59.527949 env[1204]: time="2025-11-01T03:53:59.527325994Z" level=info msg="StopPodSandbox for \"df19635cc4cde627ee520c30fe15c184e7c9544411e501592bd695cd968c8c82\" returns successfully" Nov 1 03:53:59.528739 env[1204]: time="2025-11-01T03:53:59.528686639Z" level=info msg="RemovePodSandbox for \"df19635cc4cde627ee520c30fe15c184e7c9544411e501592bd695cd968c8c82\"" Nov 1 03:53:59.528898 env[1204]: time="2025-11-01T03:53:59.528727492Z" level=info msg="Forcibly stopping sandbox \"df19635cc4cde627ee520c30fe15c184e7c9544411e501592bd695cd968c8c82\"" Nov 1 03:53:59.528898 env[1204]: time="2025-11-01T03:53:59.528807707Z" level=info msg="TearDown network for sandbox \"df19635cc4cde627ee520c30fe15c184e7c9544411e501592bd695cd968c8c82\" successfully" Nov 1 03:53:59.532159 env[1204]: time="2025-11-01T03:53:59.532114505Z" level=info msg="RemovePodSandbox \"df19635cc4cde627ee520c30fe15c184e7c9544411e501592bd695cd968c8c82\" returns successfully" Nov 1 03:53:59.532580 env[1204]: time="2025-11-01T03:53:59.532555677Z" level=info msg="StopPodSandbox for \"eb0175173881ce97f346cec8b19e71987495f37dbc3cb38a7d185911d4b89eb3\"" Nov 1 03:53:59.532682 env[1204]: time="2025-11-01T03:53:59.532639603Z" level=info msg="TearDown network for sandbox \"eb0175173881ce97f346cec8b19e71987495f37dbc3cb38a7d185911d4b89eb3\" successfully" Nov 1 03:53:59.532746 env[1204]: time="2025-11-01T03:53:59.532680365Z" level=info msg="StopPodSandbox for \"eb0175173881ce97f346cec8b19e71987495f37dbc3cb38a7d185911d4b89eb3\" returns successfully" Nov 1 03:53:59.533366 env[1204]: time="2025-11-01T03:53:59.533002151Z" level=info msg="RemovePodSandbox for \"eb0175173881ce97f346cec8b19e71987495f37dbc3cb38a7d185911d4b89eb3\"" Nov 1 03:53:59.533366 env[1204]: time="2025-11-01T03:53:59.533037278Z" level=info msg="Forcibly stopping sandbox \"eb0175173881ce97f346cec8b19e71987495f37dbc3cb38a7d185911d4b89eb3\"" Nov 1 03:53:59.533366 env[1204]: time="2025-11-01T03:53:59.533120157Z" level=info msg="TearDown network for sandbox \"eb0175173881ce97f346cec8b19e71987495f37dbc3cb38a7d185911d4b89eb3\" successfully" Nov 1 03:53:59.537372 env[1204]: time="2025-11-01T03:53:59.537312316Z" level=info msg="RemovePodSandbox \"eb0175173881ce97f346cec8b19e71987495f37dbc3cb38a7d185911d4b89eb3\" returns successfully" Nov 1 03:53:59.537786 env[1204]: time="2025-11-01T03:53:59.537758715Z" level=info msg="StopPodSandbox for \"8ce30c579fd7ac86a214b98143897eca0c4cb2a7c8f303052b2d066324e5c14d\"" Nov 1 03:53:59.538000 env[1204]: time="2025-11-01T03:53:59.537927834Z" level=info msg="TearDown network for sandbox \"8ce30c579fd7ac86a214b98143897eca0c4cb2a7c8f303052b2d066324e5c14d\" successfully" Nov 1 03:53:59.538096 env[1204]: time="2025-11-01T03:53:59.538080010Z" level=info msg="StopPodSandbox for \"8ce30c579fd7ac86a214b98143897eca0c4cb2a7c8f303052b2d066324e5c14d\" returns successfully" Nov 1 03:53:59.538470 env[1204]: time="2025-11-01T03:53:59.538444492Z" level=info msg="RemovePodSandbox for \"8ce30c579fd7ac86a214b98143897eca0c4cb2a7c8f303052b2d066324e5c14d\"" Nov 1 03:53:59.538598 env[1204]: time="2025-11-01T03:53:59.538565437Z" 
level=info msg="Forcibly stopping sandbox \"8ce30c579fd7ac86a214b98143897eca0c4cb2a7c8f303052b2d066324e5c14d\"" Nov 1 03:53:59.538735 env[1204]: time="2025-11-01T03:53:59.538712400Z" level=info msg="TearDown network for sandbox \"8ce30c579fd7ac86a214b98143897eca0c4cb2a7c8f303052b2d066324e5c14d\" successfully" Nov 1 03:53:59.541696 env[1204]: time="2025-11-01T03:53:59.541667031Z" level=info msg="RemovePodSandbox \"8ce30c579fd7ac86a214b98143897eca0c4cb2a7c8f303052b2d066324e5c14d\" returns successfully" Nov 1 03:53:59.832593 sshd[4557]: Failed password for invalid user pi from 182.230.214.138 port 36134 ssh2 Nov 1 03:54:00.279475 systemd[1]: run-containerd-runc-k8s.io-6267c7eb3498247b35f08d8e575f73ffb9405dd4174d27872b550a634e853b65-runc.UG6cWX.mount: Deactivated successfully. Nov 1 03:54:00.599197 sshd[4557]: Connection closed by invalid user pi 182.230.214.138 port 36134 [preauth] Nov 1 03:54:00.600739 systemd[1]: sshd@45-10.244.101.254:22-182.230.214.138:36134.service: Deactivated successfully. Nov 1 03:54:00.845408 systemd[1]: Started sshd@46-10.244.101.254:22-182.230.214.138:59124.service. Nov 1 03:54:01.816149 sshd[4850]: Invalid user moxa from 182.230.214.138 port 59124 Nov 1 03:54:02.059535 sshd[4850]: pam_faillock(sshd:auth): User unknown Nov 1 03:54:02.060439 sshd[4850]: pam_unix(sshd:auth): check pass; user unknown Nov 1 03:54:02.060509 sshd[4850]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=182.230.214.138 Nov 1 03:54:02.061167 sshd[4850]: pam_faillock(sshd:auth): User unknown Nov 1 03:54:02.496364 systemd[1]: run-containerd-runc-k8s.io-6267c7eb3498247b35f08d8e575f73ffb9405dd4174d27872b550a634e853b65-runc.DbQ7yz.mount: Deactivated successfully. Nov 1 03:54:04.013854 sshd[4850]: Failed password for invalid user moxa from 182.230.214.138 port 59124 ssh2 Nov 1 03:54:04.688649 systemd[1]: run-containerd-runc-k8s.io-6267c7eb3498247b35f08d8e575f73ffb9405dd4174d27872b550a634e853b65-runc.TUlZ1n.mount: Deactivated successfully. Nov 1 03:54:04.862631 sshd[4850]: Connection closed by invalid user moxa 182.230.214.138 port 59124 [preauth] Nov 1 03:54:04.866104 systemd[1]: sshd@46-10.244.101.254:22-182.230.214.138:59124.service: Deactivated successfully. Nov 1 03:54:04.939743 sshd[3870]: pam_unix(sshd:session): session closed for user core Nov 1 03:54:04.948088 systemd[1]: sshd@43-10.244.101.254:22-139.178.89.65:33258.service: Deactivated successfully. Nov 1 03:54:04.949410 systemd[1]: session-25.scope: Deactivated successfully. Nov 1 03:54:04.950255 systemd-logind[1186]: Session 25 logged out. Waiting for processes to exit. Nov 1 03:54:04.951645 systemd-logind[1186]: Removed session 25. Nov 1 03:54:05.111907 systemd[1]: Started sshd@47-10.244.101.254:22-182.230.214.138:59132.service. 
Nov 1 03:54:06.091364 sshd[4903]: Invalid user craft from 182.230.214.138 port 59132 Nov 1 03:54:06.333360 sshd[4903]: pam_faillock(sshd:auth): User unknown Nov 1 03:54:06.334901 sshd[4903]: pam_unix(sshd:auth): check pass; user unknown Nov 1 03:54:06.335003 sshd[4903]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=182.230.214.138 Nov 1 03:54:06.336182 sshd[4903]: pam_faillock(sshd:auth): User unknown Nov 1 03:54:07.837850 sshd[4903]: Failed password for invalid user craft from 182.230.214.138 port 59132 ssh2 Nov 1 03:54:08.547489 sshd[4903]: Connection closed by invalid user craft 182.230.214.138 port 59132 [preauth] Nov 1 03:54:08.550408 systemd[1]: sshd@47-10.244.101.254:22-182.230.214.138:59132.service: Deactivated successfully. Nov 1 03:54:08.785285 systemd[1]: Started sshd@48-10.244.101.254:22-182.230.214.138:48470.service.