Sep 6 00:28:40.470811 kernel: Linux version 5.15.190-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Sep 5 22:53:38 -00 2025
Sep 6 00:28:40.470860 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=a807e3b6c1f608bcead7858f1ad5b6908e6d312e2d99c0ec0e5454f978e611a7
Sep 6 00:28:40.470880 kernel: BIOS-provided physical RAM map:
Sep 6 00:28:40.470897 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved
Sep 6 00:28:40.470920 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable
Sep 6 00:28:40.470935 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved
Sep 6 00:28:40.470958 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable
Sep 6 00:28:40.470973 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved
Sep 6 00:28:40.470989 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bd27afff] usable
Sep 6 00:28:40.471005 kernel: BIOS-e820: [mem 0x00000000bd27b000-0x00000000bd284fff] ACPI data
Sep 6 00:28:40.471021 kernel: BIOS-e820: [mem 0x00000000bd285000-0x00000000bf8ecfff] usable
Sep 6 00:28:40.471037 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bfb6cfff] reserved
Sep 6 00:28:40.471052 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data
Sep 6 00:28:40.471068 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS
Sep 6 00:28:40.471313 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable
Sep 6 00:28:40.471330 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved
Sep 6 00:28:40.471346 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable
Sep 6 00:28:40.471363 kernel: NX (Execute Disable) protection: active
Sep 6 00:28:40.471379 kernel: efi: EFI v2.70 by EDK II
Sep 6 00:28:40.471396 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000 RNG=0xbfb73018 TPMEventLog=0xbd27b018
Sep 6 00:28:40.471413 kernel: random: crng init done
Sep 6 00:28:40.471430 kernel: SMBIOS 2.4 present.
Sep 6 00:28:40.471450 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 08/14/2025
Sep 6 00:28:40.471466 kernel: Hypervisor detected: KVM
Sep 6 00:28:40.471483 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 6 00:28:40.471499 kernel: kvm-clock: cpu 0, msr 7919f001, primary cpu clock
Sep 6 00:28:40.471527 kernel: kvm-clock: using sched offset of 14578536845 cycles
Sep 6 00:28:40.471545 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 6 00:28:40.471562 kernel: tsc: Detected 2299.998 MHz processor
Sep 6 00:28:40.471578 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 6 00:28:40.471596 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 6 00:28:40.471613 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000
Sep 6 00:28:40.471634 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 6 00:28:40.471650 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000
Sep 6 00:28:40.471667 kernel: Using GB pages for direct mapping
Sep 6 00:28:40.471684 kernel: Secure boot disabled
Sep 6 00:28:40.471731 kernel: ACPI: Early table checksum verification disabled
Sep 6 00:28:40.471749 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google)
Sep 6 00:28:40.471766 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013)
Sep 6 00:28:40.471784 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001)
Sep 6 00:28:40.471812 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001)
Sep 6 00:28:40.471831 kernel: ACPI: FACS 0x00000000BFBF2000 000040
Sep 6 00:28:40.471849 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20250404)
Sep 6 00:28:40.471867 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001)
Sep 6 00:28:40.471887 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001)
Sep 6 00:28:40.471963 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001)
Sep 6 00:28:40.471986 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001)
Sep 6 00:28:40.472004 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001)
Sep 6 00:28:40.472023 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3]
Sep 6 00:28:40.472040 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63]
Sep 6 00:28:40.472059 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f]
Sep 6 00:28:40.472077 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315]
Sep 6 00:28:40.472095 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033]
Sep 6 00:28:40.472113 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7]
Sep 6 00:28:40.472133 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075]
Sep 6 00:28:40.472155 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f]
Sep 6 00:28:40.472173 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027]
Sep 6 00:28:40.472192 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Sep 6 00:28:40.472210 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Sep 6 00:28:40.472228 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Sep 6 00:28:40.472247 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff]
Sep 6 00:28:40.472265 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff]
Sep 6 00:28:40.472283 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff]
Sep 6 00:28:40.472302 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff]
Sep 6 00:28:40.472324 kernel: NODE_DATA(0) allocated [mem 0x21fff8000-0x21fffdfff]
Sep 6 00:28:40.472343 kernel: Zone ranges:
Sep 6 00:28:40.472362 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 6 00:28:40.472380 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Sep 6 00:28:40.472398 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff]
Sep 6 00:28:40.472415 kernel: Movable zone start for each node
Sep 6 00:28:40.472433 kernel: Early memory node ranges
Sep 6 00:28:40.472452 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff]
Sep 6 00:28:40.472469 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff]
Sep 6 00:28:40.472491 kernel: node 0: [mem 0x0000000000100000-0x00000000bd27afff]
Sep 6 00:28:40.472509 kernel: node 0: [mem 0x00000000bd285000-0x00000000bf8ecfff]
Sep 6 00:28:40.472527 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff]
Sep 6 00:28:40.472544 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff]
Sep 6 00:28:40.472562 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff]
Sep 6 00:28:40.472581 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 6 00:28:40.472600 kernel: On node 0, zone DMA: 11 pages in unavailable ranges
Sep 6 00:28:40.472618 kernel: On node 0, zone DMA: 104 pages in unavailable ranges
Sep 6 00:28:40.472636 kernel: On node 0, zone DMA32: 10 pages in unavailable ranges
Sep 6 00:28:40.472658 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Sep 6 00:28:40.472677 kernel: On node 0, zone Normal: 32 pages in unavailable ranges
Sep 6 00:28:40.472713 kernel: ACPI: PM-Timer IO Port: 0xb008
Sep 6 00:28:40.472732 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 6 00:28:40.472750 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 6 00:28:40.472768 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 6 00:28:40.472787 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 6 00:28:40.472805 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 6 00:28:40.472824 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 6 00:28:40.472846 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 6 00:28:40.472864 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Sep 6 00:28:40.472882 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Sep 6 00:28:40.472901 kernel: Booting paravirtualized kernel on KVM
Sep 6 00:28:40.472958 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 6 00:28:40.472977 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Sep 6 00:28:40.472997 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576
Sep 6 00:28:40.473015 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152
Sep 6 00:28:40.473033 kernel: pcpu-alloc: [0] 0 1
Sep 6 00:28:40.473055 kernel: kvm-guest: PV spinlocks enabled
Sep 6 00:28:40.473075 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 6 00:28:40.473093 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932270
Sep 6 00:28:40.473111 kernel: Policy zone: Normal
Sep 6 00:28:40.473131 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=a807e3b6c1f608bcead7858f1ad5b6908e6d312e2d99c0ec0e5454f978e611a7
Sep 6 00:28:40.473151 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 6 00:28:40.473169 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Sep 6 00:28:40.473187 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 6 00:28:40.473206 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 6 00:28:40.473229 kernel: Memory: 7515416K/7860544K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47492K init, 4088K bss, 344868K reserved, 0K cma-reserved)
Sep 6 00:28:40.473247 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 6 00:28:40.473266 kernel: Kernel/User page tables isolation: enabled
Sep 6 00:28:40.473284 kernel: ftrace: allocating 34612 entries in 136 pages
Sep 6 00:28:40.473302 kernel: ftrace: allocated 136 pages with 2 groups
Sep 6 00:28:40.473320 kernel: rcu: Hierarchical RCU implementation.
Sep 6 00:28:40.473341 kernel: rcu: RCU event tracing is enabled.
Sep 6 00:28:40.473360 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 6 00:28:40.473383 kernel: Rude variant of Tasks RCU enabled.
Sep 6 00:28:40.473415 kernel: Tracing variant of Tasks RCU enabled.
Sep 6 00:28:40.473435 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 6 00:28:40.473458 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 6 00:28:40.473476 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Sep 6 00:28:40.473495 kernel: Console: colour dummy device 80x25
Sep 6 00:28:40.473515 kernel: printk: console [ttyS0] enabled
Sep 6 00:28:40.473539 kernel: ACPI: Core revision 20210730
Sep 6 00:28:40.473561 kernel: APIC: Switch to symmetric I/O mode setup
Sep 6 00:28:40.473578 kernel: x2apic enabled
Sep 6 00:28:40.473599 kernel: Switched APIC routing to physical x2apic.
Sep 6 00:28:40.473618 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1
Sep 6 00:28:40.473638 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Sep 6 00:28:40.473659 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998)
Sep 6 00:28:40.473679 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
Sep 6 00:28:40.473718 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
Sep 6 00:28:40.473739 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 6 00:28:40.473766 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Sep 6 00:28:40.473787 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Sep 6 00:28:40.473808 kernel: Spectre V2 : Mitigation: IBRS
Sep 6 00:28:40.473829 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 6 00:28:40.473849 kernel: RETBleed: Mitigation: IBRS
Sep 6 00:28:40.473869 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 6 00:28:40.473890 kernel: Spectre V2 : User space: Mitigation: STIBP via seccomp and prctl
Sep 6 00:28:40.473921 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Sep 6 00:28:40.473942 kernel: MDS: Mitigation: Clear CPU buffers
Sep 6 00:28:40.473970 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Sep 6 00:28:40.473991 kernel: active return thunk: its_return_thunk
Sep 6 00:28:40.474012 kernel: ITS: Mitigation: Aligned branch/return thunks
Sep 6 00:28:40.474033 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 6 00:28:40.474053 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 6 00:28:40.474072 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 6 00:28:40.474096 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 6 00:28:40.474115 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Sep 6 00:28:40.474137 kernel: Freeing SMP alternatives memory: 32K
Sep 6 00:28:40.474158 kernel: pid_max: default: 32768 minimum: 301
Sep 6 00:28:40.474175 kernel: LSM: Security Framework initializing
Sep 6 00:28:40.474192 kernel: SELinux: Initializing.
Sep 6 00:28:40.474217 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 6 00:28:40.474234 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 6 00:28:40.474252 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0)
Sep 6 00:28:40.474268 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only.
Sep 6 00:28:40.474289 kernel: signal: max sigframe size: 1776
Sep 6 00:28:40.474309 kernel: rcu: Hierarchical SRCU implementation.
Sep 6 00:28:40.474331 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Sep 6 00:28:40.474349 kernel: smp: Bringing up secondary CPUs ...
Sep 6 00:28:40.474367 kernel: x86: Booting SMP configuration:
Sep 6 00:28:40.474385 kernel: .... node #0, CPUs: #1
Sep 6 00:28:40.474404 kernel: kvm-clock: cpu 1, msr 7919f041, secondary cpu clock
Sep 6 00:28:40.474423 kernel: Transient Scheduler Attacks: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Sep 6 00:28:40.474443 kernel: Transient Scheduler Attacks: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Sep 6 00:28:40.474461 kernel: smp: Brought up 1 node, 2 CPUs
Sep 6 00:28:40.474484 kernel: smpboot: Max logical packages: 1
Sep 6 00:28:40.474503 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Sep 6 00:28:40.474521 kernel: devtmpfs: initialized
Sep 6 00:28:40.474542 kernel: x86/mm: Memory block size: 128MB
Sep 6 00:28:40.474568 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes)
Sep 6 00:28:40.474592 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 6 00:28:40.474615 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 6 00:28:40.474639 kernel: pinctrl core: initialized pinctrl subsystem
Sep 6 00:28:40.474662 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 6 00:28:40.474686 kernel: audit: initializing netlink subsys (disabled)
Sep 6 00:28:40.474738 kernel: audit: type=2000 audit(1757118518.076:1): state=initialized audit_enabled=0 res=1
Sep 6 00:28:40.474762 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 6 00:28:40.474788 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 6 00:28:40.474815 kernel: cpuidle: using governor menu
Sep 6 00:28:40.474840 kernel: ACPI: bus type PCI registered
Sep 6 00:28:40.474862 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 6 00:28:40.474885 kernel: dca service started, version 1.12.1
Sep 6 00:28:40.474920 kernel: PCI: Using configuration type 1 for base access
Sep 6 00:28:40.474951 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 6 00:28:40.474977 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Sep 6 00:28:40.475003 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Sep 6 00:28:40.475027 kernel: ACPI: Added _OSI(Module Device)
Sep 6 00:28:40.475051 kernel: ACPI: Added _OSI(Processor Device)
Sep 6 00:28:40.475070 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 6 00:28:40.475096 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Sep 6 00:28:40.475121 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Sep 6 00:28:40.475148 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Sep 6 00:28:40.475177 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Sep 6 00:28:40.475203 kernel: ACPI: Interpreter enabled
Sep 6 00:28:40.475230 kernel: ACPI: PM: (supports S0 S3 S5)
Sep 6 00:28:40.475257 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 6 00:28:40.475283 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 6 00:28:40.475310 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Sep 6 00:28:40.475336 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 6 00:28:40.475652 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Sep 6 00:28:40.475923 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
Sep 6 00:28:40.475956 kernel: PCI host bridge to bus 0000:00
Sep 6 00:28:40.479073 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 6 00:28:40.479316 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 6 00:28:40.479532 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 6 00:28:40.479756 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window]
Sep 6 00:28:40.479972 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 6 00:28:40.480241 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Sep 6 00:28:40.480486 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100
Sep 6 00:28:40.480744 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Sep 6 00:28:40.480990 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Sep 6 00:28:40.481234 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000
Sep 6 00:28:40.481470 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Sep 6 00:28:40.481729 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f]
Sep 6 00:28:40.481982 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Sep 6 00:28:40.482214 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f]
Sep 6 00:28:40.482829 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f]
Sep 6 00:28:40.483097 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00
Sep 6 00:28:40.483333 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f]
Sep 6 00:28:40.483563 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f]
Sep 6 00:28:40.483600 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 6 00:28:40.483626 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 6 00:28:40.483650 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 6 00:28:40.483675 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 6 00:28:40.483714 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Sep 6 00:28:40.483738 kernel: iommu: Default domain type: Translated
Sep 6 00:28:40.483763 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 6 00:28:40.483788 kernel: vgaarb: loaded
Sep 6 00:28:40.483814 kernel: pps_core: LinuxPPS API ver. 1 registered
Sep 6 00:28:40.483842 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Sep 6 00:28:40.483867 kernel: PTP clock support registered
Sep 6 00:28:40.483892 kernel: Registered efivars operations
Sep 6 00:28:40.483922 kernel: PCI: Using ACPI for IRQ routing
Sep 6 00:28:40.483946 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 6 00:28:40.483971 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff]
Sep 6 00:28:40.483996 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff]
Sep 6 00:28:40.484018 kernel: e820: reserve RAM buffer [mem 0xbd27b000-0xbfffffff]
Sep 6 00:28:40.484042 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff]
Sep 6 00:28:40.484069 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff]
Sep 6 00:28:40.484094 kernel: clocksource: Switched to clocksource kvm-clock
Sep 6 00:28:40.484116 kernel: VFS: Disk quotas dquot_6.6.0
Sep 6 00:28:40.484141 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 6 00:28:40.484163 kernel: pnp: PnP ACPI init
Sep 6 00:28:40.484187 kernel: pnp: PnP ACPI: found 7 devices
Sep 6 00:28:40.484212 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 6 00:28:40.484237 kernel: NET: Registered PF_INET protocol family
Sep 6 00:28:40.484264 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Sep 6 00:28:40.484289 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Sep 6 00:28:40.484314 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 6 00:28:40.484339 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 6 00:28:40.484362 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Sep 6 00:28:40.484386 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Sep 6 00:28:40.484410 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Sep 6 00:28:40.484434 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Sep 6 00:28:40.484459 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 6 00:28:40.484487 kernel: NET: Registered PF_XDP protocol family
Sep 6 00:28:40.484719 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 6 00:28:40.484944 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 6 00:28:40.485152 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 6 00:28:40.485355 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window]
Sep 6 00:28:40.485585 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Sep 6 00:28:40.485615 kernel: PCI: CLS 0 bytes, default 64
Sep 6 00:28:40.485646 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Sep 6 00:28:40.485670 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB)
Sep 6 00:28:40.485711 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Sep 6 00:28:40.485736 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Sep 6 00:28:40.485762 kernel: clocksource: Switched to clocksource tsc
Sep 6 00:28:40.485784 kernel: Initialise system trusted keyrings
Sep 6 00:28:40.485809 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Sep 6 00:28:40.485833 kernel: Key type asymmetric registered
Sep 6 00:28:40.485857 kernel: Asymmetric key parser 'x509' registered
Sep 6 00:28:40.485885 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 6 00:28:40.485918 kernel: io scheduler mq-deadline registered
Sep 6 00:28:40.485942 kernel: io scheduler kyber registered
Sep 6 00:28:40.485965 kernel: io scheduler bfq registered
Sep 6 00:28:40.485989 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 6 00:28:40.486015 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Sep 6 00:28:40.486249 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
Sep 6 00:28:40.486271 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Sep 6 00:28:40.486505 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
Sep 6 00:28:40.486540 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Sep 6 00:28:40.486813 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
Sep 6 00:28:40.486845 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 6 00:28:40.486868 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 6 00:28:40.486892 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Sep 6 00:28:40.486926 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A
Sep 6 00:28:40.486950 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A
Sep 6 00:28:40.487186 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0)
Sep 6 00:28:40.487224 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 6 00:28:40.487249 kernel: i8042: Warning: Keylock active
Sep 6 00:28:40.487274 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 6 00:28:40.487298 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 6 00:28:40.487535 kernel: rtc_cmos 00:00: RTC can wake from S4
Sep 6 00:28:40.487773 kernel: rtc_cmos 00:00: registered as rtc0
Sep 6 00:28:40.487991 kernel: rtc_cmos 00:00: setting system clock to 2025-09-06T00:28:39 UTC (1757118519)
Sep 6 00:28:40.488209 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Sep 6 00:28:40.488244 kernel: intel_pstate: CPU model not supported
Sep 6 00:28:40.488268 kernel: pstore: Registered efi as persistent store backend
Sep 6 00:28:40.488293 kernel: NET: Registered PF_INET6 protocol family
Sep 6 00:28:40.488318 kernel: Segment Routing with IPv6
Sep 6 00:28:40.488342 kernel: In-situ OAM (IOAM) with IPv6
Sep 6 00:28:40.488365 kernel: NET: Registered PF_PACKET protocol family
Sep 6 00:28:40.488390 kernel: Key type dns_resolver registered
Sep 6 00:28:40.488414 kernel: IPI shorthand broadcast: enabled
Sep 6 00:28:40.488439 kernel: sched_clock: Marking stable (944894156, 277304391)->(1374953388, -152754841)
Sep 6 00:28:40.488468 kernel: registered taskstats version 1
Sep 6 00:28:40.488492 kernel: Loading compiled-in X.509 certificates
Sep 6 00:28:40.488516 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 6 00:28:40.488541 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.190-flatcar: 59a3efd48c75422889eb056cb9758fbe471623cb'
Sep 6 00:28:40.488563 kernel: Key type .fscrypt registered
Sep 6 00:28:40.488587 kernel: Key type fscrypt-provisioning registered
Sep 6 00:28:40.488612 kernel: pstore: Using crash dump compression: deflate
Sep 6 00:28:40.488637 kernel: ima: Allocated hash algorithm: sha1
Sep 6 00:28:40.488660 kernel: ima: No architecture policies found
Sep 6 00:28:40.488690 kernel: clk: Disabling unused clocks
Sep 6 00:28:40.488728 kernel: Freeing unused kernel image (initmem) memory: 47492K
Sep 6 00:28:40.488752 kernel: Write protecting the kernel read-only data: 28672k
Sep 6 00:28:40.488777 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Sep 6 00:28:40.488802 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K
Sep 6 00:28:40.488825 kernel: Run /init as init process
Sep 6 00:28:40.488844 kernel: with arguments:
Sep 6 00:28:40.488862 kernel: /init
Sep 6 00:28:40.488886 kernel: with environment:
Sep 6 00:28:40.488923 kernel: HOME=/
Sep 6 00:28:40.488946 kernel: TERM=linux
Sep 6 00:28:40.488970 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 6 00:28:40.488998 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Sep 6 00:28:40.489027 systemd[1]: Detected virtualization kvm.
Sep 6 00:28:40.496761 systemd[1]: Detected architecture x86-64.
Sep 6 00:28:40.496800 systemd[1]: Running in initrd.
Sep 6 00:28:40.496856 systemd[1]: No hostname configured, using default hostname.
Sep 6 00:28:40.496884 systemd[1]: Hostname set to .
Sep 6 00:28:40.496922 systemd[1]: Initializing machine ID from VM UUID.
Sep 6 00:28:40.496949 systemd[1]: Queued start job for default target initrd.target.
Sep 6 00:28:40.496977 systemd[1]: Started systemd-ask-password-console.path.
Sep 6 00:28:40.497005 systemd[1]: Reached target cryptsetup.target.
Sep 6 00:28:40.497032 systemd[1]: Reached target paths.target.
Sep 6 00:28:40.497060 systemd[1]: Reached target slices.target.
Sep 6 00:28:40.497092 systemd[1]: Reached target swap.target.
Sep 6 00:28:40.497119 systemd[1]: Reached target timers.target.
Sep 6 00:28:40.497150 systemd[1]: Listening on iscsid.socket.
Sep 6 00:28:40.497176 systemd[1]: Listening on iscsiuio.socket.
Sep 6 00:28:40.497235 systemd[1]: Listening on systemd-journald-audit.socket.
Sep 6 00:28:40.497499 systemd[1]: Listening on systemd-journald-dev-log.socket.
Sep 6 00:28:40.497537 systemd[1]: Listening on systemd-journald.socket.
Sep 6 00:28:40.497565 systemd[1]: Listening on systemd-networkd.socket.
Sep 6 00:28:40.497597 systemd[1]: Listening on systemd-udevd-control.socket.
Sep 6 00:28:40.497626 systemd[1]: Listening on systemd-udevd-kernel.socket.
Sep 6 00:28:40.497675 systemd[1]: Reached target sockets.target.
Sep 6 00:28:40.497777 systemd[1]: Starting kmod-static-nodes.service...
Sep 6 00:28:40.497807 systemd[1]: Finished network-cleanup.service.
Sep 6 00:28:40.497836 systemd[1]: Starting systemd-fsck-usr.service...
Sep 6 00:28:40.497869 systemd[1]: Starting systemd-journald.service...
Sep 6 00:28:40.497899 systemd[1]: Starting systemd-modules-load.service...
Sep 6 00:28:40.497934 systemd[1]: Starting systemd-resolved.service...
Sep 6 00:28:40.497962 kernel: audit: type=1334 audit(1757118520.473:2): prog-id=6 op=LOAD
Sep 6 00:28:40.497996 systemd-journald[189]: Journal started
Sep 6 00:28:40.498131 systemd-journald[189]: Runtime Journal (/run/log/journal/36ec0532a61e970aee6899ab1b9bc662) is 8.0M, max 148.8M, 140.8M free.
Sep 6 00:28:40.473000 audit: BPF prog-id=6 op=LOAD
Sep 6 00:28:40.499375 systemd-modules-load[190]: Inserted module 'overlay'
Sep 6 00:28:40.522077 systemd[1]: Starting systemd-vconsole-setup.service...
Sep 6 00:28:40.536727 systemd[1]: Started systemd-journald.service.
Sep 6 00:28:40.549000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:28:40.551572 systemd[1]: Finished kmod-static-nodes.service.
Sep 6 00:28:40.618143 kernel: audit: type=1130 audit(1757118520.549:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:28:40.618185 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 6 00:28:40.618211 kernel: audit: type=1130 audit(1757118520.579:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:28:40.579000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:28:40.581233 systemd[1]: Finished systemd-fsck-usr.service.
Sep 6 00:28:40.625264 systemd-resolved[191]: Positive Trust Anchors:
Sep 6 00:28:40.625737 systemd-resolved[191]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 6 00:28:40.625969 systemd-resolved[191]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Sep 6 00:28:40.631922 systemd-modules-load[190]: Inserted module 'br_netfilter'
Sep 6 00:28:40.632724 kernel: Bridge firewalling registered
Sep 6 00:28:40.633502 systemd-resolved[191]: Defaulting to hostname 'linux'.
Sep 6 00:28:40.662745 kernel: SCSI subsystem initialized
Sep 6 00:28:40.694800 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 6 00:28:40.694929 kernel: device-mapper: uevent: version 1.0.3
Sep 6 00:28:40.694964 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Sep 6 00:28:40.709422 systemd-modules-load[190]: Inserted module 'dm_multipath'
Sep 6 00:28:40.769898 kernel: audit: type=1130 audit(1757118520.739:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:28:40.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:28:40.741515 systemd[1]: Started systemd-resolved.service.
Sep 6 00:28:40.805916 kernel: audit: type=1130 audit(1757118520.777:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:28:40.777000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:28:40.779389 systemd[1]: Finished systemd-modules-load.service.
Sep 6 00:28:40.844901 kernel: audit: type=1130 audit(1757118520.813:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:28:40.813000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:28:40.815440 systemd[1]: Finished systemd-vconsole-setup.service.
Sep 6 00:28:40.881886 kernel: audit: type=1130 audit(1757118520.851:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:28:40.851000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:28:40.853348 systemd[1]: Reached target nss-lookup.target.
Sep 6 00:28:40.892724 systemd[1]: Starting dracut-cmdline-ask.service...
Sep 6 00:28:40.899434 systemd[1]: Starting systemd-sysctl.service... Sep 6 00:28:40.901444 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Sep 6 00:28:40.919312 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Sep 6 00:28:40.917000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:40.924475 systemd[1]: Finished systemd-sysctl.service. Sep 6 00:28:40.942790 kernel: audit: type=1130 audit(1757118520.917:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:40.954000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:40.956413 systemd[1]: Finished dracut-cmdline-ask.service. Sep 6 00:28:40.989910 kernel: audit: type=1130 audit(1757118520.954:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:40.974000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:40.977556 systemd[1]: Starting dracut-cmdline.service... 
Sep 6 00:28:40.996919 dracut-cmdline[210]: dracut-dracut-053 Sep 6 00:28:40.996919 dracut-cmdline[210]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LA Sep 6 00:28:40.996919 dracut-cmdline[210]: BEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=a807e3b6c1f608bcead7858f1ad5b6908e6d312e2d99c0ec0e5454f978e611a7 Sep 6 00:28:41.096770 kernel: Loading iSCSI transport class v2.0-870. Sep 6 00:28:41.123741 kernel: iscsi: registered transport (tcp) Sep 6 00:28:41.161950 kernel: iscsi: registered transport (qla4xxx) Sep 6 00:28:41.162039 kernel: QLogic iSCSI HBA Driver Sep 6 00:28:41.214957 systemd[1]: Finished dracut-cmdline.service. Sep 6 00:28:41.213000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:41.216584 systemd[1]: Starting dracut-pre-udev.service... 
Sep 6 00:28:41.284775 kernel: raid6: avx2x4 gen() 17544 MB/s Sep 6 00:28:41.305748 kernel: raid6: avx2x4 xor() 7669 MB/s Sep 6 00:28:41.326742 kernel: raid6: avx2x2 gen() 17800 MB/s Sep 6 00:28:41.347765 kernel: raid6: avx2x2 xor() 17822 MB/s Sep 6 00:28:41.368747 kernel: raid6: avx2x1 gen() 13788 MB/s Sep 6 00:28:41.389746 kernel: raid6: avx2x1 xor() 15674 MB/s Sep 6 00:28:41.410779 kernel: raid6: sse2x4 gen() 10932 MB/s Sep 6 00:28:41.431746 kernel: raid6: sse2x4 xor() 6241 MB/s Sep 6 00:28:41.452741 kernel: raid6: sse2x2 gen() 11762 MB/s Sep 6 00:28:41.473734 kernel: raid6: sse2x2 xor() 7264 MB/s Sep 6 00:28:41.494749 kernel: raid6: sse2x1 gen() 10319 MB/s Sep 6 00:28:41.520931 kernel: raid6: sse2x1 xor() 5083 MB/s Sep 6 00:28:41.521043 kernel: raid6: using algorithm avx2x2 gen() 17800 MB/s Sep 6 00:28:41.521077 kernel: raid6: .... xor() 17822 MB/s, rmw enabled Sep 6 00:28:41.526265 kernel: raid6: using avx2x2 recovery algorithm Sep 6 00:28:41.552746 kernel: xor: automatically using best checksumming function avx Sep 6 00:28:41.677745 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Sep 6 00:28:41.691324 systemd[1]: Finished dracut-pre-udev.service. Sep 6 00:28:41.698000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:41.699000 audit: BPF prog-id=7 op=LOAD Sep 6 00:28:41.699000 audit: BPF prog-id=8 op=LOAD Sep 6 00:28:41.701799 systemd[1]: Starting systemd-udevd.service... Sep 6 00:28:41.720920 systemd-udevd[387]: Using default interface naming scheme 'v252'. Sep 6 00:28:41.730045 systemd[1]: Started systemd-udevd.service. Sep 6 00:28:41.747000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:28:41.750264 systemd[1]: Starting dracut-pre-trigger.service... Sep 6 00:28:41.767666 dracut-pre-trigger[393]: rd.md=0: removing MD RAID activation Sep 6 00:28:41.811240 systemd[1]: Finished dracut-pre-trigger.service. Sep 6 00:28:41.809000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:41.812721 systemd[1]: Starting systemd-udev-trigger.service... Sep 6 00:28:41.895744 systemd[1]: Finished systemd-udev-trigger.service. Sep 6 00:28:41.902000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:42.004728 kernel: cryptd: max_cpu_qlen set to 1000 Sep 6 00:28:42.035813 kernel: AVX2 version of gcm_enc/dec engaged. Sep 6 00:28:42.035917 kernel: AES CTR mode by8 optimization enabled Sep 6 00:28:42.048736 kernel: scsi host0: Virtio SCSI HBA Sep 6 00:28:42.077337 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Sep 6 00:28:42.182208 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB) Sep 6 00:28:42.245952 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Sep 6 00:28:42.246247 kernel: sd 0:0:1:0: [sda] Write Protect is off Sep 6 00:28:42.246502 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Sep 6 00:28:42.246901 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Sep 6 00:28:42.247198 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 6 00:28:42.247227 kernel: GPT:17805311 != 25165823 Sep 6 00:28:42.247251 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 6 00:28:42.247273 kernel: GPT:17805311 != 25165823 Sep 6 00:28:42.247293 kernel: GPT: Use GNU Parted to correct GPT errors. 
Sep 6 00:28:42.247308 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 6 00:28:42.247325 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Sep 6 00:28:42.313269 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Sep 6 00:28:42.326875 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (433) Sep 6 00:28:42.336928 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Sep 6 00:28:42.363169 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Sep 6 00:28:42.373211 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Sep 6 00:28:42.394538 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Sep 6 00:28:42.409259 systemd[1]: Starting disk-uuid.service... Sep 6 00:28:42.424360 disk-uuid[507]: Primary Header is updated. Sep 6 00:28:42.424360 disk-uuid[507]: Secondary Entries is updated. Sep 6 00:28:42.424360 disk-uuid[507]: Secondary Header is updated. Sep 6 00:28:42.454888 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 6 00:28:42.461837 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 6 00:28:42.482751 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 6 00:28:43.494501 disk-uuid[508]: The operation has completed successfully. Sep 6 00:28:43.504924 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 6 00:28:43.585549 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 6 00:28:43.591000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:43.591000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:43.585754 systemd[1]: Finished disk-uuid.service. Sep 6 00:28:43.599930 systemd[1]: Starting verity-setup.service... 
Sep 6 00:28:43.631754 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Sep 6 00:28:43.714866 systemd[1]: Found device dev-mapper-usr.device. Sep 6 00:28:43.717560 systemd[1]: Mounting sysusr-usr.mount... Sep 6 00:28:43.743304 systemd[1]: Finished verity-setup.service. Sep 6 00:28:43.754000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:43.837763 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Sep 6 00:28:43.838205 systemd[1]: Mounted sysusr-usr.mount. Sep 6 00:28:43.838782 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Sep 6 00:28:43.895068 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 6 00:28:43.895111 kernel: BTRFS info (device sda6): using free space tree Sep 6 00:28:43.895128 kernel: BTRFS info (device sda6): has skinny extents Sep 6 00:28:43.839852 systemd[1]: Starting ignition-setup.service... Sep 6 00:28:43.916893 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 6 00:28:43.863207 systemd[1]: Starting parse-ip-for-networkd.service... Sep 6 00:28:43.928494 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 6 00:28:43.945777 systemd[1]: Finished ignition-setup.service. Sep 6 00:28:43.984927 kernel: kauditd_printk_skb: 11 callbacks suppressed Sep 6 00:28:43.984983 kernel: audit: type=1130 audit(1757118523.944:22): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:43.944000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:28:43.948893 systemd[1]: Starting ignition-fetch-offline.service... Sep 6 00:28:44.037854 systemd[1]: Finished parse-ip-for-networkd.service. Sep 6 00:28:44.078104 kernel: audit: type=1130 audit(1757118524.036:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:44.078152 kernel: audit: type=1334 audit(1757118524.058:24): prog-id=9 op=LOAD Sep 6 00:28:44.036000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:44.058000 audit: BPF prog-id=9 op=LOAD Sep 6 00:28:44.061400 systemd[1]: Starting systemd-networkd.service... Sep 6 00:28:44.109797 systemd-networkd[682]: lo: Link UP Sep 6 00:28:44.109811 systemd-networkd[682]: lo: Gained carrier Sep 6 00:28:44.110994 systemd-networkd[682]: Enumeration completed Sep 6 00:28:44.111168 systemd[1]: Started systemd-networkd.service. Sep 6 00:28:44.111800 systemd-networkd[682]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 6 00:28:44.114477 systemd-networkd[682]: eth0: Link UP Sep 6 00:28:44.186900 kernel: audit: type=1130 audit(1757118524.158:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:44.158000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:28:44.114486 systemd-networkd[682]: eth0: Gained carrier Sep 6 00:28:44.124874 systemd-networkd[682]: eth0: Overlong DHCP hostname received, shortened from 'ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081.c.flatcar-212911.internal' to 'ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081' Sep 6 00:28:44.124892 systemd-networkd[682]: eth0: DHCPv4 address 10.128.0.49/32, gateway 10.128.0.1 acquired from 169.254.169.254 Sep 6 00:28:44.160127 systemd[1]: Reached target network.target. Sep 6 00:28:44.196372 systemd[1]: Starting iscsiuio.service... Sep 6 00:28:44.262060 systemd[1]: Started iscsiuio.service. Sep 6 00:28:44.295960 kernel: audit: type=1130 audit(1757118524.267:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:44.267000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:44.271191 systemd[1]: Starting iscsid.service... Sep 6 00:28:44.309939 iscsid[691]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Sep 6 00:28:44.309939 iscsid[691]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Sep 6 00:28:44.309939 iscsid[691]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Sep 6 00:28:44.309939 iscsid[691]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Sep 6 00:28:44.309939 iscsid[691]: If using hardware iscsi like qla4xxx this message can be ignored. 
Sep 6 00:28:44.309939 iscsid[691]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Sep 6 00:28:44.309939 iscsid[691]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Sep 6 00:28:44.477169 kernel: audit: type=1130 audit(1757118524.315:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:44.477220 kernel: audit: type=1130 audit(1757118524.380:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:44.315000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:44.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:44.303308 systemd[1]: Started iscsid.service. Sep 6 00:28:44.360497 ignition[632]: Ignition 2.14.0 Sep 6 00:28:44.319123 systemd[1]: Starting dracut-initqueue.service... Sep 6 00:28:44.360515 ignition[632]: Stage: fetch-offline Sep 6 00:28:44.349500 systemd[1]: Finished dracut-initqueue.service. Sep 6 00:28:44.360600 ignition[632]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 6 00:28:44.575949 kernel: audit: type=1130 audit(1757118524.543:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:44.543000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:28:44.382371 systemd[1]: Reached target remote-fs-pre.target. Sep 6 00:28:44.611928 kernel: audit: type=1130 audit(1757118524.583:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:44.583000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:44.360651 ignition[632]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Sep 6 00:28:44.430187 systemd[1]: Reached target remote-cryptsetup.target. Sep 6 00:28:44.385608 ignition[632]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Sep 6 00:28:44.463955 systemd[1]: Reached target remote-fs.target. Sep 6 00:28:44.386053 ignition[632]: parsed url from cmdline: "" Sep 6 00:28:44.495673 systemd[1]: Starting dracut-pre-mount.service... Sep 6 00:28:44.685169 kernel: audit: type=1130 audit(1757118524.654:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:44.654000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:44.386063 ignition[632]: no config URL provided Sep 6 00:28:44.524597 systemd[1]: Finished ignition-fetch-offline.service. Sep 6 00:28:44.386079 ignition[632]: reading system config file "/usr/lib/ignition/user.ign" Sep 6 00:28:44.545739 systemd[1]: Finished dracut-pre-mount.service. 
Sep 6 00:28:44.723000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:44.386098 ignition[632]: no config at "/usr/lib/ignition/user.ign" Sep 6 00:28:44.587222 systemd[1]: Starting ignition-fetch.service... Sep 6 00:28:44.386115 ignition[632]: failed to fetch config: resource requires networking Sep 6 00:28:44.627061 unknown[706]: fetched base config from "system" Sep 6 00:28:44.386328 ignition[632]: Ignition finished successfully Sep 6 00:28:44.627071 unknown[706]: fetched base config from "system" Sep 6 00:28:44.788000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:44.604554 ignition[706]: Ignition 2.14.0 Sep 6 00:28:44.627079 unknown[706]: fetched user config from "gcp" Sep 6 00:28:44.604569 ignition[706]: Stage: fetch Sep 6 00:28:44.634559 systemd[1]: Finished ignition-fetch.service. Sep 6 00:28:44.604907 ignition[706]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 6 00:28:44.658373 systemd[1]: Starting ignition-kargs.service... Sep 6 00:28:44.604992 ignition[706]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Sep 6 00:28:44.704144 systemd[1]: Finished ignition-kargs.service. Sep 6 00:28:44.615124 ignition[706]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Sep 6 00:28:44.726848 systemd[1]: Starting ignition-disks.service... Sep 6 00:28:44.615348 ignition[706]: parsed url from cmdline: "" Sep 6 00:28:44.769605 systemd[1]: Finished ignition-disks.service. Sep 6 00:28:44.615355 ignition[706]: no config URL provided Sep 6 00:28:44.790327 systemd[1]: Reached target initrd-root-device.target. 
Sep 6 00:28:44.615363 ignition[706]: reading system config file "/usr/lib/ignition/user.ign" Sep 6 00:28:44.797249 systemd[1]: Reached target local-fs-pre.target. Sep 6 00:28:44.615378 ignition[706]: no config at "/usr/lib/ignition/user.ign" Sep 6 00:28:44.823173 systemd[1]: Reached target local-fs.target. Sep 6 00:28:44.615424 ignition[706]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Sep 6 00:28:44.842095 systemd[1]: Reached target sysinit.target. Sep 6 00:28:44.621573 ignition[706]: GET result: OK Sep 6 00:28:44.852219 systemd[1]: Reached target basic.target. Sep 6 00:28:44.621767 ignition[706]: parsing config with SHA512: adba911c38b916be37d696bf2fe7e41d204813ad7c7431dc4f547971c554b805184e91ad2e44e912f9ebea8978464e92ec95503e60527e794c071a461b143e80 Sep 6 00:28:44.879682 systemd[1]: Starting systemd-fsck-root.service... Sep 6 00:28:44.628674 ignition[706]: fetch: fetch complete Sep 6 00:28:44.628685 ignition[706]: fetch: fetch passed Sep 6 00:28:44.628815 ignition[706]: Ignition finished successfully Sep 6 00:28:44.692407 ignition[712]: Ignition 2.14.0 Sep 6 00:28:44.692417 ignition[712]: Stage: kargs Sep 6 00:28:44.692557 ignition[712]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 6 00:28:44.692595 ignition[712]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Sep 6 00:28:44.701064 ignition[712]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Sep 6 00:28:44.702569 ignition[712]: kargs: kargs passed Sep 6 00:28:44.702633 ignition[712]: Ignition finished successfully Sep 6 00:28:44.740860 ignition[718]: Ignition 2.14.0 Sep 6 00:28:44.740872 ignition[718]: Stage: disks Sep 6 00:28:44.741031 ignition[718]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 6 00:28:44.741065 ignition[718]: parsing config with SHA512: 
28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Sep 6 00:28:44.750082 ignition[718]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Sep 6 00:28:44.752025 ignition[718]: disks: disks passed Sep 6 00:28:44.752096 ignition[718]: Ignition finished successfully Sep 6 00:28:44.921144 systemd-fsck[726]: ROOT: clean, 629/1628000 files, 124065/1617920 blocks Sep 6 00:28:45.066886 systemd[1]: Finished systemd-fsck-root.service. Sep 6 00:28:45.065000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:45.068395 systemd[1]: Mounting sysroot.mount... Sep 6 00:28:45.102903 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Sep 6 00:28:45.097661 systemd[1]: Mounted sysroot.mount. Sep 6 00:28:45.110337 systemd[1]: Reached target initrd-root-fs.target. Sep 6 00:28:45.129096 systemd[1]: Mounting sysroot-usr.mount... Sep 6 00:28:45.134820 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Sep 6 00:28:45.134898 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 6 00:28:45.134936 systemd[1]: Reached target ignition-diskful.target. Sep 6 00:28:45.167867 systemd[1]: Mounted sysroot-usr.mount. Sep 6 00:28:45.203453 systemd[1]: Mounting sysroot-usr-share-oem.mount... Sep 6 00:28:45.228755 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (732) Sep 6 00:28:45.230513 systemd[1]: Starting initrd-setup-root.service... 
Sep 6 00:28:45.271929 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 6 00:28:45.271971 kernel: BTRFS info (device sda6): using free space tree Sep 6 00:28:45.271997 kernel: BTRFS info (device sda6): has skinny extents Sep 6 00:28:45.272022 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 6 00:28:45.272262 initrd-setup-root[737]: cut: /sysroot/etc/passwd: No such file or directory Sep 6 00:28:45.282886 initrd-setup-root[761]: cut: /sysroot/etc/group: No such file or directory Sep 6 00:28:45.273047 systemd[1]: Mounted sysroot-usr-share-oem.mount. Sep 6 00:28:45.309874 initrd-setup-root[771]: cut: /sysroot/etc/shadow: No such file or directory Sep 6 00:28:45.321892 initrd-setup-root[779]: cut: /sysroot/etc/gshadow: No such file or directory Sep 6 00:28:45.364635 systemd[1]: Finished initrd-setup-root.service. Sep 6 00:28:45.363000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:45.366305 systemd[1]: Starting ignition-mount.service... Sep 6 00:28:45.396037 systemd[1]: Starting sysroot-boot.service... Sep 6 00:28:45.404148 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Sep 6 00:28:45.404270 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. 
Sep 6 00:28:45.431411 ignition[797]: INFO : Ignition 2.14.0 Sep 6 00:28:45.431411 ignition[797]: INFO : Stage: mount Sep 6 00:28:45.431411 ignition[797]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 6 00:28:45.431411 ignition[797]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Sep 6 00:28:45.465000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:45.443043 systemd[1]: Finished ignition-mount.service. Sep 6 00:28:45.494000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:45.503006 ignition[797]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Sep 6 00:28:45.503006 ignition[797]: INFO : mount: mount passed Sep 6 00:28:45.503006 ignition[797]: INFO : Ignition finished successfully Sep 6 00:28:45.469199 systemd[1]: Finished sysroot-boot.service. Sep 6 00:28:45.566807 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (807) Sep 6 00:28:45.566862 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 6 00:28:45.566896 kernel: BTRFS info (device sda6): using free space tree Sep 6 00:28:45.497336 systemd[1]: Starting ignition-files.service... Sep 6 00:28:45.590889 kernel: BTRFS info (device sda6): has skinny extents Sep 6 00:28:45.590928 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 6 00:28:45.532851 systemd[1]: Mounting sysroot-usr-share-oem.mount... Sep 6 00:28:45.594929 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Sep 6 00:28:45.629102 ignition[826]: INFO : Ignition 2.14.0
Sep 6 00:28:45.629102 ignition[826]: INFO : Stage: files
Sep 6 00:28:45.643930 ignition[826]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Sep 6 00:28:45.643930 ignition[826]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6
Sep 6 00:28:45.643930 ignition[826]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Sep 6 00:28:45.643930 ignition[826]: DEBUG : files: compiled without relabeling support, skipping
Sep 6 00:28:45.696886 ignition[826]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 6 00:28:45.696886 ignition[826]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 6 00:28:45.696886 ignition[826]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 6 00:28:45.696886 ignition[826]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 6 00:28:45.696886 ignition[826]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 6 00:28:45.696886 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/hosts"
Sep 6 00:28:45.696886 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(3): oem config not found in "/usr/share/oem", looking on oem partition
Sep 6 00:28:45.696886 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2083876685"
Sep 6 00:28:45.696886 ignition[826]: CRITICAL : files: createFilesystemsFiles: createFiles: op(3): op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2083876685": device or resource busy
Sep 6 00:28:45.696886 ignition[826]: ERROR : files: createFilesystemsFiles: createFiles: op(3): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2083876685", trying btrfs: device or resource busy
Sep 6 00:28:45.696886 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2083876685"
Sep 6 00:28:45.696886 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2083876685"
Sep 6 00:28:45.696886 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(6): [started] unmounting "/mnt/oem2083876685"
Sep 6 00:28:45.696886 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(6): [finished] unmounting "/mnt/oem2083876685"
Sep 6 00:28:45.696886 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/hosts"
Sep 6 00:28:45.696886 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Sep 6 00:28:45.696886 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Sep 6 00:28:45.655832 unknown[826]: wrote ssh authorized keys file for user: core
Sep 6 00:28:45.969885 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK
Sep 6 00:28:45.780932 systemd-networkd[682]: eth0: Gained IPv6LL
Sep 6 00:28:46.569187 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Sep 6 00:28:46.585875 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 6 00:28:46.585875 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Sep 6 00:28:46.786412 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK
Sep 6 00:28:46.951609 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 6 00:28:46.951609 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/profile.d/google-cloud-sdk.sh"
Sep 6 00:28:46.984900 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(9): oem config not found in "/usr/share/oem", looking on oem partition
Sep 6 00:28:46.984900 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(9): op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem344055449"
Sep 6 00:28:46.984900 ignition[826]: CRITICAL : files: createFilesystemsFiles: createFiles: op(9): op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem344055449": device or resource busy
Sep 6 00:28:46.984900 ignition[826]: ERROR : files: createFilesystemsFiles: createFiles: op(9): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem344055449", trying btrfs: device or resource busy
Sep 6 00:28:46.984900 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(9): op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem344055449"
Sep 6 00:28:46.984900 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(9): op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem344055449"
Sep 6 00:28:46.984900 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(9): op(c): [started] unmounting "/mnt/oem344055449"
Sep 6 00:28:46.984900 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(9): op(c): [finished] unmounting "/mnt/oem344055449"
Sep 6 00:28:46.984900 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/profile.d/google-cloud-sdk.sh"
Sep 6 00:28:46.984900 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/install.sh"
Sep 6 00:28:46.984900 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/install.sh"
Sep 6 00:28:46.984900 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 6 00:28:46.984900 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 6 00:28:46.984900 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 6 00:28:46.984900 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 6 00:28:46.977306 systemd[1]: mnt-oem344055449.mount: Deactivated successfully.
Sep 6 00:28:47.247923 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 6 00:28:47.247923 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 6 00:28:47.247923 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(11): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 6 00:28:47.247923 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(11): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 6 00:28:47.247923 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(12): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 6 00:28:47.247923 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(12): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 6 00:28:47.247923 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(13): [started] writing file "/sysroot/etc/systemd/system/oem-gce-enable-oslogin.service"
Sep 6 00:28:47.247923 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(13): oem config not found in "/usr/share/oem", looking on oem partition
Sep 6 00:28:47.247923 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(14): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2418352925"
Sep 6 00:28:47.247923 ignition[826]: CRITICAL : files: createFilesystemsFiles: createFiles: op(13): op(14): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2418352925": device or resource busy
Sep 6 00:28:47.247923 ignition[826]: ERROR : files: createFilesystemsFiles: createFiles: op(13): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2418352925", trying btrfs: device or resource busy
Sep 6 00:28:47.247923 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(15): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2418352925"
Sep 6 00:28:47.247923 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(15): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2418352925"
Sep 6 00:28:47.247923 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(16): [started] unmounting "/mnt/oem2418352925"
Sep 6 00:28:46.998100 systemd[1]: mnt-oem2418352925.mount: Deactivated successfully.
Sep 6 00:28:47.511941 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(16): [finished] unmounting "/mnt/oem2418352925"
Sep 6 00:28:47.511941 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(13): [finished] writing file "/sysroot/etc/systemd/system/oem-gce-enable-oslogin.service"
Sep 6 00:28:47.511941 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(17): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 6 00:28:47.511941 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(17): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Sep 6 00:28:47.511941 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(17): GET result: OK
Sep 6 00:28:47.787429 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(17): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 6 00:28:47.787429 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(18): [started] writing file "/sysroot/etc/systemd/system/oem-gce.service"
Sep 6 00:28:47.824954 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(18): oem config not found in "/usr/share/oem", looking on oem partition
Sep 6 00:28:47.824954 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(18): op(19): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1957530868"
Sep 6 00:28:47.824954 ignition[826]: CRITICAL : files: createFilesystemsFiles: createFiles: op(18): op(19): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1957530868": device or resource busy
Sep 6 00:28:47.824954 ignition[826]: ERROR : files: createFilesystemsFiles: createFiles: op(18): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1957530868", trying btrfs: device or resource busy
Sep 6 00:28:47.824954 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(18): op(1a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1957530868"
Sep 6 00:28:47.824954 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(18): op(1a): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1957530868"
Sep 6 00:28:47.824954 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(18): op(1b): [started] unmounting "/mnt/oem1957530868"
Sep 6 00:28:47.824954 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(18): op(1b): [finished] unmounting "/mnt/oem1957530868"
Sep 6 00:28:47.824954 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(18): [finished] writing file "/sysroot/etc/systemd/system/oem-gce.service"
Sep 6 00:28:47.824954 ignition[826]: INFO : files: op(1c): [started] processing unit "coreos-metadata-sshkeys@.service"
Sep 6 00:28:47.824954 ignition[826]: INFO : files: op(1c): [finished] processing unit "coreos-metadata-sshkeys@.service"
Sep 6 00:28:47.824954 ignition[826]: INFO : files: op(1d): [started] processing unit "oem-gce.service"
Sep 6 00:28:47.824954 ignition[826]: INFO : files: op(1d): [finished] processing unit "oem-gce.service"
Sep 6 00:28:47.824954 ignition[826]: INFO : files: op(1e): [started] processing unit "oem-gce-enable-oslogin.service"
Sep 6 00:28:47.824954 ignition[826]: INFO : files: op(1e): [finished] processing unit "oem-gce-enable-oslogin.service"
Sep 6 00:28:47.824954 ignition[826]: INFO : files: op(1f): [started] processing unit "prepare-helm.service"
Sep 6 00:28:47.824954 ignition[826]: INFO : files: op(1f): op(20): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 6 00:28:47.823000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:28:47.907000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:28:47.938000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:28:47.938000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:28:48.044000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:28:48.044000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:28:48.135000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:28:47.811046 systemd[1]: Finished ignition-files.service.
Sep 6 00:28:48.197084 ignition[826]: INFO : files: op(1f): op(20): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 6 00:28:48.197084 ignition[826]: INFO : files: op(1f): [finished] processing unit "prepare-helm.service"
Sep 6 00:28:48.197084 ignition[826]: INFO : files: op(21): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Sep 6 00:28:48.197084 ignition[826]: INFO : files: op(21): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Sep 6 00:28:48.197084 ignition[826]: INFO : files: op(22): [started] setting preset to enabled for "oem-gce.service"
Sep 6 00:28:48.197084 ignition[826]: INFO : files: op(22): [finished] setting preset to enabled for "oem-gce.service"
Sep 6 00:28:48.197084 ignition[826]: INFO : files: op(23): [started] setting preset to enabled for "oem-gce-enable-oslogin.service"
Sep 6 00:28:48.197084 ignition[826]: INFO : files: op(23): [finished] setting preset to enabled for "oem-gce-enable-oslogin.service"
Sep 6 00:28:48.197084 ignition[826]: INFO : files: op(24): [started] setting preset to enabled for "prepare-helm.service"
Sep 6 00:28:48.197084 ignition[826]: INFO : files: op(24): [finished] setting preset to enabled for "prepare-helm.service"
Sep 6 00:28:48.197084 ignition[826]: INFO : files: createResultFile: createFiles: op(25): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 6 00:28:48.197084 ignition[826]: INFO : files: createResultFile: createFiles: op(25): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 6 00:28:48.197084 ignition[826]: INFO : files: files passed
Sep 6 00:28:48.197084 ignition[826]: INFO : Ignition finished successfully
Sep 6 00:28:48.240000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:28:47.833615 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Sep 6 00:28:47.848248 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Sep 6 00:28:48.488921 initrd-setup-root-after-ignition[849]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 6 00:28:47.849693 systemd[1]: Starting ignition-quench.service...
Sep 6 00:28:47.879352 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Sep 6 00:28:48.532000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:28:47.909392 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 6 00:28:47.909530 systemd[1]: Finished ignition-quench.service.
Sep 6 00:28:48.567000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:28:47.940251 systemd[1]: Reached target ignition-complete.target.
Sep 6 00:28:48.584000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:28:47.982398 systemd[1]: Starting initrd-parse-etc.service...
Sep 6 00:28:48.601000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:28:48.020415 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 6 00:28:48.020566 systemd[1]: Finished initrd-parse-etc.service.
Sep 6 00:28:48.628903 ignition[864]: INFO : Ignition 2.14.0
Sep 6 00:28:48.628903 ignition[864]: INFO : Stage: umount
Sep 6 00:28:48.628903 ignition[864]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Sep 6 00:28:48.628903 ignition[864]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6
Sep 6 00:28:48.628903 ignition[864]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Sep 6 00:28:48.628903 ignition[864]: INFO : umount: umount passed
Sep 6 00:28:48.628903 ignition[864]: INFO : Ignition finished successfully
Sep 6 00:28:48.046234 systemd[1]: Reached target initrd-fs.target.
Sep 6 00:28:48.722000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:28:48.739139 iscsid[691]: iscsid shutting down.
Sep 6 00:28:48.746000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:28:48.066987 systemd[1]: Reached target initrd.target.
Sep 6 00:28:48.761000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:28:48.085096 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Sep 6 00:28:48.777000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:28:48.086529 systemd[1]: Starting dracut-pre-pivot.service...
Sep 6 00:28:48.797000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:28:48.115369 systemd[1]: Finished dracut-pre-pivot.service.
Sep 6 00:28:48.814000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:28:48.138439 systemd[1]: Starting initrd-cleanup.service...
Sep 6 00:28:48.830000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:28:48.159243 systemd[1]: Stopped target nss-lookup.target.
Sep 6 00:28:48.845000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:28:48.190354 systemd[1]: Stopped target remote-cryptsetup.target.
Sep 6 00:28:48.852000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:28:48.206363 systemd[1]: Stopped target timers.target.
Sep 6 00:28:48.224522 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 6 00:28:48.224802 systemd[1]: Stopped dracut-pre-pivot.service.
Sep 6 00:28:48.242683 systemd[1]: Stopped target initrd.target.
Sep 6 00:28:48.263477 systemd[1]: Stopped target basic.target.
Sep 6 00:28:48.301452 systemd[1]: Stopped target ignition-complete.target.
Sep 6 00:28:48.945000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:28:48.334482 systemd[1]: Stopped target ignition-diskful.target.
Sep 6 00:28:48.996948 kernel: kauditd_printk_skb: 28 callbacks suppressed
Sep 6 00:28:48.997015 kernel: audit: type=1131 audit(1757118528.961:60): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:28:48.961000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:28:48.368335 systemd[1]: Stopped target initrd-root-device.target.
Sep 6 00:28:48.381406 systemd[1]: Stopped target remote-fs.target.
Sep 6 00:28:49.043941 kernel: audit: type=1131 audit(1757118529.010:61): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:28:49.010000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:28:48.424452 systemd[1]: Stopped target remote-fs-pre.target.
Sep 6 00:28:49.100939 kernel: audit: type=1130 audit(1757118529.050:62): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:28:49.100998 kernel: audit: type=1131 audit(1757118529.050:63): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:28:49.050000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:28:49.050000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:28:48.431409 systemd[1]: Stopped target sysinit.target.
Sep 6 00:28:48.453351 systemd[1]: Stopped target local-fs.target.
Sep 6 00:28:48.472328 systemd[1]: Stopped target local-fs-pre.target.
Sep 6 00:28:48.497274 systemd[1]: Stopped target swap.target.
Sep 6 00:28:48.518252 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 6 00:28:48.518483 systemd[1]: Stopped dracut-pre-mount.service.
Sep 6 00:28:48.534485 systemd[1]: Stopped target cryptsetup.target.
Sep 6 00:28:48.551239 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 6 00:28:48.551455 systemd[1]: Stopped dracut-initqueue.service.
Sep 6 00:28:49.223959 kernel: audit: type=1131 audit(1757118529.194:64): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:28:49.194000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:28:48.569545 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 6 00:28:49.270926 kernel: audit: type=1131 audit(1757118529.231:65): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:28:49.270997 kernel: audit: type=1334 audit(1757118529.253:66): prog-id=6 op=UNLOAD
Sep 6 00:28:49.231000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:28:49.253000 audit: BPF prog-id=6 op=UNLOAD
Sep 6 00:28:48.569810 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Sep 6 00:28:48.586227 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 6 00:28:48.586479 systemd[1]: Stopped ignition-files.service.
Sep 6 00:28:49.342945 kernel: audit: type=1131 audit(1757118529.313:67): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:28:49.313000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:28:48.605067 systemd[1]: Stopping ignition-mount.service...
Sep 6 00:28:49.378956 kernel: audit: type=1131 audit(1757118529.350:68): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:28:49.350000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:28:48.650606 systemd[1]: Stopping iscsid.service...
Sep 6 00:28:49.415964 kernel: audit: type=1131 audit(1757118529.386:69): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:28:49.386000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:28:48.686531 systemd[1]: Stopping sysroot-boot.service...
Sep 6 00:28:48.708009 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 6 00:28:48.708417 systemd[1]: Stopped systemd-udev-trigger.service.
Sep 6 00:28:49.444000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:28:48.724524 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 6 00:28:48.724749 systemd[1]: Stopped dracut-pre-trigger.service.
Sep 6 00:28:48.752876 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 6 00:28:49.495000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:28:48.754213 systemd[1]: iscsid.service: Deactivated successfully.
Sep 6 00:28:49.512000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:28:48.754362 systemd[1]: Stopped iscsid.service.
Sep 6 00:28:49.530000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:28:48.763955 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 6 00:28:48.764098 systemd[1]: Stopped ignition-mount.service.
Sep 6 00:28:48.779879 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 6 00:28:48.780070 systemd[1]: Stopped sysroot-boot.service.
Sep 6 00:28:49.577000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:28:48.799454 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 6 00:28:49.593000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:28:48.799656 systemd[1]: Stopped ignition-disks.service.
Sep 6 00:28:49.609000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:28:48.816082 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 6 00:28:49.624000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:28:48.816188 systemd[1]: Stopped ignition-kargs.service.
Sep 6 00:28:49.641000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:28:49.641000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:28:48.832216 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 6 00:28:48.832288 systemd[1]: Stopped ignition-fetch.service.
Sep 6 00:28:48.847232 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 6 00:28:48.847342 systemd[1]: Stopped ignition-fetch-offline.service.
Sep 6 00:28:48.854315 systemd[1]: Stopped target paths.target.
Sep 6 00:28:48.874904 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 6 00:28:49.721928 systemd-journald[189]: Received SIGTERM from PID 1 (n/a).
Sep 6 00:28:48.876846 systemd[1]: Stopped systemd-ask-password-console.path.
Sep 6 00:28:48.882211 systemd[1]: Stopped target slices.target.
Sep 6 00:28:48.898255 systemd[1]: Stopped target sockets.target.
Sep 6 00:28:48.911255 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 6 00:28:48.911321 systemd[1]: Closed iscsid.socket.
Sep 6 00:28:48.930962 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 6 00:28:48.931091 systemd[1]: Stopped ignition-setup.service.
Sep 6 00:28:48.947032 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 6 00:28:48.947147 systemd[1]: Stopped initrd-setup-root.service.
Sep 6 00:28:48.990544 systemd[1]: Stopping iscsiuio.service...
Sep 6 00:28:49.004494 systemd[1]: iscsiuio.service: Deactivated successfully.
Sep 6 00:28:49.004644 systemd[1]: Stopped iscsiuio.service.
Sep 6 00:28:49.012690 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 6 00:28:49.012908 systemd[1]: Finished initrd-cleanup.service.
Sep 6 00:28:49.053918 systemd[1]: Stopped target network.target.
Sep 6 00:28:49.109149 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 6 00:28:49.109219 systemd[1]: Closed iscsiuio.socket.
Sep 6 00:28:49.130443 systemd[1]: Stopping systemd-networkd.service...
Sep 6 00:28:49.133841 systemd-networkd[682]: eth0: DHCPv6 lease lost
Sep 6 00:28:49.144208 systemd[1]: Stopping systemd-resolved.service...
Sep 6 00:28:49.173472 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 6 00:28:49.173622 systemd[1]: Stopped systemd-resolved.service.
Sep 6 00:28:49.218426 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 6 00:28:49.218628 systemd[1]: Stopped systemd-networkd.service.
Sep 6 00:28:49.233591 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 6 00:28:49.233669 systemd[1]: Closed systemd-networkd.socket.
Sep 6 00:28:49.281217 systemd[1]: Stopping network-cleanup.service...
Sep 6 00:28:49.295970 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 6 00:28:49.296133 systemd[1]: Stopped parse-ip-for-networkd.service. Sep 6 00:28:49.315137 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 6 00:28:49.315257 systemd[1]: Stopped systemd-sysctl.service. Sep 6 00:28:49.373804 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 6 00:28:49.373882 systemd[1]: Stopped systemd-modules-load.service. Sep 6 00:28:49.388334 systemd[1]: Stopping systemd-udevd.service... Sep 6 00:28:49.425017 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 6 00:28:49.425762 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 6 00:28:49.426137 systemd[1]: Stopped systemd-udevd.service. Sep 6 00:28:49.448039 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 6 00:28:49.448141 systemd[1]: Closed systemd-udevd-control.socket. Sep 6 00:28:49.463067 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 6 00:28:49.463166 systemd[1]: Closed systemd-udevd-kernel.socket. Sep 6 00:28:49.480949 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 6 00:28:49.481111 systemd[1]: Stopped dracut-pre-udev.service. Sep 6 00:28:49.497070 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 6 00:28:49.497176 systemd[1]: Stopped dracut-cmdline.service. Sep 6 00:28:49.514029 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 6 00:28:49.514149 systemd[1]: Stopped dracut-cmdline-ask.service. Sep 6 00:28:49.533284 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Sep 6 00:28:49.556092 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 6 00:28:49.556196 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Sep 6 00:28:49.579322 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 6 00:28:49.579395 systemd[1]: Stopped kmod-static-nodes.service. Sep 6 00:28:49.595161 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Sep 6 00:28:49.595246 systemd[1]: Stopped systemd-vconsole-setup.service. Sep 6 00:28:49.612853 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Sep 6 00:28:49.613608 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 6 00:28:49.613760 systemd[1]: Stopped network-cleanup.service. Sep 6 00:28:49.626490 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 6 00:28:49.626626 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Sep 6 00:28:49.643348 systemd[1]: Reached target initrd-switch-root.target. Sep 6 00:28:49.659114 systemd[1]: Starting initrd-switch-root.service... Sep 6 00:28:49.682981 systemd[1]: Switching root. Sep 6 00:28:49.734481 systemd-journald[189]: Journal stopped Sep 6 00:28:54.755407 kernel: SELinux: Class mctp_socket not defined in policy. Sep 6 00:28:54.755550 kernel: SELinux: Class anon_inode not defined in policy. Sep 6 00:28:54.755602 kernel: SELinux: the above unknown classes and permissions will be allowed Sep 6 00:28:54.755641 kernel: SELinux: policy capability network_peer_controls=1 Sep 6 00:28:54.755672 kernel: SELinux: policy capability open_perms=1 Sep 6 00:28:54.755735 kernel: SELinux: policy capability extended_socket_class=1 Sep 6 00:28:54.755778 kernel: SELinux: policy capability always_check_network=0 Sep 6 00:28:54.755824 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 6 00:28:54.755866 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 6 00:28:54.755903 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 6 00:28:54.755943 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 6 00:28:54.755978 systemd[1]: Successfully loaded SELinux policy in 114.948ms. Sep 6 00:28:54.756019 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.705ms. 
Sep 6 00:28:54.756055 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Sep 6 00:28:54.756089 systemd[1]: Detected virtualization kvm. Sep 6 00:28:54.756124 systemd[1]: Detected architecture x86-64. Sep 6 00:28:54.756163 systemd[1]: Detected first boot. Sep 6 00:28:54.756195 systemd[1]: Initializing machine ID from VM UUID. Sep 6 00:28:54.756229 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Sep 6 00:28:54.756264 systemd[1]: Populated /etc with preset unit settings. Sep 6 00:28:54.756300 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 6 00:28:54.756343 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 6 00:28:54.756390 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 6 00:28:54.756430 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 6 00:28:54.756462 systemd[1]: Stopped initrd-switch-root.service. Sep 6 00:28:54.756496 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 6 00:28:54.756531 systemd[1]: Created slice system-addon\x2dconfig.slice. Sep 6 00:28:54.756567 systemd[1]: Created slice system-addon\x2drun.slice. Sep 6 00:28:54.756602 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Sep 6 00:28:54.756636 systemd[1]: Created slice system-getty.slice. 
Sep 6 00:28:54.756670 systemd[1]: Created slice system-modprobe.slice. Sep 6 00:28:54.756751 systemd[1]: Created slice system-serial\x2dgetty.slice. Sep 6 00:28:54.756787 systemd[1]: Created slice system-system\x2dcloudinit.slice. Sep 6 00:28:54.756820 systemd[1]: Created slice system-systemd\x2dfsck.slice. Sep 6 00:28:54.756855 systemd[1]: Created slice user.slice. Sep 6 00:28:54.756889 systemd[1]: Started systemd-ask-password-console.path. Sep 6 00:28:54.756924 systemd[1]: Started systemd-ask-password-wall.path. Sep 6 00:28:54.756959 systemd[1]: Set up automount boot.automount. Sep 6 00:28:54.756994 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Sep 6 00:28:54.757028 systemd[1]: Stopped target initrd-switch-root.target. Sep 6 00:28:54.757066 systemd[1]: Stopped target initrd-fs.target. Sep 6 00:28:54.757101 systemd[1]: Stopped target initrd-root-fs.target. Sep 6 00:28:54.757135 systemd[1]: Reached target integritysetup.target. Sep 6 00:28:54.757170 systemd[1]: Reached target remote-cryptsetup.target. Sep 6 00:28:54.757205 systemd[1]: Reached target remote-fs.target. Sep 6 00:28:54.757239 systemd[1]: Reached target slices.target. Sep 6 00:28:54.757273 systemd[1]: Reached target swap.target. Sep 6 00:28:54.757307 systemd[1]: Reached target torcx.target. Sep 6 00:28:54.757344 systemd[1]: Reached target veritysetup.target. Sep 6 00:28:54.757382 systemd[1]: Listening on systemd-coredump.socket. Sep 6 00:28:54.757415 systemd[1]: Listening on systemd-initctl.socket. Sep 6 00:28:54.757448 systemd[1]: Listening on systemd-networkd.socket. Sep 6 00:28:54.757485 systemd[1]: Listening on systemd-udevd-control.socket. Sep 6 00:28:54.757518 systemd[1]: Listening on systemd-udevd-kernel.socket. Sep 6 00:28:54.757553 systemd[1]: Listening on systemd-userdbd.socket. Sep 6 00:28:54.757587 systemd[1]: Mounting dev-hugepages.mount... Sep 6 00:28:54.757621 systemd[1]: Mounting dev-mqueue.mount... Sep 6 00:28:54.757659 systemd[1]: Mounting media.mount... 
Sep 6 00:28:54.757914 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 6 00:28:54.757963 systemd[1]: Mounting sys-kernel-debug.mount... Sep 6 00:28:54.757998 systemd[1]: Mounting sys-kernel-tracing.mount... Sep 6 00:28:54.758032 systemd[1]: Mounting tmp.mount... Sep 6 00:28:54.758067 systemd[1]: Starting flatcar-tmpfiles.service... Sep 6 00:28:54.758102 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 6 00:28:54.758137 systemd[1]: Starting kmod-static-nodes.service... Sep 6 00:28:54.758172 systemd[1]: Starting modprobe@configfs.service... Sep 6 00:28:54.758214 systemd[1]: Starting modprobe@dm_mod.service... Sep 6 00:28:54.758249 systemd[1]: Starting modprobe@drm.service... Sep 6 00:28:54.758284 systemd[1]: Starting modprobe@efi_pstore.service... Sep 6 00:28:54.758319 systemd[1]: Starting modprobe@fuse.service... Sep 6 00:28:54.758352 systemd[1]: Starting modprobe@loop.service... Sep 6 00:28:54.758392 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 6 00:28:54.758427 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 6 00:28:54.758457 kernel: fuse: init (API version 7.34) Sep 6 00:28:54.758491 systemd[1]: Stopped systemd-fsck-root.service. Sep 6 00:28:54.758529 kernel: loop: module loaded Sep 6 00:28:54.758562 kernel: kauditd_printk_skb: 37 callbacks suppressed Sep 6 00:28:54.758595 kernel: audit: type=1131 audit(1757118534.535:100): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:54.758626 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 6 00:28:54.758660 systemd[1]: Stopped systemd-fsck-usr.service. 
Sep 6 00:28:54.758721 kernel: audit: type=1131 audit(1757118534.585:101): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:54.758756 systemd[1]: Stopped systemd-journald.service. Sep 6 00:28:54.758790 kernel: audit: type=1130 audit(1757118534.622:102): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:54.758829 kernel: audit: type=1131 audit(1757118534.622:103): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:54.758861 kernel: audit: type=1334 audit(1757118534.644:104): prog-id=15 op=LOAD Sep 6 00:28:54.758891 kernel: audit: type=1334 audit(1757118534.673:105): prog-id=16 op=LOAD Sep 6 00:28:54.758924 kernel: audit: type=1334 audit(1757118534.681:106): prog-id=17 op=LOAD Sep 6 00:28:54.758955 kernel: audit: type=1334 audit(1757118534.688:107): prog-id=13 op=UNLOAD Sep 6 00:28:54.758988 systemd[1]: Starting systemd-journald.service... Sep 6 00:28:54.759021 kernel: audit: type=1334 audit(1757118534.688:108): prog-id=14 op=UNLOAD Sep 6 00:28:54.759054 systemd[1]: Starting systemd-modules-load.service... Sep 6 00:28:54.759088 systemd[1]: Starting systemd-network-generator.service... Sep 6 00:28:54.759128 kernel: audit: type=1305 audit(1757118534.750:109): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Sep 6 00:28:54.759169 systemd-journald[988]: Journal started Sep 6 00:28:54.759307 systemd-journald[988]: Runtime Journal (/run/log/journal/36ec0532a61e970aee6899ab1b9bc662) is 8.0M, max 148.8M, 140.8M free. 
Sep 6 00:28:49.733000 audit: BPF prog-id=9 op=UNLOAD Sep 6 00:28:50.023000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 6 00:28:50.171000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Sep 6 00:28:50.171000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Sep 6 00:28:50.171000 audit: BPF prog-id=10 op=LOAD Sep 6 00:28:50.171000 audit: BPF prog-id=10 op=UNLOAD Sep 6 00:28:50.171000 audit: BPF prog-id=11 op=LOAD Sep 6 00:28:50.171000 audit: BPF prog-id=11 op=UNLOAD Sep 6 00:28:50.349000 audit[897]: AVC avc: denied { associate } for pid=897 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Sep 6 00:28:50.349000 audit[897]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001878cc a1=c00002ae40 a2=c000029100 a3=32 items=0 ppid=880 pid=897 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:28:50.349000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Sep 6 00:28:50.359000 audit[897]: AVC avc: denied { associate } for pid=897 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Sep 6 
00:28:50.359000 audit[897]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001879a5 a2=1ed a3=0 items=2 ppid=880 pid=897 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:28:50.359000 audit: CWD cwd="/" Sep 6 00:28:50.359000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:50.359000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:50.359000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Sep 6 00:28:53.768000 audit: BPF prog-id=12 op=LOAD Sep 6 00:28:53.768000 audit: BPF prog-id=3 op=UNLOAD Sep 6 00:28:53.768000 audit: BPF prog-id=13 op=LOAD Sep 6 00:28:53.768000 audit: BPF prog-id=14 op=LOAD Sep 6 00:28:53.768000 audit: BPF prog-id=4 op=UNLOAD Sep 6 00:28:53.768000 audit: BPF prog-id=5 op=UNLOAD Sep 6 00:28:53.769000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:53.790000 audit: BPF prog-id=12 op=UNLOAD Sep 6 00:28:53.793000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:28:53.793000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:54.535000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:54.585000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:54.622000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:54.622000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:28:54.644000 audit: BPF prog-id=15 op=LOAD Sep 6 00:28:54.673000 audit: BPF prog-id=16 op=LOAD Sep 6 00:28:54.681000 audit: BPF prog-id=17 op=LOAD Sep 6 00:28:54.688000 audit: BPF prog-id=13 op=UNLOAD Sep 6 00:28:54.688000 audit: BPF prog-id=14 op=UNLOAD Sep 6 00:28:54.750000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Sep 6 00:28:50.344268 /usr/lib/systemd/system-generators/torcx-generator[897]: time="2025-09-06T00:28:50Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 6 00:28:53.766483 systemd[1]: Queued start job for default target multi-user.target. Sep 6 00:28:50.345637 /usr/lib/systemd/system-generators/torcx-generator[897]: time="2025-09-06T00:28:50Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Sep 6 00:28:53.766501 systemd[1]: Unnecessary job was removed for dev-sda6.device. Sep 6 00:28:50.345682 /usr/lib/systemd/system-generators/torcx-generator[897]: time="2025-09-06T00:28:50Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Sep 6 00:28:53.771204 systemd[1]: systemd-journald.service: Deactivated successfully. 
Sep 6 00:28:50.345777 /usr/lib/systemd/system-generators/torcx-generator[897]: time="2025-09-06T00:28:50Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Sep 6 00:28:50.345800 /usr/lib/systemd/system-generators/torcx-generator[897]: time="2025-09-06T00:28:50Z" level=debug msg="skipped missing lower profile" missing profile=oem Sep 6 00:28:50.345870 /usr/lib/systemd/system-generators/torcx-generator[897]: time="2025-09-06T00:28:50Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Sep 6 00:28:50.345900 /usr/lib/systemd/system-generators/torcx-generator[897]: time="2025-09-06T00:28:50Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Sep 6 00:28:50.346292 /usr/lib/systemd/system-generators/torcx-generator[897]: time="2025-09-06T00:28:50Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Sep 6 00:28:50.346383 /usr/lib/systemd/system-generators/torcx-generator[897]: time="2025-09-06T00:28:50Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Sep 6 00:28:50.346411 /usr/lib/systemd/system-generators/torcx-generator[897]: time="2025-09-06T00:28:50Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Sep 6 00:28:50.349537 /usr/lib/systemd/system-generators/torcx-generator[897]: time="2025-09-06T00:28:50Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Sep 6 00:28:50.349623 /usr/lib/systemd/system-generators/torcx-generator[897]: time="2025-09-06T00:28:50Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Sep 6 00:28:50.349660 /usr/lib/systemd/system-generators/torcx-generator[897]: time="2025-09-06T00:28:50Z" 
level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.8: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.8 Sep 6 00:28:50.349691 /usr/lib/systemd/system-generators/torcx-generator[897]: time="2025-09-06T00:28:50Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Sep 6 00:28:50.349745 /usr/lib/systemd/system-generators/torcx-generator[897]: time="2025-09-06T00:28:50Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.8: no such file or directory" path=/var/lib/torcx/store/3510.3.8 Sep 6 00:28:50.349774 /usr/lib/systemd/system-generators/torcx-generator[897]: time="2025-09-06T00:28:50Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Sep 6 00:28:53.061453 /usr/lib/systemd/system-generators/torcx-generator[897]: time="2025-09-06T00:28:53Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 6 00:28:53.061826 /usr/lib/systemd/system-generators/torcx-generator[897]: time="2025-09-06T00:28:53Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 6 00:28:53.062003 /usr/lib/systemd/system-generators/torcx-generator[897]: time="2025-09-06T00:28:53Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 6 00:28:53.062275 /usr/lib/systemd/system-generators/torcx-generator[897]: time="2025-09-06T00:28:53Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service 
/lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 6 00:28:53.062347 /usr/lib/systemd/system-generators/torcx-generator[897]: time="2025-09-06T00:28:53Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Sep 6 00:28:53.062429 /usr/lib/systemd/system-generators/torcx-generator[897]: time="2025-09-06T00:28:53Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Sep 6 00:28:54.750000 audit[988]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7ffd675b2570 a2=4000 a3=7ffd675b260c items=0 ppid=1 pid=988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:28:54.750000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Sep 6 00:28:54.779765 systemd[1]: Starting systemd-remount-fs.service... Sep 6 00:28:54.796750 systemd[1]: Starting systemd-udev-trigger.service... Sep 6 00:28:54.815509 systemd[1]: verity-setup.service: Deactivated successfully. Sep 6 00:28:54.815839 systemd[1]: Stopped verity-setup.service. Sep 6 00:28:54.821000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:54.835980 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 6 00:28:54.846799 systemd[1]: Started systemd-journald.service. 
Sep 6 00:28:54.854000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:54.857719 systemd[1]: Mounted dev-hugepages.mount. Sep 6 00:28:54.866190 systemd[1]: Mounted dev-mqueue.mount. Sep 6 00:28:54.873127 systemd[1]: Mounted media.mount. Sep 6 00:28:54.880153 systemd[1]: Mounted sys-kernel-debug.mount. Sep 6 00:28:54.889154 systemd[1]: Mounted sys-kernel-tracing.mount. Sep 6 00:28:54.898147 systemd[1]: Mounted tmp.mount. Sep 6 00:28:54.905294 systemd[1]: Finished flatcar-tmpfiles.service. Sep 6 00:28:54.912000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:54.914554 systemd[1]: Finished kmod-static-nodes.service. Sep 6 00:28:54.921000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:54.923376 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 6 00:28:54.923629 systemd[1]: Finished modprobe@configfs.service. Sep 6 00:28:54.930000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:54.930000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:54.932432 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Sep 6 00:28:54.932721 systemd[1]: Finished modprobe@dm_mod.service. Sep 6 00:28:54.940000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:54.940000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:54.942472 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 6 00:28:54.942799 systemd[1]: Finished modprobe@drm.service. Sep 6 00:28:54.949000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:54.949000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:54.951476 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 6 00:28:54.951759 systemd[1]: Finished modprobe@efi_pstore.service. Sep 6 00:28:54.959000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:54.959000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:54.961497 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 6 00:28:54.961808 systemd[1]: Finished modprobe@fuse.service. 
Sep 6 00:28:54.968000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:54.968000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:54.970465 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 6 00:28:54.970732 systemd[1]: Finished modprobe@loop.service. Sep 6 00:28:54.978000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:54.978000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:54.980506 systemd[1]: Finished systemd-modules-load.service. Sep 6 00:28:54.987000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:54.989517 systemd[1]: Finished systemd-network-generator.service. Sep 6 00:28:54.996000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:54.998468 systemd[1]: Finished systemd-remount-fs.service. 
Sep 6 00:28:55.005000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:55.007353 systemd[1]: Finished systemd-udev-trigger.service. Sep 6 00:28:55.014000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:55.016786 systemd[1]: Reached target network-pre.target. Sep 6 00:28:55.027629 systemd[1]: Mounting sys-fs-fuse-connections.mount... Sep 6 00:28:55.037630 systemd[1]: Mounting sys-kernel-config.mount... Sep 6 00:28:55.044863 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 6 00:28:55.047880 systemd[1]: Starting systemd-hwdb-update.service... Sep 6 00:28:55.057202 systemd[1]: Starting systemd-journal-flush.service... Sep 6 00:28:55.065946 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 6 00:28:55.068213 systemd[1]: Starting systemd-random-seed.service... Sep 6 00:28:55.075955 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 6 00:28:55.078154 systemd[1]: Starting systemd-sysctl.service... Sep 6 00:28:55.080514 systemd-journald[988]: Time spent on flushing to /var/log/journal/36ec0532a61e970aee6899ab1b9bc662 is 73.352ms for 1174 entries. Sep 6 00:28:55.080514 systemd-journald[988]: System Journal (/var/log/journal/36ec0532a61e970aee6899ab1b9bc662) is 8.0M, max 584.8M, 576.8M free. Sep 6 00:28:55.186059 systemd-journald[988]: Received client request to flush runtime journal. 
Sep 6 00:28:55.143000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:55.152000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:55.095882 systemd[1]: Starting systemd-sysusers.service... Sep 6 00:28:55.106597 systemd[1]: Starting systemd-udev-settle.service... Sep 6 00:28:55.117921 systemd[1]: Mounted sys-fs-fuse-connections.mount. Sep 6 00:28:55.187161 udevadm[1002]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Sep 6 00:28:55.127139 systemd[1]: Mounted sys-kernel-config.mount. Sep 6 00:28:55.136284 systemd[1]: Finished systemd-random-seed.service. Sep 6 00:28:55.145423 systemd[1]: Finished systemd-sysctl.service. Sep 6 00:28:55.157811 systemd[1]: Reached target first-boot-complete.target. Sep 6 00:28:55.185516 systemd[1]: Finished systemd-sysusers.service. Sep 6 00:28:55.192000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:55.194582 systemd[1]: Finished systemd-journal-flush.service. Sep 6 00:28:55.202000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:55.206238 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Sep 6 00:28:55.269134 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. 
Sep 6 00:28:55.276000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:55.860380 systemd[1]: Finished systemd-hwdb-update.service. Sep 6 00:28:55.867000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:55.868000 audit: BPF prog-id=18 op=LOAD Sep 6 00:28:55.868000 audit: BPF prog-id=19 op=LOAD Sep 6 00:28:55.868000 audit: BPF prog-id=7 op=UNLOAD Sep 6 00:28:55.868000 audit: BPF prog-id=8 op=UNLOAD Sep 6 00:28:55.871123 systemd[1]: Starting systemd-udevd.service... Sep 6 00:28:55.897637 systemd-udevd[1007]: Using default interface naming scheme 'v252'. Sep 6 00:28:55.950722 systemd[1]: Started systemd-udevd.service. Sep 6 00:28:55.958000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:55.961000 audit: BPF prog-id=20 op=LOAD Sep 6 00:28:55.964566 systemd[1]: Starting systemd-networkd.service... Sep 6 00:28:55.979000 audit: BPF prog-id=21 op=LOAD Sep 6 00:28:55.979000 audit: BPF prog-id=22 op=LOAD Sep 6 00:28:55.979000 audit: BPF prog-id=23 op=LOAD Sep 6 00:28:55.982438 systemd[1]: Starting systemd-userdbd.service... Sep 6 00:28:56.062358 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Sep 6 00:28:56.076035 systemd[1]: Started systemd-userdbd.service. Sep 6 00:28:56.083000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:28:56.188730 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Sep 6 00:28:56.209755 kernel: ACPI: button: Power Button [PWRF] Sep 6 00:28:56.217918 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3 Sep 6 00:28:56.232745 kernel: ACPI: button: Sleep Button [SLPF] Sep 6 00:28:56.266489 systemd-networkd[1019]: lo: Link UP Sep 6 00:28:56.266515 systemd-networkd[1019]: lo: Gained carrier Sep 6 00:28:56.267602 systemd-networkd[1019]: Enumeration completed Sep 6 00:28:56.267815 systemd[1]: Started systemd-networkd.service. Sep 6 00:28:56.268635 systemd-networkd[1019]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 6 00:28:56.271386 systemd-networkd[1019]: eth0: Link UP Sep 6 00:28:56.271410 systemd-networkd[1019]: eth0: Gained carrier Sep 6 00:28:56.275000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:56.289912 systemd-networkd[1019]: eth0: Overlong DHCP hostname received, shortened from 'ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081.c.flatcar-212911.internal' to 'ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081' Sep 6 00:28:56.289941 systemd-networkd[1019]: eth0: DHCPv4 address 10.128.0.49/32, gateway 10.128.0.1 acquired from 169.254.169.254 Sep 6 00:28:56.355731 kernel: EDAC MC: Ver: 3.0.0 Sep 6 00:28:56.363583 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. 
Sep 6 00:28:56.356000 audit[1020]: AVC avc: denied { confidentiality } for pid=1020 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Sep 6 00:28:56.356000 audit[1020]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=556d96adfb50 a1=338ec a2=7f16ff82abc5 a3=5 items=110 ppid=1007 pid=1020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:28:56.356000 audit: CWD cwd="/" Sep 6 00:28:56.356000 audit: PATH item=0 name=(null) inode=40 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PATH item=1 name=(null) inode=12960 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PATH item=2 name=(null) inode=12960 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PATH item=3 name=(null) inode=12961 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PATH item=4 name=(null) inode=12960 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PATH item=5 name=(null) inode=12962 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PATH item=6 name=(null) inode=12960 
dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PATH item=7 name=(null) inode=12963 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PATH item=8 name=(null) inode=12963 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PATH item=9 name=(null) inode=12964 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PATH item=10 name=(null) inode=12963 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PATH item=11 name=(null) inode=12965 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PATH item=12 name=(null) inode=12963 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PATH item=13 name=(null) inode=12966 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PATH item=14 name=(null) inode=12963 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PATH item=15 name=(null) inode=12967 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PATH item=16 name=(null) inode=12963 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PATH item=17 name=(null) inode=12968 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PATH item=18 name=(null) inode=12960 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PATH item=19 name=(null) inode=12969 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PATH item=20 name=(null) inode=12969 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PATH item=21 name=(null) inode=12970 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PATH item=22 name=(null) inode=12969 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PATH item=23 name=(null) inode=12971 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PATH item=24 name=(null) inode=12969 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PATH item=25 name=(null) inode=12972 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PATH item=26 name=(null) inode=12969 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PATH item=27 name=(null) inode=12973 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PATH item=28 name=(null) inode=12969 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PATH item=29 name=(null) inode=12974 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PATH item=30 name=(null) inode=12960 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PATH item=31 name=(null) inode=12975 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PATH item=32 name=(null) inode=12975 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PATH item=33 name=(null) inode=12976 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 
Sep 6 00:28:56.356000 audit: PATH item=34 name=(null) inode=12975 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PATH item=35 name=(null) inode=12977 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PATH item=36 name=(null) inode=12975 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PATH item=37 name=(null) inode=12978 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PATH item=38 name=(null) inode=12975 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PATH item=39 name=(null) inode=12979 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PATH item=40 name=(null) inode=12975 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PATH item=41 name=(null) inode=12980 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PATH item=42 name=(null) inode=12960 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PATH item=43 
name=(null) inode=12981 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PATH item=44 name=(null) inode=12981 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PATH item=45 name=(null) inode=12982 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PATH item=46 name=(null) inode=12981 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PATH item=47 name=(null) inode=12983 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PATH item=48 name=(null) inode=12981 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PATH item=49 name=(null) inode=12984 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PATH item=50 name=(null) inode=12981 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PATH item=51 name=(null) inode=12985 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PATH item=52 name=(null) inode=12981 dev=00:0b mode=040750 
ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PATH item=53 name=(null) inode=12986 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PATH item=54 name=(null) inode=40 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PATH item=55 name=(null) inode=12987 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PATH item=56 name=(null) inode=12987 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PATH item=57 name=(null) inode=12988 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PATH item=58 name=(null) inode=12987 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PATH item=59 name=(null) inode=12989 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PATH item=60 name=(null) inode=12987 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PATH item=61 name=(null) inode=12990 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PATH item=62 name=(null) inode=12990 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PATH item=63 name=(null) inode=12991 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PATH item=64 name=(null) inode=12990 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PATH item=65 name=(null) inode=12992 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PATH item=66 name=(null) inode=12990 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PATH item=67 name=(null) inode=12993 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PATH item=68 name=(null) inode=12990 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PATH item=69 name=(null) inode=12994 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PATH item=70 name=(null) inode=12990 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PATH item=71 name=(null) inode=12995 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PATH item=72 name=(null) inode=12987 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PATH item=73 name=(null) inode=12996 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PATH item=74 name=(null) inode=12996 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PATH item=75 name=(null) inode=12997 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PATH item=76 name=(null) inode=12996 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PATH item=77 name=(null) inode=12998 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PATH item=78 name=(null) inode=12996 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PATH item=79 name=(null) inode=12999 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 
Sep 6 00:28:56.356000 audit: PATH item=80 name=(null) inode=12996 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PATH item=81 name=(null) inode=13000 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PATH item=82 name=(null) inode=12996 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PATH item=83 name=(null) inode=13001 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PATH item=84 name=(null) inode=12987 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PATH item=85 name=(null) inode=13002 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PATH item=86 name=(null) inode=13002 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PATH item=87 name=(null) inode=13003 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PATH item=88 name=(null) inode=13002 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PATH item=89 
name=(null) inode=13004 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PATH item=90 name=(null) inode=13002 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PATH item=91 name=(null) inode=13005 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PATH item=92 name=(null) inode=13002 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PATH item=93 name=(null) inode=13006 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PATH item=94 name=(null) inode=13002 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PATH item=95 name=(null) inode=13007 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PATH item=96 name=(null) inode=12987 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PATH item=97 name=(null) inode=13008 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PATH item=98 name=(null) inode=13008 dev=00:0b mode=040750 
ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PATH item=99 name=(null) inode=13009 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PATH item=100 name=(null) inode=13008 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PATH item=101 name=(null) inode=13010 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PATH item=102 name=(null) inode=13008 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PATH item=103 name=(null) inode=13011 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PATH item=104 name=(null) inode=13008 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PATH item=105 name=(null) inode=13012 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PATH item=106 name=(null) inode=13008 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PATH item=107 name=(null) inode=13013 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PATH item=109 name=(null) inode=13014 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:56.356000 audit: PROCTITLE proctitle="(udev-worker)" Sep 6 00:28:56.430727 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Sep 6 00:28:56.447756 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Sep 6 00:28:56.457731 kernel: mousedev: PS/2 mouse device common for all mice Sep 6 00:28:56.475306 systemd[1]: Finished systemd-udev-settle.service. Sep 6 00:28:56.482000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:56.485889 systemd[1]: Starting lvm2-activation-early.service... Sep 6 00:28:56.515242 lvm[1044]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 6 00:28:56.553286 systemd[1]: Finished lvm2-activation-early.service. Sep 6 00:28:56.560000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:56.562324 systemd[1]: Reached target cryptsetup.target. Sep 6 00:28:56.572756 systemd[1]: Starting lvm2-activation.service... Sep 6 00:28:56.579453 lvm[1045]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
Sep 6 00:28:56.610307 systemd[1]: Finished lvm2-activation.service. Sep 6 00:28:56.617000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:56.619132 systemd[1]: Reached target local-fs-pre.target. Sep 6 00:28:56.627950 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 6 00:28:56.628010 systemd[1]: Reached target local-fs.target. Sep 6 00:28:56.636930 systemd[1]: Reached target machines.target. Sep 6 00:28:56.647922 systemd[1]: Starting ldconfig.service... Sep 6 00:28:56.656184 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 6 00:28:56.656312 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 00:28:56.658993 systemd[1]: Starting systemd-boot-update.service... Sep 6 00:28:56.669168 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Sep 6 00:28:56.683358 systemd[1]: Starting systemd-machine-id-commit.service... Sep 6 00:28:56.686294 systemd[1]: Starting systemd-sysext.service... Sep 6 00:28:56.687263 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1047 (bootctl) Sep 6 00:28:56.689958 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Sep 6 00:28:56.708000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:56.710043 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Sep 6 00:28:56.722452 systemd[1]: Unmounting usr-share-oem.mount... 
Sep 6 00:28:56.734074 systemd[1]: usr-share-oem.mount: Deactivated successfully. Sep 6 00:28:56.734412 systemd[1]: Unmounted usr-share-oem.mount. Sep 6 00:28:56.759796 kernel: loop0: detected capacity change from 0 to 229808 Sep 6 00:28:56.868947 systemd-fsck[1059]: fsck.fat 4.2 (2021-01-31) Sep 6 00:28:56.868947 systemd-fsck[1059]: /dev/sda1: 790 files, 120761/258078 clusters Sep 6 00:28:56.874042 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Sep 6 00:28:56.882000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:56.886333 systemd[1]: Mounting boot.mount... Sep 6 00:28:56.919644 systemd[1]: Mounted boot.mount. Sep 6 00:28:56.965960 systemd[1]: Finished systemd-boot-update.service. Sep 6 00:28:56.974000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:57.054440 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 6 00:28:57.146762 kernel: loop1: detected capacity change from 0 to 229808 Sep 6 00:28:57.177188 (sd-sysext)[1063]: Using extensions 'kubernetes'. Sep 6 00:28:57.178063 (sd-sysext)[1063]: Merged extensions into '/usr'. Sep 6 00:28:57.214500 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 6 00:28:57.218824 systemd[1]: Mounting usr-share-oem.mount... Sep 6 00:28:57.227170 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 6 00:28:57.230101 systemd[1]: Starting modprobe@dm_mod.service... Sep 6 00:28:57.240555 systemd[1]: Starting modprobe@efi_pstore.service... 
Sep 6 00:28:57.250278 systemd[1]: Starting modprobe@loop.service... Sep 6 00:28:57.258065 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 6 00:28:57.258501 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 00:28:57.258812 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 6 00:28:57.264368 systemd[1]: Mounted usr-share-oem.mount. Sep 6 00:28:57.274377 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 6 00:28:57.274659 systemd[1]: Finished modprobe@dm_mod.service. Sep 6 00:28:57.282000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:57.282000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:57.284774 systemd[1]: Finished systemd-sysext.service. Sep 6 00:28:57.292000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:57.294502 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 6 00:28:57.294943 systemd[1]: Finished modprobe@efi_pstore.service. Sep 6 00:28:57.304400 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 6 00:28:57.304679 systemd[1]: Finished modprobe@loop.service. 
Sep 6 00:28:57.302000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:57.302000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:57.312000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:57.312000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:57.316122 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 6 00:28:57.317114 systemd[1]: Finished systemd-machine-id-commit.service. Sep 6 00:28:57.324000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:57.330056 systemd[1]: Starting ensure-sysext.service... Sep 6 00:28:57.336938 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 6 00:28:57.337096 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 6 00:28:57.339866 systemd[1]: Starting systemd-tmpfiles-setup.service... Sep 6 00:28:57.353207 systemd[1]: Reloading. Sep 6 00:28:57.370668 systemd-tmpfiles[1070]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. 
Sep 6 00:28:57.377331 systemd-tmpfiles[1070]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 6 00:28:57.386061 systemd-tmpfiles[1070]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 6 00:28:57.428423 ldconfig[1046]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 6 00:28:57.522135 /usr/lib/systemd/system-generators/torcx-generator[1089]: time="2025-09-06T00:28:57Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 6 00:28:57.531868 /usr/lib/systemd/system-generators/torcx-generator[1089]: time="2025-09-06T00:28:57Z" level=info msg="torcx already run" Sep 6 00:28:57.728424 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 6 00:28:57.728458 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 6 00:28:57.756148 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Sep 6 00:28:57.841000 audit: BPF prog-id=24 op=LOAD Sep 6 00:28:57.841000 audit: BPF prog-id=20 op=UNLOAD Sep 6 00:28:57.842000 audit: BPF prog-id=25 op=LOAD Sep 6 00:28:57.842000 audit: BPF prog-id=26 op=LOAD Sep 6 00:28:57.842000 audit: BPF prog-id=18 op=UNLOAD Sep 6 00:28:57.842000 audit: BPF prog-id=19 op=UNLOAD Sep 6 00:28:57.843000 audit: BPF prog-id=27 op=LOAD Sep 6 00:28:57.843000 audit: BPF prog-id=15 op=UNLOAD Sep 6 00:28:57.843000 audit: BPF prog-id=28 op=LOAD Sep 6 00:28:57.843000 audit: BPF prog-id=29 op=LOAD Sep 6 00:28:57.843000 audit: BPF prog-id=16 op=UNLOAD Sep 6 00:28:57.843000 audit: BPF prog-id=17 op=UNLOAD Sep 6 00:28:57.844000 audit: BPF prog-id=30 op=LOAD Sep 6 00:28:57.845000 audit: BPF prog-id=21 op=UNLOAD Sep 6 00:28:57.845000 audit: BPF prog-id=31 op=LOAD Sep 6 00:28:57.845000 audit: BPF prog-id=32 op=LOAD Sep 6 00:28:57.845000 audit: BPF prog-id=22 op=UNLOAD Sep 6 00:28:57.845000 audit: BPF prog-id=23 op=UNLOAD Sep 6 00:28:57.856840 systemd[1]: Finished ldconfig.service. Sep 6 00:28:57.862000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:57.866109 systemd[1]: Finished systemd-tmpfiles-setup.service. Sep 6 00:28:57.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:57.882772 systemd[1]: Starting audit-rules.service... Sep 6 00:28:57.892193 systemd[1]: Starting clean-ca-certificates.service... Sep 6 00:28:57.903836 systemd[1]: Starting oem-gce-enable-oslogin.service... Sep 6 00:28:57.915807 systemd[1]: Starting systemd-journal-catalog-update.service... Sep 6 00:28:57.925000 audit: BPF prog-id=33 op=LOAD Sep 6 00:28:57.928903 systemd[1]: Starting systemd-resolved.service... 
Sep 6 00:28:57.935000 audit: BPF prog-id=34 op=LOAD Sep 6 00:28:57.939164 systemd[1]: Starting systemd-timesyncd.service... Sep 6 00:28:57.949522 systemd[1]: Starting systemd-update-utmp.service... Sep 6 00:28:57.959884 systemd[1]: Finished clean-ca-certificates.service. Sep 6 00:28:57.963000 audit[1156]: SYSTEM_BOOT pid=1156 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Sep 6 00:28:57.970072 systemd[1]: oem-gce-enable-oslogin.service: Deactivated successfully. Sep 6 00:28:57.970410 systemd[1]: Finished oem-gce-enable-oslogin.service. Sep 6 00:28:57.967000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:57.977000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=oem-gce-enable-oslogin comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:57.977000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=oem-gce-enable-oslogin comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:57.987773 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 6 00:28:57.988524 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 6 00:28:57.991994 systemd[1]: Starting modprobe@dm_mod.service... Sep 6 00:28:58.001642 systemd[1]: Starting modprobe@efi_pstore.service... Sep 6 00:28:58.011399 systemd[1]: Starting modprobe@loop.service... 
Sep 6 00:28:58.013000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Sep 6 00:28:58.013000 audit[1164]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc53fa13d0 a2=420 a3=0 items=0 ppid=1134 pid=1164 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:28:58.013000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Sep 6 00:28:58.016091 augenrules[1164]: No rules Sep 6 00:28:58.021507 systemd[1]: Starting oem-gce-enable-oslogin.service... Sep 6 00:28:58.029997 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 6 00:28:58.030896 enable-oslogin[1172]: /etc/pam.d/sshd already exists. Not enabling OS Login Sep 6 00:28:58.030319 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 00:28:58.030655 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 6 00:28:58.030859 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 6 00:28:58.034466 systemd[1]: Finished audit-rules.service. Sep 6 00:28:58.042884 systemd[1]: Finished systemd-journal-catalog-update.service. Sep 6 00:28:58.053811 systemd[1]: Finished systemd-update-utmp.service. Sep 6 00:28:58.062837 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 6 00:28:58.063093 systemd[1]: Finished modprobe@dm_mod.service. 
Sep 6 00:28:58.068925 systemd-networkd[1019]: eth0: Gained IPv6LL Sep 6 00:28:58.072824 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 6 00:28:58.073096 systemd[1]: Finished modprobe@efi_pstore.service. Sep 6 00:28:58.082808 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 6 00:28:58.083057 systemd[1]: Finished modprobe@loop.service. Sep 6 00:28:58.092922 systemd[1]: oem-gce-enable-oslogin.service: Deactivated successfully. Sep 6 00:28:58.093220 systemd[1]: Finished oem-gce-enable-oslogin.service. Sep 6 00:28:58.104768 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 6 00:28:58.105188 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 6 00:28:58.108308 systemd[1]: Starting systemd-update-done.service... Sep 6 00:28:58.119874 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 6 00:28:58.120387 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 6 00:28:58.124460 systemd[1]: Starting modprobe@dm_mod.service... Sep 6 00:28:58.134548 systemd[1]: Starting modprobe@efi_pstore.service... Sep 6 00:28:58.144826 systemd[1]: Starting modprobe@loop.service... Sep 6 00:28:58.154652 systemd[1]: Starting oem-gce-enable-oslogin.service... Sep 6 00:28:58.157025 systemd-resolved[1148]: Positive Trust Anchors: Sep 6 00:28:58.157046 systemd-resolved[1148]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 6 00:28:58.157157 systemd-resolved[1148]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Sep 6 00:28:58.162948 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 6 00:28:58.163301 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 00:28:58.163600 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 6 00:28:58.163806 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 6 00:28:58.166731 systemd[1]: Finished systemd-update-done.service. Sep 6 00:28:58.168800 enable-oslogin[1177]: /etc/pam.d/sshd already exists. Not enabling OS Login Sep 6 00:28:57.763611 systemd-timesyncd[1153]: Contacted time server 169.254.169.254:123 (169.254.169.254). Sep 6 00:28:57.812551 systemd-journald[988]: Time jumped backwards, rotating. Sep 6 00:28:57.763733 systemd-timesyncd[1153]: Initial clock synchronization to Sat 2025-09-06 00:28:57.763450 UTC. Sep 6 00:28:57.774770 systemd[1]: Started systemd-timesyncd.service. Sep 6 00:28:57.785405 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 6 00:28:57.785656 systemd[1]: Finished modprobe@dm_mod.service. 
Sep 6 00:28:57.794432 systemd-resolved[1148]: Defaulting to hostname 'linux'. Sep 6 00:28:57.795916 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 6 00:28:57.796163 systemd[1]: Finished modprobe@efi_pstore.service. Sep 6 00:28:57.805608 systemd[1]: Started systemd-resolved.service. Sep 6 00:28:57.815664 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 6 00:28:57.815956 systemd[1]: Finished modprobe@loop.service. Sep 6 00:28:57.825722 systemd[1]: oem-gce-enable-oslogin.service: Deactivated successfully. Sep 6 00:28:57.826036 systemd[1]: Finished oem-gce-enable-oslogin.service. Sep 6 00:28:57.836092 systemd[1]: Reached target network.target. Sep 6 00:28:57.845234 systemd[1]: Reached target nss-lookup.target. Sep 6 00:28:57.854136 systemd[1]: Reached target time-set.target. Sep 6 00:28:57.863179 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 6 00:28:57.863506 systemd[1]: Reached target sysinit.target. Sep 6 00:28:57.873376 systemd[1]: Started motdgen.path. Sep 6 00:28:57.881273 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Sep 6 00:28:57.892524 systemd[1]: Started logrotate.timer. Sep 6 00:28:57.900342 systemd[1]: Started mdadm.timer. Sep 6 00:28:57.908193 systemd[1]: Started systemd-tmpfiles-clean.timer. Sep 6 00:28:57.917116 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 6 00:28:57.917467 systemd[1]: Reached target paths.target. Sep 6 00:28:57.925122 systemd[1]: Reached target timers.target. Sep 6 00:28:57.934162 systemd[1]: Listening on dbus.socket. Sep 6 00:28:57.943100 systemd[1]: Starting docker.socket... Sep 6 00:28:57.955195 systemd[1]: Listening on sshd.socket. 
Sep 6 00:28:57.963354 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 00:28:57.963653 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 6 00:28:57.967102 systemd[1]: Listening on docker.socket. Sep 6 00:28:57.978169 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 6 00:28:57.978517 systemd[1]: Reached target sockets.target. Sep 6 00:28:57.988144 systemd[1]: Reached target basic.target. Sep 6 00:28:57.996126 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Sep 6 00:28:57.996457 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Sep 6 00:28:57.998889 systemd[1]: Starting containerd.service... Sep 6 00:28:58.008601 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Sep 6 00:28:58.020236 systemd[1]: Starting dbus.service... Sep 6 00:28:58.030896 systemd[1]: Starting enable-oem-cloudinit.service... Sep 6 00:28:58.040457 systemd[1]: Starting extend-filesystems.service... Sep 6 00:28:58.056530 jq[1185]: false Sep 6 00:28:58.047889 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Sep 6 00:28:58.051037 systemd[1]: Starting modprobe@drm.service... Sep 6 00:28:58.061541 systemd[1]: Starting motdgen.service... Sep 6 00:28:58.071575 systemd[1]: Starting prepare-helm.service... Sep 6 00:28:58.081478 systemd[1]: Starting ssh-key-proc-cmdline.service... Sep 6 00:28:58.091594 systemd[1]: Starting sshd-keygen.service... Sep 6 00:28:58.101773 systemd[1]: Starting systemd-networkd-wait-online.service... 
Sep 6 00:28:58.110890 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 00:28:58.111263 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionSecurity=!tpm2). Sep 6 00:28:58.112401 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 6 00:28:58.114349 systemd[1]: Starting update-engine.service... Sep 6 00:28:58.125175 systemd[1]: Starting update-ssh-keys-after-ignition.service... Sep 6 00:28:58.133072 jq[1207]: true Sep 6 00:28:58.141872 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 6 00:28:58.142231 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Sep 6 00:28:58.143421 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 6 00:28:58.143664 systemd[1]: Finished modprobe@drm.service. Sep 6 00:28:58.154890 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 6 00:28:58.155220 systemd[1]: Finished ssh-key-proc-cmdline.service. 
Sep 6 00:28:58.163112 extend-filesystems[1186]: Found loop1 Sep 6 00:28:58.207866 extend-filesystems[1186]: Found sda Sep 6 00:28:58.207866 extend-filesystems[1186]: Found sda1 Sep 6 00:28:58.207866 extend-filesystems[1186]: Found sda2 Sep 6 00:28:58.207866 extend-filesystems[1186]: Found sda3 Sep 6 00:28:58.207866 extend-filesystems[1186]: Found usr Sep 6 00:28:58.207866 extend-filesystems[1186]: Found sda4 Sep 6 00:28:58.207866 extend-filesystems[1186]: Found sda6 Sep 6 00:28:58.207866 extend-filesystems[1186]: Found sda7 Sep 6 00:28:58.207866 extend-filesystems[1186]: Found sda9 Sep 6 00:28:58.207866 extend-filesystems[1186]: Checking size of /dev/sda9 Sep 6 00:28:58.345941 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks Sep 6 00:28:58.165072 systemd[1]: Finished systemd-networkd-wait-online.service. Sep 6 00:28:58.346277 tar[1209]: linux-amd64/LICENSE Sep 6 00:28:58.346277 tar[1209]: linux-amd64/helm Sep 6 00:28:58.346724 extend-filesystems[1186]: Resized partition /dev/sda9 Sep 6 00:28:58.359824 kernel: EXT4-fs (sda9): resized filesystem to 2538491 Sep 6 00:28:58.182922 systemd[1]: Reached target network-online.target. Sep 6 00:28:58.360275 extend-filesystems[1225]: resize2fs 1.46.5 (30-Dec-2021) Sep 6 00:28:58.194842 systemd[1]: Starting kubelet.service... Sep 6 00:28:58.384734 jq[1213]: true Sep 6 00:28:58.217827 systemd[1]: Starting oem-gce.service... Sep 6 00:28:58.241162 systemd[1]: Starting systemd-logind.service... Sep 6 00:28:58.248896 systemd[1]: Finished ensure-sysext.service. Sep 6 00:28:58.259119 systemd[1]: motdgen.service: Deactivated successfully. Sep 6 00:28:58.259468 systemd[1]: Finished motdgen.service. 
Sep 6 00:28:58.387632 mkfs.ext4[1230]: mke2fs 1.46.5 (30-Dec-2021) Sep 6 00:28:58.387632 mkfs.ext4[1230]: Discarding device blocks: done Sep 6 00:28:58.387632 mkfs.ext4[1230]: Creating filesystem with 262144 4k blocks and 65536 inodes Sep 6 00:28:58.387632 mkfs.ext4[1230]: Filesystem UUID: 21053619-afad-4fc0-82ea-053c3bebdb01 Sep 6 00:28:58.387632 mkfs.ext4[1230]: Superblock backups stored on blocks: Sep 6 00:28:58.387632 mkfs.ext4[1230]: 32768, 98304, 163840, 229376 Sep 6 00:28:58.387632 mkfs.ext4[1230]: Allocating group tables: done Sep 6 00:28:58.387632 mkfs.ext4[1230]: Writing inode tables: done Sep 6 00:28:58.387632 mkfs.ext4[1230]: Creating journal (8192 blocks): done Sep 6 00:28:58.387632 mkfs.ext4[1230]: Writing superblocks and filesystem accounting information: done Sep 6 00:28:58.388360 extend-filesystems[1225]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Sep 6 00:28:58.388360 extend-filesystems[1225]: old_desc_blocks = 1, new_desc_blocks = 2 Sep 6 00:28:58.388360 extend-filesystems[1225]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long. Sep 6 00:28:58.429401 extend-filesystems[1186]: Resized filesystem in /dev/sda9 Sep 6 00:28:58.388687 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 6 00:28:58.426727 dbus-daemon[1184]: [system] SELinux support is enabled Sep 6 00:28:58.477581 update_engine[1205]: I0906 00:28:58.413080 1205 main.cc:92] Flatcar Update Engine starting Sep 6 00:28:58.477581 update_engine[1205]: I0906 00:28:58.464080 1205 update_check_scheduler.cc:74] Next update check in 2m40s Sep 6 00:28:58.389038 systemd[1]: Finished extend-filesystems.service. 
Sep 6 00:28:58.455213 dbus-daemon[1184]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1019 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Sep 6 00:28:58.478819 umount[1241]: umount: /var/lib/flatcar-oem-gce.img: not mounted. Sep 6 00:28:58.427045 systemd[1]: Started dbus.service. Sep 6 00:28:58.468963 dbus-daemon[1184]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 6 00:28:58.440449 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 6 00:28:58.440506 systemd[1]: Reached target system-config.target. Sep 6 00:28:58.448973 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 6 00:28:58.449038 systemd[1]: Reached target user-config.target. Sep 6 00:28:58.468849 systemd[1]: Started update-engine.service. Sep 6 00:28:58.484577 env[1214]: time="2025-09-06T00:28:58.484465541Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Sep 6 00:28:58.499314 kernel: loop2: detected capacity change from 0 to 2097152 Sep 6 00:28:58.500907 systemd[1]: Started locksmithd.service. Sep 6 00:28:58.513089 systemd[1]: Starting systemd-hostnamed.service... Sep 6 00:28:58.515785 bash[1251]: Updated "/home/core/.ssh/authorized_keys" Sep 6 00:28:58.521898 systemd[1]: Finished update-ssh-keys-after-ignition.service. Sep 6 00:28:58.567762 kernel: EXT4-fs (loop2): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. 
Sep 6 00:28:58.637740 coreos-metadata[1183]: Sep 06 00:28:58.637 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Sep 6 00:28:58.642554 coreos-metadata[1183]: Sep 06 00:28:58.642 INFO Fetch failed with 404: resource not found Sep 6 00:28:58.642554 coreos-metadata[1183]: Sep 06 00:28:58.642 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Sep 6 00:28:58.644753 coreos-metadata[1183]: Sep 06 00:28:58.644 INFO Fetch successful Sep 6 00:28:58.644753 coreos-metadata[1183]: Sep 06 00:28:58.644 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Sep 6 00:28:58.646354 coreos-metadata[1183]: Sep 06 00:28:58.646 INFO Fetch failed with 404: resource not found Sep 6 00:28:58.646354 coreos-metadata[1183]: Sep 06 00:28:58.646 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Sep 6 00:28:58.648778 coreos-metadata[1183]: Sep 06 00:28:58.648 INFO Fetch failed with 404: resource not found Sep 6 00:28:58.648778 coreos-metadata[1183]: Sep 06 00:28:58.648 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Sep 6 00:28:58.649819 coreos-metadata[1183]: Sep 06 00:28:58.649 INFO Fetch successful Sep 6 00:28:58.653970 unknown[1183]: wrote ssh authorized keys file for user: core Sep 6 00:28:58.690456 update-ssh-keys[1260]: Updated "/home/core/.ssh/authorized_keys" Sep 6 00:28:58.691367 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Sep 6 00:28:58.711521 env[1214]: time="2025-09-06T00:28:58.711419496Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 6 00:28:58.711940 env[1214]: time="2025-09-06T00:28:58.711893357Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Sep 6 00:28:58.716092 env[1214]: time="2025-09-06T00:28:58.716019719Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.190-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 6 00:28:58.716308 env[1214]: time="2025-09-06T00:28:58.716273720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 6 00:28:58.716913 env[1214]: time="2025-09-06T00:28:58.716871775Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 6 00:28:58.717078 env[1214]: time="2025-09-06T00:28:58.717049167Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 6 00:28:58.719078 env[1214]: time="2025-09-06T00:28:58.719028493Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Sep 6 00:28:58.719234 env[1214]: time="2025-09-06T00:28:58.719204866Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 6 00:28:58.719522 env[1214]: time="2025-09-06T00:28:58.719492464Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 6 00:28:58.725302 env[1214]: time="2025-09-06T00:28:58.725253154Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 6 00:28:58.731034 env[1214]: time="2025-09-06T00:28:58.730962995Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 6 00:28:58.732127 env[1214]: time="2025-09-06T00:28:58.732085494Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 6 00:28:58.732448 env[1214]: time="2025-09-06T00:28:58.732397767Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Sep 6 00:28:58.732594 env[1214]: time="2025-09-06T00:28:58.732569832Z" level=info msg="metadata content store policy set" policy=shared Sep 6 00:28:58.752545 env[1214]: time="2025-09-06T00:28:58.752455783Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 6 00:28:58.752728 env[1214]: time="2025-09-06T00:28:58.752554875Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 6 00:28:58.752728 env[1214]: time="2025-09-06T00:28:58.752604546Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 6 00:28:58.752728 env[1214]: time="2025-09-06T00:28:58.752676497Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 6 00:28:58.752891 env[1214]: time="2025-09-06T00:28:58.752728589Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 6 00:28:58.752891 env[1214]: time="2025-09-06T00:28:58.752756905Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 6 00:28:58.752891 env[1214]: time="2025-09-06T00:28:58.752806333Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 Sep 6 00:28:58.752891 env[1214]: time="2025-09-06T00:28:58.752832919Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 6 00:28:58.752891 env[1214]: time="2025-09-06T00:28:58.752856916Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Sep 6 00:28:58.753181 env[1214]: time="2025-09-06T00:28:58.752903985Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 6 00:28:58.753181 env[1214]: time="2025-09-06T00:28:58.752931078Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 6 00:28:58.753181 env[1214]: time="2025-09-06T00:28:58.752975800Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 6 00:28:58.753377 env[1214]: time="2025-09-06T00:28:58.753326244Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 6 00:28:58.753604 env[1214]: time="2025-09-06T00:28:58.753562695Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 6 00:28:58.754161 env[1214]: time="2025-09-06T00:28:58.754108508Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 6 00:28:58.754255 env[1214]: time="2025-09-06T00:28:58.754193134Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 6 00:28:58.754255 env[1214]: time="2025-09-06T00:28:58.754244893Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 6 00:28:58.754367 env[1214]: time="2025-09-06T00:28:58.754352137Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Sep 6 00:28:58.754425 env[1214]: time="2025-09-06T00:28:58.754399804Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 6 00:28:58.754484 env[1214]: time="2025-09-06T00:28:58.754425086Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 6 00:28:58.754484 env[1214]: time="2025-09-06T00:28:58.754449477Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 6 00:28:58.754601 env[1214]: time="2025-09-06T00:28:58.754493726Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 6 00:28:58.754601 env[1214]: time="2025-09-06T00:28:58.754520391Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 6 00:28:58.754601 env[1214]: time="2025-09-06T00:28:58.754562044Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 6 00:28:58.754601 env[1214]: time="2025-09-06T00:28:58.754586941Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 6 00:28:58.755033 env[1214]: time="2025-09-06T00:28:58.754614125Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 6 00:28:58.755033 env[1214]: time="2025-09-06T00:28:58.754928643Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 6 00:28:58.755033 env[1214]: time="2025-09-06T00:28:58.754960740Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 6 00:28:58.755033 env[1214]: time="2025-09-06T00:28:58.755010415Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Sep 6 00:28:58.755236 env[1214]: time="2025-09-06T00:28:58.755034318Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 6 00:28:58.755236 env[1214]: time="2025-09-06T00:28:58.755084183Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Sep 6 00:28:58.755236 env[1214]: time="2025-09-06T00:28:58.755108150Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 6 00:28:58.755236 env[1214]: time="2025-09-06T00:28:58.755158304Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Sep 6 00:28:58.755433 env[1214]: time="2025-09-06T00:28:58.755239516Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 6 00:28:58.762267 systemd[1]: Started containerd.service. 
Sep 6 00:28:58.763968 env[1214]: time="2025-09-06T00:28:58.755673595Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock 
RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 6 00:28:58.763968 env[1214]: time="2025-09-06T00:28:58.759257132Z" level=info msg="Connect containerd service" Sep 6 00:28:58.763968 env[1214]: time="2025-09-06T00:28:58.759361018Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 6 00:28:58.763968 env[1214]: time="2025-09-06T00:28:58.760640283Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 6 00:28:58.763968 env[1214]: time="2025-09-06T00:28:58.761279832Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 6 00:28:58.763968 env[1214]: time="2025-09-06T00:28:58.761413253Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 6 00:28:58.763968 env[1214]: time="2025-09-06T00:28:58.761513101Z" level=info msg="containerd successfully booted in 0.308031s" Sep 6 00:28:58.763968 env[1214]: time="2025-09-06T00:28:58.762795783Z" level=info msg="Start subscribing containerd event" Sep 6 00:28:58.763968 env[1214]: time="2025-09-06T00:28:58.762884868Z" level=info msg="Start recovering state" Sep 6 00:28:58.763968 env[1214]: time="2025-09-06T00:28:58.763005239Z" level=info msg="Start event monitor" Sep 6 00:28:58.763968 env[1214]: time="2025-09-06T00:28:58.763031672Z" level=info msg="Start snapshots syncer" Sep 6 00:28:58.763968 env[1214]: time="2025-09-06T00:28:58.763150501Z" level=info msg="Start cni network conf syncer for default" Sep 6 00:28:58.763968 env[1214]: time="2025-09-06T00:28:58.763172755Z" level=info msg="Start streaming server" Sep 6 00:28:58.846139 systemd-logind[1223]: Watching system buttons on /dev/input/event1 (Power Button) Sep 6 00:28:58.846811 systemd-logind[1223]: Watching system buttons on 
/dev/input/event2 (Sleep Button) Sep 6 00:28:58.849984 systemd-logind[1223]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 6 00:28:58.850522 systemd-logind[1223]: New seat seat0. Sep 6 00:28:58.861226 systemd[1]: Started systemd-logind.service. Sep 6 00:28:58.920796 dbus-daemon[1184]: [system] Successfully activated service 'org.freedesktop.hostname1' Sep 6 00:28:58.921119 systemd[1]: Started systemd-hostnamed.service. Sep 6 00:28:58.921854 dbus-daemon[1184]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1256 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Sep 6 00:28:58.935816 systemd[1]: Starting polkit.service... Sep 6 00:28:59.011293 polkitd[1276]: Started polkitd version 121 Sep 6 00:28:59.043377 polkitd[1276]: Loading rules from directory /etc/polkit-1/rules.d Sep 6 00:28:59.044844 polkitd[1276]: Loading rules from directory /usr/share/polkit-1/rules.d Sep 6 00:28:59.051661 polkitd[1276]: Finished loading, compiling and executing 2 rules Sep 6 00:28:59.052434 dbus-daemon[1184]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Sep 6 00:28:59.052742 systemd[1]: Started polkit.service. Sep 6 00:28:59.053479 polkitd[1276]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Sep 6 00:28:59.089747 systemd-hostnamed[1256]: Hostname set to (transient) Sep 6 00:28:59.093542 systemd-resolved[1148]: System hostname changed to 'ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081'. Sep 6 00:29:00.192930 sshd_keygen[1210]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 6 00:29:00.314012 systemd[1]: Finished sshd-keygen.service. Sep 6 00:29:00.324113 systemd[1]: Starting issuegen.service... Sep 6 00:29:00.354750 systemd[1]: issuegen.service: Deactivated successfully. Sep 6 00:29:00.355100 systemd[1]: Finished issuegen.service. 
Sep 6 00:29:00.367034 systemd[1]: Starting systemd-user-sessions.service... Sep 6 00:29:00.387172 systemd[1]: Finished systemd-user-sessions.service. Sep 6 00:29:00.397188 systemd[1]: Started getty@tty1.service. Sep 6 00:29:00.408398 systemd[1]: Started serial-getty@ttyS0.service. Sep 6 00:29:00.418425 systemd[1]: Reached target getty.target. Sep 6 00:29:00.451580 tar[1209]: linux-amd64/README.md Sep 6 00:29:00.464571 systemd[1]: Finished prepare-helm.service. Sep 6 00:29:00.609400 locksmithd[1255]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 6 00:29:00.726156 systemd[1]: Started kubelet.service. Sep 6 00:29:01.761229 kubelet[1305]: E0906 00:29:01.761157 1305 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 6 00:29:01.766140 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 6 00:29:01.766391 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 6 00:29:01.766873 systemd[1]: kubelet.service: Consumed 1.607s CPU time. Sep 6 00:29:04.380097 systemd[1]: var-lib-flatcar\x2doem\x2dgce.mount: Deactivated successfully. Sep 6 00:29:06.431972 kernel: loop2: detected capacity change from 0 to 2097152 Sep 6 00:29:06.451089 systemd-nspawn[1312]: Spawning container oem-gce on /var/lib/flatcar-oem-gce.img. Sep 6 00:29:06.451089 systemd-nspawn[1312]: Press ^] three times within 1s to kill container. Sep 6 00:29:06.468765 kernel: EXT4-fs (loop2): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Sep 6 00:29:06.553296 systemd[1]: Started oem-gce.service. Sep 6 00:29:06.553845 systemd[1]: Reached target multi-user.target. Sep 6 00:29:06.556206 systemd[1]: Starting systemd-update-utmp-runlevel.service... 
Sep 6 00:29:06.569748 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Sep 6 00:29:06.570043 systemd[1]: Finished systemd-update-utmp-runlevel.service. Sep 6 00:29:06.570729 systemd[1]: Startup finished in 1.310s (kernel) + 9.979s (initrd) + 17.085s (userspace) = 28.374s. Sep 6 00:29:06.613534 systemd-nspawn[1312]: + '[' -e /etc/default/instance_configs.cfg.template ']' Sep 6 00:29:06.613534 systemd-nspawn[1312]: + echo -e '[InstanceSetup]\nset_host_keys = false' Sep 6 00:29:06.613866 systemd-nspawn[1312]: + /usr/bin/google_instance_setup Sep 6 00:29:07.105860 systemd[1]: Created slice system-sshd.slice. Sep 6 00:29:07.110256 systemd[1]: Started sshd@0-10.128.0.49:22-139.178.89.65:47756.service. Sep 6 00:29:07.344642 instance-setup[1319]: INFO Running google_set_multiqueue. Sep 6 00:29:07.365877 instance-setup[1319]: INFO Set channels for eth0 to 2. Sep 6 00:29:07.373957 instance-setup[1319]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1. Sep 6 00:29:07.374942 instance-setup[1319]: INFO /proc/irq/31/smp_affinity_list: real affinity 0 Sep 6 00:29:07.375800 instance-setup[1319]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1. Sep 6 00:29:07.383194 instance-setup[1319]: INFO /proc/irq/32/smp_affinity_list: real affinity 0 Sep 6 00:29:07.384184 instance-setup[1319]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1. Sep 6 00:29:07.384598 instance-setup[1319]: INFO /proc/irq/33/smp_affinity_list: real affinity 1 Sep 6 00:29:07.385022 instance-setup[1319]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1. 
Sep 6 00:29:07.385379 instance-setup[1319]: INFO /proc/irq/34/smp_affinity_list: real affinity 1 Sep 6 00:29:07.396799 instance-setup[1319]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Sep 6 00:29:07.397173 instance-setup[1319]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Sep 6 00:29:07.443747 sshd[1323]: Accepted publickey for core from 139.178.89.65 port 47756 ssh2: RSA SHA256:O4b1lx+UphQ1XQCPwsrjL8IoqrnWSgynNYcpg4eKVRo Sep 6 00:29:07.447561 sshd[1323]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:29:07.449099 systemd-nspawn[1312]: + /usr/bin/google_metadata_script_runner --script-type startup Sep 6 00:29:07.468576 systemd[1]: Created slice user-500.slice. Sep 6 00:29:07.473821 systemd[1]: Starting user-runtime-dir@500.service... Sep 6 00:29:07.494458 systemd-logind[1223]: New session 1 of user core. Sep 6 00:29:07.506355 systemd[1]: Finished user-runtime-dir@500.service. Sep 6 00:29:07.510571 systemd[1]: Starting user@500.service... Sep 6 00:29:07.529837 (systemd)[1355]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:29:07.711647 systemd[1355]: Queued start job for default target default.target. Sep 6 00:29:07.714178 systemd[1355]: Reached target paths.target. Sep 6 00:29:07.714445 systemd[1355]: Reached target sockets.target. Sep 6 00:29:07.714479 systemd[1355]: Reached target timers.target. Sep 6 00:29:07.714505 systemd[1355]: Reached target basic.target. Sep 6 00:29:07.714679 systemd[1]: Started user@500.service. Sep 6 00:29:07.716424 systemd[1]: Started session-1.scope. Sep 6 00:29:07.720612 systemd[1355]: Reached target default.target. Sep 6 00:29:07.720958 systemd[1355]: Startup finished in 176ms. Sep 6 00:29:07.949291 systemd[1]: Started sshd@1-10.128.0.49:22-139.178.89.65:47762.service. Sep 6 00:29:07.963031 startup-script[1353]: INFO Starting startup scripts. 
Sep 6 00:29:07.989343 startup-script[1353]: INFO No startup scripts found in metadata. Sep 6 00:29:07.989877 startup-script[1353]: INFO Finished running startup scripts. Sep 6 00:29:08.039961 systemd-nspawn[1312]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Sep 6 00:29:08.040892 systemd-nspawn[1312]: + daemon_pids=() Sep 6 00:29:08.040892 systemd-nspawn[1312]: + for d in accounts clock_skew network Sep 6 00:29:08.040892 systemd-nspawn[1312]: + daemon_pids+=($!) Sep 6 00:29:08.040892 systemd-nspawn[1312]: + for d in accounts clock_skew network Sep 6 00:29:08.041288 systemd-nspawn[1312]: + daemon_pids+=($!) Sep 6 00:29:08.041288 systemd-nspawn[1312]: + for d in accounts clock_skew network Sep 6 00:29:08.041501 systemd-nspawn[1312]: + daemon_pids+=($!) Sep 6 00:29:08.041759 systemd-nspawn[1312]: + NOTIFY_SOCKET=/run/systemd/notify Sep 6 00:29:08.041875 systemd-nspawn[1312]: + /usr/bin/systemd-notify --ready Sep 6 00:29:08.042207 systemd-nspawn[1312]: + /usr/bin/google_network_daemon Sep 6 00:29:08.042640 systemd-nspawn[1312]: + /usr/bin/google_accounts_daemon Sep 6 00:29:08.043169 systemd-nspawn[1312]: + /usr/bin/google_clock_skew_daemon Sep 6 00:29:08.117189 systemd-nspawn[1312]: + wait -n 36 37 38 Sep 6 00:29:08.299810 sshd[1366]: Accepted publickey for core from 139.178.89.65 port 47762 ssh2: RSA SHA256:O4b1lx+UphQ1XQCPwsrjL8IoqrnWSgynNYcpg4eKVRo Sep 6 00:29:08.300092 sshd[1366]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:29:08.312998 systemd[1]: Started session-2.scope. Sep 6 00:29:08.314791 systemd-logind[1223]: New session 2 of user core. Sep 6 00:29:08.529160 sshd[1366]: pam_unix(sshd:session): session closed for user core Sep 6 00:29:08.537018 systemd[1]: sshd@1-10.128.0.49:22-139.178.89.65:47762.service: Deactivated successfully. Sep 6 00:29:08.538394 systemd[1]: session-2.scope: Deactivated successfully. Sep 6 00:29:08.541668 systemd-logind[1223]: Session 2 logged out. Waiting for processes to exit. 
Sep 6 00:29:08.544090 systemd-logind[1223]: Removed session 2. Sep 6 00:29:08.578605 systemd[1]: Started sshd@2-10.128.0.49:22-139.178.89.65:47768.service. Sep 6 00:29:08.813431 google-networking[1370]: INFO Starting Google Networking daemon. Sep 6 00:29:08.901668 sshd[1376]: Accepted publickey for core from 139.178.89.65 port 47768 ssh2: RSA SHA256:O4b1lx+UphQ1XQCPwsrjL8IoqrnWSgynNYcpg4eKVRo Sep 6 00:29:08.902685 sshd[1376]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:29:08.915289 systemd[1]: Started session-3.scope. Sep 6 00:29:08.916094 systemd-logind[1223]: New session 3 of user core. Sep 6 00:29:09.035991 google-clock-skew[1369]: INFO Starting Google Clock Skew daemon. Sep 6 00:29:09.048980 google-clock-skew[1369]: INFO Clock drift token has changed: 0. Sep 6 00:29:09.052845 systemd-nspawn[1312]: hwclock: Cannot access the Hardware Clock via any known method. Sep 6 00:29:09.053543 systemd-nspawn[1312]: hwclock: Use the --verbose option to see the details of our search for an access method. Sep 6 00:29:09.053805 google-clock-skew[1369]: WARNING Failed to sync system time with hardware clock. Sep 6 00:29:09.113042 sshd[1376]: pam_unix(sshd:session): session closed for user core Sep 6 00:29:09.118441 systemd[1]: sshd@2-10.128.0.49:22-139.178.89.65:47768.service: Deactivated successfully. Sep 6 00:29:09.119856 systemd[1]: session-3.scope: Deactivated successfully. Sep 6 00:29:09.120925 systemd-logind[1223]: Session 3 logged out. Waiting for processes to exit. Sep 6 00:29:09.123152 systemd-logind[1223]: Removed session 3. Sep 6 00:29:09.136741 groupadd[1386]: group added to /etc/group: name=google-sudoers, GID=1000 Sep 6 00:29:09.155721 groupadd[1386]: group added to /etc/gshadow: name=google-sudoers Sep 6 00:29:09.161157 systemd[1]: Started sshd@3-10.128.0.49:22-139.178.89.65:47780.service. 
Sep 6 00:29:09.167100 groupadd[1386]: new group: name=google-sudoers, GID=1000 Sep 6 00:29:09.184267 google-accounts[1368]: INFO Starting Google Accounts daemon. Sep 6 00:29:09.212587 google-accounts[1368]: WARNING OS Login not installed. Sep 6 00:29:09.213855 google-accounts[1368]: INFO Creating a new user account for 0. Sep 6 00:29:09.220393 systemd-nspawn[1312]: useradd: invalid user name '0': use --badname to ignore Sep 6 00:29:09.221382 google-accounts[1368]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. Sep 6 00:29:09.461137 sshd[1393]: Accepted publickey for core from 139.178.89.65 port 47780 ssh2: RSA SHA256:O4b1lx+UphQ1XQCPwsrjL8IoqrnWSgynNYcpg4eKVRo Sep 6 00:29:09.463720 sshd[1393]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:29:09.471268 systemd[1]: Started session-4.scope. Sep 6 00:29:09.472027 systemd-logind[1223]: New session 4 of user core. Sep 6 00:29:09.681121 sshd[1393]: pam_unix(sshd:session): session closed for user core Sep 6 00:29:09.686170 systemd[1]: sshd@3-10.128.0.49:22-139.178.89.65:47780.service: Deactivated successfully. Sep 6 00:29:09.687658 systemd[1]: session-4.scope: Deactivated successfully. Sep 6 00:29:09.688916 systemd-logind[1223]: Session 4 logged out. Waiting for processes to exit. Sep 6 00:29:09.690853 systemd-logind[1223]: Removed session 4. Sep 6 00:29:09.729175 systemd[1]: Started sshd@4-10.128.0.49:22-139.178.89.65:47790.service. Sep 6 00:29:10.027627 sshd[1407]: Accepted publickey for core from 139.178.89.65 port 47790 ssh2: RSA SHA256:O4b1lx+UphQ1XQCPwsrjL8IoqrnWSgynNYcpg4eKVRo Sep 6 00:29:10.029585 sshd[1407]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:29:10.037366 systemd[1]: Started session-5.scope. Sep 6 00:29:10.038486 systemd-logind[1223]: New session 5 of user core. 
Sep 6 00:29:10.230567 sudo[1410]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 6 00:29:10.231127 sudo[1410]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 6 00:29:10.266504 systemd[1]: Starting docker.service... Sep 6 00:29:10.323746 env[1420]: time="2025-09-06T00:29:10.322742567Z" level=info msg="Starting up" Sep 6 00:29:10.325318 env[1420]: time="2025-09-06T00:29:10.325281659Z" level=info msg="parsed scheme: \"unix\"" module=grpc Sep 6 00:29:10.325512 env[1420]: time="2025-09-06T00:29:10.325482257Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Sep 6 00:29:10.326070 env[1420]: time="2025-09-06T00:29:10.325751399Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Sep 6 00:29:10.326202 env[1420]: time="2025-09-06T00:29:10.326177072Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Sep 6 00:29:10.328354 env[1420]: time="2025-09-06T00:29:10.328327469Z" level=info msg="parsed scheme: \"unix\"" module=grpc Sep 6 00:29:10.328465 env[1420]: time="2025-09-06T00:29:10.328445645Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Sep 6 00:29:10.328549 env[1420]: time="2025-09-06T00:29:10.328530358Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Sep 6 00:29:10.328636 env[1420]: time="2025-09-06T00:29:10.328619176Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Sep 6 00:29:10.370619 env[1420]: time="2025-09-06T00:29:10.370513148Z" level=info msg="Loading containers: start." Sep 6 00:29:10.564219 kernel: Initializing XFRM netlink socket Sep 6 00:29:10.614891 env[1420]: time="2025-09-06T00:29:10.613198206Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. 
Daemon option --bip can be used to set a preferred IP address" Sep 6 00:29:10.705887 systemd-networkd[1019]: docker0: Link UP Sep 6 00:29:10.727339 env[1420]: time="2025-09-06T00:29:10.727264169Z" level=info msg="Loading containers: done." Sep 6 00:29:10.747415 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck90767634-merged.mount: Deactivated successfully. Sep 6 00:29:10.752964 env[1420]: time="2025-09-06T00:29:10.752894226Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 6 00:29:10.753264 env[1420]: time="2025-09-06T00:29:10.753218301Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Sep 6 00:29:10.753433 env[1420]: time="2025-09-06T00:29:10.753380370Z" level=info msg="Daemon has completed initialization" Sep 6 00:29:10.777375 systemd[1]: Started docker.service. Sep 6 00:29:10.791809 env[1420]: time="2025-09-06T00:29:10.791645008Z" level=info msg="API listen on /run/docker.sock" Sep 6 00:29:11.811968 env[1214]: time="2025-09-06T00:29:11.811890062Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\"" Sep 6 00:29:12.017634 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 6 00:29:12.018094 systemd[1]: Stopped kubelet.service. Sep 6 00:29:12.018171 systemd[1]: kubelet.service: Consumed 1.607s CPU time. Sep 6 00:29:12.020764 systemd[1]: Starting kubelet.service... Sep 6 00:29:12.330119 systemd[1]: Started kubelet.service. 
Sep 6 00:29:12.394533 kubelet[1546]: E0906 00:29:12.394477 1546 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 6 00:29:12.399264 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 6 00:29:12.399564 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 6 00:29:12.670906 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4070331657.mount: Deactivated successfully. Sep 6 00:29:14.535254 env[1214]: time="2025-09-06T00:29:14.535169342Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.33.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:29:14.538198 env[1214]: time="2025-09-06T00:29:14.538137781Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1f41885d0a91155d5a5e670b2862eed338c7f12b0e8a5bbc88b1ab4a2d505ae8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:29:14.541004 env[1214]: time="2025-09-06T00:29:14.540945835Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.33.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:29:14.543720 env[1214]: time="2025-09-06T00:29:14.543636841Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:29:14.544904 env[1214]: time="2025-09-06T00:29:14.544853666Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\" returns image reference 
\"sha256:1f41885d0a91155d5a5e670b2862eed338c7f12b0e8a5bbc88b1ab4a2d505ae8\"" Sep 6 00:29:14.545849 env[1214]: time="2025-09-06T00:29:14.545811525Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\"" Sep 6 00:29:16.240871 env[1214]: time="2025-09-06T00:29:16.240792261Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.33.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:29:16.245010 env[1214]: time="2025-09-06T00:29:16.244943230Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:358ab71c1a1ea4846ad0b3dff0d9db6b124236b64bc8a6b79dc874f65dc0d492,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:29:16.248107 env[1214]: time="2025-09-06T00:29:16.248043923Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.33.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:29:16.251231 env[1214]: time="2025-09-06T00:29:16.251162412Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:29:16.252975 env[1214]: time="2025-09-06T00:29:16.252896148Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\" returns image reference \"sha256:358ab71c1a1ea4846ad0b3dff0d9db6b124236b64bc8a6b79dc874f65dc0d492\"" Sep 6 00:29:16.253943 env[1214]: time="2025-09-06T00:29:16.253904328Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\"" Sep 6 00:29:17.600977 env[1214]: time="2025-09-06T00:29:17.600888709Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.33.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:29:17.604163 env[1214]: 
time="2025-09-06T00:29:17.604097941Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ab4ad8a84c3c69c18494ef32fa087b32f7c44d71e6acba463d2c7dda798c3d66,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:29:17.606927 env[1214]: time="2025-09-06T00:29:17.606868182Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.33.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:29:17.609649 env[1214]: time="2025-09-06T00:29:17.609594195Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:29:17.610889 env[1214]: time="2025-09-06T00:29:17.610840132Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\" returns image reference \"sha256:ab4ad8a84c3c69c18494ef32fa087b32f7c44d71e6acba463d2c7dda798c3d66\"" Sep 6 00:29:17.611749 env[1214]: time="2025-09-06T00:29:17.611687463Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\"" Sep 6 00:29:18.862358 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3070963598.mount: Deactivated successfully. 
Sep 6 00:29:19.717265 env[1214]: time="2025-09-06T00:29:19.717175481Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.33.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:29:19.720463 env[1214]: time="2025-09-06T00:29:19.720396632Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1b2ea5e018dbbbd2efb8e5c540a6d3c463d77f250d3904429402ee057f09c64e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:29:19.722797 env[1214]: time="2025-09-06T00:29:19.722667001Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.33.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:29:19.724847 env[1214]: time="2025-09-06T00:29:19.724794085Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:29:19.725663 env[1214]: time="2025-09-06T00:29:19.725579643Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\" returns image reference \"sha256:1b2ea5e018dbbbd2efb8e5c540a6d3c463d77f250d3904429402ee057f09c64e\"" Sep 6 00:29:19.726315 env[1214]: time="2025-09-06T00:29:19.726272278Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Sep 6 00:29:20.145456 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3841423448.mount: Deactivated successfully. 
Sep 6 00:29:21.717609 env[1214]: time="2025-09-06T00:29:21.717520915Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.12.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:29:21.722621 env[1214]: time="2025-09-06T00:29:21.722553484Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:29:21.726970 env[1214]: time="2025-09-06T00:29:21.726902230Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.12.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:29:21.730015 env[1214]: time="2025-09-06T00:29:21.729958105Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:29:21.731590 env[1214]: time="2025-09-06T00:29:21.731520398Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Sep 6 00:29:21.732393 env[1214]: time="2025-09-06T00:29:21.732353101Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 6 00:29:22.205491 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3038968640.mount: Deactivated successfully. 
Sep 6 00:29:22.214924 env[1214]: time="2025-09-06T00:29:22.214831333Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:29:22.217848 env[1214]: time="2025-09-06T00:29:22.217787141Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:29:22.221438 env[1214]: time="2025-09-06T00:29:22.221380387Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:29:22.225600 env[1214]: time="2025-09-06T00:29:22.225543163Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:29:22.226629 env[1214]: time="2025-09-06T00:29:22.226572613Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Sep 6 00:29:22.227560 env[1214]: time="2025-09-06T00:29:22.227507100Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Sep 6 00:29:22.650888 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 6 00:29:22.651249 systemd[1]: Stopped kubelet.service.
Sep 6 00:29:22.653862 systemd[1]: Starting kubelet.service...
Sep 6 00:29:22.906688 systemd[1]: Started kubelet.service.
Sep 6 00:29:22.972744 kubelet[1557]: E0906 00:29:22.972660 1557 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 6 00:29:22.975848 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 6 00:29:22.976156 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 6 00:29:23.025630 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3081805200.mount: Deactivated successfully.
Sep 6 00:29:25.710608 env[1214]: time="2025-09-06T00:29:25.710521075Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.21-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:29:25.713847 env[1214]: time="2025-09-06T00:29:25.713768462Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:29:25.717362 env[1214]: time="2025-09-06T00:29:25.717309817Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.21-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:29:25.721009 env[1214]: time="2025-09-06T00:29:25.720954119Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:29:25.722730 env[1214]: time="2025-09-06T00:29:25.722660283Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\""
Sep 6 00:29:29.120113 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Sep 6 00:29:30.893203 systemd[1]: Stopped kubelet.service.
Sep 6 00:29:30.897303 systemd[1]: Starting kubelet.service...
Sep 6 00:29:30.956791 systemd[1]: Reloading.
Sep 6 00:29:31.160390 /usr/lib/systemd/system-generators/torcx-generator[1615]: time="2025-09-06T00:29:31Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Sep 6 00:29:31.167663 /usr/lib/systemd/system-generators/torcx-generator[1615]: time="2025-09-06T00:29:31Z" level=info msg="torcx already run"
Sep 6 00:29:31.277268 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Sep 6 00:29:31.277300 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Sep 6 00:29:31.302912 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 6 00:29:31.481097 systemd[1]: Started kubelet.service.
Sep 6 00:29:31.485466 systemd[1]: Stopping kubelet.service...
Sep 6 00:29:31.486145 systemd[1]: kubelet.service: Deactivated successfully.
Sep 6 00:29:31.486885 systemd[1]: Stopped kubelet.service.
Sep 6 00:29:31.489902 systemd[1]: Starting kubelet.service...
Sep 6 00:29:31.814962 systemd[1]: Started kubelet.service.
Sep 6 00:29:31.890061 kubelet[1663]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 6 00:29:31.891415 kubelet[1663]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 6 00:29:31.891572 kubelet[1663]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 6 00:29:31.891872 kubelet[1663]: I0906 00:29:31.891621 1663 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 6 00:29:32.763669 kubelet[1663]: I0906 00:29:32.763600 1663 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Sep 6 00:29:32.763669 kubelet[1663]: I0906 00:29:32.763640 1663 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 6 00:29:32.764135 kubelet[1663]: I0906 00:29:32.764088 1663 server.go:956] "Client rotation is on, will bootstrap in background"
Sep 6 00:29:32.823842 kubelet[1663]: E0906 00:29:32.823782 1663 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.128.0.49:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.128.0.49:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Sep 6 00:29:32.824085 kubelet[1663]: I0906 00:29:32.823934 1663 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 6 00:29:32.837216 kubelet[1663]: E0906 00:29:32.837136 1663 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 6 00:29:32.837216 kubelet[1663]: I0906 00:29:32.837200 1663 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 6 00:29:32.843104 kubelet[1663]: I0906 00:29:32.843034 1663 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 6 00:29:32.843583 kubelet[1663]: I0906 00:29:32.843515 1663 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 6 00:29:32.843907 kubelet[1663]: I0906 00:29:32.843571 1663 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 6 00:29:32.843907 kubelet[1663]: I0906 00:29:32.843906 1663 topology_manager.go:138] "Creating topology manager with none policy"
Sep 6 00:29:32.844220 kubelet[1663]: I0906 00:29:32.843927 1663 container_manager_linux.go:303] "Creating device plugin manager"
Sep 6 00:29:32.845371 kubelet[1663]: I0906 00:29:32.845328 1663 state_mem.go:36] "Initialized new in-memory state store"
Sep 6 00:29:32.851103 kubelet[1663]: I0906 00:29:32.851042 1663 kubelet.go:480] "Attempting to sync node with API server"
Sep 6 00:29:32.851103 kubelet[1663]: I0906 00:29:32.851103 1663 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 6 00:29:32.854313 kubelet[1663]: I0906 00:29:32.854259 1663 kubelet.go:386] "Adding apiserver pod source"
Sep 6 00:29:32.865924 kubelet[1663]: I0906 00:29:32.865867 1663 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 6 00:29:32.890345 kubelet[1663]: E0906 00:29:32.889254 1663 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.128.0.49:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081&limit=500&resourceVersion=0\": dial tcp 10.128.0.49:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Sep 6 00:29:32.890345 kubelet[1663]: I0906 00:29:32.889464 1663 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Sep 6 00:29:32.891060 kubelet[1663]: I0906 00:29:32.890420 1663 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Sep 6 00:29:32.895130 kubelet[1663]: W0906 00:29:32.895065 1663 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 6 00:29:32.907276 kubelet[1663]: E0906 00:29:32.907233 1663 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.128.0.49:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.49:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Sep 6 00:29:32.913177 kubelet[1663]: I0906 00:29:32.913126 1663 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 6 00:29:32.913327 kubelet[1663]: I0906 00:29:32.913235 1663 server.go:1289] "Started kubelet"
Sep 6 00:29:32.924305 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Sep 6 00:29:32.931129 kubelet[1663]: I0906 00:29:32.930160 1663 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 6 00:29:32.931943 kubelet[1663]: I0906 00:29:32.931864 1663 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 6 00:29:32.932557 kubelet[1663]: I0906 00:29:32.932529 1663 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 6 00:29:32.937490 kubelet[1663]: I0906 00:29:32.937411 1663 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Sep 6 00:29:32.939230 kubelet[1663]: I0906 00:29:32.939201 1663 server.go:317] "Adding debug handlers to kubelet server"
Sep 6 00:29:32.941624 kubelet[1663]: I0906 00:29:32.941590 1663 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 6 00:29:32.947746 kubelet[1663]: I0906 00:29:32.947657 1663 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 6 00:29:32.948050 kubelet[1663]: E0906 00:29:32.947999 1663 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081\" not found"
Sep 6 00:29:32.949464 kubelet[1663]: I0906 00:29:32.948971 1663 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 6 00:29:32.949464 kubelet[1663]: I0906 00:29:32.949061 1663 reconciler.go:26] "Reconciler: start to sync state"
Sep 6 00:29:32.950410 kubelet[1663]: E0906 00:29:32.947432 1663 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.49:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.49:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081.18628a084ebafef8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081,UID:ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081,},FirstTimestamp:2025-09-06 00:29:32.913164024 +0000 UTC m=+1.087562455,LastTimestamp:2025-09-06 00:29:32.913164024 +0000 UTC m=+1.087562455,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081,}"
Sep 6 00:29:32.951873 kubelet[1663]: I0906 00:29:32.951838 1663 factory.go:223] Registration of the systemd container factory successfully
Sep 6 00:29:32.952024 kubelet[1663]: I0906 00:29:32.951990 1663 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 6 00:29:32.952540 kubelet[1663]: E0906 00:29:32.950580 1663 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081?timeout=10s\": dial tcp 10.128.0.49:6443: connect: connection refused" interval="200ms"
Sep 6 00:29:32.952834 kubelet[1663]: E0906 00:29:32.950458 1663 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.128.0.49:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.49:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Sep 6 00:29:32.953058 kubelet[1663]: I0906 00:29:32.953025 1663 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Sep 6 00:29:32.955492 kubelet[1663]: I0906 00:29:32.955456 1663 factory.go:223] Registration of the containerd container factory successfully
Sep 6 00:29:32.983877 kubelet[1663]: I0906 00:29:32.983839 1663 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 6 00:29:32.983877 kubelet[1663]: I0906 00:29:32.983871 1663 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 6 00:29:32.984214 kubelet[1663]: I0906 00:29:32.983899 1663 state_mem.go:36] "Initialized new in-memory state store"
Sep 6 00:29:32.986870 kubelet[1663]: I0906 00:29:32.986831 1663 policy_none.go:49] "None policy: Start"
Sep 6 00:29:32.986870 kubelet[1663]: I0906 00:29:32.986874 1663 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 6 00:29:32.987080 kubelet[1663]: I0906 00:29:32.986896 1663 state_mem.go:35] "Initializing new in-memory state store"
Sep 6 00:29:32.996295 systemd[1]: Created slice kubepods.slice.
Sep 6 00:29:33.009242 kubelet[1663]: I0906 00:29:33.009204 1663 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Sep 6 00:29:33.009480 kubelet[1663]: I0906 00:29:33.009458 1663 status_manager.go:230] "Starting to sync pod status with apiserver"
Sep 6 00:29:33.009489 systemd[1]: Created slice kubepods-burstable.slice.
Sep 6 00:29:33.009822 kubelet[1663]: I0906 00:29:33.009798 1663 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 6 00:29:33.009975 kubelet[1663]: I0906 00:29:33.009957 1663 kubelet.go:2436] "Starting kubelet main sync loop"
Sep 6 00:29:33.010224 kubelet[1663]: E0906 00:29:33.010126 1663 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 6 00:29:33.020686 kubelet[1663]: E0906 00:29:33.016468 1663 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.128.0.49:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.49:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Sep 6 00:29:33.018938 systemd[1]: Created slice kubepods-besteffort.slice.
Sep 6 00:29:33.025986 kubelet[1663]: E0906 00:29:33.025928 1663 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Sep 6 00:29:33.026216 kubelet[1663]: I0906 00:29:33.026188 1663 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 6 00:29:33.026304 kubelet[1663]: I0906 00:29:33.026222 1663 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 6 00:29:33.027904 kubelet[1663]: I0906 00:29:33.027879 1663 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 6 00:29:33.029721 kubelet[1663]: E0906 00:29:33.029668 1663 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 6 00:29:33.029934 kubelet[1663]: E0906 00:29:33.029910 1663 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081\" not found"
Sep 6 00:29:33.132218 systemd[1]: Created slice kubepods-burstable-pod9812e3fd9b51c57eff9cdc18a0e63e4a.slice.
Sep 6 00:29:33.135165 kubelet[1663]: I0906 00:29:33.135131 1663 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081"
Sep 6 00:29:33.135875 kubelet[1663]: E0906 00:29:33.135834 1663 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.49:6443/api/v1/nodes\": dial tcp 10.128.0.49:6443: connect: connection refused" node="ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081"
Sep 6 00:29:33.141920 kubelet[1663]: E0906 00:29:33.141871 1663 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081\" not found" node="ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081"
Sep 6 00:29:33.148235 systemd[1]: Created slice kubepods-burstable-pod924bbab61d941429b0d862028c9f4616.slice.
Sep 6 00:29:33.151079 kubelet[1663]: I0906 00:29:33.151040 1663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9812e3fd9b51c57eff9cdc18a0e63e4a-k8s-certs\") pod \"kube-apiserver-ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081\" (UID: \"9812e3fd9b51c57eff9cdc18a0e63e4a\") " pod="kube-system/kube-apiserver-ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081"
Sep 6 00:29:33.151329 kubelet[1663]: I0906 00:29:33.151287 1663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/924bbab61d941429b0d862028c9f4616-ca-certs\") pod \"kube-controller-manager-ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081\" (UID: \"924bbab61d941429b0d862028c9f4616\") " pod="kube-system/kube-controller-manager-ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081"
Sep 6 00:29:33.151653 kubelet[1663]: I0906 00:29:33.151617 1663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/924bbab61d941429b0d862028c9f4616-flexvolume-dir\") pod \"kube-controller-manager-ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081\" (UID: \"924bbab61d941429b0d862028c9f4616\") " pod="kube-system/kube-controller-manager-ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081"
Sep 6 00:29:33.151890 kubelet[1663]: I0906 00:29:33.151861 1663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/924bbab61d941429b0d862028c9f4616-k8s-certs\") pod \"kube-controller-manager-ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081\" (UID: \"924bbab61d941429b0d862028c9f4616\") " pod="kube-system/kube-controller-manager-ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081"
Sep 6 00:29:33.152061 kubelet[1663]: I0906 00:29:33.152031 1663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9812e3fd9b51c57eff9cdc18a0e63e4a-ca-certs\") pod \"kube-apiserver-ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081\" (UID: \"9812e3fd9b51c57eff9cdc18a0e63e4a\") " pod="kube-system/kube-apiserver-ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081"
Sep 6 00:29:33.152233 kubelet[1663]: I0906 00:29:33.152205 1663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9812e3fd9b51c57eff9cdc18a0e63e4a-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081\" (UID: \"9812e3fd9b51c57eff9cdc18a0e63e4a\") " pod="kube-system/kube-apiserver-ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081"
Sep 6 00:29:33.152469 kubelet[1663]: I0906 00:29:33.152418 1663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/924bbab61d941429b0d862028c9f4616-kubeconfig\") pod \"kube-controller-manager-ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081\" (UID: \"924bbab61d941429b0d862028c9f4616\") " pod="kube-system/kube-controller-manager-ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081"
Sep 6 00:29:33.152567 kubelet[1663]: I0906 00:29:33.152482 1663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/924bbab61d941429b0d862028c9f4616-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081\" (UID: \"924bbab61d941429b0d862028c9f4616\") " pod="kube-system/kube-controller-manager-ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081"
Sep 6 00:29:33.152567 kubelet[1663]: I0906 00:29:33.152533 1663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4c69ec219403f86f67485d0cefb92fc9-kubeconfig\") pod \"kube-scheduler-ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081\" (UID: \"4c69ec219403f86f67485d0cefb92fc9\") " pod="kube-system/kube-scheduler-ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081"
Sep 6 00:29:33.153053 kubelet[1663]: E0906 00:29:33.152998 1663 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081?timeout=10s\": dial tcp 10.128.0.49:6443: connect: connection refused" interval="400ms"
Sep 6 00:29:33.153357 kubelet[1663]: E0906 00:29:33.153314 1663 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081\" not found" node="ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081"
Sep 6 00:29:33.157116 systemd[1]: Created slice kubepods-burstable-pod4c69ec219403f86f67485d0cefb92fc9.slice.
Sep 6 00:29:33.159966 kubelet[1663]: E0906 00:29:33.159918 1663 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081\" not found" node="ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081"
Sep 6 00:29:33.354684 kubelet[1663]: I0906 00:29:33.353946 1663 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081"
Sep 6 00:29:33.354684 kubelet[1663]: E0906 00:29:33.354551 1663 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.49:6443/api/v1/nodes\": dial tcp 10.128.0.49:6443: connect: connection refused" node="ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081"
Sep 6 00:29:33.443888 env[1214]: time="2025-09-06T00:29:33.443811486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081,Uid:9812e3fd9b51c57eff9cdc18a0e63e4a,Namespace:kube-system,Attempt:0,}"
Sep 6 00:29:33.455992 env[1214]: time="2025-09-06T00:29:33.455902474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081,Uid:924bbab61d941429b0d862028c9f4616,Namespace:kube-system,Attempt:0,}"
Sep 6 00:29:33.461996 env[1214]: time="2025-09-06T00:29:33.461918255Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081,Uid:4c69ec219403f86f67485d0cefb92fc9,Namespace:kube-system,Attempt:0,}"
Sep 6 00:29:33.554012 kubelet[1663]: E0906 00:29:33.553939 1663 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081?timeout=10s\": dial tcp 10.128.0.49:6443: connect: connection refused" interval="800ms"
Sep 6 00:29:33.759950 kubelet[1663]: I0906 00:29:33.759905 1663 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081"
Sep 6 00:29:33.760385 kubelet[1663]: E0906 00:29:33.760342 1663 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.49:6443/api/v1/nodes\": dial tcp 10.128.0.49:6443: connect: connection refused" node="ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081"
Sep 6 00:29:34.055418 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2581348219.mount: Deactivated successfully.
Sep 6 00:29:34.066473 env[1214]: time="2025-09-06T00:29:34.066405437Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:29:34.068399 env[1214]: time="2025-09-06T00:29:34.068328558Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:29:34.072113 env[1214]: time="2025-09-06T00:29:34.072057579Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:29:34.073778 env[1214]: time="2025-09-06T00:29:34.073656693Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:29:34.075480 env[1214]: time="2025-09-06T00:29:34.075404462Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:29:34.077964 env[1214]: time="2025-09-06T00:29:34.077906664Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:29:34.079103 env[1214]: time="2025-09-06T00:29:34.079050721Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:29:34.080253 env[1214]: time="2025-09-06T00:29:34.080212963Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:29:34.084355 env[1214]: time="2025-09-06T00:29:34.084272641Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:29:34.085514 env[1214]: time="2025-09-06T00:29:34.085471893Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:29:34.090910 env[1214]: time="2025-09-06T00:29:34.090849763Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:29:34.092117 env[1214]: time="2025-09-06T00:29:34.092057085Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:29:34.120968 env[1214]: time="2025-09-06T00:29:34.120791208Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 6 00:29:34.121271 env[1214]: time="2025-09-06T00:29:34.121224620Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 6 00:29:34.121864 env[1214]: time="2025-09-06T00:29:34.121802262Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 6 00:29:34.153099 env[1214]: time="2025-09-06T00:29:34.125180255Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4f3dd54c3f3aad63f8ca7467cde5b6d6e7926c007ca49f0f6d41bc292bffb754 pid=1705 runtime=io.containerd.runc.v2
Sep 6 00:29:34.164303 env[1214]: time="2025-09-06T00:29:34.164201043Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 6 00:29:34.164583 env[1214]: time="2025-09-06T00:29:34.164535282Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 6 00:29:34.164782 env[1214]: time="2025-09-06T00:29:34.164738154Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 6 00:29:34.165185 env[1214]: time="2025-09-06T00:29:34.165140733Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/89815876a106664bb8a27ab60e83638e1e1b9a409b1573414c4f35df3b73e101 pid=1729 runtime=io.containerd.runc.v2
Sep 6 00:29:34.169234 systemd[1]: Started cri-containerd-4f3dd54c3f3aad63f8ca7467cde5b6d6e7926c007ca49f0f6d41bc292bffb754.scope.
Sep 6 00:29:34.192632 env[1214]: time="2025-09-06T00:29:34.189061090Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 6 00:29:34.192632 env[1214]: time="2025-09-06T00:29:34.189216435Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 6 00:29:34.192632 env[1214]: time="2025-09-06T00:29:34.189294836Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 6 00:29:34.192632 env[1214]: time="2025-09-06T00:29:34.189588591Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f7bb5419847265b1dd23e2c15059216377fbce155c31b9d5bf6072c56e84cd7b pid=1742 runtime=io.containerd.runc.v2
Sep 6 00:29:34.215026 kubelet[1663]: E0906 00:29:34.214944 1663 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.128.0.49:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.49:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Sep 6 00:29:34.233578 systemd[1]: Started cri-containerd-89815876a106664bb8a27ab60e83638e1e1b9a409b1573414c4f35df3b73e101.scope.
Sep 6 00:29:34.268389 systemd[1]: Started cri-containerd-f7bb5419847265b1dd23e2c15059216377fbce155c31b9d5bf6072c56e84cd7b.scope.
Sep 6 00:29:34.319156 env[1214]: time="2025-09-06T00:29:34.318990013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081,Uid:924bbab61d941429b0d862028c9f4616,Namespace:kube-system,Attempt:0,} returns sandbox id \"4f3dd54c3f3aad63f8ca7467cde5b6d6e7926c007ca49f0f6d41bc292bffb754\"" Sep 6 00:29:34.322432 kubelet[1663]: E0906 00:29:34.322378 1663 kubelet_pods.go:553] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-3510-3-8-nightly-20250905-2100-5b333" Sep 6 00:29:34.328963 env[1214]: time="2025-09-06T00:29:34.328896733Z" level=info msg="CreateContainer within sandbox \"4f3dd54c3f3aad63f8ca7467cde5b6d6e7926c007ca49f0f6d41bc292bffb754\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 6 00:29:34.338091 kubelet[1663]: E0906 00:29:34.338007 1663 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.128.0.49:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081&limit=500&resourceVersion=0\": dial tcp 10.128.0.49:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 6 00:29:34.354957 kubelet[1663]: E0906 00:29:34.354894 1663 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081?timeout=10s\": dial tcp 10.128.0.49:6443: connect: connection refused" interval="1.6s" Sep 6 00:29:34.366091 env[1214]: time="2025-09-06T00:29:34.366005234Z" level=info msg="CreateContainer within sandbox \"4f3dd54c3f3aad63f8ca7467cde5b6d6e7926c007ca49f0f6d41bc292bffb754\" for 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3784ea3fb1201d2f4548dcd16890a53339282c35d2d4154697e801f19273efeb\"" Sep 6 00:29:34.368301 env[1214]: time="2025-09-06T00:29:34.368238525Z" level=info msg="StartContainer for \"3784ea3fb1201d2f4548dcd16890a53339282c35d2d4154697e801f19273efeb\"" Sep 6 00:29:34.371102 env[1214]: time="2025-09-06T00:29:34.370923134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081,Uid:9812e3fd9b51c57eff9cdc18a0e63e4a,Namespace:kube-system,Attempt:0,} returns sandbox id \"89815876a106664bb8a27ab60e83638e1e1b9a409b1573414c4f35df3b73e101\"" Sep 6 00:29:34.374293 kubelet[1663]: E0906 00:29:34.374244 1663 kubelet_pods.go:553] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30" Sep 6 00:29:34.384789 env[1214]: time="2025-09-06T00:29:34.384732255Z" level=info msg="CreateContainer within sandbox \"89815876a106664bb8a27ab60e83638e1e1b9a409b1573414c4f35df3b73e101\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 6 00:29:34.405072 env[1214]: time="2025-09-06T00:29:34.404992628Z" level=info msg="CreateContainer within sandbox \"89815876a106664bb8a27ab60e83638e1e1b9a409b1573414c4f35df3b73e101\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"23b67a223b66bbed7e306c5bc8540fe1e44e5326625fa6f080fccc5a27862c57\"" Sep 6 00:29:34.405814 env[1214]: time="2025-09-06T00:29:34.405754297Z" level=info msg="StartContainer for \"23b67a223b66bbed7e306c5bc8540fe1e44e5326625fa6f080fccc5a27862c57\"" Sep 6 00:29:34.419724 systemd[1]: Started cri-containerd-3784ea3fb1201d2f4548dcd16890a53339282c35d2d4154697e801f19273efeb.scope. 
Sep 6 00:29:34.461226 systemd[1]: Started cri-containerd-23b67a223b66bbed7e306c5bc8540fe1e44e5326625fa6f080fccc5a27862c57.scope. Sep 6 00:29:34.475543 kubelet[1663]: E0906 00:29:34.475473 1663 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.128.0.49:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.49:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 6 00:29:34.497983 env[1214]: time="2025-09-06T00:29:34.497919617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081,Uid:4c69ec219403f86f67485d0cefb92fc9,Namespace:kube-system,Attempt:0,} returns sandbox id \"f7bb5419847265b1dd23e2c15059216377fbce155c31b9d5bf6072c56e84cd7b\"" Sep 6 00:29:34.505506 kubelet[1663]: E0906 00:29:34.505431 1663 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.128.0.49:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.49:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 6 00:29:34.506098 kubelet[1663]: E0906 00:29:34.506056 1663 kubelet_pods.go:553] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30" Sep 6 00:29:34.515598 env[1214]: time="2025-09-06T00:29:34.514822363Z" level=info msg="CreateContainer within sandbox \"f7bb5419847265b1dd23e2c15059216377fbce155c31b9d5bf6072c56e84cd7b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 6 00:29:34.548743 env[1214]: time="2025-09-06T00:29:34.545886541Z" level=info msg="StartContainer for 
\"3784ea3fb1201d2f4548dcd16890a53339282c35d2d4154697e801f19273efeb\" returns successfully" Sep 6 00:29:34.551758 env[1214]: time="2025-09-06T00:29:34.551645099Z" level=info msg="CreateContainer within sandbox \"f7bb5419847265b1dd23e2c15059216377fbce155c31b9d5bf6072c56e84cd7b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"197aab54875fe1e6d9fb834407f85e79f7ce266ced619acecba434daeb1fe724\"" Sep 6 00:29:34.552643 env[1214]: time="2025-09-06T00:29:34.552597439Z" level=info msg="StartContainer for \"197aab54875fe1e6d9fb834407f85e79f7ce266ced619acecba434daeb1fe724\"" Sep 6 00:29:34.567947 kubelet[1663]: I0906 00:29:34.567364 1663 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081" Sep 6 00:29:34.567947 kubelet[1663]: E0906 00:29:34.567876 1663 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.49:6443/api/v1/nodes\": dial tcp 10.128.0.49:6443: connect: connection refused" node="ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081" Sep 6 00:29:34.600000 systemd[1]: Started cri-containerd-197aab54875fe1e6d9fb834407f85e79f7ce266ced619acecba434daeb1fe724.scope. 
Sep 6 00:29:34.630397 env[1214]: time="2025-09-06T00:29:34.630331255Z" level=info msg="StartContainer for \"23b67a223b66bbed7e306c5bc8540fe1e44e5326625fa6f080fccc5a27862c57\" returns successfully" Sep 6 00:29:34.724926 env[1214]: time="2025-09-06T00:29:34.724849838Z" level=info msg="StartContainer for \"197aab54875fe1e6d9fb834407f85e79f7ce266ced619acecba434daeb1fe724\" returns successfully" Sep 6 00:29:35.033016 kubelet[1663]: E0906 00:29:35.032977 1663 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081\" not found" node="ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081" Sep 6 00:29:35.034579 kubelet[1663]: E0906 00:29:35.034546 1663 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081\" not found" node="ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081" Sep 6 00:29:35.035252 kubelet[1663]: E0906 00:29:35.035205 1663 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081\" not found" node="ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081" Sep 6 00:29:36.039026 kubelet[1663]: E0906 00:29:36.038972 1663 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081\" not found" node="ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081" Sep 6 00:29:36.040441 kubelet[1663]: E0906 00:29:36.040408 1663 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081\" not found" node="ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081" Sep 6 00:29:36.174175 kubelet[1663]: I0906 00:29:36.174135 1663 
kubelet_node_status.go:75] "Attempting to register node" node="ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081" Sep 6 00:29:37.332770 kubelet[1663]: E0906 00:29:37.332728 1663 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081\" not found" node="ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081" Sep 6 00:29:38.030919 kubelet[1663]: E0906 00:29:38.030864 1663 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081\" not found" node="ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081" Sep 6 00:29:38.097735 kubelet[1663]: I0906 00:29:38.097658 1663 kubelet_node_status.go:78] "Successfully registered node" node="ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081" Sep 6 00:29:38.148496 kubelet[1663]: E0906 00:29:38.148321 1663 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081.18628a084ebafef8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081,UID:ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081,},FirstTimestamp:2025-09-06 00:29:32.913164024 +0000 UTC m=+1.087562455,LastTimestamp:2025-09-06 00:29:32.913164024 +0000 UTC m=+1.087562455,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081,}" Sep 6 00:29:38.148907 kubelet[1663]: I0906 00:29:38.148871 1663 kubelet.go:3309] "Creating a 
mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081" Sep 6 00:29:38.217051 kubelet[1663]: E0906 00:29:38.216994 1663 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081" Sep 6 00:29:38.217349 kubelet[1663]: I0906 00:29:38.217320 1663 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081" Sep 6 00:29:38.228630 kubelet[1663]: E0906 00:29:38.228582 1663 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081" Sep 6 00:29:38.228932 kubelet[1663]: I0906 00:29:38.228906 1663 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081" Sep 6 00:29:38.247155 kubelet[1663]: E0906 00:29:38.247109 1663 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081" Sep 6 00:29:38.895276 kubelet[1663]: I0906 00:29:38.895215 1663 apiserver.go:52] "Watching apiserver" Sep 6 00:29:38.949514 kubelet[1663]: I0906 00:29:38.949462 1663 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 6 00:29:40.312628 systemd[1]: Reloading. 
Sep 6 00:29:40.466167 /usr/lib/systemd/system-generators/torcx-generator[1965]: time="2025-09-06T00:29:40Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 6 00:29:40.466230 /usr/lib/systemd/system-generators/torcx-generator[1965]: time="2025-09-06T00:29:40Z" level=info msg="torcx already run" Sep 6 00:29:40.583152 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 6 00:29:40.583182 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 6 00:29:40.612864 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 6 00:29:40.798960 systemd[1]: Stopping kubelet.service... Sep 6 00:29:40.823759 systemd[1]: kubelet.service: Deactivated successfully. Sep 6 00:29:40.824121 systemd[1]: Stopped kubelet.service. Sep 6 00:29:40.824281 systemd[1]: kubelet.service: Consumed 1.662s CPU time. Sep 6 00:29:40.827540 systemd[1]: Starting kubelet.service... Sep 6 00:29:41.126045 systemd[1]: Started kubelet.service. Sep 6 00:29:41.218285 kubelet[2012]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 6 00:29:41.218285 kubelet[2012]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Sep 6 00:29:41.218285 kubelet[2012]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 6 00:29:41.218954 kubelet[2012]: I0906 00:29:41.218433 2012 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 6 00:29:41.228031 kubelet[2012]: I0906 00:29:41.227988 2012 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 6 00:29:41.228225 kubelet[2012]: I0906 00:29:41.228208 2012 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 6 00:29:41.228562 kubelet[2012]: I0906 00:29:41.228540 2012 server.go:956] "Client rotation is on, will bootstrap in background" Sep 6 00:29:41.230351 kubelet[2012]: I0906 00:29:41.230315 2012 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Sep 6 00:29:41.233103 kubelet[2012]: I0906 00:29:41.233069 2012 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 6 00:29:41.238523 kubelet[2012]: E0906 00:29:41.238490 2012 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 6 00:29:41.238717 kubelet[2012]: I0906 00:29:41.238685 2012 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 6 00:29:41.244357 kubelet[2012]: I0906 00:29:41.244288 2012 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 6 00:29:41.245353 kubelet[2012]: I0906 00:29:41.245310 2012 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 6 00:29:41.246067 kubelet[2012]: I0906 00:29:41.245584 2012 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 6 00:29:41.246420 kubelet[2012]: I0906 00:29:41.246380 2012 topology_manager.go:138] "Creating topology manager 
with none policy" Sep 6 00:29:41.246611 kubelet[2012]: I0906 00:29:41.246585 2012 container_manager_linux.go:303] "Creating device plugin manager" Sep 6 00:29:41.246822 kubelet[2012]: I0906 00:29:41.246804 2012 state_mem.go:36] "Initialized new in-memory state store" Sep 6 00:29:41.247204 kubelet[2012]: I0906 00:29:41.247182 2012 kubelet.go:480] "Attempting to sync node with API server" Sep 6 00:29:41.247401 kubelet[2012]: I0906 00:29:41.247373 2012 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 6 00:29:41.247649 kubelet[2012]: I0906 00:29:41.247629 2012 kubelet.go:386] "Adding apiserver pod source" Sep 6 00:29:41.247972 kubelet[2012]: I0906 00:29:41.247952 2012 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 6 00:29:41.258414 kubelet[2012]: I0906 00:29:41.258370 2012 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 6 00:29:41.259305 kubelet[2012]: I0906 00:29:41.259271 2012 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 6 00:29:41.291557 kubelet[2012]: I0906 00:29:41.291194 2012 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 6 00:29:41.291557 kubelet[2012]: I0906 00:29:41.291279 2012 server.go:1289] "Started kubelet" Sep 6 00:29:41.295051 kubelet[2012]: I0906 00:29:41.292882 2012 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 6 00:29:41.295677 kubelet[2012]: I0906 00:29:41.295123 2012 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 6 00:29:41.295677 kubelet[2012]: I0906 00:29:41.295662 2012 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 6 00:29:41.297065 kubelet[2012]: I0906 00:29:41.296542 2012 server.go:317] "Adding debug handlers to kubelet server" Sep 6 
00:29:41.304603 kubelet[2012]: I0906 00:29:41.300685 2012 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 6 00:29:41.309431 kubelet[2012]: I0906 00:29:41.309395 2012 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 6 00:29:41.317169 kubelet[2012]: I0906 00:29:41.317132 2012 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 6 00:29:41.319185 kubelet[2012]: I0906 00:29:41.319154 2012 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 6 00:29:41.319589 kubelet[2012]: I0906 00:29:41.319567 2012 reconciler.go:26] "Reconciler: start to sync state" Sep 6 00:29:41.323535 kubelet[2012]: I0906 00:29:41.323505 2012 factory.go:223] Registration of the systemd container factory successfully Sep 6 00:29:41.323976 kubelet[2012]: I0906 00:29:41.323911 2012 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 6 00:29:41.330008 kubelet[2012]: I0906 00:29:41.329969 2012 factory.go:223] Registration of the containerd container factory successfully Sep 6 00:29:41.338560 kubelet[2012]: E0906 00:29:41.338514 2012 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 6 00:29:41.343138 sudo[2034]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 6 00:29:41.344436 sudo[2034]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Sep 6 00:29:41.410061 kubelet[2012]: I0906 00:29:41.407475 2012 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 6 00:29:41.420535 kubelet[2012]: I0906 00:29:41.420337 2012 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Sep 6 00:29:41.420535 kubelet[2012]: I0906 00:29:41.420374 2012 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 6 00:29:41.420535 kubelet[2012]: I0906 00:29:41.420406 2012 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 6 00:29:41.420535 kubelet[2012]: I0906 00:29:41.420420 2012 kubelet.go:2436] "Starting kubelet main sync loop" Sep 6 00:29:41.420535 kubelet[2012]: E0906 00:29:41.420494 2012 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 6 00:29:41.482834 kubelet[2012]: I0906 00:29:41.482793 2012 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 6 00:29:41.482834 kubelet[2012]: I0906 00:29:41.482826 2012 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 6 00:29:41.483097 kubelet[2012]: I0906 00:29:41.482861 2012 state_mem.go:36] "Initialized new in-memory state store" Sep 6 00:29:41.483171 kubelet[2012]: I0906 00:29:41.483103 2012 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 6 00:29:41.483171 kubelet[2012]: I0906 00:29:41.483122 2012 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 6 00:29:41.483171 kubelet[2012]: I0906 00:29:41.483149 2012 policy_none.go:49] "None policy: Start" Sep 6 00:29:41.483171 kubelet[2012]: I0906 00:29:41.483165 2012 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 6 00:29:41.483487 kubelet[2012]: I0906 00:29:41.483184 2012 state_mem.go:35] "Initializing new in-memory state store" Sep 6 00:29:41.483487 kubelet[2012]: I0906 00:29:41.483350 2012 state_mem.go:75] "Updated machine memory state" Sep 6 00:29:41.491931 kubelet[2012]: E0906 00:29:41.491892 2012 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 6 00:29:41.492261 kubelet[2012]: I0906 00:29:41.492238 
2012 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 6 00:29:41.492392 kubelet[2012]: I0906 00:29:41.492265 2012 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 6 00:29:41.494057 kubelet[2012]: I0906 00:29:41.494024 2012 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 6 00:29:41.501372 kubelet[2012]: E0906 00:29:41.501335 2012 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 6 00:29:41.523731 kubelet[2012]: I0906 00:29:41.522872 2012 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081" Sep 6 00:29:41.529337 kubelet[2012]: I0906 00:29:41.529164 2012 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081" Sep 6 00:29:41.532713 kubelet[2012]: I0906 00:29:41.532655 2012 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]" Sep 6 00:29:41.538189 kubelet[2012]: I0906 00:29:41.538038 2012 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081" Sep 6 00:29:41.549424 kubelet[2012]: I0906 00:29:41.549384 2012 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]" Sep 6 00:29:41.549668 kubelet[2012]: I0906 00:29:41.549641 2012 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]" Sep 6 00:29:41.610928 
kubelet[2012]: I0906 00:29:41.610892 2012 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081" Sep 6 00:29:41.621725 kubelet[2012]: I0906 00:29:41.621665 2012 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4c69ec219403f86f67485d0cefb92fc9-kubeconfig\") pod \"kube-scheduler-ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081\" (UID: \"4c69ec219403f86f67485d0cefb92fc9\") " pod="kube-system/kube-scheduler-ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081" Sep 6 00:29:41.622051 kubelet[2012]: I0906 00:29:41.621979 2012 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/924bbab61d941429b0d862028c9f4616-k8s-certs\") pod \"kube-controller-manager-ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081\" (UID: \"924bbab61d941429b0d862028c9f4616\") " pod="kube-system/kube-controller-manager-ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081" Sep 6 00:29:41.622249 kubelet[2012]: I0906 00:29:41.622225 2012 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/924bbab61d941429b0d862028c9f4616-kubeconfig\") pod \"kube-controller-manager-ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081\" (UID: \"924bbab61d941429b0d862028c9f4616\") " pod="kube-system/kube-controller-manager-ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081" Sep 6 00:29:41.622470 kubelet[2012]: I0906 00:29:41.622427 2012 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9812e3fd9b51c57eff9cdc18a0e63e4a-ca-certs\") pod \"kube-apiserver-ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081\" (UID: \"9812e3fd9b51c57eff9cdc18a0e63e4a\") " 
pod="kube-system/kube-apiserver-ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081" Sep 6 00:29:41.622668 kubelet[2012]: I0906 00:29:41.622626 2012 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9812e3fd9b51c57eff9cdc18a0e63e4a-k8s-certs\") pod \"kube-apiserver-ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081\" (UID: \"9812e3fd9b51c57eff9cdc18a0e63e4a\") " pod="kube-system/kube-apiserver-ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081" Sep 6 00:29:41.622915 kubelet[2012]: I0906 00:29:41.622854 2012 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9812e3fd9b51c57eff9cdc18a0e63e4a-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081\" (UID: \"9812e3fd9b51c57eff9cdc18a0e63e4a\") " pod="kube-system/kube-apiserver-ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081" Sep 6 00:29:41.623177 kubelet[2012]: I0906 00:29:41.623100 2012 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/924bbab61d941429b0d862028c9f4616-ca-certs\") pod \"kube-controller-manager-ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081\" (UID: \"924bbab61d941429b0d862028c9f4616\") " pod="kube-system/kube-controller-manager-ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081" Sep 6 00:29:41.623488 kubelet[2012]: I0906 00:29:41.623446 2012 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/924bbab61d941429b0d862028c9f4616-flexvolume-dir\") pod \"kube-controller-manager-ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081\" (UID: \"924bbab61d941429b0d862028c9f4616\") " 
pod="kube-system/kube-controller-manager-ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081" Sep 6 00:29:41.623689 kubelet[2012]: I0906 00:29:41.623648 2012 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/924bbab61d941429b0d862028c9f4616-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081\" (UID: \"924bbab61d941429b0d862028c9f4616\") " pod="kube-system/kube-controller-manager-ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081" Sep 6 00:29:41.626908 kubelet[2012]: I0906 00:29:41.626854 2012 kubelet_node_status.go:124] "Node was previously registered" node="ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081" Sep 6 00:29:41.627190 kubelet[2012]: I0906 00:29:41.627172 2012 kubelet_node_status.go:78] "Successfully registered node" node="ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081" Sep 6 00:29:42.253327 sudo[2034]: pam_unix(sudo:session): session closed for user root Sep 6 00:29:42.258366 kubelet[2012]: I0906 00:29:42.258291 2012 apiserver.go:52] "Watching apiserver" Sep 6 00:29:42.319824 kubelet[2012]: I0906 00:29:42.319680 2012 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 6 00:29:42.456566 kubelet[2012]: I0906 00:29:42.456500 2012 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081" Sep 6 00:29:42.456956 kubelet[2012]: I0906 00:29:42.456920 2012 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081" Sep 6 00:29:42.457281 kubelet[2012]: I0906 00:29:42.457246 2012 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081" Sep 6 00:29:42.477559 
kubelet[2012]: I0906 00:29:42.477513 2012 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]" Sep 6 00:29:42.477809 kubelet[2012]: E0906 00:29:42.477614 2012 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081\" already exists" pod="kube-system/kube-controller-manager-ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081" Sep 6 00:29:42.477996 kubelet[2012]: I0906 00:29:42.477950 2012 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]" Sep 6 00:29:42.478099 kubelet[2012]: E0906 00:29:42.478023 2012 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081\" already exists" pod="kube-system/kube-scheduler-ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081" Sep 6 00:29:42.478299 kubelet[2012]: I0906 00:29:42.478259 2012 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]" Sep 6 00:29:42.478414 kubelet[2012]: E0906 00:29:42.478327 2012 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081\" already exists" pod="kube-system/kube-apiserver-ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081" Sep 6 00:29:42.509418 kubelet[2012]: I0906 00:29:42.509238 2012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081" podStartSLOduration=1.509185953 podStartE2EDuration="1.509185953s" podCreationTimestamp="2025-09-06 00:29:41 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:29:42.506574579 +0000 UTC m=+1.369336986" watchObservedRunningTime="2025-09-06 00:29:42.509185953 +0000 UTC m=+1.371948353" Sep 6 00:29:42.538025 kubelet[2012]: I0906 00:29:42.537945 2012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081" podStartSLOduration=1.537920829 podStartE2EDuration="1.537920829s" podCreationTimestamp="2025-09-06 00:29:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:29:42.526151078 +0000 UTC m=+1.388913485" watchObservedRunningTime="2025-09-06 00:29:42.537920829 +0000 UTC m=+1.400683230" Sep 6 00:29:42.553982 kubelet[2012]: I0906 00:29:42.553903 2012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081" podStartSLOduration=1.553876953 podStartE2EDuration="1.553876953s" podCreationTimestamp="2025-09-06 00:29:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:29:42.539953312 +0000 UTC m=+1.402715719" watchObservedRunningTime="2025-09-06 00:29:42.553876953 +0000 UTC m=+1.416639360" Sep 6 00:29:44.199667 update_engine[1205]: I0906 00:29:44.198776 1205 update_attempter.cc:509] Updating boot flags... Sep 6 00:29:44.473422 sudo[1410]: pam_unix(sudo:session): session closed for user root Sep 6 00:29:44.517746 sshd[1407]: pam_unix(sshd:session): session closed for user core Sep 6 00:29:44.523244 systemd[1]: sshd@4-10.128.0.49:22-139.178.89.65:47790.service: Deactivated successfully. Sep 6 00:29:44.524503 systemd[1]: session-5.scope: Deactivated successfully. 
Sep 6 00:29:44.524862 systemd-logind[1223]: Session 5 logged out. Waiting for processes to exit. Sep 6 00:29:44.525607 systemd[1]: session-5.scope: Consumed 8.071s CPU time. Sep 6 00:29:44.526720 systemd-logind[1223]: Removed session 5. Sep 6 00:29:46.835832 kubelet[2012]: I0906 00:29:46.835767 2012 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 6 00:29:46.836513 env[1214]: time="2025-09-06T00:29:46.836425140Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 6 00:29:46.837067 kubelet[2012]: I0906 00:29:46.836752 2012 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 6 00:29:47.897840 systemd[1]: Created slice kubepods-besteffort-pod520778b2_4d28_4b79_85e3_f39c6b444731.slice. Sep 6 00:29:47.962161 systemd[1]: Created slice kubepods-burstable-pode1e5d6d6_6a59_4e88_b053_3ede0a0fe056.slice. Sep 6 00:29:47.975110 kubelet[2012]: I0906 00:29:47.975067 2012 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e1e5d6d6-6a59-4e88-b053-3ede0a0fe056-xtables-lock\") pod \"cilium-szqff\" (UID: \"e1e5d6d6-6a59-4e88-b053-3ede0a0fe056\") " pod="kube-system/cilium-szqff" Sep 6 00:29:47.975785 kubelet[2012]: I0906 00:29:47.975754 2012 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e1e5d6d6-6a59-4e88-b053-3ede0a0fe056-clustermesh-secrets\") pod \"cilium-szqff\" (UID: \"e1e5d6d6-6a59-4e88-b053-3ede0a0fe056\") " pod="kube-system/cilium-szqff" Sep 6 00:29:47.975976 kubelet[2012]: I0906 00:29:47.975943 2012 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkd56\" (UniqueName: 
\"kubernetes.io/projected/520778b2-4d28-4b79-85e3-f39c6b444731-kube-api-access-jkd56\") pod \"kube-proxy-mmkdv\" (UID: \"520778b2-4d28-4b79-85e3-f39c6b444731\") " pod="kube-system/kube-proxy-mmkdv" Sep 6 00:29:47.976123 kubelet[2012]: I0906 00:29:47.976097 2012 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e1e5d6d6-6a59-4e88-b053-3ede0a0fe056-cilium-run\") pod \"cilium-szqff\" (UID: \"e1e5d6d6-6a59-4e88-b053-3ede0a0fe056\") " pod="kube-system/cilium-szqff" Sep 6 00:29:47.976267 kubelet[2012]: I0906 00:29:47.976243 2012 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e1e5d6d6-6a59-4e88-b053-3ede0a0fe056-bpf-maps\") pod \"cilium-szqff\" (UID: \"e1e5d6d6-6a59-4e88-b053-3ede0a0fe056\") " pod="kube-system/cilium-szqff" Sep 6 00:29:47.976407 kubelet[2012]: I0906 00:29:47.976383 2012 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e1e5d6d6-6a59-4e88-b053-3ede0a0fe056-lib-modules\") pod \"cilium-szqff\" (UID: \"e1e5d6d6-6a59-4e88-b053-3ede0a0fe056\") " pod="kube-system/cilium-szqff" Sep 6 00:29:47.976542 kubelet[2012]: I0906 00:29:47.976520 2012 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e1e5d6d6-6a59-4e88-b053-3ede0a0fe056-host-proc-sys-net\") pod \"cilium-szqff\" (UID: \"e1e5d6d6-6a59-4e88-b053-3ede0a0fe056\") " pod="kube-system/cilium-szqff" Sep 6 00:29:47.976681 kubelet[2012]: I0906 00:29:47.976644 2012 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e1e5d6d6-6a59-4e88-b053-3ede0a0fe056-host-proc-sys-kernel\") pod \"cilium-szqff\" (UID: 
\"e1e5d6d6-6a59-4e88-b053-3ede0a0fe056\") " pod="kube-system/cilium-szqff" Sep 6 00:29:47.976868 kubelet[2012]: I0906 00:29:47.976822 2012 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/520778b2-4d28-4b79-85e3-f39c6b444731-lib-modules\") pod \"kube-proxy-mmkdv\" (UID: \"520778b2-4d28-4b79-85e3-f39c6b444731\") " pod="kube-system/kube-proxy-mmkdv" Sep 6 00:29:47.977013 kubelet[2012]: I0906 00:29:47.976990 2012 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e1e5d6d6-6a59-4e88-b053-3ede0a0fe056-hostproc\") pod \"cilium-szqff\" (UID: \"e1e5d6d6-6a59-4e88-b053-3ede0a0fe056\") " pod="kube-system/cilium-szqff" Sep 6 00:29:47.977156 kubelet[2012]: I0906 00:29:47.977134 2012 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e1e5d6d6-6a59-4e88-b053-3ede0a0fe056-cni-path\") pod \"cilium-szqff\" (UID: \"e1e5d6d6-6a59-4e88-b053-3ede0a0fe056\") " pod="kube-system/cilium-szqff" Sep 6 00:29:47.977300 kubelet[2012]: I0906 00:29:47.977276 2012 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8p28\" (UniqueName: \"kubernetes.io/projected/e1e5d6d6-6a59-4e88-b053-3ede0a0fe056-kube-api-access-q8p28\") pod \"cilium-szqff\" (UID: \"e1e5d6d6-6a59-4e88-b053-3ede0a0fe056\") " pod="kube-system/cilium-szqff" Sep 6 00:29:47.977424 kubelet[2012]: I0906 00:29:47.977402 2012 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/520778b2-4d28-4b79-85e3-f39c6b444731-kube-proxy\") pod \"kube-proxy-mmkdv\" (UID: \"520778b2-4d28-4b79-85e3-f39c6b444731\") " pod="kube-system/kube-proxy-mmkdv" Sep 6 00:29:47.977554 kubelet[2012]: I0906 00:29:47.977532 
2012 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e1e5d6d6-6a59-4e88-b053-3ede0a0fe056-cilium-config-path\") pod \"cilium-szqff\" (UID: \"e1e5d6d6-6a59-4e88-b053-3ede0a0fe056\") " pod="kube-system/cilium-szqff" Sep 6 00:29:47.977690 kubelet[2012]: I0906 00:29:47.977668 2012 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e1e5d6d6-6a59-4e88-b053-3ede0a0fe056-hubble-tls\") pod \"cilium-szqff\" (UID: \"e1e5d6d6-6a59-4e88-b053-3ede0a0fe056\") " pod="kube-system/cilium-szqff" Sep 6 00:29:47.977865 kubelet[2012]: I0906 00:29:47.977840 2012 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/520778b2-4d28-4b79-85e3-f39c6b444731-xtables-lock\") pod \"kube-proxy-mmkdv\" (UID: \"520778b2-4d28-4b79-85e3-f39c6b444731\") " pod="kube-system/kube-proxy-mmkdv" Sep 6 00:29:47.978015 kubelet[2012]: I0906 00:29:47.977971 2012 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e1e5d6d6-6a59-4e88-b053-3ede0a0fe056-cilium-cgroup\") pod \"cilium-szqff\" (UID: \"e1e5d6d6-6a59-4e88-b053-3ede0a0fe056\") " pod="kube-system/cilium-szqff" Sep 6 00:29:47.978157 kubelet[2012]: I0906 00:29:47.978123 2012 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e1e5d6d6-6a59-4e88-b053-3ede0a0fe056-etc-cni-netd\") pod \"cilium-szqff\" (UID: \"e1e5d6d6-6a59-4e88-b053-3ede0a0fe056\") " pod="kube-system/cilium-szqff" Sep 6 00:29:48.016213 systemd[1]: Created slice kubepods-besteffort-podc08ff725_6646_4e0e_95aa_9140e30e039c.slice. 
Sep 6 00:29:48.079370 kubelet[2012]: I0906 00:29:48.079314 2012 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mp5j5\" (UniqueName: \"kubernetes.io/projected/c08ff725-6646-4e0e-95aa-9140e30e039c-kube-api-access-mp5j5\") pod \"cilium-operator-6c4d7847fc-x2hld\" (UID: \"c08ff725-6646-4e0e-95aa-9140e30e039c\") " pod="kube-system/cilium-operator-6c4d7847fc-x2hld" Sep 6 00:29:48.079691 kubelet[2012]: I0906 00:29:48.079641 2012 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c08ff725-6646-4e0e-95aa-9140e30e039c-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-x2hld\" (UID: \"c08ff725-6646-4e0e-95aa-9140e30e039c\") " pod="kube-system/cilium-operator-6c4d7847fc-x2hld" Sep 6 00:29:48.081928 kubelet[2012]: I0906 00:29:48.081842 2012 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Sep 6 00:29:48.210246 env[1214]: time="2025-09-06T00:29:48.210174525Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mmkdv,Uid:520778b2-4d28-4b79-85e3-f39c6b444731,Namespace:kube-system,Attempt:0,}" Sep 6 00:29:48.232881 env[1214]: time="2025-09-06T00:29:48.232783323Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:29:48.233110 env[1214]: time="2025-09-06T00:29:48.232896654Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:29:48.233110 env[1214]: time="2025-09-06T00:29:48.232946598Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:29:48.233273 env[1214]: time="2025-09-06T00:29:48.233162720Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e2149a2d20e2d8692d4a2cb764cf38f2c411e2942f0d92e723536000a55bd9de pid=2116 runtime=io.containerd.runc.v2 Sep 6 00:29:48.254870 systemd[1]: Started cri-containerd-e2149a2d20e2d8692d4a2cb764cf38f2c411e2942f0d92e723536000a55bd9de.scope. Sep 6 00:29:48.270973 env[1214]: time="2025-09-06T00:29:48.270908779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-szqff,Uid:e1e5d6d6-6a59-4e88-b053-3ede0a0fe056,Namespace:kube-system,Attempt:0,}" Sep 6 00:29:48.299213 env[1214]: time="2025-09-06T00:29:48.299104092Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:29:48.299427 env[1214]: time="2025-09-06T00:29:48.299212261Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:29:48.299427 env[1214]: time="2025-09-06T00:29:48.299274434Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:29:48.299589 env[1214]: time="2025-09-06T00:29:48.299505711Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bacc7f95ff3d40e225869acc55fa1a57baba3b68655cf89bdea8839dfff49086 pid=2148 runtime=io.containerd.runc.v2 Sep 6 00:29:48.312444 env[1214]: time="2025-09-06T00:29:48.312372960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mmkdv,Uid:520778b2-4d28-4b79-85e3-f39c6b444731,Namespace:kube-system,Attempt:0,} returns sandbox id \"e2149a2d20e2d8692d4a2cb764cf38f2c411e2942f0d92e723536000a55bd9de\"" Sep 6 00:29:48.323471 env[1214]: time="2025-09-06T00:29:48.323393194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-x2hld,Uid:c08ff725-6646-4e0e-95aa-9140e30e039c,Namespace:kube-system,Attempt:0,}" Sep 6 00:29:48.324279 env[1214]: time="2025-09-06T00:29:48.324204234Z" level=info msg="CreateContainer within sandbox \"e2149a2d20e2d8692d4a2cb764cf38f2c411e2942f0d92e723536000a55bd9de\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 6 00:29:48.334866 systemd[1]: Started cri-containerd-bacc7f95ff3d40e225869acc55fa1a57baba3b68655cf89bdea8839dfff49086.scope. Sep 6 00:29:48.368542 env[1214]: time="2025-09-06T00:29:48.368469649Z" level=info msg="CreateContainer within sandbox \"e2149a2d20e2d8692d4a2cb764cf38f2c411e2942f0d92e723536000a55bd9de\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8eb21a760b42a789dddb35f96597483a17d2ea2a58330774cb8926bd9da99cf8\"" Sep 6 00:29:48.370540 env[1214]: time="2025-09-06T00:29:48.370495781Z" level=info msg="StartContainer for \"8eb21a760b42a789dddb35f96597483a17d2ea2a58330774cb8926bd9da99cf8\"" Sep 6 00:29:48.395650 env[1214]: time="2025-09-06T00:29:48.395222526Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:29:48.395650 env[1214]: time="2025-09-06T00:29:48.395286814Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:29:48.395650 env[1214]: time="2025-09-06T00:29:48.395308623Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:29:48.395650 env[1214]: time="2025-09-06T00:29:48.395527855Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f73102bb3009af06fc7bd276564ddc1ad275b97bca732bfa9206d822bd22a1bc pid=2191 runtime=io.containerd.runc.v2 Sep 6 00:29:48.417075 env[1214]: time="2025-09-06T00:29:48.417010552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-szqff,Uid:e1e5d6d6-6a59-4e88-b053-3ede0a0fe056,Namespace:kube-system,Attempt:0,} returns sandbox id \"bacc7f95ff3d40e225869acc55fa1a57baba3b68655cf89bdea8839dfff49086\"" Sep 6 00:29:48.420505 env[1214]: time="2025-09-06T00:29:48.420427815Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 6 00:29:48.442010 systemd[1]: Started cri-containerd-8eb21a760b42a789dddb35f96597483a17d2ea2a58330774cb8926bd9da99cf8.scope. Sep 6 00:29:48.444330 systemd[1]: Started cri-containerd-f73102bb3009af06fc7bd276564ddc1ad275b97bca732bfa9206d822bd22a1bc.scope. 
Sep 6 00:29:48.542363 env[1214]: time="2025-09-06T00:29:48.542222788Z" level=info msg="StartContainer for \"8eb21a760b42a789dddb35f96597483a17d2ea2a58330774cb8926bd9da99cf8\" returns successfully" Sep 6 00:29:48.563748 env[1214]: time="2025-09-06T00:29:48.563641119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-x2hld,Uid:c08ff725-6646-4e0e-95aa-9140e30e039c,Namespace:kube-system,Attempt:0,} returns sandbox id \"f73102bb3009af06fc7bd276564ddc1ad275b97bca732bfa9206d822bd22a1bc\"" Sep 6 00:29:49.513961 kubelet[2012]: I0906 00:29:49.513880 2012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-mmkdv" podStartSLOduration=2.513847729 podStartE2EDuration="2.513847729s" podCreationTimestamp="2025-09-06 00:29:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:29:49.492559705 +0000 UTC m=+8.355322126" watchObservedRunningTime="2025-09-06 00:29:49.513847729 +0000 UTC m=+8.376610135" Sep 6 00:29:55.784398 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1429345131.mount: Deactivated successfully. 
Sep 6 00:29:59.325648 env[1214]: time="2025-09-06T00:29:59.325566298Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:29:59.329518 env[1214]: time="2025-09-06T00:29:59.329460549Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:29:59.332879 env[1214]: time="2025-09-06T00:29:59.332824129Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:29:59.333506 env[1214]: time="2025-09-06T00:29:59.333439629Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Sep 6 00:29:59.339757 env[1214]: time="2025-09-06T00:29:59.338965289Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 6 00:29:59.342447 env[1214]: time="2025-09-06T00:29:59.342367446Z" level=info msg="CreateContainer within sandbox \"bacc7f95ff3d40e225869acc55fa1a57baba3b68655cf89bdea8839dfff49086\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 6 00:29:59.363850 env[1214]: time="2025-09-06T00:29:59.363405056Z" level=info msg="CreateContainer within sandbox \"bacc7f95ff3d40e225869acc55fa1a57baba3b68655cf89bdea8839dfff49086\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"449c6872da8073cc77027197270e0ac6858c296c235d07dc6228de3f6d19dc1b\"" Sep 6 00:29:59.367872 
env[1214]: time="2025-09-06T00:29:59.367814830Z" level=info msg="StartContainer for \"449c6872da8073cc77027197270e0ac6858c296c235d07dc6228de3f6d19dc1b\"" Sep 6 00:29:59.415737 systemd[1]: Started cri-containerd-449c6872da8073cc77027197270e0ac6858c296c235d07dc6228de3f6d19dc1b.scope. Sep 6 00:29:59.466437 env[1214]: time="2025-09-06T00:29:59.466361283Z" level=info msg="StartContainer for \"449c6872da8073cc77027197270e0ac6858c296c235d07dc6228de3f6d19dc1b\" returns successfully" Sep 6 00:29:59.488401 systemd[1]: cri-containerd-449c6872da8073cc77027197270e0ac6858c296c235d07dc6228de3f6d19dc1b.scope: Deactivated successfully. Sep 6 00:30:00.356811 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-449c6872da8073cc77027197270e0ac6858c296c235d07dc6228de3f6d19dc1b-rootfs.mount: Deactivated successfully. Sep 6 00:30:01.326486 env[1214]: time="2025-09-06T00:30:01.326366573Z" level=info msg="shim disconnected" id=449c6872da8073cc77027197270e0ac6858c296c235d07dc6228de3f6d19dc1b Sep 6 00:30:01.326486 env[1214]: time="2025-09-06T00:30:01.326437285Z" level=warning msg="cleaning up after shim disconnected" id=449c6872da8073cc77027197270e0ac6858c296c235d07dc6228de3f6d19dc1b namespace=k8s.io Sep 6 00:30:01.326486 env[1214]: time="2025-09-06T00:30:01.326456629Z" level=info msg="cleaning up dead shim" Sep 6 00:30:01.343884 env[1214]: time="2025-09-06T00:30:01.343808445Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:30:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2452 runtime=io.containerd.runc.v2\n" Sep 6 00:30:01.543755 env[1214]: time="2025-09-06T00:30:01.543675369Z" level=info msg="CreateContainer within sandbox \"bacc7f95ff3d40e225869acc55fa1a57baba3b68655cf89bdea8839dfff49086\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 6 00:30:01.563405 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1479650130.mount: Deactivated successfully. 
Sep 6 00:30:01.593059 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount755656630.mount: Deactivated successfully. Sep 6 00:30:01.624559 env[1214]: time="2025-09-06T00:30:01.624488435Z" level=info msg="CreateContainer within sandbox \"bacc7f95ff3d40e225869acc55fa1a57baba3b68655cf89bdea8839dfff49086\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"72863cd9aee4363c717c5a5b0e1bea124a305299089a44481d6a8088c901fac8\"" Sep 6 00:30:01.626679 env[1214]: time="2025-09-06T00:30:01.625582580Z" level=info msg="StartContainer for \"72863cd9aee4363c717c5a5b0e1bea124a305299089a44481d6a8088c901fac8\"" Sep 6 00:30:01.660530 systemd[1]: Started cri-containerd-72863cd9aee4363c717c5a5b0e1bea124a305299089a44481d6a8088c901fac8.scope. Sep 6 00:30:01.732004 env[1214]: time="2025-09-06T00:30:01.731943538Z" level=info msg="StartContainer for \"72863cd9aee4363c717c5a5b0e1bea124a305299089a44481d6a8088c901fac8\" returns successfully" Sep 6 00:30:01.746137 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 6 00:30:01.747391 systemd[1]: Stopped systemd-sysctl.service. Sep 6 00:30:01.748304 systemd[1]: Stopping systemd-sysctl.service... Sep 6 00:30:01.755653 systemd[1]: Starting systemd-sysctl.service... Sep 6 00:30:01.761453 systemd[1]: cri-containerd-72863cd9aee4363c717c5a5b0e1bea124a305299089a44481d6a8088c901fac8.scope: Deactivated successfully. Sep 6 00:30:01.775930 systemd[1]: Finished systemd-sysctl.service. 
Sep 6 00:30:01.819270 env[1214]: time="2025-09-06T00:30:01.819193212Z" level=info msg="shim disconnected" id=72863cd9aee4363c717c5a5b0e1bea124a305299089a44481d6a8088c901fac8 Sep 6 00:30:01.819633 env[1214]: time="2025-09-06T00:30:01.819598969Z" level=warning msg="cleaning up after shim disconnected" id=72863cd9aee4363c717c5a5b0e1bea124a305299089a44481d6a8088c901fac8 namespace=k8s.io Sep 6 00:30:01.819798 env[1214]: time="2025-09-06T00:30:01.819772346Z" level=info msg="cleaning up dead shim" Sep 6 00:30:01.889129 env[1214]: time="2025-09-06T00:30:01.887849853Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:30:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2521 runtime=io.containerd.runc.v2\n" Sep 6 00:30:02.539690 env[1214]: time="2025-09-06T00:30:02.535000849Z" level=info msg="CreateContainer within sandbox \"bacc7f95ff3d40e225869acc55fa1a57baba3b68655cf89bdea8839dfff49086\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 6 00:30:02.537373 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3906408632.mount: Deactivated successfully. Sep 6 00:30:02.577852 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount352408982.mount: Deactivated successfully. Sep 6 00:30:02.593106 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1498329626.mount: Deactivated successfully. Sep 6 00:30:02.601176 env[1214]: time="2025-09-06T00:30:02.601058785Z" level=info msg="CreateContainer within sandbox \"bacc7f95ff3d40e225869acc55fa1a57baba3b68655cf89bdea8839dfff49086\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"5b5e17194f481434d5f848bbde04cecea045ae03d32c5bbabaf62166e1e3f0da\"" Sep 6 00:30:02.604854 env[1214]: time="2025-09-06T00:30:02.604800480Z" level=info msg="StartContainer for \"5b5e17194f481434d5f848bbde04cecea045ae03d32c5bbabaf62166e1e3f0da\"" Sep 6 00:30:02.640152 systemd[1]: Started cri-containerd-5b5e17194f481434d5f848bbde04cecea045ae03d32c5bbabaf62166e1e3f0da.scope. 
Sep 6 00:30:02.713293 systemd[1]: cri-containerd-5b5e17194f481434d5f848bbde04cecea045ae03d32c5bbabaf62166e1e3f0da.scope: Deactivated successfully. Sep 6 00:30:02.714678 env[1214]: time="2025-09-06T00:30:02.714620990Z" level=info msg="StartContainer for \"5b5e17194f481434d5f848bbde04cecea045ae03d32c5bbabaf62166e1e3f0da\" returns successfully" Sep 6 00:30:02.841213 env[1214]: time="2025-09-06T00:30:02.841051103Z" level=info msg="shim disconnected" id=5b5e17194f481434d5f848bbde04cecea045ae03d32c5bbabaf62166e1e3f0da Sep 6 00:30:02.841213 env[1214]: time="2025-09-06T00:30:02.841123184Z" level=warning msg="cleaning up after shim disconnected" id=5b5e17194f481434d5f848bbde04cecea045ae03d32c5bbabaf62166e1e3f0da namespace=k8s.io Sep 6 00:30:02.841213 env[1214]: time="2025-09-06T00:30:02.841141194Z" level=info msg="cleaning up dead shim" Sep 6 00:30:02.880026 env[1214]: time="2025-09-06T00:30:02.879960674Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:30:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2577 runtime=io.containerd.runc.v2\n" Sep 6 00:30:03.099574 env[1214]: time="2025-09-06T00:30:03.099140412Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:30:03.101680 env[1214]: time="2025-09-06T00:30:03.101614914Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:30:03.104059 env[1214]: time="2025-09-06T00:30:03.104009638Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:30:03.105201 
env[1214]: time="2025-09-06T00:30:03.105130924Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Sep 6 00:30:03.113070 env[1214]: time="2025-09-06T00:30:03.112995208Z" level=info msg="CreateContainer within sandbox \"f73102bb3009af06fc7bd276564ddc1ad275b97bca732bfa9206d822bd22a1bc\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 6 00:30:03.127280 env[1214]: time="2025-09-06T00:30:03.127211032Z" level=info msg="CreateContainer within sandbox \"f73102bb3009af06fc7bd276564ddc1ad275b97bca732bfa9206d822bd22a1bc\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"1d143924d302a6f20dbfcbbf52c5c6f30af1c60ef52890d26160512ce6bee3c8\"" Sep 6 00:30:03.128420 env[1214]: time="2025-09-06T00:30:03.127975980Z" level=info msg="StartContainer for \"1d143924d302a6f20dbfcbbf52c5c6f30af1c60ef52890d26160512ce6bee3c8\"" Sep 6 00:30:03.163081 systemd[1]: Started cri-containerd-1d143924d302a6f20dbfcbbf52c5c6f30af1c60ef52890d26160512ce6bee3c8.scope. 
Sep 6 00:30:03.214965 env[1214]: time="2025-09-06T00:30:03.211899994Z" level=info msg="StartContainer for \"1d143924d302a6f20dbfcbbf52c5c6f30af1c60ef52890d26160512ce6bee3c8\" returns successfully" Sep 6 00:30:03.531521 env[1214]: time="2025-09-06T00:30:03.531464646Z" level=info msg="CreateContainer within sandbox \"bacc7f95ff3d40e225869acc55fa1a57baba3b68655cf89bdea8839dfff49086\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 6 00:30:03.570618 env[1214]: time="2025-09-06T00:30:03.570540464Z" level=info msg="CreateContainer within sandbox \"bacc7f95ff3d40e225869acc55fa1a57baba3b68655cf89bdea8839dfff49086\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"db62a000e1171592a9bceb7a1d5f612e0a0a1f01c3215279a559493f59194dec\"" Sep 6 00:30:03.571589 env[1214]: time="2025-09-06T00:30:03.571516743Z" level=info msg="StartContainer for \"db62a000e1171592a9bceb7a1d5f612e0a0a1f01c3215279a559493f59194dec\"" Sep 6 00:30:03.574025 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3127467343.mount: Deactivated successfully. Sep 6 00:30:03.585909 kubelet[2012]: I0906 00:30:03.585530 2012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-x2hld" podStartSLOduration=2.044866289 podStartE2EDuration="16.585501255s" podCreationTimestamp="2025-09-06 00:29:47 +0000 UTC" firstStartedPulling="2025-09-06 00:29:48.566236625 +0000 UTC m=+7.428999025" lastFinishedPulling="2025-09-06 00:30:03.106871591 +0000 UTC m=+21.969633991" observedRunningTime="2025-09-06 00:30:03.582479975 +0000 UTC m=+22.445242381" watchObservedRunningTime="2025-09-06 00:30:03.585501255 +0000 UTC m=+22.448263658" Sep 6 00:30:03.650156 systemd[1]: Started cri-containerd-db62a000e1171592a9bceb7a1d5f612e0a0a1f01c3215279a559493f59194dec.scope. Sep 6 00:30:03.710237 systemd[1]: cri-containerd-db62a000e1171592a9bceb7a1d5f612e0a0a1f01c3215279a559493f59194dec.scope: Deactivated successfully. 
Sep 6 00:30:03.713676 env[1214]: time="2025-09-06T00:30:03.713615964Z" level=info msg="StartContainer for \"db62a000e1171592a9bceb7a1d5f612e0a0a1f01c3215279a559493f59194dec\" returns successfully" Sep 6 00:30:03.794818 env[1214]: time="2025-09-06T00:30:03.794629925Z" level=info msg="shim disconnected" id=db62a000e1171592a9bceb7a1d5f612e0a0a1f01c3215279a559493f59194dec Sep 6 00:30:03.794818 env[1214]: time="2025-09-06T00:30:03.794742467Z" level=warning msg="cleaning up after shim disconnected" id=db62a000e1171592a9bceb7a1d5f612e0a0a1f01c3215279a559493f59194dec namespace=k8s.io Sep 6 00:30:03.794818 env[1214]: time="2025-09-06T00:30:03.794764024Z" level=info msg="cleaning up dead shim" Sep 6 00:30:03.822840 env[1214]: time="2025-09-06T00:30:03.822773086Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:30:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2669 runtime=io.containerd.runc.v2\n" Sep 6 00:30:04.538625 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-db62a000e1171592a9bceb7a1d5f612e0a0a1f01c3215279a559493f59194dec-rootfs.mount: Deactivated successfully. 
Sep 6 00:30:04.542968 env[1214]: time="2025-09-06T00:30:04.539655850Z" level=info msg="CreateContainer within sandbox \"bacc7f95ff3d40e225869acc55fa1a57baba3b68655cf89bdea8839dfff49086\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 6 00:30:04.565915 env[1214]: time="2025-09-06T00:30:04.565856519Z" level=info msg="CreateContainer within sandbox \"bacc7f95ff3d40e225869acc55fa1a57baba3b68655cf89bdea8839dfff49086\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4be2033354180606b38fb4b800442ac42c1b3d9be3d04f6cbce25f97a7d2dac4\"" Sep 6 00:30:04.566959 env[1214]: time="2025-09-06T00:30:04.566916777Z" level=info msg="StartContainer for \"4be2033354180606b38fb4b800442ac42c1b3d9be3d04f6cbce25f97a7d2dac4\"" Sep 6 00:30:04.620387 systemd[1]: Started cri-containerd-4be2033354180606b38fb4b800442ac42c1b3d9be3d04f6cbce25f97a7d2dac4.scope. Sep 6 00:30:04.743845 env[1214]: time="2025-09-06T00:30:04.743775946Z" level=info msg="StartContainer for \"4be2033354180606b38fb4b800442ac42c1b3d9be3d04f6cbce25f97a7d2dac4\" returns successfully" Sep 6 00:30:05.032145 kubelet[2012]: I0906 00:30:05.031318 2012 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 6 00:30:05.092477 systemd[1]: Created slice kubepods-burstable-podf930a583_9367_4cb1_9b63_8cb42ded0143.slice. Sep 6 00:30:05.113345 systemd[1]: Created slice kubepods-burstable-pod981b2bd2_b1a6_401c_9f01_e5a04bad393a.slice. 
Sep 6 00:30:05.143807 kubelet[2012]: I0906 00:30:05.143643 2012 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gm6h5\" (UniqueName: \"kubernetes.io/projected/981b2bd2-b1a6-401c-9f01-e5a04bad393a-kube-api-access-gm6h5\") pod \"coredns-674b8bbfcf-f7q8h\" (UID: \"981b2bd2-b1a6-401c-9f01-e5a04bad393a\") " pod="kube-system/coredns-674b8bbfcf-f7q8h"
Sep 6 00:30:05.144194 kubelet[2012]: I0906 00:30:05.144150 2012 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/981b2bd2-b1a6-401c-9f01-e5a04bad393a-config-volume\") pod \"coredns-674b8bbfcf-f7q8h\" (UID: \"981b2bd2-b1a6-401c-9f01-e5a04bad393a\") " pod="kube-system/coredns-674b8bbfcf-f7q8h"
Sep 6 00:30:05.144462 kubelet[2012]: I0906 00:30:05.144421 2012 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f930a583-9367-4cb1-9b63-8cb42ded0143-config-volume\") pod \"coredns-674b8bbfcf-g4x4w\" (UID: \"f930a583-9367-4cb1-9b63-8cb42ded0143\") " pod="kube-system/coredns-674b8bbfcf-g4x4w"
Sep 6 00:30:05.144692 kubelet[2012]: I0906 00:30:05.144658 2012 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sd5bw\" (UniqueName: \"kubernetes.io/projected/f930a583-9367-4cb1-9b63-8cb42ded0143-kube-api-access-sd5bw\") pod \"coredns-674b8bbfcf-g4x4w\" (UID: \"f930a583-9367-4cb1-9b63-8cb42ded0143\") " pod="kube-system/coredns-674b8bbfcf-g4x4w"
Sep 6 00:30:05.425782 env[1214]: time="2025-09-06T00:30:05.425338661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-f7q8h,Uid:981b2bd2-b1a6-401c-9f01-e5a04bad393a,Namespace:kube-system,Attempt:0,}"
Sep 6 00:30:05.426675 env[1214]: time="2025-09-06T00:30:05.426160067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-g4x4w,Uid:f930a583-9367-4cb1-9b63-8cb42ded0143,Namespace:kube-system,Attempt:0,}"
Sep 6 00:30:05.564507 systemd[1]: run-containerd-runc-k8s.io-4be2033354180606b38fb4b800442ac42c1b3d9be3d04f6cbce25f97a7d2dac4-runc.d6AH1K.mount: Deactivated successfully.
Sep 6 00:30:07.512306 systemd-networkd[1019]: cilium_host: Link UP
Sep 6 00:30:07.517884 systemd-networkd[1019]: cilium_net: Link UP
Sep 6 00:30:07.523732 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready
Sep 6 00:30:07.536778 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Sep 6 00:30:07.524411 systemd-networkd[1019]: cilium_net: Gained carrier
Sep 6 00:30:07.542158 systemd-networkd[1019]: cilium_host: Gained carrier
Sep 6 00:30:07.697312 systemd-networkd[1019]: cilium_vxlan: Link UP
Sep 6 00:30:07.697326 systemd-networkd[1019]: cilium_vxlan: Gained carrier
Sep 6 00:30:07.757476 systemd-networkd[1019]: cilium_net: Gained IPv6LL
Sep 6 00:30:07.986744 kernel: NET: Registered PF_ALG protocol family
Sep 6 00:30:08.318499 systemd-networkd[1019]: cilium_host: Gained IPv6LL
Sep 6 00:30:08.932897 systemd-networkd[1019]: lxc_health: Link UP
Sep 6 00:30:08.958817 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Sep 6 00:30:08.960763 systemd-networkd[1019]: lxc_health: Gained carrier
Sep 6 00:30:09.085442 systemd-networkd[1019]: cilium_vxlan: Gained IPv6LL
Sep 6 00:30:09.503599 systemd-networkd[1019]: lxca9eda0973d4f: Link UP
Sep 6 00:30:09.518749 kernel: eth0: renamed from tmpd71df
Sep 6 00:30:09.535741 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxca9eda0973d4f: link becomes ready
Sep 6 00:30:09.535830 systemd-networkd[1019]: lxca9eda0973d4f: Gained carrier
Sep 6 00:30:09.546074 systemd-networkd[1019]: lxcba6cfb507f55: Link UP
Sep 6 00:30:09.557783 kernel: eth0: renamed from tmp13d1c
Sep 6 00:30:09.582838 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcba6cfb507f55: link becomes ready
Sep 6 00:30:09.580621 systemd-networkd[1019]: lxcba6cfb507f55: Gained carrier
Sep 6 00:30:10.309655 kubelet[2012]: I0906 00:30:10.309558 2012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-szqff" podStartSLOduration=12.393121582 podStartE2EDuration="23.309532449s" podCreationTimestamp="2025-09-06 00:29:47 +0000 UTC" firstStartedPulling="2025-09-06 00:29:48.419346382 +0000 UTC m=+7.282108765" lastFinishedPulling="2025-09-06 00:29:59.335757239 +0000 UTC m=+18.198519632" observedRunningTime="2025-09-06 00:30:05.607587296 +0000 UTC m=+24.470349702" watchObservedRunningTime="2025-09-06 00:30:10.309532449 +0000 UTC m=+29.172294869"
Sep 6 00:30:10.621806 systemd-networkd[1019]: lxcba6cfb507f55: Gained IPv6LL
Sep 6 00:30:10.942284 systemd-networkd[1019]: lxc_health: Gained IPv6LL
Sep 6 00:30:11.261440 systemd-networkd[1019]: lxca9eda0973d4f: Gained IPv6LL
Sep 6 00:30:14.736108 env[1214]: time="2025-09-06T00:30:14.735976211Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 6 00:30:14.736999 env[1214]: time="2025-09-06T00:30:14.736922056Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 6 00:30:14.737234 env[1214]: time="2025-09-06T00:30:14.737174467Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 6 00:30:14.737808 env[1214]: time="2025-09-06T00:30:14.737733470Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/13d1cc679c3d7159d4740c8f3ab4d2bf9a09d17ed013f8c4767ec2d02bd2a45e pid=3221 runtime=io.containerd.runc.v2
Sep 6 00:30:14.742631 env[1214]: time="2025-09-06T00:30:14.742486417Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 6 00:30:14.743036 env[1214]: time="2025-09-06T00:30:14.742916809Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 6 00:30:14.743263 env[1214]: time="2025-09-06T00:30:14.743205175Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 6 00:30:14.745831 env[1214]: time="2025-09-06T00:30:14.745759984Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d71df72955b44744560e871143bb20c5c324abd6df7696561f04bf61d30f8f90 pid=3218 runtime=io.containerd.runc.v2
Sep 6 00:30:14.825710 systemd[1]: Started cri-containerd-13d1cc679c3d7159d4740c8f3ab4d2bf9a09d17ed013f8c4767ec2d02bd2a45e.scope.
Sep 6 00:30:14.834841 systemd[1]: Started cri-containerd-d71df72955b44744560e871143bb20c5c324abd6df7696561f04bf61d30f8f90.scope.
Sep 6 00:30:14.951268 env[1214]: time="2025-09-06T00:30:14.951192582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-f7q8h,Uid:981b2bd2-b1a6-401c-9f01-e5a04bad393a,Namespace:kube-system,Attempt:0,} returns sandbox id \"d71df72955b44744560e871143bb20c5c324abd6df7696561f04bf61d30f8f90\"" Sep 6 00:30:14.958249 env[1214]: time="2025-09-06T00:30:14.958184873Z" level=info msg="CreateContainer within sandbox \"d71df72955b44744560e871143bb20c5c324abd6df7696561f04bf61d30f8f90\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 6 00:30:14.977265 env[1214]: time="2025-09-06T00:30:14.977167553Z" level=info msg="CreateContainer within sandbox \"d71df72955b44744560e871143bb20c5c324abd6df7696561f04bf61d30f8f90\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"46c59dddaaf2b276ebaae14d3604cf08446a1bc17a09f80abbf64216ec5942db\"" Sep 6 00:30:14.978959 env[1214]: time="2025-09-06T00:30:14.978899342Z" level=info msg="StartContainer for \"46c59dddaaf2b276ebaae14d3604cf08446a1bc17a09f80abbf64216ec5942db\"" Sep 6 00:30:15.016649 systemd[1]: Started cri-containerd-46c59dddaaf2b276ebaae14d3604cf08446a1bc17a09f80abbf64216ec5942db.scope. 
Sep 6 00:30:15.058597 env[1214]: time="2025-09-06T00:30:15.058531352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-g4x4w,Uid:f930a583-9367-4cb1-9b63-8cb42ded0143,Namespace:kube-system,Attempt:0,} returns sandbox id \"13d1cc679c3d7159d4740c8f3ab4d2bf9a09d17ed013f8c4767ec2d02bd2a45e\"" Sep 6 00:30:15.066296 env[1214]: time="2025-09-06T00:30:15.066082760Z" level=info msg="CreateContainer within sandbox \"13d1cc679c3d7159d4740c8f3ab4d2bf9a09d17ed013f8c4767ec2d02bd2a45e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 6 00:30:15.095014 env[1214]: time="2025-09-06T00:30:15.094929883Z" level=info msg="CreateContainer within sandbox \"13d1cc679c3d7159d4740c8f3ab4d2bf9a09d17ed013f8c4767ec2d02bd2a45e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"116efa6d62ac29c89dc79daad1a4eacab9c88a1c18465f0c0ef94565dbd4dacf\"" Sep 6 00:30:15.096097 env[1214]: time="2025-09-06T00:30:15.096034556Z" level=info msg="StartContainer for \"116efa6d62ac29c89dc79daad1a4eacab9c88a1c18465f0c0ef94565dbd4dacf\"" Sep 6 00:30:15.120942 env[1214]: time="2025-09-06T00:30:15.120878173Z" level=info msg="StartContainer for \"46c59dddaaf2b276ebaae14d3604cf08446a1bc17a09f80abbf64216ec5942db\" returns successfully" Sep 6 00:30:15.145543 systemd[1]: Started cri-containerd-116efa6d62ac29c89dc79daad1a4eacab9c88a1c18465f0c0ef94565dbd4dacf.scope. 
Sep 6 00:30:15.246117 env[1214]: time="2025-09-06T00:30:15.246042249Z" level=info msg="StartContainer for \"116efa6d62ac29c89dc79daad1a4eacab9c88a1c18465f0c0ef94565dbd4dacf\" returns successfully" Sep 6 00:30:15.596491 kubelet[2012]: I0906 00:30:15.596373 2012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-g4x4w" podStartSLOduration=28.596346215 podStartE2EDuration="28.596346215s" podCreationTimestamp="2025-09-06 00:29:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:30:15.594861455 +0000 UTC m=+34.457623862" watchObservedRunningTime="2025-09-06 00:30:15.596346215 +0000 UTC m=+34.459108622" Sep 6 00:30:15.644004 kubelet[2012]: I0906 00:30:15.643923 2012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-f7q8h" podStartSLOduration=27.643893417 podStartE2EDuration="27.643893417s" podCreationTimestamp="2025-09-06 00:29:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:30:15.641956297 +0000 UTC m=+34.504718706" watchObservedRunningTime="2025-09-06 00:30:15.643893417 +0000 UTC m=+34.506655824" Sep 6 00:30:38.722722 systemd[1]: Started sshd@5-10.128.0.49:22-162.142.125.41:47048.service. Sep 6 00:30:41.696142 systemd[1]: Started sshd@6-10.128.0.49:22-139.178.89.65:41232.service. Sep 6 00:30:41.992404 sshd[3382]: Accepted publickey for core from 139.178.89.65 port 41232 ssh2: RSA SHA256:O4b1lx+UphQ1XQCPwsrjL8IoqrnWSgynNYcpg4eKVRo Sep 6 00:30:41.994547 sshd[3382]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:30:42.003838 systemd[1]: Started session-6.scope. Sep 6 00:30:42.004498 systemd-logind[1223]: New session 6 of user core. 
Sep 6 00:30:42.348303 sshd[3382]: pam_unix(sshd:session): session closed for user core Sep 6 00:30:42.356417 systemd[1]: sshd@6-10.128.0.49:22-139.178.89.65:41232.service: Deactivated successfully. Sep 6 00:30:42.357972 systemd[1]: session-6.scope: Deactivated successfully. Sep 6 00:30:42.359427 systemd-logind[1223]: Session 6 logged out. Waiting for processes to exit. Sep 6 00:30:42.361460 systemd-logind[1223]: Removed session 6. Sep 6 00:30:47.398574 systemd[1]: Started sshd@7-10.128.0.49:22-139.178.89.65:41240.service. Sep 6 00:30:47.695411 sshd[3398]: Accepted publickey for core from 139.178.89.65 port 41240 ssh2: RSA SHA256:O4b1lx+UphQ1XQCPwsrjL8IoqrnWSgynNYcpg4eKVRo Sep 6 00:30:47.697604 sshd[3398]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:30:47.705817 systemd-logind[1223]: New session 7 of user core. Sep 6 00:30:47.706475 systemd[1]: Started session-7.scope. Sep 6 00:30:48.001659 sshd[3398]: pam_unix(sshd:session): session closed for user core Sep 6 00:30:48.007893 systemd[1]: sshd@7-10.128.0.49:22-139.178.89.65:41240.service: Deactivated successfully. Sep 6 00:30:48.009329 systemd[1]: session-7.scope: Deactivated successfully. Sep 6 00:30:48.010382 systemd-logind[1223]: Session 7 logged out. Waiting for processes to exit. Sep 6 00:30:48.012156 systemd-logind[1223]: Removed session 7. Sep 6 00:30:53.050278 systemd[1]: Started sshd@8-10.128.0.49:22-139.178.89.65:43882.service. Sep 6 00:30:53.350293 sshd[3412]: Accepted publickey for core from 139.178.89.65 port 43882 ssh2: RSA SHA256:O4b1lx+UphQ1XQCPwsrjL8IoqrnWSgynNYcpg4eKVRo Sep 6 00:30:53.352722 sshd[3412]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:30:53.360816 systemd[1]: Started session-8.scope. Sep 6 00:30:53.361804 systemd-logind[1223]: New session 8 of user core. 
Sep 6 00:30:53.653974 sshd[3412]: pam_unix(sshd:session): session closed for user core Sep 6 00:30:53.659436 systemd[1]: sshd@8-10.128.0.49:22-139.178.89.65:43882.service: Deactivated successfully. Sep 6 00:30:53.660913 systemd[1]: session-8.scope: Deactivated successfully. Sep 6 00:30:53.662194 systemd-logind[1223]: Session 8 logged out. Waiting for processes to exit. Sep 6 00:30:53.663854 systemd-logind[1223]: Removed session 8. Sep 6 00:30:54.473114 sshd[3377]: Connection closed by 162.142.125.41 port 47048 [preauth] Sep 6 00:30:54.474791 systemd[1]: sshd@5-10.128.0.49:22-162.142.125.41:47048.service: Deactivated successfully. Sep 6 00:30:58.703289 systemd[1]: Started sshd@9-10.128.0.49:22-139.178.89.65:43894.service. Sep 6 00:30:59.001442 sshd[3425]: Accepted publickey for core from 139.178.89.65 port 43894 ssh2: RSA SHA256:O4b1lx+UphQ1XQCPwsrjL8IoqrnWSgynNYcpg4eKVRo Sep 6 00:30:59.003597 sshd[3425]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:30:59.011025 systemd-logind[1223]: New session 9 of user core. Sep 6 00:30:59.011881 systemd[1]: Started session-9.scope. Sep 6 00:30:59.303601 sshd[3425]: pam_unix(sshd:session): session closed for user core Sep 6 00:30:59.309083 systemd[1]: sshd@9-10.128.0.49:22-139.178.89.65:43894.service: Deactivated successfully. Sep 6 00:30:59.310531 systemd[1]: session-9.scope: Deactivated successfully. Sep 6 00:30:59.311760 systemd-logind[1223]: Session 9 logged out. Waiting for processes to exit. Sep 6 00:30:59.313348 systemd-logind[1223]: Removed session 9. Sep 6 00:30:59.352552 systemd[1]: Started sshd@10-10.128.0.49:22-139.178.89.65:43900.service. Sep 6 00:30:59.654320 sshd[3438]: Accepted publickey for core from 139.178.89.65 port 43900 ssh2: RSA SHA256:O4b1lx+UphQ1XQCPwsrjL8IoqrnWSgynNYcpg4eKVRo Sep 6 00:30:59.656622 sshd[3438]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:30:59.664692 systemd[1]: Started session-10.scope. 
Sep 6 00:30:59.665591 systemd-logind[1223]: New session 10 of user core. Sep 6 00:31:00.005998 sshd[3438]: pam_unix(sshd:session): session closed for user core Sep 6 00:31:00.012349 systemd[1]: sshd@10-10.128.0.49:22-139.178.89.65:43900.service: Deactivated successfully. Sep 6 00:31:00.013765 systemd[1]: session-10.scope: Deactivated successfully. Sep 6 00:31:00.013802 systemd-logind[1223]: Session 10 logged out. Waiting for processes to exit. Sep 6 00:31:00.016435 systemd-logind[1223]: Removed session 10. Sep 6 00:31:00.053304 systemd[1]: Started sshd@11-10.128.0.49:22-139.178.89.65:40224.service. Sep 6 00:31:00.350575 sshd[3449]: Accepted publickey for core from 139.178.89.65 port 40224 ssh2: RSA SHA256:O4b1lx+UphQ1XQCPwsrjL8IoqrnWSgynNYcpg4eKVRo Sep 6 00:31:00.352557 sshd[3449]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:31:00.360459 systemd[1]: Started session-11.scope. Sep 6 00:31:00.361468 systemd-logind[1223]: New session 11 of user core. Sep 6 00:31:00.677444 sshd[3449]: pam_unix(sshd:session): session closed for user core Sep 6 00:31:00.682636 systemd[1]: sshd@11-10.128.0.49:22-139.178.89.65:40224.service: Deactivated successfully. Sep 6 00:31:00.684074 systemd[1]: session-11.scope: Deactivated successfully. Sep 6 00:31:00.685277 systemd-logind[1223]: Session 11 logged out. Waiting for processes to exit. Sep 6 00:31:00.689240 systemd-logind[1223]: Removed session 11. Sep 6 00:31:05.726028 systemd[1]: Started sshd@12-10.128.0.49:22-139.178.89.65:40240.service. Sep 6 00:31:06.019186 sshd[3461]: Accepted publickey for core from 139.178.89.65 port 40240 ssh2: RSA SHA256:O4b1lx+UphQ1XQCPwsrjL8IoqrnWSgynNYcpg4eKVRo Sep 6 00:31:06.021769 sshd[3461]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:31:06.029858 systemd[1]: Started session-12.scope. Sep 6 00:31:06.030833 systemd-logind[1223]: New session 12 of user core. 
Sep 6 00:31:06.315764 sshd[3461]: pam_unix(sshd:session): session closed for user core Sep 6 00:31:06.321630 systemd[1]: sshd@12-10.128.0.49:22-139.178.89.65:40240.service: Deactivated successfully. Sep 6 00:31:06.323118 systemd[1]: session-12.scope: Deactivated successfully. Sep 6 00:31:06.324429 systemd-logind[1223]: Session 12 logged out. Waiting for processes to exit. Sep 6 00:31:06.326051 systemd-logind[1223]: Removed session 12. Sep 6 00:31:11.365382 systemd[1]: Started sshd@13-10.128.0.49:22-139.178.89.65:48070.service. Sep 6 00:31:11.661293 sshd[3473]: Accepted publickey for core from 139.178.89.65 port 48070 ssh2: RSA SHA256:O4b1lx+UphQ1XQCPwsrjL8IoqrnWSgynNYcpg4eKVRo Sep 6 00:31:11.663777 sshd[3473]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:31:11.671668 systemd-logind[1223]: New session 13 of user core. Sep 6 00:31:11.672826 systemd[1]: Started session-13.scope. Sep 6 00:31:11.991152 sshd[3473]: pam_unix(sshd:session): session closed for user core Sep 6 00:31:11.997114 systemd-logind[1223]: Session 13 logged out. Waiting for processes to exit. Sep 6 00:31:11.997455 systemd[1]: sshd@13-10.128.0.49:22-139.178.89.65:48070.service: Deactivated successfully. Sep 6 00:31:11.998886 systemd[1]: session-13.scope: Deactivated successfully. Sep 6 00:31:12.000575 systemd-logind[1223]: Removed session 13. Sep 6 00:31:17.040572 systemd[1]: Started sshd@14-10.128.0.49:22-139.178.89.65:48086.service. Sep 6 00:31:17.343540 sshd[3485]: Accepted publickey for core from 139.178.89.65 port 48086 ssh2: RSA SHA256:O4b1lx+UphQ1XQCPwsrjL8IoqrnWSgynNYcpg4eKVRo Sep 6 00:31:17.346070 sshd[3485]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:31:17.354497 systemd[1]: Started session-14.scope. Sep 6 00:31:17.355573 systemd-logind[1223]: New session 14 of user core. 
Sep 6 00:31:17.656829 sshd[3485]: pam_unix(sshd:session): session closed for user core Sep 6 00:31:17.661994 systemd[1]: sshd@14-10.128.0.49:22-139.178.89.65:48086.service: Deactivated successfully. Sep 6 00:31:17.663529 systemd[1]: session-14.scope: Deactivated successfully. Sep 6 00:31:17.665045 systemd-logind[1223]: Session 14 logged out. Waiting for processes to exit. Sep 6 00:31:17.667077 systemd-logind[1223]: Removed session 14. Sep 6 00:31:17.708396 systemd[1]: Started sshd@15-10.128.0.49:22-139.178.89.65:48102.service. Sep 6 00:31:18.008078 sshd[3497]: Accepted publickey for core from 139.178.89.65 port 48102 ssh2: RSA SHA256:O4b1lx+UphQ1XQCPwsrjL8IoqrnWSgynNYcpg4eKVRo Sep 6 00:31:18.010460 sshd[3497]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:31:18.018981 systemd[1]: Started session-15.scope. Sep 6 00:31:18.020227 systemd-logind[1223]: New session 15 of user core. Sep 6 00:31:18.363286 sshd[3497]: pam_unix(sshd:session): session closed for user core Sep 6 00:31:18.369107 systemd[1]: sshd@15-10.128.0.49:22-139.178.89.65:48102.service: Deactivated successfully. Sep 6 00:31:18.370599 systemd[1]: session-15.scope: Deactivated successfully. Sep 6 00:31:18.371968 systemd-logind[1223]: Session 15 logged out. Waiting for processes to exit. Sep 6 00:31:18.373887 systemd-logind[1223]: Removed session 15. Sep 6 00:31:18.411502 systemd[1]: Started sshd@16-10.128.0.49:22-139.178.89.65:48110.service. Sep 6 00:31:18.709741 sshd[3506]: Accepted publickey for core from 139.178.89.65 port 48110 ssh2: RSA SHA256:O4b1lx+UphQ1XQCPwsrjL8IoqrnWSgynNYcpg4eKVRo Sep 6 00:31:18.712345 sshd[3506]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:31:18.721270 systemd[1]: Started session-16.scope. Sep 6 00:31:18.722374 systemd-logind[1223]: New session 16 of user core. 
Sep 6 00:31:19.666411 sshd[3506]: pam_unix(sshd:session): session closed for user core Sep 6 00:31:19.672733 systemd-logind[1223]: Session 16 logged out. Waiting for processes to exit. Sep 6 00:31:19.674309 systemd[1]: sshd@16-10.128.0.49:22-139.178.89.65:48110.service: Deactivated successfully. Sep 6 00:31:19.675566 systemd[1]: session-16.scope: Deactivated successfully. Sep 6 00:31:19.677252 systemd-logind[1223]: Removed session 16. Sep 6 00:31:19.714794 systemd[1]: Started sshd@17-10.128.0.49:22-139.178.89.65:48112.service. Sep 6 00:31:20.010409 sshd[3525]: Accepted publickey for core from 139.178.89.65 port 48112 ssh2: RSA SHA256:O4b1lx+UphQ1XQCPwsrjL8IoqrnWSgynNYcpg4eKVRo Sep 6 00:31:20.012305 sshd[3525]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:31:20.020798 systemd-logind[1223]: New session 17 of user core. Sep 6 00:31:20.021846 systemd[1]: Started session-17.scope. Sep 6 00:31:20.460990 sshd[3525]: pam_unix(sshd:session): session closed for user core Sep 6 00:31:20.466545 systemd[1]: sshd@17-10.128.0.49:22-139.178.89.65:48112.service: Deactivated successfully. Sep 6 00:31:20.468540 systemd[1]: session-17.scope: Deactivated successfully. Sep 6 00:31:20.470080 systemd-logind[1223]: Session 17 logged out. Waiting for processes to exit. Sep 6 00:31:20.472064 systemd-logind[1223]: Removed session 17. Sep 6 00:31:20.508057 systemd[1]: Started sshd@18-10.128.0.49:22-139.178.89.65:54108.service. Sep 6 00:31:20.801599 sshd[3535]: Accepted publickey for core from 139.178.89.65 port 54108 ssh2: RSA SHA256:O4b1lx+UphQ1XQCPwsrjL8IoqrnWSgynNYcpg4eKVRo Sep 6 00:31:20.804606 sshd[3535]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:31:20.812852 systemd-logind[1223]: New session 18 of user core. Sep 6 00:31:20.813366 systemd[1]: Started session-18.scope. 
Sep 6 00:31:21.107141 sshd[3535]: pam_unix(sshd:session): session closed for user core Sep 6 00:31:21.112640 systemd-logind[1223]: Session 18 logged out. Waiting for processes to exit. Sep 6 00:31:21.112981 systemd[1]: sshd@18-10.128.0.49:22-139.178.89.65:54108.service: Deactivated successfully. Sep 6 00:31:21.114356 systemd[1]: session-18.scope: Deactivated successfully. Sep 6 00:31:21.115916 systemd-logind[1223]: Removed session 18. Sep 6 00:31:26.157508 systemd[1]: Started sshd@19-10.128.0.49:22-139.178.89.65:54124.service. Sep 6 00:31:26.450798 sshd[3547]: Accepted publickey for core from 139.178.89.65 port 54124 ssh2: RSA SHA256:O4b1lx+UphQ1XQCPwsrjL8IoqrnWSgynNYcpg4eKVRo Sep 6 00:31:26.453084 sshd[3547]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:31:26.460592 systemd[1]: Started session-19.scope. Sep 6 00:31:26.461402 systemd-logind[1223]: New session 19 of user core. Sep 6 00:31:26.747654 sshd[3547]: pam_unix(sshd:session): session closed for user core Sep 6 00:31:26.753693 systemd[1]: sshd@19-10.128.0.49:22-139.178.89.65:54124.service: Deactivated successfully. Sep 6 00:31:26.755237 systemd[1]: session-19.scope: Deactivated successfully. Sep 6 00:31:26.756302 systemd-logind[1223]: Session 19 logged out. Waiting for processes to exit. Sep 6 00:31:26.758441 systemd-logind[1223]: Removed session 19. Sep 6 00:31:31.797580 systemd[1]: Started sshd@20-10.128.0.49:22-139.178.89.65:56552.service. Sep 6 00:31:32.095686 sshd[3563]: Accepted publickey for core from 139.178.89.65 port 56552 ssh2: RSA SHA256:O4b1lx+UphQ1XQCPwsrjL8IoqrnWSgynNYcpg4eKVRo Sep 6 00:31:32.098087 sshd[3563]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:31:32.106110 systemd[1]: Started session-20.scope. Sep 6 00:31:32.106850 systemd-logind[1223]: New session 20 of user core. 
Sep 6 00:31:32.398552 sshd[3563]: pam_unix(sshd:session): session closed for user core Sep 6 00:31:32.404061 systemd-logind[1223]: Session 20 logged out. Waiting for processes to exit. Sep 6 00:31:32.404420 systemd[1]: sshd@20-10.128.0.49:22-139.178.89.65:56552.service: Deactivated successfully. Sep 6 00:31:32.405849 systemd[1]: session-20.scope: Deactivated successfully. Sep 6 00:31:32.407290 systemd-logind[1223]: Removed session 20. Sep 6 00:31:37.448125 systemd[1]: Started sshd@21-10.128.0.49:22-139.178.89.65:56562.service. Sep 6 00:31:37.744406 sshd[3575]: Accepted publickey for core from 139.178.89.65 port 56562 ssh2: RSA SHA256:O4b1lx+UphQ1XQCPwsrjL8IoqrnWSgynNYcpg4eKVRo Sep 6 00:31:37.746989 sshd[3575]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:31:37.754799 systemd-logind[1223]: New session 21 of user core. Sep 6 00:31:37.755102 systemd[1]: Started session-21.scope. Sep 6 00:31:38.043647 sshd[3575]: pam_unix(sshd:session): session closed for user core Sep 6 00:31:38.049465 systemd[1]: sshd@21-10.128.0.49:22-139.178.89.65:56562.service: Deactivated successfully. Sep 6 00:31:38.050863 systemd[1]: session-21.scope: Deactivated successfully. Sep 6 00:31:38.051968 systemd-logind[1223]: Session 21 logged out. Waiting for processes to exit. Sep 6 00:31:38.053552 systemd-logind[1223]: Removed session 21. Sep 6 00:31:38.091585 systemd[1]: Started sshd@22-10.128.0.49:22-139.178.89.65:56568.service. Sep 6 00:31:38.387001 sshd[3587]: Accepted publickey for core from 139.178.89.65 port 56568 ssh2: RSA SHA256:O4b1lx+UphQ1XQCPwsrjL8IoqrnWSgynNYcpg4eKVRo Sep 6 00:31:38.389462 sshd[3587]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:31:38.397515 systemd-logind[1223]: New session 22 of user core. Sep 6 00:31:38.398453 systemd[1]: Started session-22.scope. 
Sep 6 00:31:39.196907 update_engine[1205]: I0906 00:31:39.196846 1205 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Sep 6 00:31:39.196907 update_engine[1205]: I0906 00:31:39.196916 1205 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Sep 6 00:31:39.197977 update_engine[1205]: I0906 00:31:39.197942 1205 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Sep 6 00:31:39.198765 update_engine[1205]: I0906 00:31:39.198732 1205 omaha_request_params.cc:62] Current group set to lts
Sep 6 00:31:39.199192 update_engine[1205]: I0906 00:31:39.198953 1205 update_attempter.cc:499] Already updated boot flags. Skipping.
Sep 6 00:31:39.199192 update_engine[1205]: I0906 00:31:39.198968 1205 update_attempter.cc:643] Scheduling an action processor start.
Sep 6 00:31:39.199192 update_engine[1205]: I0906 00:31:39.198993 1205 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Sep 6 00:31:39.199192 update_engine[1205]: I0906 00:31:39.199038 1205 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Sep 6 00:31:39.199192 update_engine[1205]: I0906 00:31:39.199126 1205 omaha_request_action.cc:270] Posting an Omaha request to disabled
Sep 6 00:31:39.199192 update_engine[1205]: I0906 00:31:39.199136 1205 omaha_request_action.cc:271] Request:
Sep 6 00:31:39.199192 update_engine[1205]:
Sep 6 00:31:39.199192 update_engine[1205]:
Sep 6 00:31:39.199192 update_engine[1205]:
Sep 6 00:31:39.199192 update_engine[1205]:
Sep 6 00:31:39.199192 update_engine[1205]:
Sep 6 00:31:39.199192 update_engine[1205]:
Sep 6 00:31:39.199192 update_engine[1205]:
Sep 6 00:31:39.199192 update_engine[1205]:
Sep 6 00:31:39.199192 update_engine[1205]: I0906 00:31:39.199151 1205 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Sep 6 00:31:39.200512 locksmithd[1255]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Sep 6 00:31:39.201452 update_engine[1205]: I0906 00:31:39.201426 1205 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Sep 6 00:31:39.201860 update_engine[1205]: I0906 00:31:39.201838 1205 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Sep 6 00:31:39.241743 update_engine[1205]: E0906 00:31:39.241504 1205 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Sep 6 00:31:39.241743 update_engine[1205]: I0906 00:31:39.241665 1205 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Sep 6 00:31:40.417766 env[1214]: time="2025-09-06T00:31:40.417655120Z" level=info msg="StopContainer for \"1d143924d302a6f20dbfcbbf52c5c6f30af1c60ef52890d26160512ce6bee3c8\" with timeout 30 (s)"
Sep 6 00:31:40.425817 env[1214]: time="2025-09-06T00:31:40.425753355Z" level=info msg="Stop container \"1d143924d302a6f20dbfcbbf52c5c6f30af1c60ef52890d26160512ce6bee3c8\" with signal terminated"
Sep 6 00:31:40.429566 systemd[1]: run-containerd-runc-k8s.io-4be2033354180606b38fb4b800442ac42c1b3d9be3d04f6cbce25f97a7d2dac4-runc.7injFY.mount: Deactivated successfully.
Sep 6 00:31:40.468526 env[1214]: time="2025-09-06T00:31:40.468435318Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 6 00:31:40.469864 systemd[1]: cri-containerd-1d143924d302a6f20dbfcbbf52c5c6f30af1c60ef52890d26160512ce6bee3c8.scope: Deactivated successfully.
Sep 6 00:31:40.486534 env[1214]: time="2025-09-06T00:31:40.486469460Z" level=info msg="StopContainer for \"4be2033354180606b38fb4b800442ac42c1b3d9be3d04f6cbce25f97a7d2dac4\" with timeout 2 (s)" Sep 6 00:31:40.486968 env[1214]: time="2025-09-06T00:31:40.486925884Z" level=info msg="Stop container \"4be2033354180606b38fb4b800442ac42c1b3d9be3d04f6cbce25f97a7d2dac4\" with signal terminated" Sep 6 00:31:40.504049 systemd-networkd[1019]: lxc_health: Link DOWN Sep 6 00:31:40.504066 systemd-networkd[1019]: lxc_health: Lost carrier Sep 6 00:31:40.532974 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1d143924d302a6f20dbfcbbf52c5c6f30af1c60ef52890d26160512ce6bee3c8-rootfs.mount: Deactivated successfully. Sep 6 00:31:40.536944 systemd[1]: cri-containerd-4be2033354180606b38fb4b800442ac42c1b3d9be3d04f6cbce25f97a7d2dac4.scope: Deactivated successfully. Sep 6 00:31:40.537822 systemd[1]: cri-containerd-4be2033354180606b38fb4b800442ac42c1b3d9be3d04f6cbce25f97a7d2dac4.scope: Consumed 9.912s CPU time. 
Sep 6 00:31:40.549757 env[1214]: time="2025-09-06T00:31:40.549664013Z" level=info msg="shim disconnected" id=1d143924d302a6f20dbfcbbf52c5c6f30af1c60ef52890d26160512ce6bee3c8 Sep 6 00:31:40.550047 env[1214]: time="2025-09-06T00:31:40.549759076Z" level=warning msg="cleaning up after shim disconnected" id=1d143924d302a6f20dbfcbbf52c5c6f30af1c60ef52890d26160512ce6bee3c8 namespace=k8s.io Sep 6 00:31:40.550047 env[1214]: time="2025-09-06T00:31:40.549779124Z" level=info msg="cleaning up dead shim" Sep 6 00:31:40.568178 env[1214]: time="2025-09-06T00:31:40.568120431Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:31:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3645 runtime=io.containerd.runc.v2\n" Sep 6 00:31:40.571319 env[1214]: time="2025-09-06T00:31:40.571254041Z" level=info msg="StopContainer for \"1d143924d302a6f20dbfcbbf52c5c6f30af1c60ef52890d26160512ce6bee3c8\" returns successfully" Sep 6 00:31:40.572472 env[1214]: time="2025-09-06T00:31:40.572427189Z" level=info msg="StopPodSandbox for \"f73102bb3009af06fc7bd276564ddc1ad275b97bca732bfa9206d822bd22a1bc\"" Sep 6 00:31:40.572812 env[1214]: time="2025-09-06T00:31:40.572756726Z" level=info msg="Container to stop \"1d143924d302a6f20dbfcbbf52c5c6f30af1c60ef52890d26160512ce6bee3c8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 00:31:40.581355 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f73102bb3009af06fc7bd276564ddc1ad275b97bca732bfa9206d822bd22a1bc-shm.mount: Deactivated successfully. 
Sep 6 00:31:40.592802 env[1214]: time="2025-09-06T00:31:40.592732450Z" level=info msg="shim disconnected" id=4be2033354180606b38fb4b800442ac42c1b3d9be3d04f6cbce25f97a7d2dac4 Sep 6 00:31:40.592802 env[1214]: time="2025-09-06T00:31:40.592808445Z" level=warning msg="cleaning up after shim disconnected" id=4be2033354180606b38fb4b800442ac42c1b3d9be3d04f6cbce25f97a7d2dac4 namespace=k8s.io Sep 6 00:31:40.593195 env[1214]: time="2025-09-06T00:31:40.592828839Z" level=info msg="cleaning up dead shim" Sep 6 00:31:40.599733 systemd[1]: cri-containerd-f73102bb3009af06fc7bd276564ddc1ad275b97bca732bfa9206d822bd22a1bc.scope: Deactivated successfully. Sep 6 00:31:40.624195 env[1214]: time="2025-09-06T00:31:40.624105612Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:31:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3676 runtime=io.containerd.runc.v2\n" Sep 6 00:31:40.626974 env[1214]: time="2025-09-06T00:31:40.626906977Z" level=info msg="StopContainer for \"4be2033354180606b38fb4b800442ac42c1b3d9be3d04f6cbce25f97a7d2dac4\" returns successfully" Sep 6 00:31:40.627626 env[1214]: time="2025-09-06T00:31:40.627580092Z" level=info msg="StopPodSandbox for \"bacc7f95ff3d40e225869acc55fa1a57baba3b68655cf89bdea8839dfff49086\"" Sep 6 00:31:40.627972 env[1214]: time="2025-09-06T00:31:40.627873750Z" level=info msg="Container to stop \"72863cd9aee4363c717c5a5b0e1bea124a305299089a44481d6a8088c901fac8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 00:31:40.628130 env[1214]: time="2025-09-06T00:31:40.627972981Z" level=info msg="Container to stop \"db62a000e1171592a9bceb7a1d5f612e0a0a1f01c3215279a559493f59194dec\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 00:31:40.628130 env[1214]: time="2025-09-06T00:31:40.628002582Z" level=info msg="Container to stop \"449c6872da8073cc77027197270e0ac6858c296c235d07dc6228de3f6d19dc1b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 
00:31:40.628130 env[1214]: time="2025-09-06T00:31:40.628037523Z" level=info msg="Container to stop \"5b5e17194f481434d5f848bbde04cecea045ae03d32c5bbabaf62166e1e3f0da\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 00:31:40.628130 env[1214]: time="2025-09-06T00:31:40.628065838Z" level=info msg="Container to stop \"4be2033354180606b38fb4b800442ac42c1b3d9be3d04f6cbce25f97a7d2dac4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 00:31:40.644325 systemd[1]: cri-containerd-bacc7f95ff3d40e225869acc55fa1a57baba3b68655cf89bdea8839dfff49086.scope: Deactivated successfully. Sep 6 00:31:40.649018 env[1214]: time="2025-09-06T00:31:40.648924223Z" level=info msg="shim disconnected" id=f73102bb3009af06fc7bd276564ddc1ad275b97bca732bfa9206d822bd22a1bc Sep 6 00:31:40.649221 env[1214]: time="2025-09-06T00:31:40.649056461Z" level=warning msg="cleaning up after shim disconnected" id=f73102bb3009af06fc7bd276564ddc1ad275b97bca732bfa9206d822bd22a1bc namespace=k8s.io Sep 6 00:31:40.649221 env[1214]: time="2025-09-06T00:31:40.649076631Z" level=info msg="cleaning up dead shim" Sep 6 00:31:40.673364 env[1214]: time="2025-09-06T00:31:40.673200149Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:31:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3709 runtime=io.containerd.runc.v2\n" Sep 6 00:31:40.676911 env[1214]: time="2025-09-06T00:31:40.676848787Z" level=info msg="TearDown network for sandbox \"f73102bb3009af06fc7bd276564ddc1ad275b97bca732bfa9206d822bd22a1bc\" successfully" Sep 6 00:31:40.676911 env[1214]: time="2025-09-06T00:31:40.676907194Z" level=info msg="StopPodSandbox for \"f73102bb3009af06fc7bd276564ddc1ad275b97bca732bfa9206d822bd22a1bc\" returns successfully" Sep 6 00:31:40.702158 env[1214]: time="2025-09-06T00:31:40.702071688Z" level=info msg="shim disconnected" id=bacc7f95ff3d40e225869acc55fa1a57baba3b68655cf89bdea8839dfff49086 Sep 6 00:31:40.702158 env[1214]: 
time="2025-09-06T00:31:40.702150004Z" level=warning msg="cleaning up after shim disconnected" id=bacc7f95ff3d40e225869acc55fa1a57baba3b68655cf89bdea8839dfff49086 namespace=k8s.io Sep 6 00:31:40.702591 env[1214]: time="2025-09-06T00:31:40.702169057Z" level=info msg="cleaning up dead shim" Sep 6 00:31:40.718848 env[1214]: time="2025-09-06T00:31:40.718772571Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:31:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3737 runtime=io.containerd.runc.v2\n" Sep 6 00:31:40.719434 env[1214]: time="2025-09-06T00:31:40.719384854Z" level=info msg="TearDown network for sandbox \"bacc7f95ff3d40e225869acc55fa1a57baba3b68655cf89bdea8839dfff49086\" successfully" Sep 6 00:31:40.719600 env[1214]: time="2025-09-06T00:31:40.719432712Z" level=info msg="StopPodSandbox for \"bacc7f95ff3d40e225869acc55fa1a57baba3b68655cf89bdea8839dfff49086\" returns successfully" Sep 6 00:31:40.791597 kubelet[2012]: I0906 00:31:40.791555 2012 scope.go:117] "RemoveContainer" containerID="1d143924d302a6f20dbfcbbf52c5c6f30af1c60ef52890d26160512ce6bee3c8" Sep 6 00:31:40.799958 env[1214]: time="2025-09-06T00:31:40.799900749Z" level=info msg="RemoveContainer for \"1d143924d302a6f20dbfcbbf52c5c6f30af1c60ef52890d26160512ce6bee3c8\"" Sep 6 00:31:40.809055 env[1214]: time="2025-09-06T00:31:40.808990265Z" level=info msg="RemoveContainer for \"1d143924d302a6f20dbfcbbf52c5c6f30af1c60ef52890d26160512ce6bee3c8\" returns successfully" Sep 6 00:31:40.809456 kubelet[2012]: I0906 00:31:40.809421 2012 scope.go:117] "RemoveContainer" containerID="1d143924d302a6f20dbfcbbf52c5c6f30af1c60ef52890d26160512ce6bee3c8" Sep 6 00:31:40.809920 env[1214]: time="2025-09-06T00:31:40.809803667Z" level=error msg="ContainerStatus for \"1d143924d302a6f20dbfcbbf52c5c6f30af1c60ef52890d26160512ce6bee3c8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1d143924d302a6f20dbfcbbf52c5c6f30af1c60ef52890d26160512ce6bee3c8\": not found" 
Sep 6 00:31:40.810140 kubelet[2012]: E0906 00:31:40.810101 2012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1d143924d302a6f20dbfcbbf52c5c6f30af1c60ef52890d26160512ce6bee3c8\": not found" containerID="1d143924d302a6f20dbfcbbf52c5c6f30af1c60ef52890d26160512ce6bee3c8" Sep 6 00:31:40.810259 kubelet[2012]: I0906 00:31:40.810169 2012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1d143924d302a6f20dbfcbbf52c5c6f30af1c60ef52890d26160512ce6bee3c8"} err="failed to get container status \"1d143924d302a6f20dbfcbbf52c5c6f30af1c60ef52890d26160512ce6bee3c8\": rpc error: code = NotFound desc = an error occurred when try to find container \"1d143924d302a6f20dbfcbbf52c5c6f30af1c60ef52890d26160512ce6bee3c8\": not found" Sep 6 00:31:40.810259 kubelet[2012]: I0906 00:31:40.810226 2012 scope.go:117] "RemoveContainer" containerID="4be2033354180606b38fb4b800442ac42c1b3d9be3d04f6cbce25f97a7d2dac4" Sep 6 00:31:40.811742 env[1214]: time="2025-09-06T00:31:40.811680529Z" level=info msg="RemoveContainer for \"4be2033354180606b38fb4b800442ac42c1b3d9be3d04f6cbce25f97a7d2dac4\"" Sep 6 00:31:40.818858 env[1214]: time="2025-09-06T00:31:40.818798682Z" level=info msg="RemoveContainer for \"4be2033354180606b38fb4b800442ac42c1b3d9be3d04f6cbce25f97a7d2dac4\" returns successfully" Sep 6 00:31:40.819127 kubelet[2012]: I0906 00:31:40.819095 2012 scope.go:117] "RemoveContainer" containerID="db62a000e1171592a9bceb7a1d5f612e0a0a1f01c3215279a559493f59194dec" Sep 6 00:31:40.820718 env[1214]: time="2025-09-06T00:31:40.820657999Z" level=info msg="RemoveContainer for \"db62a000e1171592a9bceb7a1d5f612e0a0a1f01c3215279a559493f59194dec\"" Sep 6 00:31:40.830636 env[1214]: time="2025-09-06T00:31:40.830563722Z" level=info msg="RemoveContainer for \"db62a000e1171592a9bceb7a1d5f612e0a0a1f01c3215279a559493f59194dec\" returns successfully" Sep 6 00:31:40.830974 kubelet[2012]: 
I0906 00:31:40.830926 2012 scope.go:117] "RemoveContainer" containerID="5b5e17194f481434d5f848bbde04cecea045ae03d32c5bbabaf62166e1e3f0da" Sep 6 00:31:40.832595 env[1214]: time="2025-09-06T00:31:40.832544780Z" level=info msg="RemoveContainer for \"5b5e17194f481434d5f848bbde04cecea045ae03d32c5bbabaf62166e1e3f0da\"" Sep 6 00:31:40.839530 env[1214]: time="2025-09-06T00:31:40.839468809Z" level=info msg="RemoveContainer for \"5b5e17194f481434d5f848bbde04cecea045ae03d32c5bbabaf62166e1e3f0da\" returns successfully" Sep 6 00:31:40.839812 kubelet[2012]: I0906 00:31:40.839778 2012 scope.go:117] "RemoveContainer" containerID="72863cd9aee4363c717c5a5b0e1bea124a305299089a44481d6a8088c901fac8" Sep 6 00:31:40.841646 env[1214]: time="2025-09-06T00:31:40.841582336Z" level=info msg="RemoveContainer for \"72863cd9aee4363c717c5a5b0e1bea124a305299089a44481d6a8088c901fac8\"" Sep 6 00:31:40.842239 kubelet[2012]: I0906 00:31:40.842197 2012 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e1e5d6d6-6a59-4e88-b053-3ede0a0fe056-bpf-maps\") pod \"e1e5d6d6-6a59-4e88-b053-3ede0a0fe056\" (UID: \"e1e5d6d6-6a59-4e88-b053-3ede0a0fe056\") " Sep 6 00:31:40.842381 kubelet[2012]: I0906 00:31:40.842268 2012 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q8p28\" (UniqueName: \"kubernetes.io/projected/e1e5d6d6-6a59-4e88-b053-3ede0a0fe056-kube-api-access-q8p28\") pod \"e1e5d6d6-6a59-4e88-b053-3ede0a0fe056\" (UID: \"e1e5d6d6-6a59-4e88-b053-3ede0a0fe056\") " Sep 6 00:31:40.842381 kubelet[2012]: I0906 00:31:40.842313 2012 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e1e5d6d6-6a59-4e88-b053-3ede0a0fe056-hostproc\") pod \"e1e5d6d6-6a59-4e88-b053-3ede0a0fe056\" (UID: \"e1e5d6d6-6a59-4e88-b053-3ede0a0fe056\") " Sep 6 00:31:40.842381 kubelet[2012]: I0906 00:31:40.842342 2012 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e1e5d6d6-6a59-4e88-b053-3ede0a0fe056-cni-path\") pod \"e1e5d6d6-6a59-4e88-b053-3ede0a0fe056\" (UID: \"e1e5d6d6-6a59-4e88-b053-3ede0a0fe056\") " Sep 6 00:31:40.842381 kubelet[2012]: I0906 00:31:40.842372 2012 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e1e5d6d6-6a59-4e88-b053-3ede0a0fe056-host-proc-sys-net\") pod \"e1e5d6d6-6a59-4e88-b053-3ede0a0fe056\" (UID: \"e1e5d6d6-6a59-4e88-b053-3ede0a0fe056\") " Sep 6 00:31:40.842641 kubelet[2012]: I0906 00:31:40.842402 2012 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e1e5d6d6-6a59-4e88-b053-3ede0a0fe056-xtables-lock\") pod \"e1e5d6d6-6a59-4e88-b053-3ede0a0fe056\" (UID: \"e1e5d6d6-6a59-4e88-b053-3ede0a0fe056\") " Sep 6 00:31:40.842641 kubelet[2012]: I0906 00:31:40.842439 2012 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e1e5d6d6-6a59-4e88-b053-3ede0a0fe056-clustermesh-secrets\") pod \"e1e5d6d6-6a59-4e88-b053-3ede0a0fe056\" (UID: \"e1e5d6d6-6a59-4e88-b053-3ede0a0fe056\") " Sep 6 00:31:40.842641 kubelet[2012]: I0906 00:31:40.842466 2012 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e1e5d6d6-6a59-4e88-b053-3ede0a0fe056-cilium-cgroup\") pod \"e1e5d6d6-6a59-4e88-b053-3ede0a0fe056\" (UID: \"e1e5d6d6-6a59-4e88-b053-3ede0a0fe056\") " Sep 6 00:31:40.842641 kubelet[2012]: I0906 00:31:40.842502 2012 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mp5j5\" (UniqueName: \"kubernetes.io/projected/c08ff725-6646-4e0e-95aa-9140e30e039c-kube-api-access-mp5j5\") pod \"c08ff725-6646-4e0e-95aa-9140e30e039c\" (UID: 
\"c08ff725-6646-4e0e-95aa-9140e30e039c\") " Sep 6 00:31:40.842641 kubelet[2012]: I0906 00:31:40.842533 2012 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c08ff725-6646-4e0e-95aa-9140e30e039c-cilium-config-path\") pod \"c08ff725-6646-4e0e-95aa-9140e30e039c\" (UID: \"c08ff725-6646-4e0e-95aa-9140e30e039c\") " Sep 6 00:31:40.842641 kubelet[2012]: I0906 00:31:40.842566 2012 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e1e5d6d6-6a59-4e88-b053-3ede0a0fe056-etc-cni-netd\") pod \"e1e5d6d6-6a59-4e88-b053-3ede0a0fe056\" (UID: \"e1e5d6d6-6a59-4e88-b053-3ede0a0fe056\") " Sep 6 00:31:40.843030 kubelet[2012]: I0906 00:31:40.842636 2012 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e1e5d6d6-6a59-4e88-b053-3ede0a0fe056-lib-modules\") pod \"e1e5d6d6-6a59-4e88-b053-3ede0a0fe056\" (UID: \"e1e5d6d6-6a59-4e88-b053-3ede0a0fe056\") " Sep 6 00:31:40.843030 kubelet[2012]: I0906 00:31:40.842672 2012 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e1e5d6d6-6a59-4e88-b053-3ede0a0fe056-hubble-tls\") pod \"e1e5d6d6-6a59-4e88-b053-3ede0a0fe056\" (UID: \"e1e5d6d6-6a59-4e88-b053-3ede0a0fe056\") " Sep 6 00:31:40.843030 kubelet[2012]: I0906 00:31:40.842722 2012 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e1e5d6d6-6a59-4e88-b053-3ede0a0fe056-host-proc-sys-kernel\") pod \"e1e5d6d6-6a59-4e88-b053-3ede0a0fe056\" (UID: \"e1e5d6d6-6a59-4e88-b053-3ede0a0fe056\") " Sep 6 00:31:40.843030 kubelet[2012]: I0906 00:31:40.842755 2012 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/e1e5d6d6-6a59-4e88-b053-3ede0a0fe056-cilium-config-path\") pod \"e1e5d6d6-6a59-4e88-b053-3ede0a0fe056\" (UID: \"e1e5d6d6-6a59-4e88-b053-3ede0a0fe056\") " Sep 6 00:31:40.843030 kubelet[2012]: I0906 00:31:40.842789 2012 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e1e5d6d6-6a59-4e88-b053-3ede0a0fe056-cilium-run\") pod \"e1e5d6d6-6a59-4e88-b053-3ede0a0fe056\" (UID: \"e1e5d6d6-6a59-4e88-b053-3ede0a0fe056\") " Sep 6 00:31:40.843030 kubelet[2012]: I0906 00:31:40.842879 2012 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1e5d6d6-6a59-4e88-b053-3ede0a0fe056-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "e1e5d6d6-6a59-4e88-b053-3ede0a0fe056" (UID: "e1e5d6d6-6a59-4e88-b053-3ede0a0fe056"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:31:40.843534 kubelet[2012]: I0906 00:31:40.842936 2012 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1e5d6d6-6a59-4e88-b053-3ede0a0fe056-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "e1e5d6d6-6a59-4e88-b053-3ede0a0fe056" (UID: "e1e5d6d6-6a59-4e88-b053-3ede0a0fe056"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:31:40.848742 kubelet[2012]: I0906 00:31:40.846767 2012 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1e5d6d6-6a59-4e88-b053-3ede0a0fe056-hostproc" (OuterVolumeSpecName: "hostproc") pod "e1e5d6d6-6a59-4e88-b053-3ede0a0fe056" (UID: "e1e5d6d6-6a59-4e88-b053-3ede0a0fe056"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:31:40.848742 kubelet[2012]: I0906 00:31:40.846832 2012 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1e5d6d6-6a59-4e88-b053-3ede0a0fe056-cni-path" (OuterVolumeSpecName: "cni-path") pod "e1e5d6d6-6a59-4e88-b053-3ede0a0fe056" (UID: "e1e5d6d6-6a59-4e88-b053-3ede0a0fe056"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:31:40.848742 kubelet[2012]: I0906 00:31:40.846860 2012 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1e5d6d6-6a59-4e88-b053-3ede0a0fe056-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "e1e5d6d6-6a59-4e88-b053-3ede0a0fe056" (UID: "e1e5d6d6-6a59-4e88-b053-3ede0a0fe056"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:31:40.848742 kubelet[2012]: I0906 00:31:40.846885 2012 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1e5d6d6-6a59-4e88-b053-3ede0a0fe056-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "e1e5d6d6-6a59-4e88-b053-3ede0a0fe056" (UID: "e1e5d6d6-6a59-4e88-b053-3ede0a0fe056"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:31:40.851405 kubelet[2012]: I0906 00:31:40.851358 2012 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c08ff725-6646-4e0e-95aa-9140e30e039c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c08ff725-6646-4e0e-95aa-9140e30e039c" (UID: "c08ff725-6646-4e0e-95aa-9140e30e039c"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 6 00:31:40.851672 kubelet[2012]: I0906 00:31:40.851634 2012 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1e5d6d6-6a59-4e88-b053-3ede0a0fe056-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "e1e5d6d6-6a59-4e88-b053-3ede0a0fe056" (UID: "e1e5d6d6-6a59-4e88-b053-3ede0a0fe056"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:31:40.851883 kubelet[2012]: I0906 00:31:40.851858 2012 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1e5d6d6-6a59-4e88-b053-3ede0a0fe056-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e1e5d6d6-6a59-4e88-b053-3ede0a0fe056" (UID: "e1e5d6d6-6a59-4e88-b053-3ede0a0fe056"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:31:40.855737 kubelet[2012]: I0906 00:31:40.853943 2012 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1e5d6d6-6a59-4e88-b053-3ede0a0fe056-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "e1e5d6d6-6a59-4e88-b053-3ede0a0fe056" (UID: "e1e5d6d6-6a59-4e88-b053-3ede0a0fe056"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:31:40.856948 kubelet[2012]: I0906 00:31:40.856909 2012 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1e5d6d6-6a59-4e88-b053-3ede0a0fe056-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "e1e5d6d6-6a59-4e88-b053-3ede0a0fe056" (UID: "e1e5d6d6-6a59-4e88-b053-3ede0a0fe056"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:31:40.857440 env[1214]: time="2025-09-06T00:31:40.857372946Z" level=info msg="RemoveContainer for \"72863cd9aee4363c717c5a5b0e1bea124a305299089a44481d6a8088c901fac8\" returns successfully" Sep 6 00:31:40.866751 kubelet[2012]: I0906 00:31:40.866200 2012 scope.go:117] "RemoveContainer" containerID="449c6872da8073cc77027197270e0ac6858c296c235d07dc6228de3f6d19dc1b" Sep 6 00:31:40.867239 kubelet[2012]: I0906 00:31:40.867181 2012 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1e5d6d6-6a59-4e88-b053-3ede0a0fe056-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "e1e5d6d6-6a59-4e88-b053-3ede0a0fe056" (UID: "e1e5d6d6-6a59-4e88-b053-3ede0a0fe056"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 6 00:31:40.867864 kubelet[2012]: I0906 00:31:40.867820 2012 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1e5d6d6-6a59-4e88-b053-3ede0a0fe056-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e1e5d6d6-6a59-4e88-b053-3ede0a0fe056" (UID: "e1e5d6d6-6a59-4e88-b053-3ede0a0fe056"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 6 00:31:40.868824 env[1214]: time="2025-09-06T00:31:40.868757198Z" level=info msg="RemoveContainer for \"449c6872da8073cc77027197270e0ac6858c296c235d07dc6228de3f6d19dc1b\"" Sep 6 00:31:40.887061 kubelet[2012]: I0906 00:31:40.886982 2012 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c08ff725-6646-4e0e-95aa-9140e30e039c-kube-api-access-mp5j5" (OuterVolumeSpecName: "kube-api-access-mp5j5") pod "c08ff725-6646-4e0e-95aa-9140e30e039c" (UID: "c08ff725-6646-4e0e-95aa-9140e30e039c"). InnerVolumeSpecName "kube-api-access-mp5j5". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 6 00:31:40.889782 env[1214]: time="2025-09-06T00:31:40.889687924Z" level=info msg="RemoveContainer for \"449c6872da8073cc77027197270e0ac6858c296c235d07dc6228de3f6d19dc1b\" returns successfully" Sep 6 00:31:40.890033 kubelet[2012]: I0906 00:31:40.889986 2012 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1e5d6d6-6a59-4e88-b053-3ede0a0fe056-kube-api-access-q8p28" (OuterVolumeSpecName: "kube-api-access-q8p28") pod "e1e5d6d6-6a59-4e88-b053-3ede0a0fe056" (UID: "e1e5d6d6-6a59-4e88-b053-3ede0a0fe056"). InnerVolumeSpecName "kube-api-access-q8p28". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 6 00:31:40.890208 kubelet[2012]: I0906 00:31:40.890175 2012 scope.go:117] "RemoveContainer" containerID="4be2033354180606b38fb4b800442ac42c1b3d9be3d04f6cbce25f97a7d2dac4" Sep 6 00:31:40.890659 env[1214]: time="2025-09-06T00:31:40.890545692Z" level=error msg="ContainerStatus for \"4be2033354180606b38fb4b800442ac42c1b3d9be3d04f6cbce25f97a7d2dac4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4be2033354180606b38fb4b800442ac42c1b3d9be3d04f6cbce25f97a7d2dac4\": not found" Sep 6 00:31:40.895067 kubelet[2012]: E0906 00:31:40.895028 2012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4be2033354180606b38fb4b800442ac42c1b3d9be3d04f6cbce25f97a7d2dac4\": not found" containerID="4be2033354180606b38fb4b800442ac42c1b3d9be3d04f6cbce25f97a7d2dac4" Sep 6 00:31:40.895208 kubelet[2012]: I0906 00:31:40.895082 2012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4be2033354180606b38fb4b800442ac42c1b3d9be3d04f6cbce25f97a7d2dac4"} err="failed to get container status \"4be2033354180606b38fb4b800442ac42c1b3d9be3d04f6cbce25f97a7d2dac4\": rpc error: code = NotFound desc = an error occurred when 
try to find container \"4be2033354180606b38fb4b800442ac42c1b3d9be3d04f6cbce25f97a7d2dac4\": not found" Sep 6 00:31:40.895208 kubelet[2012]: I0906 00:31:40.895124 2012 scope.go:117] "RemoveContainer" containerID="db62a000e1171592a9bceb7a1d5f612e0a0a1f01c3215279a559493f59194dec" Sep 6 00:31:40.895608 env[1214]: time="2025-09-06T00:31:40.895501323Z" level=error msg="ContainerStatus for \"db62a000e1171592a9bceb7a1d5f612e0a0a1f01c3215279a559493f59194dec\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"db62a000e1171592a9bceb7a1d5f612e0a0a1f01c3215279a559493f59194dec\": not found" Sep 6 00:31:40.898757 kubelet[2012]: E0906 00:31:40.896346 2012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"db62a000e1171592a9bceb7a1d5f612e0a0a1f01c3215279a559493f59194dec\": not found" containerID="db62a000e1171592a9bceb7a1d5f612e0a0a1f01c3215279a559493f59194dec" Sep 6 00:31:40.898757 kubelet[2012]: I0906 00:31:40.896387 2012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"db62a000e1171592a9bceb7a1d5f612e0a0a1f01c3215279a559493f59194dec"} err="failed to get container status \"db62a000e1171592a9bceb7a1d5f612e0a0a1f01c3215279a559493f59194dec\": rpc error: code = NotFound desc = an error occurred when try to find container \"db62a000e1171592a9bceb7a1d5f612e0a0a1f01c3215279a559493f59194dec\": not found" Sep 6 00:31:40.898757 kubelet[2012]: I0906 00:31:40.896414 2012 scope.go:117] "RemoveContainer" containerID="5b5e17194f481434d5f848bbde04cecea045ae03d32c5bbabaf62166e1e3f0da" Sep 6 00:31:40.899038 env[1214]: time="2025-09-06T00:31:40.896784328Z" level=error msg="ContainerStatus for \"5b5e17194f481434d5f848bbde04cecea045ae03d32c5bbabaf62166e1e3f0da\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"5b5e17194f481434d5f848bbde04cecea045ae03d32c5bbabaf62166e1e3f0da\": not found" Sep 6 00:31:40.899112 kubelet[2012]: E0906 00:31:40.898806 2012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5b5e17194f481434d5f848bbde04cecea045ae03d32c5bbabaf62166e1e3f0da\": not found" containerID="5b5e17194f481434d5f848bbde04cecea045ae03d32c5bbabaf62166e1e3f0da" Sep 6 00:31:40.899112 kubelet[2012]: I0906 00:31:40.898845 2012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5b5e17194f481434d5f848bbde04cecea045ae03d32c5bbabaf62166e1e3f0da"} err="failed to get container status \"5b5e17194f481434d5f848bbde04cecea045ae03d32c5bbabaf62166e1e3f0da\": rpc error: code = NotFound desc = an error occurred when try to find container \"5b5e17194f481434d5f848bbde04cecea045ae03d32c5bbabaf62166e1e3f0da\": not found" Sep 6 00:31:40.899112 kubelet[2012]: I0906 00:31:40.898872 2012 scope.go:117] "RemoveContainer" containerID="72863cd9aee4363c717c5a5b0e1bea124a305299089a44481d6a8088c901fac8" Sep 6 00:31:40.899521 kubelet[2012]: I0906 00:31:40.899491 2012 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1e5d6d6-6a59-4e88-b053-3ede0a0fe056-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "e1e5d6d6-6a59-4e88-b053-3ede0a0fe056" (UID: "e1e5d6d6-6a59-4e88-b053-3ede0a0fe056"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 6 00:31:40.911116 env[1214]: time="2025-09-06T00:31:40.911007103Z" level=error msg="ContainerStatus for \"72863cd9aee4363c717c5a5b0e1bea124a305299089a44481d6a8088c901fac8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"72863cd9aee4363c717c5a5b0e1bea124a305299089a44481d6a8088c901fac8\": not found" Sep 6 00:31:40.911334 kubelet[2012]: E0906 00:31:40.911299 2012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"72863cd9aee4363c717c5a5b0e1bea124a305299089a44481d6a8088c901fac8\": not found" containerID="72863cd9aee4363c717c5a5b0e1bea124a305299089a44481d6a8088c901fac8" Sep 6 00:31:40.911432 kubelet[2012]: I0906 00:31:40.911346 2012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"72863cd9aee4363c717c5a5b0e1bea124a305299089a44481d6a8088c901fac8"} err="failed to get container status \"72863cd9aee4363c717c5a5b0e1bea124a305299089a44481d6a8088c901fac8\": rpc error: code = NotFound desc = an error occurred when try to find container \"72863cd9aee4363c717c5a5b0e1bea124a305299089a44481d6a8088c901fac8\": not found" Sep 6 00:31:40.911432 kubelet[2012]: I0906 00:31:40.911387 2012 scope.go:117] "RemoveContainer" containerID="449c6872da8073cc77027197270e0ac6858c296c235d07dc6228de3f6d19dc1b" Sep 6 00:31:40.911821 env[1214]: time="2025-09-06T00:31:40.911737732Z" level=error msg="ContainerStatus for \"449c6872da8073cc77027197270e0ac6858c296c235d07dc6228de3f6d19dc1b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"449c6872da8073cc77027197270e0ac6858c296c235d07dc6228de3f6d19dc1b\": not found" Sep 6 00:31:40.912003 kubelet[2012]: E0906 00:31:40.911968 2012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find 
container \"449c6872da8073cc77027197270e0ac6858c296c235d07dc6228de3f6d19dc1b\": not found" containerID="449c6872da8073cc77027197270e0ac6858c296c235d07dc6228de3f6d19dc1b" Sep 6 00:31:40.912119 kubelet[2012]: I0906 00:31:40.912016 2012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"449c6872da8073cc77027197270e0ac6858c296c235d07dc6228de3f6d19dc1b"} err="failed to get container status \"449c6872da8073cc77027197270e0ac6858c296c235d07dc6228de3f6d19dc1b\": rpc error: code = NotFound desc = an error occurred when try to find container \"449c6872da8073cc77027197270e0ac6858c296c235d07dc6228de3f6d19dc1b\": not found" Sep 6 00:31:40.943664 kubelet[2012]: I0906 00:31:40.943439 2012 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e1e5d6d6-6a59-4e88-b053-3ede0a0fe056-lib-modules\") on node \"ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081\" DevicePath \"\"" Sep 6 00:31:40.943664 kubelet[2012]: I0906 00:31:40.943586 2012 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e1e5d6d6-6a59-4e88-b053-3ede0a0fe056-hubble-tls\") on node \"ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081\" DevicePath \"\"" Sep 6 00:31:40.943664 kubelet[2012]: I0906 00:31:40.943632 2012 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e1e5d6d6-6a59-4e88-b053-3ede0a0fe056-host-proc-sys-kernel\") on node \"ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081\" DevicePath \"\"" Sep 6 00:31:40.943664 kubelet[2012]: I0906 00:31:40.943655 2012 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e1e5d6d6-6a59-4e88-b053-3ede0a0fe056-cilium-config-path\") on node \"ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081\" DevicePath \"\"" Sep 6 00:31:40.944217 kubelet[2012]: I0906 00:31:40.943678 2012 
reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e1e5d6d6-6a59-4e88-b053-3ede0a0fe056-cilium-run\") on node \"ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081\" DevicePath \"\"" Sep 6 00:31:40.944217 kubelet[2012]: I0906 00:31:40.943718 2012 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e1e5d6d6-6a59-4e88-b053-3ede0a0fe056-bpf-maps\") on node \"ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081\" DevicePath \"\"" Sep 6 00:31:40.944217 kubelet[2012]: I0906 00:31:40.943737 2012 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q8p28\" (UniqueName: \"kubernetes.io/projected/e1e5d6d6-6a59-4e88-b053-3ede0a0fe056-kube-api-access-q8p28\") on node \"ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081\" DevicePath \"\"" Sep 6 00:31:40.944217 kubelet[2012]: I0906 00:31:40.943754 2012 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e1e5d6d6-6a59-4e88-b053-3ede0a0fe056-hostproc\") on node \"ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081\" DevicePath \"\"" Sep 6 00:31:40.944217 kubelet[2012]: I0906 00:31:40.943771 2012 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e1e5d6d6-6a59-4e88-b053-3ede0a0fe056-cni-path\") on node \"ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081\" DevicePath \"\"" Sep 6 00:31:40.944217 kubelet[2012]: I0906 00:31:40.943796 2012 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e1e5d6d6-6a59-4e88-b053-3ede0a0fe056-host-proc-sys-net\") on node \"ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081\" DevicePath \"\"" Sep 6 00:31:40.944217 kubelet[2012]: I0906 00:31:40.943816 2012 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/e1e5d6d6-6a59-4e88-b053-3ede0a0fe056-xtables-lock\") on node \"ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081\" DevicePath \"\"" Sep 6 00:31:40.944529 kubelet[2012]: I0906 00:31:40.943831 2012 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e1e5d6d6-6a59-4e88-b053-3ede0a0fe056-clustermesh-secrets\") on node \"ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081\" DevicePath \"\"" Sep 6 00:31:40.944529 kubelet[2012]: I0906 00:31:40.943850 2012 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e1e5d6d6-6a59-4e88-b053-3ede0a0fe056-cilium-cgroup\") on node \"ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081\" DevicePath \"\"" Sep 6 00:31:40.944529 kubelet[2012]: I0906 00:31:40.943868 2012 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mp5j5\" (UniqueName: \"kubernetes.io/projected/c08ff725-6646-4e0e-95aa-9140e30e039c-kube-api-access-mp5j5\") on node \"ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081\" DevicePath \"\"" Sep 6 00:31:40.944529 kubelet[2012]: I0906 00:31:40.943948 2012 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c08ff725-6646-4e0e-95aa-9140e30e039c-cilium-config-path\") on node \"ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081\" DevicePath \"\"" Sep 6 00:31:40.944529 kubelet[2012]: I0906 00:31:40.943977 2012 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e1e5d6d6-6a59-4e88-b053-3ede0a0fe056-etc-cni-netd\") on node \"ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081\" DevicePath \"\"" Sep 6 00:31:41.099878 systemd[1]: Removed slice kubepods-besteffort-podc08ff725_6646_4e0e_95aa_9140e30e039c.slice. Sep 6 00:31:41.111406 systemd[1]: Removed slice kubepods-burstable-pode1e5d6d6_6a59_4e88_b053_3ede0a0fe056.slice. 
Sep 6 00:31:41.111631 systemd[1]: kubepods-burstable-pode1e5d6d6_6a59_4e88_b053_3ede0a0fe056.slice: Consumed 10.088s CPU time. Sep 6 00:31:41.341748 env[1214]: time="2025-09-06T00:31:41.341312152Z" level=info msg="StopPodSandbox for \"f73102bb3009af06fc7bd276564ddc1ad275b97bca732bfa9206d822bd22a1bc\"" Sep 6 00:31:41.341748 env[1214]: time="2025-09-06T00:31:41.341464738Z" level=info msg="TearDown network for sandbox \"f73102bb3009af06fc7bd276564ddc1ad275b97bca732bfa9206d822bd22a1bc\" successfully" Sep 6 00:31:41.341748 env[1214]: time="2025-09-06T00:31:41.341515726Z" level=info msg="StopPodSandbox for \"f73102bb3009af06fc7bd276564ddc1ad275b97bca732bfa9206d822bd22a1bc\" returns successfully" Sep 6 00:31:41.342769 env[1214]: time="2025-09-06T00:31:41.342691729Z" level=info msg="RemovePodSandbox for \"f73102bb3009af06fc7bd276564ddc1ad275b97bca732bfa9206d822bd22a1bc\"" Sep 6 00:31:41.342944 env[1214]: time="2025-09-06T00:31:41.342773132Z" level=info msg="Forcibly stopping sandbox \"f73102bb3009af06fc7bd276564ddc1ad275b97bca732bfa9206d822bd22a1bc\"" Sep 6 00:31:41.342944 env[1214]: time="2025-09-06T00:31:41.342894300Z" level=info msg="TearDown network for sandbox \"f73102bb3009af06fc7bd276564ddc1ad275b97bca732bfa9206d822bd22a1bc\" successfully" Sep 6 00:31:41.347746 env[1214]: time="2025-09-06T00:31:41.347662151Z" level=info msg="RemovePodSandbox \"f73102bb3009af06fc7bd276564ddc1ad275b97bca732bfa9206d822bd22a1bc\" returns successfully" Sep 6 00:31:41.348308 env[1214]: time="2025-09-06T00:31:41.348249805Z" level=info msg="StopPodSandbox for \"bacc7f95ff3d40e225869acc55fa1a57baba3b68655cf89bdea8839dfff49086\"" Sep 6 00:31:41.348430 env[1214]: time="2025-09-06T00:31:41.348375267Z" level=info msg="TearDown network for sandbox \"bacc7f95ff3d40e225869acc55fa1a57baba3b68655cf89bdea8839dfff49086\" successfully" Sep 6 00:31:41.348498 env[1214]: time="2025-09-06T00:31:41.348431652Z" level=info msg="StopPodSandbox for 
\"bacc7f95ff3d40e225869acc55fa1a57baba3b68655cf89bdea8839dfff49086\" returns successfully" Sep 6 00:31:41.348931 env[1214]: time="2025-09-06T00:31:41.348892400Z" level=info msg="RemovePodSandbox for \"bacc7f95ff3d40e225869acc55fa1a57baba3b68655cf89bdea8839dfff49086\"" Sep 6 00:31:41.349066 env[1214]: time="2025-09-06T00:31:41.348976181Z" level=info msg="Forcibly stopping sandbox \"bacc7f95ff3d40e225869acc55fa1a57baba3b68655cf89bdea8839dfff49086\"" Sep 6 00:31:41.349139 env[1214]: time="2025-09-06T00:31:41.349095103Z" level=info msg="TearDown network for sandbox \"bacc7f95ff3d40e225869acc55fa1a57baba3b68655cf89bdea8839dfff49086\" successfully" Sep 6 00:31:41.353765 env[1214]: time="2025-09-06T00:31:41.353689149Z" level=info msg="RemovePodSandbox \"bacc7f95ff3d40e225869acc55fa1a57baba3b68655cf89bdea8839dfff49086\" returns successfully" Sep 6 00:31:41.408816 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4be2033354180606b38fb4b800442ac42c1b3d9be3d04f6cbce25f97a7d2dac4-rootfs.mount: Deactivated successfully. Sep 6 00:31:41.408988 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f73102bb3009af06fc7bd276564ddc1ad275b97bca732bfa9206d822bd22a1bc-rootfs.mount: Deactivated successfully. Sep 6 00:31:41.409112 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bacc7f95ff3d40e225869acc55fa1a57baba3b68655cf89bdea8839dfff49086-rootfs.mount: Deactivated successfully. Sep 6 00:31:41.409225 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bacc7f95ff3d40e225869acc55fa1a57baba3b68655cf89bdea8839dfff49086-shm.mount: Deactivated successfully. Sep 6 00:31:41.409338 systemd[1]: var-lib-kubelet-pods-c08ff725\x2d6646\x2d4e0e\x2d95aa\x2d9140e30e039c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmp5j5.mount: Deactivated successfully. 
Sep 6 00:31:41.409450 systemd[1]: var-lib-kubelet-pods-e1e5d6d6\x2d6a59\x2d4e88\x2db053\x2d3ede0a0fe056-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dq8p28.mount: Deactivated successfully. Sep 6 00:31:41.409563 systemd[1]: var-lib-kubelet-pods-e1e5d6d6\x2d6a59\x2d4e88\x2db053\x2d3ede0a0fe056-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 6 00:31:41.409675 systemd[1]: var-lib-kubelet-pods-e1e5d6d6\x2d6a59\x2d4e88\x2db053\x2d3ede0a0fe056-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 6 00:31:41.432069 kubelet[2012]: I0906 00:31:41.431985 2012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c08ff725-6646-4e0e-95aa-9140e30e039c" path="/var/lib/kubelet/pods/c08ff725-6646-4e0e-95aa-9140e30e039c/volumes" Sep 6 00:31:41.432794 kubelet[2012]: I0906 00:31:41.432748 2012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1e5d6d6-6a59-4e88-b053-3ede0a0fe056" path="/var/lib/kubelet/pods/e1e5d6d6-6a59-4e88-b053-3ede0a0fe056/volumes" Sep 6 00:31:41.533655 kubelet[2012]: E0906 00:31:41.533395 2012 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 6 00:31:42.370116 sshd[3587]: pam_unix(sshd:session): session closed for user core Sep 6 00:31:42.375502 systemd[1]: sshd@22-10.128.0.49:22-139.178.89.65:56568.service: Deactivated successfully. Sep 6 00:31:42.377006 systemd[1]: session-22.scope: Deactivated successfully. Sep 6 00:31:42.377283 systemd[1]: session-22.scope: Consumed 1.225s CPU time. Sep 6 00:31:42.378323 systemd-logind[1223]: Session 22 logged out. Waiting for processes to exit. Sep 6 00:31:42.379933 systemd-logind[1223]: Removed session 22. Sep 6 00:31:42.425809 systemd[1]: Started sshd@23-10.128.0.49:22-139.178.89.65:34936.service. 
Sep 6 00:31:42.723371 sshd[3759]: Accepted publickey for core from 139.178.89.65 port 34936 ssh2: RSA SHA256:O4b1lx+UphQ1XQCPwsrjL8IoqrnWSgynNYcpg4eKVRo Sep 6 00:31:42.725918 sshd[3759]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:31:42.734067 systemd-logind[1223]: New session 23 of user core. Sep 6 00:31:42.735234 systemd[1]: Started session-23.scope. Sep 6 00:31:43.716023 sshd[3759]: pam_unix(sshd:session): session closed for user core Sep 6 00:31:43.720508 systemd[1]: Created slice kubepods-burstable-podfbf790c4_f607_4718_8839_391655158816.slice. Sep 6 00:31:43.728239 systemd[1]: sshd@23-10.128.0.49:22-139.178.89.65:34936.service: Deactivated successfully. Sep 6 00:31:43.730818 systemd[1]: session-23.scope: Deactivated successfully. Sep 6 00:31:43.735680 systemd-logind[1223]: Session 23 logged out. Waiting for processes to exit. Sep 6 00:31:43.740215 systemd-logind[1223]: Removed session 23. Sep 6 00:31:43.768280 systemd[1]: Started sshd@24-10.128.0.49:22-139.178.89.65:34946.service. 
Sep 6 00:31:43.863977 kubelet[2012]: I0906 00:31:43.863923 2012 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fbf790c4-f607-4718-8839-391655158816-cilium-config-path\") pod \"cilium-pr7gc\" (UID: \"fbf790c4-f607-4718-8839-391655158816\") " pod="kube-system/cilium-pr7gc" Sep 6 00:31:43.864844 kubelet[2012]: I0906 00:31:43.864808 2012 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fbf790c4-f607-4718-8839-391655158816-cni-path\") pod \"cilium-pr7gc\" (UID: \"fbf790c4-f607-4718-8839-391655158816\") " pod="kube-system/cilium-pr7gc" Sep 6 00:31:43.865038 kubelet[2012]: I0906 00:31:43.865012 2012 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fbf790c4-f607-4718-8839-391655158816-etc-cni-netd\") pod \"cilium-pr7gc\" (UID: \"fbf790c4-f607-4718-8839-391655158816\") " pod="kube-system/cilium-pr7gc" Sep 6 00:31:43.865225 kubelet[2012]: I0906 00:31:43.865200 2012 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fbf790c4-f607-4718-8839-391655158816-xtables-lock\") pod \"cilium-pr7gc\" (UID: \"fbf790c4-f607-4718-8839-391655158816\") " pod="kube-system/cilium-pr7gc" Sep 6 00:31:43.865384 kubelet[2012]: I0906 00:31:43.865358 2012 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fbf790c4-f607-4718-8839-391655158816-clustermesh-secrets\") pod \"cilium-pr7gc\" (UID: \"fbf790c4-f607-4718-8839-391655158816\") " pod="kube-system/cilium-pr7gc" Sep 6 00:31:43.865538 kubelet[2012]: I0906 00:31:43.865514 2012 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/fbf790c4-f607-4718-8839-391655158816-cilium-ipsec-secrets\") pod \"cilium-pr7gc\" (UID: \"fbf790c4-f607-4718-8839-391655158816\") " pod="kube-system/cilium-pr7gc" Sep 6 00:31:43.865714 kubelet[2012]: I0906 00:31:43.865673 2012 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7n6pt\" (UniqueName: \"kubernetes.io/projected/fbf790c4-f607-4718-8839-391655158816-kube-api-access-7n6pt\") pod \"cilium-pr7gc\" (UID: \"fbf790c4-f607-4718-8839-391655158816\") " pod="kube-system/cilium-pr7gc" Sep 6 00:31:43.865911 kubelet[2012]: I0906 00:31:43.865887 2012 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fbf790c4-f607-4718-8839-391655158816-bpf-maps\") pod \"cilium-pr7gc\" (UID: \"fbf790c4-f607-4718-8839-391655158816\") " pod="kube-system/cilium-pr7gc" Sep 6 00:31:43.866075 kubelet[2012]: I0906 00:31:43.866041 2012 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fbf790c4-f607-4718-8839-391655158816-lib-modules\") pod \"cilium-pr7gc\" (UID: \"fbf790c4-f607-4718-8839-391655158816\") " pod="kube-system/cilium-pr7gc" Sep 6 00:31:43.866244 kubelet[2012]: I0906 00:31:43.866221 2012 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fbf790c4-f607-4718-8839-391655158816-host-proc-sys-net\") pod \"cilium-pr7gc\" (UID: \"fbf790c4-f607-4718-8839-391655158816\") " pod="kube-system/cilium-pr7gc" Sep 6 00:31:43.866410 kubelet[2012]: I0906 00:31:43.866379 2012 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/fbf790c4-f607-4718-8839-391655158816-cilium-run\") pod \"cilium-pr7gc\" (UID: \"fbf790c4-f607-4718-8839-391655158816\") " pod="kube-system/cilium-pr7gc" Sep 6 00:31:43.866572 kubelet[2012]: I0906 00:31:43.866547 2012 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fbf790c4-f607-4718-8839-391655158816-hostproc\") pod \"cilium-pr7gc\" (UID: \"fbf790c4-f607-4718-8839-391655158816\") " pod="kube-system/cilium-pr7gc" Sep 6 00:31:43.866740 kubelet[2012]: I0906 00:31:43.866713 2012 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fbf790c4-f607-4718-8839-391655158816-cilium-cgroup\") pod \"cilium-pr7gc\" (UID: \"fbf790c4-f607-4718-8839-391655158816\") " pod="kube-system/cilium-pr7gc" Sep 6 00:31:43.866910 kubelet[2012]: I0906 00:31:43.866886 2012 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fbf790c4-f607-4718-8839-391655158816-host-proc-sys-kernel\") pod \"cilium-pr7gc\" (UID: \"fbf790c4-f607-4718-8839-391655158816\") " pod="kube-system/cilium-pr7gc" Sep 6 00:31:43.867076 kubelet[2012]: I0906 00:31:43.867041 2012 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fbf790c4-f607-4718-8839-391655158816-hubble-tls\") pod \"cilium-pr7gc\" (UID: \"fbf790c4-f607-4718-8839-391655158816\") " pod="kube-system/cilium-pr7gc" Sep 6 00:31:44.039768 env[1214]: time="2025-09-06T00:31:44.037845078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pr7gc,Uid:fbf790c4-f607-4718-8839-391655158816,Namespace:kube-system,Attempt:0,}" Sep 6 00:31:44.074815 env[1214]: time="2025-09-06T00:31:44.069343228Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:31:44.074815 env[1214]: time="2025-09-06T00:31:44.069537751Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:31:44.074815 env[1214]: time="2025-09-06T00:31:44.069617423Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:31:44.074815 env[1214]: time="2025-09-06T00:31:44.069919723Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0a327707872479dbf5e7442336b5864282103462429fa0f3bc039edfb336faa0 pid=3784 runtime=io.containerd.runc.v2 Sep 6 00:31:44.101260 systemd[1]: Started cri-containerd-0a327707872479dbf5e7442336b5864282103462429fa0f3bc039edfb336faa0.scope. Sep 6 00:31:44.108002 sshd[3769]: Accepted publickey for core from 139.178.89.65 port 34946 ssh2: RSA SHA256:O4b1lx+UphQ1XQCPwsrjL8IoqrnWSgynNYcpg4eKVRo Sep 6 00:31:44.110844 sshd[3769]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:31:44.121535 systemd-logind[1223]: New session 24 of user core. Sep 6 00:31:44.123898 systemd[1]: Started session-24.scope. 
Sep 6 00:31:44.171567 env[1214]: time="2025-09-06T00:31:44.171501138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pr7gc,Uid:fbf790c4-f607-4718-8839-391655158816,Namespace:kube-system,Attempt:0,} returns sandbox id \"0a327707872479dbf5e7442336b5864282103462429fa0f3bc039edfb336faa0\"" Sep 6 00:31:44.182328 env[1214]: time="2025-09-06T00:31:44.182268090Z" level=info msg="CreateContainer within sandbox \"0a327707872479dbf5e7442336b5864282103462429fa0f3bc039edfb336faa0\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 6 00:31:44.196338 env[1214]: time="2025-09-06T00:31:44.196286268Z" level=info msg="CreateContainer within sandbox \"0a327707872479dbf5e7442336b5864282103462429fa0f3bc039edfb336faa0\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f335170cfac5ff5a0519bf9aaafaa7d0d5e9e15bfcdc1999c634280da38950bd\"" Sep 6 00:31:44.199972 env[1214]: time="2025-09-06T00:31:44.199797102Z" level=info msg="StartContainer for \"f335170cfac5ff5a0519bf9aaafaa7d0d5e9e15bfcdc1999c634280da38950bd\"" Sep 6 00:31:44.226954 systemd[1]: Started cri-containerd-f335170cfac5ff5a0519bf9aaafaa7d0d5e9e15bfcdc1999c634280da38950bd.scope. Sep 6 00:31:44.256681 systemd[1]: cri-containerd-f335170cfac5ff5a0519bf9aaafaa7d0d5e9e15bfcdc1999c634280da38950bd.scope: Deactivated successfully. 
Sep 6 00:31:44.275845 env[1214]: time="2025-09-06T00:31:44.274958580Z" level=info msg="shim disconnected" id=f335170cfac5ff5a0519bf9aaafaa7d0d5e9e15bfcdc1999c634280da38950bd Sep 6 00:31:44.275845 env[1214]: time="2025-09-06T00:31:44.275037659Z" level=warning msg="cleaning up after shim disconnected" id=f335170cfac5ff5a0519bf9aaafaa7d0d5e9e15bfcdc1999c634280da38950bd namespace=k8s.io Sep 6 00:31:44.275845 env[1214]: time="2025-09-06T00:31:44.275059077Z" level=info msg="cleaning up dead shim" Sep 6 00:31:44.293109 env[1214]: time="2025-09-06T00:31:44.292931061Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:31:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3844 runtime=io.containerd.runc.v2\ntime=\"2025-09-06T00:31:44Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/f335170cfac5ff5a0519bf9aaafaa7d0d5e9e15bfcdc1999c634280da38950bd/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Sep 6 00:31:44.293513 env[1214]: time="2025-09-06T00:31:44.293338854Z" level=error msg="copy shim log" error="read /proc/self/fd/39: file already closed" Sep 6 00:31:44.295853 env[1214]: time="2025-09-06T00:31:44.295775736Z" level=error msg="Failed to pipe stderr of container \"f335170cfac5ff5a0519bf9aaafaa7d0d5e9e15bfcdc1999c634280da38950bd\"" error="reading from a closed fifo" Sep 6 00:31:44.296032 env[1214]: time="2025-09-06T00:31:44.295887949Z" level=error msg="Failed to pipe stdout of container \"f335170cfac5ff5a0519bf9aaafaa7d0d5e9e15bfcdc1999c634280da38950bd\"" error="reading from a closed fifo" Sep 6 00:31:44.298297 env[1214]: time="2025-09-06T00:31:44.298165127Z" level=error msg="StartContainer for \"f335170cfac5ff5a0519bf9aaafaa7d0d5e9e15bfcdc1999c634280da38950bd\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write 
/proc/self/attr/keycreate: invalid argument: unknown" Sep 6 00:31:44.298585 kubelet[2012]: E0906 00:31:44.298484 2012 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="f335170cfac5ff5a0519bf9aaafaa7d0d5e9e15bfcdc1999c634280da38950bd" Sep 6 00:31:44.298817 kubelet[2012]: E0906 00:31:44.298778 2012 kuberuntime_manager.go:1358] "Unhandled Error" err=< Sep 6 00:31:44.298817 kubelet[2012]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Sep 6 00:31:44.298817 kubelet[2012]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Sep 6 00:31:44.298817 kubelet[2012]: rm /hostbin/cilium-mount Sep 6 00:31:44.300913 kubelet[2012]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7n6pt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-pr7gc_kube-system(fbf790c4-f607-4718-8839-391655158816): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Sep 6 00:31:44.300913 kubelet[2012]: > logger="UnhandledError" Sep 6 00:31:44.301395 kubelet[2012]: E0906 00:31:44.301311 2012 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-pr7gc" podUID="fbf790c4-f607-4718-8839-391655158816" Sep 6 00:31:44.381111 kubelet[2012]: I0906 00:31:44.381047 2012 setters.go:618] "Node became not ready" node="ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-06T00:31:44Z","lastTransitionTime":"2025-09-06T00:31:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 6 00:31:44.462264 sshd[3769]: pam_unix(sshd:session): session closed for user core Sep 6 00:31:44.469590 systemd-logind[1223]: Session 24 logged out. Waiting for processes to exit. Sep 6 00:31:44.471002 systemd[1]: sshd@24-10.128.0.49:22-139.178.89.65:34946.service: Deactivated successfully. Sep 6 00:31:44.472289 systemd[1]: session-24.scope: Deactivated successfully. Sep 6 00:31:44.474625 systemd-logind[1223]: Removed session 24. Sep 6 00:31:44.509955 systemd[1]: Started sshd@25-10.128.0.49:22-139.178.89.65:34954.service. 
Sep 6 00:31:44.809537 sshd[3865]: Accepted publickey for core from 139.178.89.65 port 34954 ssh2: RSA SHA256:O4b1lx+UphQ1XQCPwsrjL8IoqrnWSgynNYcpg4eKVRo Sep 6 00:31:44.812326 sshd[3865]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:31:44.818813 env[1214]: time="2025-09-06T00:31:44.818544622Z" level=info msg="StopPodSandbox for \"0a327707872479dbf5e7442336b5864282103462429fa0f3bc039edfb336faa0\"" Sep 6 00:31:44.818813 env[1214]: time="2025-09-06T00:31:44.818632211Z" level=info msg="Container to stop \"f335170cfac5ff5a0519bf9aaafaa7d0d5e9e15bfcdc1999c634280da38950bd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 00:31:44.826525 systemd[1]: Started session-25.scope. Sep 6 00:31:44.837862 systemd-logind[1223]: New session 25 of user core. Sep 6 00:31:44.843614 systemd[1]: cri-containerd-0a327707872479dbf5e7442336b5864282103462429fa0f3bc039edfb336faa0.scope: Deactivated successfully. Sep 6 00:31:44.898906 env[1214]: time="2025-09-06T00:31:44.898831308Z" level=info msg="shim disconnected" id=0a327707872479dbf5e7442336b5864282103462429fa0f3bc039edfb336faa0 Sep 6 00:31:44.898906 env[1214]: time="2025-09-06T00:31:44.898890276Z" level=warning msg="cleaning up after shim disconnected" id=0a327707872479dbf5e7442336b5864282103462429fa0f3bc039edfb336faa0 namespace=k8s.io Sep 6 00:31:44.898906 env[1214]: time="2025-09-06T00:31:44.898911668Z" level=info msg="cleaning up dead shim" Sep 6 00:31:44.912526 env[1214]: time="2025-09-06T00:31:44.912463291Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:31:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3887 runtime=io.containerd.runc.v2\n" Sep 6 00:31:44.913114 env[1214]: time="2025-09-06T00:31:44.913064488Z" level=info msg="TearDown network for sandbox \"0a327707872479dbf5e7442336b5864282103462429fa0f3bc039edfb336faa0\" successfully" Sep 6 00:31:44.913266 env[1214]: time="2025-09-06T00:31:44.913113427Z" level=info msg="StopPodSandbox for 
\"0a327707872479dbf5e7442336b5864282103462429fa0f3bc039edfb336faa0\" returns successfully" Sep 6 00:31:44.992527 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0a327707872479dbf5e7442336b5864282103462429fa0f3bc039edfb336faa0-shm.mount: Deactivated successfully. Sep 6 00:31:45.077268 kubelet[2012]: I0906 00:31:45.077128 2012 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fbf790c4-f607-4718-8839-391655158816-hubble-tls\") pod \"fbf790c4-f607-4718-8839-391655158816\" (UID: \"fbf790c4-f607-4718-8839-391655158816\") " Sep 6 00:31:45.077268 kubelet[2012]: I0906 00:31:45.077208 2012 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fbf790c4-f607-4718-8839-391655158816-host-proc-sys-kernel\") pod \"fbf790c4-f607-4718-8839-391655158816\" (UID: \"fbf790c4-f607-4718-8839-391655158816\") " Sep 6 00:31:45.077268 kubelet[2012]: I0906 00:31:45.077252 2012 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fbf790c4-f607-4718-8839-391655158816-xtables-lock\") pod \"fbf790c4-f607-4718-8839-391655158816\" (UID: \"fbf790c4-f607-4718-8839-391655158816\") " Sep 6 00:31:45.078110 kubelet[2012]: I0906 00:31:45.077281 2012 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fbf790c4-f607-4718-8839-391655158816-cilium-run\") pod \"fbf790c4-f607-4718-8839-391655158816\" (UID: \"fbf790c4-f607-4718-8839-391655158816\") " Sep 6 00:31:45.078110 kubelet[2012]: I0906 00:31:45.077308 2012 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fbf790c4-f607-4718-8839-391655158816-hostproc\") pod \"fbf790c4-f607-4718-8839-391655158816\" (UID: 
\"fbf790c4-f607-4718-8839-391655158816\") " Sep 6 00:31:45.078110 kubelet[2012]: I0906 00:31:45.077344 2012 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fbf790c4-f607-4718-8839-391655158816-cilium-config-path\") pod \"fbf790c4-f607-4718-8839-391655158816\" (UID: \"fbf790c4-f607-4718-8839-391655158816\") " Sep 6 00:31:45.078110 kubelet[2012]: I0906 00:31:45.077377 2012 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fbf790c4-f607-4718-8839-391655158816-cni-path\") pod \"fbf790c4-f607-4718-8839-391655158816\" (UID: \"fbf790c4-f607-4718-8839-391655158816\") " Sep 6 00:31:45.078110 kubelet[2012]: I0906 00:31:45.077411 2012 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7n6pt\" (UniqueName: \"kubernetes.io/projected/fbf790c4-f607-4718-8839-391655158816-kube-api-access-7n6pt\") pod \"fbf790c4-f607-4718-8839-391655158816\" (UID: \"fbf790c4-f607-4718-8839-391655158816\") " Sep 6 00:31:45.078110 kubelet[2012]: I0906 00:31:45.077458 2012 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fbf790c4-f607-4718-8839-391655158816-etc-cni-netd\") pod \"fbf790c4-f607-4718-8839-391655158816\" (UID: \"fbf790c4-f607-4718-8839-391655158816\") " Sep 6 00:31:45.078110 kubelet[2012]: I0906 00:31:45.077495 2012 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/fbf790c4-f607-4718-8839-391655158816-cilium-ipsec-secrets\") pod \"fbf790c4-f607-4718-8839-391655158816\" (UID: \"fbf790c4-f607-4718-8839-391655158816\") " Sep 6 00:31:45.078110 kubelet[2012]: I0906 00:31:45.077526 2012 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/fbf790c4-f607-4718-8839-391655158816-lib-modules\") pod \"fbf790c4-f607-4718-8839-391655158816\" (UID: \"fbf790c4-f607-4718-8839-391655158816\") " Sep 6 00:31:45.078110 kubelet[2012]: I0906 00:31:45.077573 2012 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fbf790c4-f607-4718-8839-391655158816-clustermesh-secrets\") pod \"fbf790c4-f607-4718-8839-391655158816\" (UID: \"fbf790c4-f607-4718-8839-391655158816\") " Sep 6 00:31:45.078110 kubelet[2012]: I0906 00:31:45.077605 2012 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fbf790c4-f607-4718-8839-391655158816-host-proc-sys-net\") pod \"fbf790c4-f607-4718-8839-391655158816\" (UID: \"fbf790c4-f607-4718-8839-391655158816\") " Sep 6 00:31:45.078110 kubelet[2012]: I0906 00:31:45.077632 2012 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fbf790c4-f607-4718-8839-391655158816-cilium-cgroup\") pod \"fbf790c4-f607-4718-8839-391655158816\" (UID: \"fbf790c4-f607-4718-8839-391655158816\") " Sep 6 00:31:45.078110 kubelet[2012]: I0906 00:31:45.077670 2012 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fbf790c4-f607-4718-8839-391655158816-bpf-maps\") pod \"fbf790c4-f607-4718-8839-391655158816\" (UID: \"fbf790c4-f607-4718-8839-391655158816\") " Sep 6 00:31:45.078110 kubelet[2012]: I0906 00:31:45.077804 2012 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fbf790c4-f607-4718-8839-391655158816-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "fbf790c4-f607-4718-8839-391655158816" (UID: "fbf790c4-f607-4718-8839-391655158816"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:31:45.078110 kubelet[2012]: I0906 00:31:45.077853 2012 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fbf790c4-f607-4718-8839-391655158816-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "fbf790c4-f607-4718-8839-391655158816" (UID: "fbf790c4-f607-4718-8839-391655158816"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:31:45.078110 kubelet[2012]: I0906 00:31:45.077882 2012 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fbf790c4-f607-4718-8839-391655158816-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "fbf790c4-f607-4718-8839-391655158816" (UID: "fbf790c4-f607-4718-8839-391655158816"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:31:45.079298 kubelet[2012]: I0906 00:31:45.077906 2012 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fbf790c4-f607-4718-8839-391655158816-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "fbf790c4-f607-4718-8839-391655158816" (UID: "fbf790c4-f607-4718-8839-391655158816"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:31:45.079298 kubelet[2012]: I0906 00:31:45.077932 2012 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fbf790c4-f607-4718-8839-391655158816-hostproc" (OuterVolumeSpecName: "hostproc") pod "fbf790c4-f607-4718-8839-391655158816" (UID: "fbf790c4-f607-4718-8839-391655158816"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:31:45.083743 kubelet[2012]: I0906 00:31:45.082822 2012 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fbf790c4-f607-4718-8839-391655158816-cni-path" (OuterVolumeSpecName: "cni-path") pod "fbf790c4-f607-4718-8839-391655158816" (UID: "fbf790c4-f607-4718-8839-391655158816"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:31:45.085713 kubelet[2012]: I0906 00:31:45.085655 2012 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fbf790c4-f607-4718-8839-391655158816-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "fbf790c4-f607-4718-8839-391655158816" (UID: "fbf790c4-f607-4718-8839-391655158816"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 6 00:31:45.086005 kubelet[2012]: I0906 00:31:45.085951 2012 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fbf790c4-f607-4718-8839-391655158816-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "fbf790c4-f607-4718-8839-391655158816" (UID: "fbf790c4-f607-4718-8839-391655158816"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:31:45.086212 kubelet[2012]: I0906 00:31:45.086172 2012 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fbf790c4-f607-4718-8839-391655158816-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "fbf790c4-f607-4718-8839-391655158816" (UID: "fbf790c4-f607-4718-8839-391655158816"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:31:45.090548 systemd[1]: var-lib-kubelet-pods-fbf790c4\x2df607\x2d4718\x2d8839\x2d391655158816-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Sep 6 00:31:45.097156 kubelet[2012]: I0906 00:31:45.097099 2012 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fbf790c4-f607-4718-8839-391655158816-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "fbf790c4-f607-4718-8839-391655158816" (UID: "fbf790c4-f607-4718-8839-391655158816"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:31:45.097356 kubelet[2012]: I0906 00:31:45.097181 2012 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fbf790c4-f607-4718-8839-391655158816-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "fbf790c4-f607-4718-8839-391655158816" (UID: "fbf790c4-f607-4718-8839-391655158816"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:31:45.097734 kubelet[2012]: I0906 00:31:45.097658 2012 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbf790c4-f607-4718-8839-391655158816-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "fbf790c4-f607-4718-8839-391655158816" (UID: "fbf790c4-f607-4718-8839-391655158816"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 6 00:31:45.103543 systemd[1]: var-lib-kubelet-pods-fbf790c4\x2df607\x2d4718\x2d8839\x2d391655158816-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7n6pt.mount: Deactivated successfully. Sep 6 00:31:45.113310 kubelet[2012]: I0906 00:31:45.112034 2012 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbf790c4-f607-4718-8839-391655158816-kube-api-access-7n6pt" (OuterVolumeSpecName: "kube-api-access-7n6pt") pod "fbf790c4-f607-4718-8839-391655158816" (UID: "fbf790c4-f607-4718-8839-391655158816"). InnerVolumeSpecName "kube-api-access-7n6pt". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 6 00:31:45.115266 systemd[1]: var-lib-kubelet-pods-fbf790c4\x2df607\x2d4718\x2d8839\x2d391655158816-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 6 00:31:45.116544 kubelet[2012]: I0906 00:31:45.116495 2012 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbf790c4-f607-4718-8839-391655158816-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "fbf790c4-f607-4718-8839-391655158816" (UID: "fbf790c4-f607-4718-8839-391655158816"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 6 00:31:45.124081 kubelet[2012]: I0906 00:31:45.123971 2012 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbf790c4-f607-4718-8839-391655158816-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "fbf790c4-f607-4718-8839-391655158816" (UID: "fbf790c4-f607-4718-8839-391655158816"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 6 00:31:45.124638 systemd[1]: var-lib-kubelet-pods-fbf790c4\x2df607\x2d4718\x2d8839\x2d391655158816-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. 
Sep 6 00:31:45.178725 kubelet[2012]: I0906 00:31:45.178640 2012 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/fbf790c4-f607-4718-8839-391655158816-cilium-ipsec-secrets\") on node \"ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081\" DevicePath \"\"" Sep 6 00:31:45.178725 kubelet[2012]: I0906 00:31:45.178726 2012 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fbf790c4-f607-4718-8839-391655158816-lib-modules\") on node \"ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081\" DevicePath \"\"" Sep 6 00:31:45.179040 kubelet[2012]: I0906 00:31:45.178758 2012 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fbf790c4-f607-4718-8839-391655158816-clustermesh-secrets\") on node \"ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081\" DevicePath \"\"" Sep 6 00:31:45.179040 kubelet[2012]: I0906 00:31:45.178777 2012 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fbf790c4-f607-4718-8839-391655158816-host-proc-sys-net\") on node \"ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081\" DevicePath \"\"" Sep 6 00:31:45.179040 kubelet[2012]: I0906 00:31:45.178793 2012 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fbf790c4-f607-4718-8839-391655158816-cilium-cgroup\") on node \"ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081\" DevicePath \"\"" Sep 6 00:31:45.179040 kubelet[2012]: I0906 00:31:45.178811 2012 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fbf790c4-f607-4718-8839-391655158816-bpf-maps\") on node \"ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081\" DevicePath \"\"" Sep 6 00:31:45.179040 kubelet[2012]: I0906 00:31:45.178830 2012 reconciler_common.go:299] "Volume detached 
for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fbf790c4-f607-4718-8839-391655158816-hubble-tls\") on node \"ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081\" DevicePath \"\"" Sep 6 00:31:45.179040 kubelet[2012]: I0906 00:31:45.178848 2012 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fbf790c4-f607-4718-8839-391655158816-host-proc-sys-kernel\") on node \"ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081\" DevicePath \"\"" Sep 6 00:31:45.179040 kubelet[2012]: I0906 00:31:45.178865 2012 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fbf790c4-f607-4718-8839-391655158816-xtables-lock\") on node \"ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081\" DevicePath \"\"" Sep 6 00:31:45.179040 kubelet[2012]: I0906 00:31:45.178889 2012 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fbf790c4-f607-4718-8839-391655158816-cilium-run\") on node \"ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081\" DevicePath \"\"" Sep 6 00:31:45.179040 kubelet[2012]: I0906 00:31:45.178905 2012 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fbf790c4-f607-4718-8839-391655158816-hostproc\") on node \"ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081\" DevicePath \"\"" Sep 6 00:31:45.179040 kubelet[2012]: I0906 00:31:45.178921 2012 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fbf790c4-f607-4718-8839-391655158816-cilium-config-path\") on node \"ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081\" DevicePath \"\"" Sep 6 00:31:45.179040 kubelet[2012]: I0906 00:31:45.178938 2012 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fbf790c4-f607-4718-8839-391655158816-cni-path\") on node 
\"ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081\" DevicePath \"\"" Sep 6 00:31:45.179040 kubelet[2012]: I0906 00:31:45.178953 2012 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7n6pt\" (UniqueName: \"kubernetes.io/projected/fbf790c4-f607-4718-8839-391655158816-kube-api-access-7n6pt\") on node \"ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081\" DevicePath \"\"" Sep 6 00:31:45.179040 kubelet[2012]: I0906 00:31:45.178971 2012 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fbf790c4-f607-4718-8839-391655158816-etc-cni-netd\") on node \"ci-3510-3-8-nightly-20250905-2100-5b3338b7aa9a30479081\" DevicePath \"\"" Sep 6 00:31:45.432276 systemd[1]: Removed slice kubepods-burstable-podfbf790c4_f607_4718_8839_391655158816.slice. Sep 6 00:31:45.821846 kubelet[2012]: I0906 00:31:45.821667 2012 scope.go:117] "RemoveContainer" containerID="f335170cfac5ff5a0519bf9aaafaa7d0d5e9e15bfcdc1999c634280da38950bd" Sep 6 00:31:45.825873 env[1214]: time="2025-09-06T00:31:45.824743155Z" level=info msg="RemoveContainer for \"f335170cfac5ff5a0519bf9aaafaa7d0d5e9e15bfcdc1999c634280da38950bd\"" Sep 6 00:31:45.830813 env[1214]: time="2025-09-06T00:31:45.830749880Z" level=info msg="RemoveContainer for \"f335170cfac5ff5a0519bf9aaafaa7d0d5e9e15bfcdc1999c634280da38950bd\" returns successfully" Sep 6 00:31:45.906763 systemd[1]: Created slice kubepods-burstable-pod804025ef_20d7_43da_9032_562fac579167.slice. 
Sep 6 00:31:45.986902 kubelet[2012]: I0906 00:31:45.986830 2012 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/804025ef-20d7-43da-9032-562fac579167-etc-cni-netd\") pod \"cilium-2wmgg\" (UID: \"804025ef-20d7-43da-9032-562fac579167\") " pod="kube-system/cilium-2wmgg" Sep 6 00:31:45.986902 kubelet[2012]: I0906 00:31:45.986906 2012 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/804025ef-20d7-43da-9032-562fac579167-clustermesh-secrets\") pod \"cilium-2wmgg\" (UID: \"804025ef-20d7-43da-9032-562fac579167\") " pod="kube-system/cilium-2wmgg" Sep 6 00:31:45.987197 kubelet[2012]: I0906 00:31:45.986944 2012 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/804025ef-20d7-43da-9032-562fac579167-cilium-run\") pod \"cilium-2wmgg\" (UID: \"804025ef-20d7-43da-9032-562fac579167\") " pod="kube-system/cilium-2wmgg" Sep 6 00:31:45.987197 kubelet[2012]: I0906 00:31:45.986975 2012 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/804025ef-20d7-43da-9032-562fac579167-host-proc-sys-kernel\") pod \"cilium-2wmgg\" (UID: \"804025ef-20d7-43da-9032-562fac579167\") " pod="kube-system/cilium-2wmgg" Sep 6 00:31:45.987197 kubelet[2012]: I0906 00:31:45.987009 2012 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/804025ef-20d7-43da-9032-562fac579167-hostproc\") pod \"cilium-2wmgg\" (UID: \"804025ef-20d7-43da-9032-562fac579167\") " pod="kube-system/cilium-2wmgg" Sep 6 00:31:45.987197 kubelet[2012]: I0906 00:31:45.987045 2012 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/804025ef-20d7-43da-9032-562fac579167-cilium-config-path\") pod \"cilium-2wmgg\" (UID: \"804025ef-20d7-43da-9032-562fac579167\") " pod="kube-system/cilium-2wmgg" Sep 6 00:31:45.987197 kubelet[2012]: I0906 00:31:45.987076 2012 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/804025ef-20d7-43da-9032-562fac579167-cilium-cgroup\") pod \"cilium-2wmgg\" (UID: \"804025ef-20d7-43da-9032-562fac579167\") " pod="kube-system/cilium-2wmgg" Sep 6 00:31:45.987197 kubelet[2012]: I0906 00:31:45.987103 2012 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/804025ef-20d7-43da-9032-562fac579167-cni-path\") pod \"cilium-2wmgg\" (UID: \"804025ef-20d7-43da-9032-562fac579167\") " pod="kube-system/cilium-2wmgg" Sep 6 00:31:45.987197 kubelet[2012]: I0906 00:31:45.987130 2012 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/804025ef-20d7-43da-9032-562fac579167-xtables-lock\") pod \"cilium-2wmgg\" (UID: \"804025ef-20d7-43da-9032-562fac579167\") " pod="kube-system/cilium-2wmgg" Sep 6 00:31:45.987197 kubelet[2012]: I0906 00:31:45.987163 2012 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/804025ef-20d7-43da-9032-562fac579167-bpf-maps\") pod \"cilium-2wmgg\" (UID: \"804025ef-20d7-43da-9032-562fac579167\") " pod="kube-system/cilium-2wmgg" Sep 6 00:31:45.987197 kubelet[2012]: I0906 00:31:45.987192 2012 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqf2q\" (UniqueName: 
\"kubernetes.io/projected/804025ef-20d7-43da-9032-562fac579167-kube-api-access-rqf2q\") pod \"cilium-2wmgg\" (UID: \"804025ef-20d7-43da-9032-562fac579167\") " pod="kube-system/cilium-2wmgg" Sep 6 00:31:45.987750 kubelet[2012]: I0906 00:31:45.987225 2012 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/804025ef-20d7-43da-9032-562fac579167-lib-modules\") pod \"cilium-2wmgg\" (UID: \"804025ef-20d7-43da-9032-562fac579167\") " pod="kube-system/cilium-2wmgg" Sep 6 00:31:45.987750 kubelet[2012]: I0906 00:31:45.987260 2012 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/804025ef-20d7-43da-9032-562fac579167-cilium-ipsec-secrets\") pod \"cilium-2wmgg\" (UID: \"804025ef-20d7-43da-9032-562fac579167\") " pod="kube-system/cilium-2wmgg" Sep 6 00:31:45.987750 kubelet[2012]: I0906 00:31:45.987288 2012 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/804025ef-20d7-43da-9032-562fac579167-host-proc-sys-net\") pod \"cilium-2wmgg\" (UID: \"804025ef-20d7-43da-9032-562fac579167\") " pod="kube-system/cilium-2wmgg" Sep 6 00:31:45.987750 kubelet[2012]: I0906 00:31:45.987319 2012 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/804025ef-20d7-43da-9032-562fac579167-hubble-tls\") pod \"cilium-2wmgg\" (UID: \"804025ef-20d7-43da-9032-562fac579167\") " pod="kube-system/cilium-2wmgg" Sep 6 00:31:46.213187 env[1214]: time="2025-09-06T00:31:46.213116804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2wmgg,Uid:804025ef-20d7-43da-9032-562fac579167,Namespace:kube-system,Attempt:0,}" Sep 6 00:31:46.237713 env[1214]: time="2025-09-06T00:31:46.237595614Z" level=info 
msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:31:46.237973 env[1214]: time="2025-09-06T00:31:46.237919290Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:31:46.238167 env[1214]: time="2025-09-06T00:31:46.238127374Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:31:46.238580 env[1214]: time="2025-09-06T00:31:46.238523790Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6742fb6ab38cb26f36feedc1f15f6d4632128e72031086c7c166cb76dad16c13 pid=3921 runtime=io.containerd.runc.v2 Sep 6 00:31:46.263735 systemd[1]: Started cri-containerd-6742fb6ab38cb26f36feedc1f15f6d4632128e72031086c7c166cb76dad16c13.scope. Sep 6 00:31:46.306154 env[1214]: time="2025-09-06T00:31:46.305538731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2wmgg,Uid:804025ef-20d7-43da-9032-562fac579167,Namespace:kube-system,Attempt:0,} returns sandbox id \"6742fb6ab38cb26f36feedc1f15f6d4632128e72031086c7c166cb76dad16c13\"" Sep 6 00:31:46.315300 env[1214]: time="2025-09-06T00:31:46.313545803Z" level=info msg="CreateContainer within sandbox \"6742fb6ab38cb26f36feedc1f15f6d4632128e72031086c7c166cb76dad16c13\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 6 00:31:46.330593 env[1214]: time="2025-09-06T00:31:46.330511235Z" level=info msg="CreateContainer within sandbox \"6742fb6ab38cb26f36feedc1f15f6d4632128e72031086c7c166cb76dad16c13\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"bd9197ce563fd5d1465bef74aee205cc8b7c9a893b4c7bfe273f198089ca0589\"" Sep 6 00:31:46.333740 env[1214]: time="2025-09-06T00:31:46.331553242Z" level=info msg="StartContainer for 
\"bd9197ce563fd5d1465bef74aee205cc8b7c9a893b4c7bfe273f198089ca0589\"" Sep 6 00:31:46.358089 systemd[1]: Started cri-containerd-bd9197ce563fd5d1465bef74aee205cc8b7c9a893b4c7bfe273f198089ca0589.scope. Sep 6 00:31:46.408725 env[1214]: time="2025-09-06T00:31:46.408354150Z" level=info msg="StartContainer for \"bd9197ce563fd5d1465bef74aee205cc8b7c9a893b4c7bfe273f198089ca0589\" returns successfully" Sep 6 00:31:46.425808 systemd[1]: cri-containerd-bd9197ce563fd5d1465bef74aee205cc8b7c9a893b4c7bfe273f198089ca0589.scope: Deactivated successfully. Sep 6 00:31:46.467213 env[1214]: time="2025-09-06T00:31:46.467056682Z" level=info msg="shim disconnected" id=bd9197ce563fd5d1465bef74aee205cc8b7c9a893b4c7bfe273f198089ca0589 Sep 6 00:31:46.467213 env[1214]: time="2025-09-06T00:31:46.467133612Z" level=warning msg="cleaning up after shim disconnected" id=bd9197ce563fd5d1465bef74aee205cc8b7c9a893b4c7bfe273f198089ca0589 namespace=k8s.io Sep 6 00:31:46.467213 env[1214]: time="2025-09-06T00:31:46.467152984Z" level=info msg="cleaning up dead shim" Sep 6 00:31:46.488539 env[1214]: time="2025-09-06T00:31:46.488466110Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:31:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4002 runtime=io.containerd.runc.v2\n" Sep 6 00:31:46.535057 kubelet[2012]: E0906 00:31:46.534955 2012 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 6 00:31:46.846680 env[1214]: time="2025-09-06T00:31:46.846379931Z" level=info msg="CreateContainer within sandbox \"6742fb6ab38cb26f36feedc1f15f6d4632128e72031086c7c166cb76dad16c13\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 6 00:31:46.870831 env[1214]: time="2025-09-06T00:31:46.869390827Z" level=info msg="CreateContainer within sandbox \"6742fb6ab38cb26f36feedc1f15f6d4632128e72031086c7c166cb76dad16c13\" for 
&ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"eecc159dab95af3ade3441833568aa72328bd6dd4f593b283ba39d1f8c0e71b0\"" Sep 6 00:31:46.871611 env[1214]: time="2025-09-06T00:31:46.871557890Z" level=info msg="StartContainer for \"eecc159dab95af3ade3441833568aa72328bd6dd4f593b283ba39d1f8c0e71b0\"" Sep 6 00:31:46.902184 systemd[1]: Started cri-containerd-eecc159dab95af3ade3441833568aa72328bd6dd4f593b283ba39d1f8c0e71b0.scope. Sep 6 00:31:46.959909 env[1214]: time="2025-09-06T00:31:46.959792615Z" level=info msg="StartContainer for \"eecc159dab95af3ade3441833568aa72328bd6dd4f593b283ba39d1f8c0e71b0\" returns successfully" Sep 6 00:31:46.970387 systemd[1]: cri-containerd-eecc159dab95af3ade3441833568aa72328bd6dd4f593b283ba39d1f8c0e71b0.scope: Deactivated successfully. Sep 6 00:31:47.008570 env[1214]: time="2025-09-06T00:31:47.008498756Z" level=info msg="shim disconnected" id=eecc159dab95af3ade3441833568aa72328bd6dd4f593b283ba39d1f8c0e71b0 Sep 6 00:31:47.008570 env[1214]: time="2025-09-06T00:31:47.008569247Z" level=warning msg="cleaning up after shim disconnected" id=eecc159dab95af3ade3441833568aa72328bd6dd4f593b283ba39d1f8c0e71b0 namespace=k8s.io Sep 6 00:31:47.008570 env[1214]: time="2025-09-06T00:31:47.008586472Z" level=info msg="cleaning up dead shim" Sep 6 00:31:47.023250 env[1214]: time="2025-09-06T00:31:47.023162697Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:31:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4065 runtime=io.containerd.runc.v2\n" Sep 6 00:31:47.380125 kubelet[2012]: W0906 00:31:47.380026 2012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfbf790c4_f607_4718_8839_391655158816.slice/cri-containerd-f335170cfac5ff5a0519bf9aaafaa7d0d5e9e15bfcdc1999c634280da38950bd.scope WatchSource:0}: container "f335170cfac5ff5a0519bf9aaafaa7d0d5e9e15bfcdc1999c634280da38950bd" in namespace "k8s.io": not found Sep 6 
00:31:47.424411 kubelet[2012]: I0906 00:31:47.424349 2012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fbf790c4-f607-4718-8839-391655158816" path="/var/lib/kubelet/pods/fbf790c4-f607-4718-8839-391655158816/volumes" Sep 6 00:31:47.844908 env[1214]: time="2025-09-06T00:31:47.844837719Z" level=info msg="CreateContainer within sandbox \"6742fb6ab38cb26f36feedc1f15f6d4632128e72031086c7c166cb76dad16c13\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 6 00:31:47.885843 env[1214]: time="2025-09-06T00:31:47.885759078Z" level=info msg="CreateContainer within sandbox \"6742fb6ab38cb26f36feedc1f15f6d4632128e72031086c7c166cb76dad16c13\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"74c63aff5888f467532eb01f162cd16bdc0e98f7226b3fa17082dafcaba10f89\"" Sep 6 00:31:47.889856 env[1214]: time="2025-09-06T00:31:47.889685114Z" level=info msg="StartContainer for \"74c63aff5888f467532eb01f162cd16bdc0e98f7226b3fa17082dafcaba10f89\"" Sep 6 00:31:47.939535 systemd[1]: Started cri-containerd-74c63aff5888f467532eb01f162cd16bdc0e98f7226b3fa17082dafcaba10f89.scope. Sep 6 00:31:47.990256 env[1214]: time="2025-09-06T00:31:47.990178564Z" level=info msg="StartContainer for \"74c63aff5888f467532eb01f162cd16bdc0e98f7226b3fa17082dafcaba10f89\" returns successfully" Sep 6 00:31:47.998395 systemd[1]: cri-containerd-74c63aff5888f467532eb01f162cd16bdc0e98f7226b3fa17082dafcaba10f89.scope: Deactivated successfully. 
Sep 6 00:31:48.038167 env[1214]: time="2025-09-06T00:31:48.038076370Z" level=info msg="shim disconnected" id=74c63aff5888f467532eb01f162cd16bdc0e98f7226b3fa17082dafcaba10f89 Sep 6 00:31:48.038167 env[1214]: time="2025-09-06T00:31:48.038146381Z" level=warning msg="cleaning up after shim disconnected" id=74c63aff5888f467532eb01f162cd16bdc0e98f7226b3fa17082dafcaba10f89 namespace=k8s.io Sep 6 00:31:48.038167 env[1214]: time="2025-09-06T00:31:48.038169840Z" level=info msg="cleaning up dead shim" Sep 6 00:31:48.052026 env[1214]: time="2025-09-06T00:31:48.051914320Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:31:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4124 runtime=io.containerd.runc.v2\n" Sep 6 00:31:48.109682 systemd[1]: run-containerd-runc-k8s.io-74c63aff5888f467532eb01f162cd16bdc0e98f7226b3fa17082dafcaba10f89-runc.R1Uduw.mount: Deactivated successfully. Sep 6 00:31:48.109885 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-74c63aff5888f467532eb01f162cd16bdc0e98f7226b3fa17082dafcaba10f89-rootfs.mount: Deactivated successfully. Sep 6 00:31:48.850175 env[1214]: time="2025-09-06T00:31:48.850080274Z" level=info msg="CreateContainer within sandbox \"6742fb6ab38cb26f36feedc1f15f6d4632128e72031086c7c166cb76dad16c13\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 6 00:31:48.878806 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1604176088.mount: Deactivated successfully. 
Sep 6 00:31:48.882452 env[1214]: time="2025-09-06T00:31:48.882365193Z" level=info msg="CreateContainer within sandbox \"6742fb6ab38cb26f36feedc1f15f6d4632128e72031086c7c166cb76dad16c13\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b36237b1f3d7128e419d0b50f8b7b11d783a4d70e22b3148f88d297cb735d93e\"" Sep 6 00:31:48.884403 env[1214]: time="2025-09-06T00:31:48.884356728Z" level=info msg="StartContainer for \"b36237b1f3d7128e419d0b50f8b7b11d783a4d70e22b3148f88d297cb735d93e\"" Sep 6 00:31:48.937600 systemd[1]: Started cri-containerd-b36237b1f3d7128e419d0b50f8b7b11d783a4d70e22b3148f88d297cb735d93e.scope. Sep 6 00:31:48.978302 systemd[1]: cri-containerd-b36237b1f3d7128e419d0b50f8b7b11d783a4d70e22b3148f88d297cb735d93e.scope: Deactivated successfully. Sep 6 00:31:48.982778 env[1214]: time="2025-09-06T00:31:48.982391782Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod804025ef_20d7_43da_9032_562fac579167.slice/cri-containerd-b36237b1f3d7128e419d0b50f8b7b11d783a4d70e22b3148f88d297cb735d93e.scope/memory.events\": no such file or directory" Sep 6 00:31:48.985960 env[1214]: time="2025-09-06T00:31:48.985906198Z" level=info msg="StartContainer for \"b36237b1f3d7128e419d0b50f8b7b11d783a4d70e22b3148f88d297cb735d93e\" returns successfully" Sep 6 00:31:49.022288 env[1214]: time="2025-09-06T00:31:49.022208635Z" level=info msg="shim disconnected" id=b36237b1f3d7128e419d0b50f8b7b11d783a4d70e22b3148f88d297cb735d93e Sep 6 00:31:49.022288 env[1214]: time="2025-09-06T00:31:49.022281208Z" level=warning msg="cleaning up after shim disconnected" id=b36237b1f3d7128e419d0b50f8b7b11d783a4d70e22b3148f88d297cb735d93e namespace=k8s.io Sep 6 00:31:49.022779 env[1214]: time="2025-09-06T00:31:49.022298982Z" level=info msg="cleaning up dead shim" Sep 6 00:31:49.036828 env[1214]: time="2025-09-06T00:31:49.036750939Z" level=warning 
msg="cleanup warnings time=\"2025-09-06T00:31:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4183 runtime=io.containerd.runc.v2\n" Sep 6 00:31:49.109784 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b36237b1f3d7128e419d0b50f8b7b11d783a4d70e22b3148f88d297cb735d93e-rootfs.mount: Deactivated successfully. Sep 6 00:31:49.196829 update_engine[1205]: I0906 00:31:49.196053 1205 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 6 00:31:49.196829 update_engine[1205]: I0906 00:31:49.196434 1205 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 6 00:31:49.196829 update_engine[1205]: I0906 00:31:49.196661 1205 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 6 00:31:49.204730 update_engine[1205]: E0906 00:31:49.204643 1205 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 6 00:31:49.204959 update_engine[1205]: I0906 00:31:49.204856 1205 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Sep 6 00:31:49.857163 env[1214]: time="2025-09-06T00:31:49.857094974Z" level=info msg="CreateContainer within sandbox \"6742fb6ab38cb26f36feedc1f15f6d4632128e72031086c7c166cb76dad16c13\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 6 00:31:49.896601 env[1214]: time="2025-09-06T00:31:49.896522904Z" level=info msg="CreateContainer within sandbox \"6742fb6ab38cb26f36feedc1f15f6d4632128e72031086c7c166cb76dad16c13\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f352879eebf778433d27a8fea695065f85e912087db38481a98afb30fe63396d\"" Sep 6 00:31:49.897735 env[1214]: time="2025-09-06T00:31:49.897668363Z" level=info msg="StartContainer for \"f352879eebf778433d27a8fea695065f85e912087db38481a98afb30fe63396d\"" Sep 6 00:31:49.943788 systemd[1]: Started cri-containerd-f352879eebf778433d27a8fea695065f85e912087db38481a98afb30fe63396d.scope. 
Sep 6 00:31:49.993787 env[1214]: time="2025-09-06T00:31:49.993046691Z" level=info msg="StartContainer for \"f352879eebf778433d27a8fea695065f85e912087db38481a98afb30fe63396d\" returns successfully"
Sep 6 00:31:50.493989 kubelet[2012]: W0906 00:31:50.493930 2012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod804025ef_20d7_43da_9032_562fac579167.slice/cri-containerd-bd9197ce563fd5d1465bef74aee205cc8b7c9a893b4c7bfe273f198089ca0589.scope WatchSource:0}: task bd9197ce563fd5d1465bef74aee205cc8b7c9a893b4c7bfe273f198089ca0589 not found
Sep 6 00:31:50.535768 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Sep 6 00:31:50.894178 kubelet[2012]: I0906 00:31:50.893984 2012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-2wmgg" podStartSLOduration=5.893962872 podStartE2EDuration="5.893962872s" podCreationTimestamp="2025-09-06 00:31:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:31:50.893466306 +0000 UTC m=+129.756228747" watchObservedRunningTime="2025-09-06 00:31:50.893962872 +0000 UTC m=+129.756725279"
Sep 6 00:31:51.309997 systemd[1]: run-containerd-runc-k8s.io-f352879eebf778433d27a8fea695065f85e912087db38481a98afb30fe63396d-runc.oK6Hwq.mount: Deactivated successfully.
Sep 6 00:31:53.516274 systemd[1]: run-containerd-runc-k8s.io-f352879eebf778433d27a8fea695065f85e912087db38481a98afb30fe63396d-runc.xeTBJD.mount: Deactivated successfully.
Sep 6 00:31:53.605735 kubelet[2012]: W0906 00:31:53.603491 2012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod804025ef_20d7_43da_9032_562fac579167.slice/cri-containerd-eecc159dab95af3ade3441833568aa72328bd6dd4f593b283ba39d1f8c0e71b0.scope WatchSource:0}: task eecc159dab95af3ade3441833568aa72328bd6dd4f593b283ba39d1f8c0e71b0 not found
Sep 6 00:31:53.936147 systemd-networkd[1019]: lxc_health: Link UP
Sep 6 00:31:53.967740 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Sep 6 00:31:53.968090 systemd-networkd[1019]: lxc_health: Gained carrier
Sep 6 00:31:55.582595 systemd-networkd[1019]: lxc_health: Gained IPv6LL
Sep 6 00:31:55.996463 systemd[1]: run-containerd-runc-k8s.io-f352879eebf778433d27a8fea695065f85e912087db38481a98afb30fe63396d-runc.nURszE.mount: Deactivated successfully.
Sep 6 00:31:56.719055 kubelet[2012]: W0906 00:31:56.718988 2012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod804025ef_20d7_43da_9032_562fac579167.slice/cri-containerd-74c63aff5888f467532eb01f162cd16bdc0e98f7226b3fa17082dafcaba10f89.scope WatchSource:0}: task 74c63aff5888f467532eb01f162cd16bdc0e98f7226b3fa17082dafcaba10f89 not found
Sep 6 00:31:58.306981 systemd[1]: run-containerd-runc-k8s.io-f352879eebf778433d27a8fea695065f85e912087db38481a98afb30fe63396d-runc.OjNY5M.mount: Deactivated successfully.
Sep 6 00:31:59.206038 update_engine[1205]: I0906 00:31:59.205243 1205 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Sep 6 00:31:59.206038 update_engine[1205]: I0906 00:31:59.205649 1205 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Sep 6 00:31:59.206038 update_engine[1205]: I0906 00:31:59.205976 1205 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Sep 6 00:31:59.217093 update_engine[1205]: E0906 00:31:59.216868 1205 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Sep 6 00:31:59.217093 update_engine[1205]: I0906 00:31:59.217043 1205 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Sep 6 00:31:59.839909 kubelet[2012]: W0906 00:31:59.839845 2012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod804025ef_20d7_43da_9032_562fac579167.slice/cri-containerd-b36237b1f3d7128e419d0b50f8b7b11d783a4d70e22b3148f88d297cb735d93e.scope WatchSource:0}: task b36237b1f3d7128e419d0b50f8b7b11d783a4d70e22b3148f88d297cb735d93e not found
Sep 6 00:32:00.696961 sshd[3865]: pam_unix(sshd:session): session closed for user core
Sep 6 00:32:00.703643 systemd[1]: sshd@25-10.128.0.49:22-139.178.89.65:34954.service: Deactivated successfully.
Sep 6 00:32:00.705070 systemd[1]: session-25.scope: Deactivated successfully.
Sep 6 00:32:00.707332 systemd-logind[1223]: Session 25 logged out. Waiting for processes to exit.
Sep 6 00:32:00.709000 systemd-logind[1223]: Removed session 25.