Sep 6 00:22:42.234579 kernel: Linux version 5.15.190-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Sep 5 22:53:38 -00 2025 Sep 6 00:22:42.234636 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=a807e3b6c1f608bcead7858f1ad5b6908e6d312e2d99c0ec0e5454f978e611a7 Sep 6 00:22:42.234653 kernel: BIOS-provided physical RAM map: Sep 6 00:22:42.234666 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved Sep 6 00:22:42.234679 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable Sep 6 00:22:42.234691 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved Sep 6 00:22:42.234711 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable Sep 6 00:22:42.234725 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved Sep 6 00:22:42.234739 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bd27afff] usable Sep 6 00:22:42.234817 kernel: BIOS-e820: [mem 0x00000000bd27b000-0x00000000bd284fff] ACPI data Sep 6 00:22:42.234831 kernel: BIOS-e820: [mem 0x00000000bd285000-0x00000000bf8ecfff] usable Sep 6 00:22:42.234843 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bfb6cfff] reserved Sep 6 00:22:42.234857 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data Sep 6 00:22:42.234869 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS Sep 6 00:22:42.234891 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable Sep 6 00:22:42.234907 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved Sep 6 00:22:42.234924 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable Sep 6 
00:22:42.234941 kernel: NX (Execute Disable) protection: active Sep 6 00:22:42.234957 kernel: efi: EFI v2.70 by EDK II Sep 6 00:22:42.234972 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000 RNG=0xbfb73018 TPMEventLog=0xbd27b018 Sep 6 00:22:42.234987 kernel: random: crng init done Sep 6 00:22:42.235000 kernel: SMBIOS 2.4 present. Sep 6 00:22:42.235018 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 08/14/2025 Sep 6 00:22:42.235032 kernel: Hypervisor detected: KVM Sep 6 00:22:42.235045 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Sep 6 00:22:42.235058 kernel: kvm-clock: cpu 0, msr 16b19f001, primary cpu clock Sep 6 00:22:42.235072 kernel: kvm-clock: using sched offset of 13844637531 cycles Sep 6 00:22:42.235088 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Sep 6 00:22:42.235103 kernel: tsc: Detected 2299.998 MHz processor Sep 6 00:22:42.235119 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Sep 6 00:22:42.235135 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Sep 6 00:22:42.235152 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000 Sep 6 00:22:42.235194 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Sep 6 00:22:42.235210 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000 Sep 6 00:22:42.235224 kernel: Using GB pages for direct mapping Sep 6 00:22:42.235249 kernel: Secure boot disabled Sep 6 00:22:42.235263 kernel: ACPI: Early table checksum verification disabled Sep 6 00:22:42.235277 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google) Sep 6 00:22:42.235291 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013) Sep 6 00:22:42.235308 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001) Sep 6 00:22:42.235333 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001) Sep 6 
00:22:42.235349 kernel: ACPI: FACS 0x00000000BFBF2000 000040 Sep 6 00:22:42.235363 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20250404) Sep 6 00:22:42.235378 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001) Sep 6 00:22:42.235394 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001) Sep 6 00:22:42.235410 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001) Sep 6 00:22:42.235431 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001) Sep 6 00:22:42.235447 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001) Sep 6 00:22:42.235460 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3] Sep 6 00:22:42.235476 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63] Sep 6 00:22:42.235493 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f] Sep 6 00:22:42.235510 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315] Sep 6 00:22:42.235527 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033] Sep 6 00:22:42.235545 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7] Sep 6 00:22:42.235562 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075] Sep 6 00:22:42.235583 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f] Sep 6 00:22:42.235600 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027] Sep 6 00:22:42.235616 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Sep 6 00:22:42.235631 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Sep 6 00:22:42.235646 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Sep 6 00:22:42.235661 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff] Sep 6 00:22:42.235678 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff] Sep 6 00:22:42.235696 kernel: NUMA: 
Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff] Sep 6 00:22:42.235715 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff] Sep 6 00:22:42.235737 kernel: NODE_DATA(0) allocated [mem 0x21fffa000-0x21fffffff] Sep 6 00:22:42.235768 kernel: Zone ranges: Sep 6 00:22:42.235785 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Sep 6 00:22:42.235802 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Sep 6 00:22:42.235819 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff] Sep 6 00:22:42.235835 kernel: Movable zone start for each node Sep 6 00:22:42.235852 kernel: Early memory node ranges Sep 6 00:22:42.235869 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff] Sep 6 00:22:42.235885 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff] Sep 6 00:22:42.235906 kernel: node 0: [mem 0x0000000000100000-0x00000000bd27afff] Sep 6 00:22:42.235923 kernel: node 0: [mem 0x00000000bd285000-0x00000000bf8ecfff] Sep 6 00:22:42.235939 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff] Sep 6 00:22:42.235956 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff] Sep 6 00:22:42.235973 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff] Sep 6 00:22:42.235990 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 6 00:22:42.236007 kernel: On node 0, zone DMA: 11 pages in unavailable ranges Sep 6 00:22:42.236022 kernel: On node 0, zone DMA: 104 pages in unavailable ranges Sep 6 00:22:42.236040 kernel: On node 0, zone DMA32: 10 pages in unavailable ranges Sep 6 00:22:42.236061 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Sep 6 00:22:42.236076 kernel: On node 0, zone Normal: 32 pages in unavailable ranges Sep 6 00:22:42.236093 kernel: ACPI: PM-Timer IO Port: 0xb008 Sep 6 00:22:42.236110 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Sep 6 00:22:42.236127 kernel: IOAPIC[0]: apic_id 0, 
version 17, address 0xfec00000, GSI 0-23 Sep 6 00:22:42.236144 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Sep 6 00:22:42.236177 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Sep 6 00:22:42.236208 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Sep 6 00:22:42.236225 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Sep 6 00:22:42.236247 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Sep 6 00:22:42.236263 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Sep 6 00:22:42.236280 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Sep 6 00:22:42.236296 kernel: Booting paravirtualized kernel on KVM Sep 6 00:22:42.236312 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Sep 6 00:22:42.236329 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1 Sep 6 00:22:42.236346 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576 Sep 6 00:22:42.236363 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152 Sep 6 00:22:42.236380 kernel: pcpu-alloc: [0] 0 1 Sep 6 00:22:42.236400 kernel: kvm-guest: PV spinlocks enabled Sep 6 00:22:42.236417 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Sep 6 00:22:42.236433 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 1932270 Sep 6 00:22:42.236450 kernel: Policy zone: Normal Sep 6 00:22:42.236469 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=a807e3b6c1f608bcead7858f1ad5b6908e6d312e2d99c0ec0e5454f978e611a7 Sep 6 00:22:42.236486 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 6 00:22:42.236503 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Sep 6 00:22:42.236519 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 6 00:22:42.236536 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 6 00:22:42.236558 kernel: Memory: 7515424K/7860544K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47492K init, 4088K bss, 344860K reserved, 0K cma-reserved) Sep 6 00:22:42.236575 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Sep 6 00:22:42.236591 kernel: Kernel/User page tables isolation: enabled Sep 6 00:22:42.236624 kernel: ftrace: allocating 34612 entries in 136 pages Sep 6 00:22:42.236641 kernel: ftrace: allocated 136 pages with 2 groups Sep 6 00:22:42.236657 kernel: rcu: Hierarchical RCU implementation. Sep 6 00:22:42.236675 kernel: rcu: RCU event tracing is enabled. Sep 6 00:22:42.236692 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Sep 6 00:22:42.236714 kernel: Rude variant of Tasks RCU enabled. Sep 6 00:22:42.236755 kernel: Tracing variant of Tasks RCU enabled. Sep 6 00:22:42.236774 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Sep 6 00:22:42.236795 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Sep 6 00:22:42.236813 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Sep 6 00:22:42.236830 kernel: Console: colour dummy device 80x25 Sep 6 00:22:42.236848 kernel: printk: console [ttyS0] enabled Sep 6 00:22:42.236865 kernel: ACPI: Core revision 20210730 Sep 6 00:22:42.236882 kernel: APIC: Switch to symmetric I/O mode setup Sep 6 00:22:42.236901 kernel: x2apic enabled Sep 6 00:22:42.236923 kernel: Switched APIC routing to physical x2apic. Sep 6 00:22:42.236940 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1 Sep 6 00:22:42.236958 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Sep 6 00:22:42.236975 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998) Sep 6 00:22:42.236992 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024 Sep 6 00:22:42.237010 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4 Sep 6 00:22:42.237028 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Sep 6 00:22:42.237049 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Sep 6 00:22:42.237067 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Sep 6 00:22:42.237084 kernel: Spectre V2 : Mitigation: IBRS Sep 6 00:22:42.237102 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Sep 6 00:22:42.237119 kernel: RETBleed: Mitigation: IBRS Sep 6 00:22:42.237137 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Sep 6 00:22:42.237154 kernel: Spectre V2 : User space: Mitigation: STIBP via seccomp and prctl Sep 6 00:22:42.237190 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Sep 6 00:22:42.237208 kernel: MDS: Mitigation: Clear CPU buffers Sep 6 00:22:42.237229 kernel: 
MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Sep 6 00:22:42.237247 kernel: active return thunk: its_return_thunk Sep 6 00:22:42.237264 kernel: ITS: Mitigation: Aligned branch/return thunks Sep 6 00:22:42.237282 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Sep 6 00:22:42.237299 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Sep 6 00:22:42.237317 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Sep 6 00:22:42.237334 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Sep 6 00:22:42.237353 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Sep 6 00:22:42.237370 kernel: Freeing SMP alternatives memory: 32K Sep 6 00:22:42.237391 kernel: pid_max: default: 32768 minimum: 301 Sep 6 00:22:42.237409 kernel: LSM: Security Framework initializing Sep 6 00:22:42.237426 kernel: SELinux: Initializing. Sep 6 00:22:42.237444 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Sep 6 00:22:42.237462 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Sep 6 00:22:42.237481 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0) Sep 6 00:22:42.237498 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. Sep 6 00:22:42.237515 kernel: signal: max sigframe size: 1776 Sep 6 00:22:42.237532 kernel: rcu: Hierarchical SRCU implementation. Sep 6 00:22:42.237554 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Sep 6 00:22:42.237571 kernel: smp: Bringing up secondary CPUs ... Sep 6 00:22:42.237589 kernel: x86: Booting SMP configuration: Sep 6 00:22:42.237606 kernel: .... node #0, CPUs: #1 Sep 6 00:22:42.237627 kernel: kvm-clock: cpu 1, msr 16b19f041, secondary cpu clock Sep 6 00:22:42.237644 kernel: Transient Scheduler Attacks: MDS CPU bug present and SMT on, data leak possible. 
See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Sep 6 00:22:42.237662 kernel: Transient Scheduler Attacks: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Sep 6 00:22:42.237678 kernel: smp: Brought up 1 node, 2 CPUs Sep 6 00:22:42.237699 kernel: smpboot: Max logical packages: 1 Sep 6 00:22:42.237715 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Sep 6 00:22:42.237732 kernel: devtmpfs: initialized Sep 6 00:22:42.237801 kernel: x86/mm: Memory block size: 128MB Sep 6 00:22:42.237818 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes) Sep 6 00:22:42.237835 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 6 00:22:42.237852 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Sep 6 00:22:42.237868 kernel: pinctrl core: initialized pinctrl subsystem Sep 6 00:22:42.237887 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 6 00:22:42.237909 kernel: audit: initializing netlink subsys (disabled) Sep 6 00:22:42.237929 kernel: audit: type=2000 audit(1757118161.063:1): state=initialized audit_enabled=0 res=1 Sep 6 00:22:42.237947 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 6 00:22:42.237966 kernel: thermal_sys: Registered thermal governor 'user_space' Sep 6 00:22:42.237985 kernel: cpuidle: using governor menu Sep 6 00:22:42.238502 kernel: ACPI: bus type PCI registered Sep 6 00:22:42.238557 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 6 00:22:42.238577 kernel: dca service started, version 1.12.1 Sep 6 00:22:42.238657 kernel: PCI: Using configuration type 1 for base access Sep 6 00:22:42.238733 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
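The audit record above carries a raw Unix timestamp (`audit(1757118161.063:1)`), and the `rtc_cmos` entry later in the log renders that same instant as wall-clock time. A minimal sketch of the conversion, using the epoch value taken from this log:

```python
from datetime import datetime, timezone

# Epoch seconds copied from the audit record in this log: audit(1757118161.063:1)
audit_epoch = 1757118161

# Convert to UTC; this matches the rtc_cmos entry further down:
# "setting system clock to 2025-09-06T00:22:41 UTC (1757118161)"
utc = datetime.fromtimestamp(audit_epoch, tz=timezone.utc)
print(utc.isoformat())  # 2025-09-06T00:22:41+00:00
```

This also explains why the audit timestamp (taken just before 00:22:42 on the serial console) trails the printk timestamps slightly: the two come from different clocks.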
Sep 6 00:22:42.238753 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Sep 6 00:22:42.238828 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Sep 6 00:22:42.238847 kernel: ACPI: Added _OSI(Module Device) Sep 6 00:22:42.238863 kernel: ACPI: Added _OSI(Processor Device) Sep 6 00:22:42.238879 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 6 00:22:42.238894 kernel: ACPI: Added _OSI(Linux-Dell-Video) Sep 6 00:22:42.238910 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Sep 6 00:22:42.238926 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Sep 6 00:22:42.238948 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Sep 6 00:22:42.238962 kernel: ACPI: Interpreter enabled Sep 6 00:22:42.238978 kernel: ACPI: PM: (supports S0 S3 S5) Sep 6 00:22:42.238994 kernel: ACPI: Using IOAPIC for interrupt routing Sep 6 00:22:42.239009 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 6 00:22:42.239024 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Sep 6 00:22:42.239039 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 6 00:22:42.239415 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Sep 6 00:22:42.239636 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge. 
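The BIOS-e820 map at the top of the log is what determines how much RAM the kernel can use: summing the "usable" ranges reproduces the 7860544K total that the later "Memory: 7515424K/7860544K available" line reports. A small sketch, with the usable ranges copied from this log (e820 end addresses are inclusive, hence the `+ 1`):

```python
# "usable" ranges from the BIOS-e820 map earlier in this log
E820_USABLE = [
    (0x0000000000001000, 0x0000000000054FFF),
    (0x0000000000060000, 0x0000000000097FFF),
    (0x0000000000100000, 0x00000000BD27AFFF),
    (0x00000000BD285000, 0x00000000BF8ECFFF),
    (0x00000000BFBFF000, 0x00000000BFFDFFFF),
    (0x0000000100000000, 0x000000021FFFFFFF),
]

# End addresses are inclusive, so each range covers end - start + 1 bytes.
total_bytes = sum(end - start + 1 for start, end in E820_USABLE)
print(total_bytes // 1024)  # 7860544 -- matches "Memory: .../7860544K available"
```

The gap between usable ranges (e.g. 0xbf8ed000-0xbfbfefff) is the reserved/ACPI-NVS firmware region the `e820: reserve RAM buffer` entries later refer to.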
Sep 6 00:22:42.239661 kernel: PCI host bridge to bus 0000:00 Sep 6 00:22:42.239971 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Sep 6 00:22:42.240142 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Sep 6 00:22:42.240314 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Sep 6 00:22:42.240463 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window] Sep 6 00:22:42.240615 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 6 00:22:42.240822 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Sep 6 00:22:42.241006 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 Sep 6 00:22:42.241232 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Sep 6 00:22:42.241617 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Sep 6 00:22:42.242079 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 Sep 6 00:22:42.246015 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f] Sep 6 00:22:42.246393 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f] Sep 6 00:22:42.246638 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Sep 6 00:22:42.246855 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f] Sep 6 00:22:42.247078 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f] Sep 6 00:22:42.256546 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 Sep 6 00:22:42.256798 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f] Sep 6 00:22:42.257036 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f] Sep 6 00:22:42.257074 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Sep 6 00:22:42.257093 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Sep 6 00:22:42.257109 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Sep 6 00:22:42.257126 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Sep 6 00:22:42.257142 kernel: ACPI: PCI: Interrupt link LNKS 
configured for IRQ 9 Sep 6 00:22:42.257310 kernel: iommu: Default domain type: Translated Sep 6 00:22:42.257336 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 6 00:22:42.257355 kernel: vgaarb: loaded Sep 6 00:22:42.257374 kernel: pps_core: LinuxPPS API ver. 1 registered Sep 6 00:22:42.257402 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Sep 6 00:22:42.257421 kernel: PTP clock support registered Sep 6 00:22:42.257439 kernel: Registered efivars operations Sep 6 00:22:42.257458 kernel: PCI: Using ACPI for IRQ routing Sep 6 00:22:42.257478 kernel: PCI: pci_cache_line_size set to 64 bytes Sep 6 00:22:42.257498 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff] Sep 6 00:22:42.257517 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff] Sep 6 00:22:42.257537 kernel: e820: reserve RAM buffer [mem 0xbd27b000-0xbfffffff] Sep 6 00:22:42.257556 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff] Sep 6 00:22:42.257581 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff] Sep 6 00:22:42.257600 kernel: clocksource: Switched to clocksource kvm-clock Sep 6 00:22:42.257619 kernel: VFS: Disk quotas dquot_6.6.0 Sep 6 00:22:42.257639 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 6 00:22:42.257658 kernel: pnp: PnP ACPI init Sep 6 00:22:42.257676 kernel: pnp: PnP ACPI: found 7 devices Sep 6 00:22:42.257695 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 6 00:22:42.257714 kernel: NET: Registered PF_INET protocol family Sep 6 00:22:42.257738 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Sep 6 00:22:42.257758 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Sep 6 00:22:42.257777 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 6 00:22:42.257798 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, 
linear) Sep 6 00:22:42.257818 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear) Sep 6 00:22:42.257838 kernel: TCP: Hash tables configured (established 65536 bind 65536) Sep 6 00:22:42.257858 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Sep 6 00:22:42.257878 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Sep 6 00:22:42.257897 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 6 00:22:42.257930 kernel: NET: Registered PF_XDP protocol family Sep 6 00:22:42.258178 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Sep 6 00:22:42.258407 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Sep 6 00:22:42.258607 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Sep 6 00:22:42.258806 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window] Sep 6 00:22:42.259045 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Sep 6 00:22:42.259072 kernel: PCI: CLS 0 bytes, default 64 Sep 6 00:22:42.259101 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Sep 6 00:22:42.259121 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB) Sep 6 00:22:42.259141 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Sep 6 00:22:42.267794 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Sep 6 00:22:42.267845 kernel: clocksource: Switched to clocksource tsc Sep 6 00:22:42.267865 kernel: Initialise system trusted keyrings Sep 6 00:22:42.267884 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Sep 6 00:22:42.267904 kernel: Key type asymmetric registered Sep 6 00:22:42.267930 kernel: Asymmetric key parser 'x509' registered Sep 6 00:22:42.267967 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Sep 6 00:22:42.267987 kernel: io scheduler mq-deadline registered Sep 6 
00:22:42.268006 kernel: io scheduler kyber registered Sep 6 00:22:42.268025 kernel: io scheduler bfq registered Sep 6 00:22:42.268043 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 6 00:22:42.268064 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Sep 6 00:22:42.268406 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver Sep 6 00:22:42.268439 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Sep 6 00:22:42.268653 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver Sep 6 00:22:42.268693 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Sep 6 00:22:42.268905 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver Sep 6 00:22:42.268934 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 6 00:22:42.268953 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 6 00:22:42.268971 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Sep 6 00:22:42.268990 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A Sep 6 00:22:42.269008 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A Sep 6 00:22:42.273866 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0) Sep 6 00:22:42.273928 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Sep 6 00:22:42.273948 kernel: i8042: Warning: Keylock active Sep 6 00:22:42.273967 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Sep 6 00:22:42.273987 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Sep 6 00:22:42.274258 kernel: rtc_cmos 00:00: RTC can wake from S4 Sep 6 00:22:42.274475 kernel: rtc_cmos 00:00: registered as rtc0 Sep 6 00:22:42.274692 kernel: rtc_cmos 00:00: setting system clock to 2025-09-06T00:22:41 UTC (1757118161) Sep 6 00:22:42.274903 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Sep 6 00:22:42.274933 kernel: intel_pstate: CPU model not supported Sep 6 00:22:42.274952 kernel: pstore: Registered efi as 
persistent store backend Sep 6 00:22:42.274971 kernel: NET: Registered PF_INET6 protocol family Sep 6 00:22:42.274989 kernel: Segment Routing with IPv6 Sep 6 00:22:42.275007 kernel: In-situ OAM (IOAM) with IPv6 Sep 6 00:22:42.275027 kernel: NET: Registered PF_PACKET protocol family Sep 6 00:22:42.275045 kernel: Key type dns_resolver registered Sep 6 00:22:42.275063 kernel: IPI shorthand broadcast: enabled Sep 6 00:22:42.275083 kernel: sched_clock: Marking stable (840359086, 191692508)->(1117534530, -85482936) Sep 6 00:22:42.275107 kernel: registered taskstats version 1 Sep 6 00:22:42.275124 kernel: Loading compiled-in X.509 certificates Sep 6 00:22:42.275143 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Sep 6 00:22:42.275175 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.190-flatcar: 59a3efd48c75422889eb056cb9758fbe471623cb' Sep 6 00:22:42.275192 kernel: Key type .fscrypt registered Sep 6 00:22:42.275209 kernel: Key type fscrypt-provisioning registered Sep 6 00:22:42.275228 kernel: pstore: Using crash dump compression: deflate Sep 6 00:22:42.275246 kernel: ima: Allocated hash algorithm: sha1 Sep 6 00:22:42.275264 kernel: ima: No architecture policies found Sep 6 00:22:42.275288 kernel: clk: Disabling unused clocks Sep 6 00:22:42.275306 kernel: Freeing unused kernel image (initmem) memory: 47492K Sep 6 00:22:42.275325 kernel: Write protecting the kernel read-only data: 28672k Sep 6 00:22:42.275344 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Sep 6 00:22:42.275363 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K Sep 6 00:22:42.275382 kernel: Run /init as init process Sep 6 00:22:42.275401 kernel: with arguments: Sep 6 00:22:42.275420 kernel: /init Sep 6 00:22:42.275438 kernel: with environment: Sep 6 00:22:42.275461 kernel: HOME=/ Sep 6 00:22:42.275479 kernel: TERM=linux Sep 6 00:22:42.275497 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 6 00:22:42.275521 
systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Sep 6 00:22:42.275543 systemd[1]: Detected virtualization kvm. Sep 6 00:22:42.275563 systemd[1]: Detected architecture x86-64. Sep 6 00:22:42.275582 systemd[1]: Running in initrd. Sep 6 00:22:42.275605 systemd[1]: No hostname configured, using default hostname. Sep 6 00:22:42.275623 systemd[1]: Hostname set to . Sep 6 00:22:42.275643 systemd[1]: Initializing machine ID from VM UUID. Sep 6 00:22:42.275668 systemd[1]: Queued start job for default target initrd.target. Sep 6 00:22:42.275687 systemd[1]: Started systemd-ask-password-console.path. Sep 6 00:22:42.275706 systemd[1]: Reached target cryptsetup.target. Sep 6 00:22:42.275723 systemd[1]: Reached target paths.target. Sep 6 00:22:42.275742 systemd[1]: Reached target slices.target. Sep 6 00:22:42.275766 systemd[1]: Reached target swap.target. Sep 6 00:22:42.275785 systemd[1]: Reached target timers.target. Sep 6 00:22:42.275805 systemd[1]: Listening on iscsid.socket. Sep 6 00:22:42.275825 systemd[1]: Listening on iscsiuio.socket. Sep 6 00:22:42.275843 systemd[1]: Listening on systemd-journald-audit.socket. Sep 6 00:22:42.275862 systemd[1]: Listening on systemd-journald-dev-log.socket. Sep 6 00:22:42.275881 systemd[1]: Listening on systemd-journald.socket. Sep 6 00:22:42.275901 systemd[1]: Listening on systemd-networkd.socket. Sep 6 00:22:42.275925 systemd[1]: Listening on systemd-udevd-control.socket. Sep 6 00:22:42.275945 systemd[1]: Listening on systemd-udevd-kernel.socket. Sep 6 00:22:42.275989 systemd[1]: Reached target sockets.target. Sep 6 00:22:42.276014 systemd[1]: Starting kmod-static-nodes.service... 
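The systemd banner above encodes compile-time features as `+NAME`/`-NAME` tokens. A quick sketch that splits the banner (quoted from this log) into enabled and disabled sets, which is handy when checking whether a build supports e.g. TPM2:

```python
# Feature string copied from the systemd 252 banner in this log
banner = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT "
          "-GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN "
          "+IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT "
          "-QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK "
          "-XKBCOMMON +UTMP +SYSVINIT")

# '+' prefixed tokens are compiled in, '-' prefixed tokens are compiled out
enabled = {t[1:] for t in banner.split() if t.startswith("+")}
disabled = {t[1:] for t in banner.split() if t.startswith("-")}
print("TPM2" in enabled)  # False -- this build was compiled with -TPM2
```

Tokens without a `+`/`-` prefix (such as `default-hierarchy=unified` in the full banner) are settings rather than feature flags and are simply skipped here.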
Sep 6 00:22:42.276034 systemd[1]: Finished network-cleanup.service. Sep 6 00:22:42.276053 systemd[1]: Starting systemd-fsck-usr.service... Sep 6 00:22:42.276078 systemd[1]: Starting systemd-journald.service... Sep 6 00:22:42.276098 systemd[1]: Starting systemd-modules-load.service... Sep 6 00:22:42.276118 systemd[1]: Starting systemd-resolved.service... Sep 6 00:22:42.276138 systemd[1]: Starting systemd-vconsole-setup.service... Sep 6 00:22:42.276156 systemd[1]: Finished kmod-static-nodes.service. Sep 6 00:22:42.276192 kernel: audit: type=1130 audit(1757118162.246:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:42.276212 systemd[1]: Finished systemd-fsck-usr.service. Sep 6 00:22:42.276232 systemd[1]: Finished systemd-vconsole-setup.service. Sep 6 00:22:42.276252 kernel: audit: type=1130 audit(1757118162.258:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:42.276277 systemd[1]: Starting dracut-cmdline-ask.service... Sep 6 00:22:42.276297 kernel: audit: type=1130 audit(1757118162.268:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:42.276324 systemd-journald[189]: Journal started Sep 6 00:22:42.276426 systemd-journald[189]: Runtime Journal (/run/log/journal/081536fed85ea05d0a4bc94f94089e24) is 8.0M, max 148.8M, 140.8M free. Sep 6 00:22:42.246000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:22:42.258000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:42.268000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:42.287196 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Sep 6 00:22:42.282662 systemd-modules-load[190]: Inserted module 'overlay' Sep 6 00:22:42.291197 systemd[1]: Started systemd-journald.service. Sep 6 00:22:42.292000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:42.304231 kernel: audit: type=1130 audit(1757118162.292:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:42.314000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:42.315613 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Sep 6 00:22:42.326192 kernel: audit: type=1130 audit(1757118162.314:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:42.331648 systemd[1]: Finished dracut-cmdline-ask.service. Sep 6 00:22:42.348326 kernel: audit: type=1130 audit(1757118162.334:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:22:42.334000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:42.336763 systemd[1]: Starting dracut-cmdline.service... Sep 6 00:22:42.348974 systemd-resolved[191]: Positive Trust Anchors: Sep 6 00:22:42.348993 systemd-resolved[191]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 6 00:22:42.349050 systemd-resolved[191]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Sep 6 00:22:42.355501 systemd-resolved[191]: Defaulting to hostname 'linux'. Sep 6 00:22:42.378188 dracut-cmdline[206]: dracut-dracut-053 Sep 6 00:22:42.378188 dracut-cmdline[206]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=a807e3b6c1f608bcead7858f1ad5b6908e6d312e2d99c0ec0e5454f978e611a7 Sep 6 00:22:42.396228 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 6 00:22:42.396276 kernel: Bridge firewalling registered Sep 6 00:22:42.357316 systemd[1]: Started systemd-resolved.service. 
Sep 6 00:22:42.396137 systemd-modules-load[190]: Inserted module 'br_netfilter' Sep 6 00:22:42.413269 systemd[1]: Reached target nss-lookup.target. Sep 6 00:22:42.411000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:42.424224 kernel: audit: type=1130 audit(1757118162.411:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:42.429190 kernel: SCSI subsystem initialized Sep 6 00:22:42.449124 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 6 00:22:42.449232 kernel: device-mapper: uevent: version 1.0.3 Sep 6 00:22:42.452189 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Sep 6 00:22:42.457210 systemd-modules-load[190]: Inserted module 'dm_multipath' Sep 6 00:22:42.458871 systemd[1]: Finished systemd-modules-load.service. Sep 6 00:22:42.478516 kernel: audit: type=1130 audit(1757118162.470:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:42.470000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:42.472568 systemd[1]: Starting systemd-sysctl.service... Sep 6 00:22:42.489924 kernel: Loading iSCSI transport class v2.0-870. Sep 6 00:22:42.492526 systemd[1]: Finished systemd-sysctl.service. 
Sep 6 00:22:42.503324 kernel: audit: type=1130 audit(1757118162.495:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:42.495000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:42.514197 kernel: iscsi: registered transport (tcp) Sep 6 00:22:42.550183 kernel: iscsi: registered transport (qla4xxx) Sep 6 00:22:42.550288 kernel: QLogic iSCSI HBA Driver Sep 6 00:22:42.600052 systemd[1]: Finished dracut-cmdline.service. Sep 6 00:22:42.599000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:42.601530 systemd[1]: Starting dracut-pre-udev.service... Sep 6 00:22:42.665223 kernel: raid6: avx2x4 gen() 17969 MB/s Sep 6 00:22:42.686213 kernel: raid6: avx2x4 xor() 8225 MB/s Sep 6 00:22:42.707213 kernel: raid6: avx2x2 gen() 18104 MB/s Sep 6 00:22:42.728258 kernel: raid6: avx2x2 xor() 18229 MB/s Sep 6 00:22:42.749220 kernel: raid6: avx2x1 gen() 14083 MB/s Sep 6 00:22:42.770229 kernel: raid6: avx2x1 xor() 15802 MB/s Sep 6 00:22:42.791213 kernel: raid6: sse2x4 gen() 10917 MB/s Sep 6 00:22:42.812218 kernel: raid6: sse2x4 xor() 6642 MB/s Sep 6 00:22:42.833219 kernel: raid6: sse2x2 gen() 11651 MB/s Sep 6 00:22:42.854223 kernel: raid6: sse2x2 xor() 7184 MB/s Sep 6 00:22:42.875229 kernel: raid6: sse2x1 gen() 10183 MB/s Sep 6 00:22:42.901326 kernel: raid6: sse2x1 xor() 5091 MB/s Sep 6 00:22:42.901418 kernel: raid6: using algorithm avx2x2 gen() 18104 MB/s Sep 6 00:22:42.901442 kernel: raid6: .... 
xor() 18229 MB/s, rmw enabled Sep 6 00:22:42.906511 kernel: raid6: using avx2x2 recovery algorithm Sep 6 00:22:42.932209 kernel: xor: automatically using best checksumming function avx Sep 6 00:22:43.045208 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Sep 6 00:22:43.058466 systemd[1]: Finished dracut-pre-udev.service. Sep 6 00:22:43.057000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:43.058000 audit: BPF prog-id=7 op=LOAD Sep 6 00:22:43.058000 audit: BPF prog-id=8 op=LOAD Sep 6 00:22:43.059991 systemd[1]: Starting systemd-udevd.service... Sep 6 00:22:43.078723 systemd-udevd[389]: Using default interface naming scheme 'v252'. Sep 6 00:22:43.096561 systemd[1]: Started systemd-udevd.service. Sep 6 00:22:43.104000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:43.106725 systemd[1]: Starting dracut-pre-trigger.service... Sep 6 00:22:43.122646 dracut-pre-trigger[403]: rd.md=0: removing MD RAID activation Sep 6 00:22:43.164316 systemd[1]: Finished dracut-pre-trigger.service. Sep 6 00:22:43.163000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:43.165706 systemd[1]: Starting systemd-udev-trigger.service... Sep 6 00:22:43.236420 systemd[1]: Finished systemd-udev-trigger.service. Sep 6 00:22:43.244000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:22:43.405206 kernel: cryptd: max_cpu_qlen set to 1000 Sep 6 00:22:43.405295 kernel: scsi host0: Virtio SCSI HBA Sep 6 00:22:43.428195 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Sep 6 00:22:43.499608 kernel: AVX2 version of gcm_enc/dec engaged. Sep 6 00:22:43.499706 kernel: AES CTR mode by8 optimization enabled Sep 6 00:22:43.523851 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB) Sep 6 00:22:43.585277 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Sep 6 00:22:43.585548 kernel: sd 0:0:1:0: [sda] Write Protect is off Sep 6 00:22:43.585772 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Sep 6 00:22:43.585982 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Sep 6 00:22:43.586218 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 6 00:22:43.586253 kernel: GPT:17805311 != 25165823 Sep 6 00:22:43.586275 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 6 00:22:43.586298 kernel: GPT:17805311 != 25165823 Sep 6 00:22:43.586319 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 6 00:22:43.586346 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 6 00:22:43.586378 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Sep 6 00:22:43.650198 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (441) Sep 6 00:22:43.651954 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Sep 6 00:22:43.677202 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Sep 6 00:22:43.685801 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Sep 6 00:22:43.703390 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Sep 6 00:22:43.734765 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Sep 6 00:22:43.745579 systemd[1]: Starting disk-uuid.service... 
Sep 6 00:22:43.768530 disk-uuid[513]: Primary Header is updated. Sep 6 00:22:43.768530 disk-uuid[513]: Secondary Entries is updated. Sep 6 00:22:43.768530 disk-uuid[513]: Secondary Header is updated. Sep 6 00:22:43.798376 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 6 00:22:43.805249 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 6 00:22:44.825074 disk-uuid[514]: The operation has completed successfully. Sep 6 00:22:44.834377 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 6 00:22:44.916156 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 6 00:22:44.915000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:44.915000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:44.916328 systemd[1]: Finished disk-uuid.service. Sep 6 00:22:44.923328 systemd[1]: Starting verity-setup.service... Sep 6 00:22:44.955197 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Sep 6 00:22:45.033058 systemd[1]: Found device dev-mapper-usr.device. Sep 6 00:22:45.035895 systemd[1]: Mounting sysusr-usr.mount... Sep 6 00:22:45.048042 systemd[1]: Finished verity-setup.service. Sep 6 00:22:45.070000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:45.146222 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Sep 6 00:22:45.146846 systemd[1]: Mounted sysusr-usr.mount. Sep 6 00:22:45.147361 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. 
Sep 6 00:22:45.198404 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 6 00:22:45.198450 kernel: BTRFS info (device sda6): using free space tree Sep 6 00:22:45.198474 kernel: BTRFS info (device sda6): has skinny extents Sep 6 00:22:45.148603 systemd[1]: Starting ignition-setup.service... Sep 6 00:22:45.168019 systemd[1]: Starting parse-ip-for-networkd.service... Sep 6 00:22:45.227363 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 6 00:22:45.235310 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 6 00:22:45.255511 systemd[1]: Finished ignition-setup.service. Sep 6 00:22:45.254000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:45.257255 systemd[1]: Starting ignition-fetch-offline.service... Sep 6 00:22:45.302045 systemd[1]: Finished parse-ip-for-networkd.service. Sep 6 00:22:45.309000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:45.310000 audit: BPF prog-id=9 op=LOAD Sep 6 00:22:45.312752 systemd[1]: Starting systemd-networkd.service... Sep 6 00:22:45.351766 systemd-networkd[688]: lo: Link UP Sep 6 00:22:45.351781 systemd-networkd[688]: lo: Gained carrier Sep 6 00:22:45.364000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:45.353409 systemd-networkd[688]: Enumeration completed Sep 6 00:22:45.353568 systemd[1]: Started systemd-networkd.service. Sep 6 00:22:45.354457 systemd-networkd[688]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Sep 6 00:22:45.356681 systemd-networkd[688]: eth0: Link UP Sep 6 00:22:45.356689 systemd-networkd[688]: eth0: Gained carrier Sep 6 00:22:45.427000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:45.365585 systemd[1]: Reached target network.target. Sep 6 00:22:45.451599 iscsid[699]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Sep 6 00:22:45.451599 iscsid[699]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Sep 6 00:22:45.451599 iscsid[699]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Sep 6 00:22:45.451599 iscsid[699]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Sep 6 00:22:45.451599 iscsid[699]: If using hardware iscsi like qla4xxx this message can be ignored. Sep 6 00:22:45.451599 iscsid[699]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Sep 6 00:22:45.451599 iscsid[699]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Sep 6 00:22:45.457000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:45.560000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:22:45.371582 systemd-networkd[688]: eth0: Overlong DHCP hostname received, shortened from 'ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d.c.flatcar-212911.internal' to 'ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d' Sep 6 00:22:45.557569 ignition[658]: Ignition 2.14.0 Sep 6 00:22:45.371608 systemd-networkd[688]: eth0: DHCPv4 address 10.128.0.81/32, gateway 10.128.0.1 acquired from 169.254.169.254 Sep 6 00:22:45.557586 ignition[658]: Stage: fetch-offline Sep 6 00:22:45.396246 systemd[1]: Starting iscsiuio.service... Sep 6 00:22:45.663000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:45.557796 ignition[658]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 6 00:22:45.678000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:45.413530 systemd[1]: Started iscsiuio.service. Sep 6 00:22:45.557849 ignition[658]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Sep 6 00:22:45.429869 systemd[1]: Starting iscsid.service... Sep 6 00:22:45.579938 ignition[658]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Sep 6 00:22:45.444608 systemd[1]: Started iscsid.service. Sep 6 00:22:45.580329 ignition[658]: parsed url from cmdline: "" Sep 6 00:22:45.460446 systemd[1]: Starting dracut-initqueue.service... Sep 6 00:22:45.580337 ignition[658]: no config URL provided Sep 6 00:22:45.740000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:22:45.533591 systemd[1]: Finished dracut-initqueue.service. Sep 6 00:22:45.580348 ignition[658]: reading system config file "/usr/lib/ignition/user.ign" Sep 6 00:22:45.561587 systemd[1]: Reached target remote-fs-pre.target. Sep 6 00:22:45.580362 ignition[658]: no config at "/usr/lib/ignition/user.ign" Sep 6 00:22:45.570487 systemd[1]: Reached target remote-cryptsetup.target. Sep 6 00:22:45.797000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:45.580372 ignition[658]: failed to fetch config: resource requires networking Sep 6 00:22:45.591540 systemd[1]: Reached target remote-fs.target. Sep 6 00:22:45.580830 ignition[658]: Ignition finished successfully Sep 6 00:22:45.624680 systemd[1]: Starting dracut-pre-mount.service... Sep 6 00:22:45.693482 ignition[713]: Ignition 2.14.0 Sep 6 00:22:45.853000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:45.649855 systemd[1]: Finished ignition-fetch-offline.service. Sep 6 00:22:45.693492 ignition[713]: Stage: fetch Sep 6 00:22:45.664810 systemd[1]: Finished dracut-pre-mount.service. Sep 6 00:22:45.693660 ignition[713]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 6 00:22:45.680803 systemd[1]: Starting ignition-fetch.service... 
Sep 6 00:22:45.693706 ignition[713]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Sep 6 00:22:45.719687 unknown[713]: fetched base config from "system" Sep 6 00:22:45.703799 ignition[713]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Sep 6 00:22:45.719709 unknown[713]: fetched base config from "system" Sep 6 00:22:45.704051 ignition[713]: parsed url from cmdline: "" Sep 6 00:22:45.719726 unknown[713]: fetched user config from "gcp" Sep 6 00:22:45.704059 ignition[713]: no config URL provided Sep 6 00:22:45.725041 systemd[1]: Finished ignition-fetch.service. Sep 6 00:22:45.704070 ignition[713]: reading system config file "/usr/lib/ignition/user.ign" Sep 6 00:22:45.742952 systemd[1]: Starting ignition-kargs.service... Sep 6 00:22:45.704085 ignition[713]: no config at "/usr/lib/ignition/user.ign" Sep 6 00:22:45.782680 systemd[1]: Finished ignition-kargs.service. Sep 6 00:22:45.704134 ignition[713]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Sep 6 00:22:45.799744 systemd[1]: Starting ignition-disks.service... Sep 6 00:22:45.714231 ignition[713]: GET result: OK Sep 6 00:22:45.836906 systemd[1]: Finished ignition-disks.service. Sep 6 00:22:45.714331 ignition[713]: parsing config with SHA512: d97c028cb1f7dbb10a677dc5337ac85401906954170d7ec7e9c01d649c047f1709d743b033a55c22484931f17084b01231776f130019bb8cb6642c90c42ec34f Sep 6 00:22:45.854712 systemd[1]: Reached target initrd-root-device.target. Sep 6 00:22:45.722180 ignition[713]: fetch: fetch complete Sep 6 00:22:45.869559 systemd[1]: Reached target local-fs-pre.target. Sep 6 00:22:45.722193 ignition[713]: fetch: fetch passed Sep 6 00:22:45.878625 systemd[1]: Reached target local-fs.target. Sep 6 00:22:45.722265 ignition[713]: Ignition finished successfully Sep 6 00:22:45.891595 systemd[1]: Reached target sysinit.target. 
Sep 6 00:22:45.756108 ignition[719]: Ignition 2.14.0 Sep 6 00:22:45.905666 systemd[1]: Reached target basic.target. Sep 6 00:22:45.756117 ignition[719]: Stage: kargs Sep 6 00:22:45.929794 systemd[1]: Starting systemd-fsck-root.service... Sep 6 00:22:45.756294 ignition[719]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 6 00:22:45.756333 ignition[719]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Sep 6 00:22:45.764005 ignition[719]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Sep 6 00:22:45.765544 ignition[719]: kargs: kargs passed Sep 6 00:22:45.765617 ignition[719]: Ignition finished successfully Sep 6 00:22:45.811497 ignition[725]: Ignition 2.14.0 Sep 6 00:22:45.811506 ignition[725]: Stage: disks Sep 6 00:22:45.811648 ignition[725]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 6 00:22:45.811684 ignition[725]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Sep 6 00:22:45.820090 ignition[725]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Sep 6 00:22:45.821663 ignition[725]: disks: disks passed Sep 6 00:22:45.821721 ignition[725]: Ignition finished successfully Sep 6 00:22:45.970994 systemd-fsck[733]: ROOT: clean, 629/1628000 files, 124065/1617920 blocks Sep 6 00:22:46.156353 systemd[1]: Finished systemd-fsck-root.service. Sep 6 00:22:46.198496 kernel: kauditd_printk_skb: 22 callbacks suppressed Sep 6 00:22:46.198542 kernel: audit: type=1130 audit(1757118166.155:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:22:46.155000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:46.157927 systemd[1]: Mounting sysroot.mount... Sep 6 00:22:46.223626 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Sep 6 00:22:46.216696 systemd[1]: Mounted sysroot.mount. Sep 6 00:22:46.230757 systemd[1]: Reached target initrd-root-fs.target. Sep 6 00:22:46.249255 systemd[1]: Mounting sysroot-usr.mount... Sep 6 00:22:46.254075 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Sep 6 00:22:46.254133 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 6 00:22:46.254194 systemd[1]: Reached target ignition-diskful.target. Sep 6 00:22:46.331354 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (739) Sep 6 00:22:46.275000 systemd[1]: Mounted sysroot-usr.mount. Sep 6 00:22:46.358366 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 6 00:22:46.358408 kernel: BTRFS info (device sda6): using free space tree Sep 6 00:22:46.358433 kernel: BTRFS info (device sda6): has skinny extents Sep 6 00:22:46.298925 systemd[1]: Mounting sysroot-usr-share-oem.mount... Sep 6 00:22:46.381371 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 6 00:22:46.381416 initrd-setup-root[744]: cut: /sysroot/etc/passwd: No such file or directory Sep 6 00:22:46.317835 systemd[1]: Starting initrd-setup-root.service... Sep 6 00:22:46.406376 initrd-setup-root[752]: cut: /sysroot/etc/group: No such file or directory Sep 6 00:22:46.385478 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Sep 6 00:22:46.422970 initrd-setup-root[776]: cut: /sysroot/etc/shadow: No such file or directory Sep 6 00:22:46.433337 initrd-setup-root[786]: cut: /sysroot/etc/gshadow: No such file or directory Sep 6 00:22:46.466287 systemd[1]: Finished initrd-setup-root.service. Sep 6 00:22:46.501530 kernel: audit: type=1130 audit(1757118166.465:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:46.465000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:46.467850 systemd[1]: Starting ignition-mount.service... Sep 6 00:22:46.509599 systemd[1]: Starting sysroot-boot.service... Sep 6 00:22:46.523517 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Sep 6 00:22:46.523642 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Sep 6 00:22:46.547344 ignition[804]: INFO : Ignition 2.14.0 Sep 6 00:22:46.547344 ignition[804]: INFO : Stage: mount Sep 6 00:22:46.547344 ignition[804]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 6 00:22:46.547344 ignition[804]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Sep 6 00:22:46.644363 kernel: audit: type=1130 audit(1757118166.560:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:46.644419 kernel: audit: type=1130 audit(1757118166.597:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:22:46.560000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:46.597000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:46.644673 ignition[804]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Sep 6 00:22:46.644673 ignition[804]: INFO : mount: mount passed Sep 6 00:22:46.644673 ignition[804]: INFO : Ignition finished successfully Sep 6 00:22:46.716349 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (814) Sep 6 00:22:46.716395 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 6 00:22:46.716507 kernel: BTRFS info (device sda6): using free space tree Sep 6 00:22:46.716573 kernel: BTRFS info (device sda6): has skinny extents Sep 6 00:22:46.716599 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 6 00:22:46.557120 systemd[1]: Finished ignition-mount.service. Sep 6 00:22:46.564064 systemd[1]: Finished sysroot-boot.service. Sep 6 00:22:46.600072 systemd[1]: Starting ignition-files.service... Sep 6 00:22:46.655765 systemd[1]: Mounting sysroot-usr-share-oem.mount... Sep 6 00:22:46.755339 ignition[833]: INFO : Ignition 2.14.0 Sep 6 00:22:46.755339 ignition[833]: INFO : Stage: files Sep 6 00:22:46.755339 ignition[833]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 6 00:22:46.755339 ignition[833]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Sep 6 00:22:46.755339 ignition[833]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Sep 6 00:22:46.719122 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Sep 6 00:22:46.821431 ignition[833]: DEBUG : files: compiled without relabeling support, skipping Sep 6 00:22:46.821431 ignition[833]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 6 00:22:46.821431 ignition[833]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 6 00:22:46.821431 ignition[833]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 6 00:22:46.821431 ignition[833]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 6 00:22:46.821431 ignition[833]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 6 00:22:46.821431 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Sep 6 00:22:46.821431 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Sep 6 00:22:46.821431 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 6 00:22:46.821431 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Sep 6 00:22:46.773767 unknown[833]: wrote ssh authorized keys file for user: core Sep 6 00:22:47.378344 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 6 00:22:47.387378 systemd-networkd[688]: eth0: Gained IPv6LL Sep 6 00:22:48.322240 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 6 00:22:48.339331 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/hosts" Sep 6 00:22:48.339331 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(5): oem config not found in 
"/usr/share/oem", looking on oem partition Sep 6 00:22:48.339331 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(5): op(6): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem135229652" Sep 6 00:22:48.339331 ignition[833]: CRITICAL : files: createFilesystemsFiles: createFiles: op(5): op(6): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem135229652": device or resource busy Sep 6 00:22:48.339331 ignition[833]: ERROR : files: createFilesystemsFiles: createFiles: op(5): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem135229652", trying btrfs: device or resource busy Sep 6 00:22:48.339331 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(5): op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem135229652" Sep 6 00:22:48.339331 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(5): op(7): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem135229652" Sep 6 00:22:48.339331 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(5): op(8): [started] unmounting "/mnt/oem135229652" Sep 6 00:22:48.339331 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(5): op(8): [finished] unmounting "/mnt/oem135229652" Sep 6 00:22:48.339331 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/hosts" Sep 6 00:22:48.339331 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 6 00:22:48.339331 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 6 00:22:48.339331 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/profile.d/google-cloud-sdk.sh" Sep 6 00:22:48.339331 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(a): oem config not found in "/usr/share/oem", looking on 
oem partition Sep 6 00:22:48.336984 systemd[1]: mnt-oem135229652.mount: Deactivated successfully. Sep 6 00:22:48.574434 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3894448473" Sep 6 00:22:48.574434 ignition[833]: CRITICAL : files: createFilesystemsFiles: createFiles: op(a): op(b): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3894448473": device or resource busy Sep 6 00:22:48.574434 ignition[833]: ERROR : files: createFilesystemsFiles: createFiles: op(a): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3894448473", trying btrfs: device or resource busy Sep 6 00:22:48.574434 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3894448473" Sep 6 00:22:48.574434 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(c): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3894448473" Sep 6 00:22:48.574434 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(d): [started] unmounting "/mnt/oem3894448473" Sep 6 00:22:48.574434 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(d): [finished] unmounting "/mnt/oem3894448473" Sep 6 00:22:48.574434 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/profile.d/google-cloud-sdk.sh" Sep 6 00:22:48.574434 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 6 00:22:48.574434 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(e): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Sep 6 00:22:48.357651 systemd[1]: mnt-oem3894448473.mount: Deactivated successfully. 
Sep 6 00:22:48.755354 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(e): GET result: OK Sep 6 00:22:49.018605 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 6 00:22:49.018605 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/home/core/install.sh" Sep 6 00:22:49.050573 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/home/core/install.sh" Sep 6 00:22:49.050573 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 6 00:22:49.050573 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 6 00:22:49.050573 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(11): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 6 00:22:49.050573 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(11): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 6 00:22:49.050573 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(12): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 6 00:22:49.050573 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(12): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 6 00:22:49.050573 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(13): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 6 00:22:49.050573 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(13): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 6 00:22:49.050573 ignition[833]: INFO : files: 
createFilesystemsFiles: createFiles: op(14): [started] writing file "/sysroot/etc/systemd/system/oem-gce-enable-oslogin.service" Sep 6 00:22:49.050573 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(14): oem config not found in "/usr/share/oem", looking on oem partition Sep 6 00:22:49.050573 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(15): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4055423515" Sep 6 00:22:49.050573 ignition[833]: CRITICAL : files: createFilesystemsFiles: createFiles: op(14): op(15): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4055423515": device or resource busy Sep 6 00:22:49.050573 ignition[833]: ERROR : files: createFilesystemsFiles: createFiles: op(14): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem4055423515", trying btrfs: device or resource busy Sep 6 00:22:49.296405 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(16): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4055423515" Sep 6 00:22:49.296405 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(16): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4055423515" Sep 6 00:22:49.296405 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(17): [started] unmounting "/mnt/oem4055423515" Sep 6 00:22:49.296405 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(17): [finished] unmounting "/mnt/oem4055423515" Sep 6 00:22:49.296405 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(14): [finished] writing file "/sysroot/etc/systemd/system/oem-gce-enable-oslogin.service" Sep 6 00:22:49.296405 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(18): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 6 00:22:49.296405 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(18): GET 
https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Sep 6 00:22:49.418312 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(18): GET result: OK Sep 6 00:22:49.333681 systemd[1]: mnt-oem4055423515.mount: Deactivated successfully. Sep 6 00:22:49.888774 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(18): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 6 00:22:49.907373 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(19): [started] writing file "/sysroot/etc/systemd/system/oem-gce.service" Sep 6 00:22:49.907373 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(19): oem config not found in "/usr/share/oem", looking on oem partition Sep 6 00:22:49.907373 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(19): op(1a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem187771071" Sep 6 00:22:49.907373 ignition[833]: CRITICAL : files: createFilesystemsFiles: createFiles: op(19): op(1a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem187771071": device or resource busy Sep 6 00:22:49.907373 ignition[833]: ERROR : files: createFilesystemsFiles: createFiles: op(19): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem187771071", trying btrfs: device or resource busy Sep 6 00:22:49.907373 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(19): op(1b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem187771071" Sep 6 00:22:49.907373 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(19): op(1b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem187771071" Sep 6 00:22:49.907373 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(19): op(1c): [started] unmounting "/mnt/oem187771071" Sep 6 00:22:49.907373 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(19): op(1c): [finished] unmounting 
"/mnt/oem187771071" Sep 6 00:22:49.907373 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(19): [finished] writing file "/sysroot/etc/systemd/system/oem-gce.service" Sep 6 00:22:49.907373 ignition[833]: INFO : files: op(1d): [started] processing unit "coreos-metadata-sshkeys@.service" Sep 6 00:22:49.907373 ignition[833]: INFO : files: op(1d): [finished] processing unit "coreos-metadata-sshkeys@.service" Sep 6 00:22:49.907373 ignition[833]: INFO : files: op(1e): [started] processing unit "oem-gce.service" Sep 6 00:22:49.907373 ignition[833]: INFO : files: op(1e): [finished] processing unit "oem-gce.service" Sep 6 00:22:49.907373 ignition[833]: INFO : files: op(1f): [started] processing unit "oem-gce-enable-oslogin.service" Sep 6 00:22:49.907373 ignition[833]: INFO : files: op(1f): [finished] processing unit "oem-gce-enable-oslogin.service" Sep 6 00:22:49.907373 ignition[833]: INFO : files: op(20): [started] processing unit "containerd.service" Sep 6 00:22:50.352467 kernel: audit: type=1130 audit(1757118169.940:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:50.352531 kernel: audit: type=1130 audit(1757118170.034:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:50.352561 kernel: audit: type=1130 audit(1757118170.081:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:50.352594 kernel: audit: type=1131 audit(1757118170.081:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:22:50.352612 kernel: audit: type=1130 audit(1757118170.219:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:50.352627 kernel: audit: type=1131 audit(1757118170.219:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:49.940000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:50.034000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:50.081000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:50.081000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:50.219000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:50.219000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:22:50.352873 ignition[833]: INFO : files: op(20): op(21): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Sep 6 00:22:50.352873 ignition[833]: INFO : files: op(20): op(21): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Sep 6 00:22:50.352873 ignition[833]: INFO : files: op(20): [finished] processing unit "containerd.service" Sep 6 00:22:50.352873 ignition[833]: INFO : files: op(22): [started] processing unit "prepare-helm.service" Sep 6 00:22:50.352873 ignition[833]: INFO : files: op(22): op(23): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 6 00:22:50.352873 ignition[833]: INFO : files: op(22): op(23): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 6 00:22:50.352873 ignition[833]: INFO : files: op(22): [finished] processing unit "prepare-helm.service" Sep 6 00:22:50.352873 ignition[833]: INFO : files: op(24): [started] setting preset to enabled for "prepare-helm.service" Sep 6 00:22:50.352873 ignition[833]: INFO : files: op(24): [finished] setting preset to enabled for "prepare-helm.service" Sep 6 00:22:50.352873 ignition[833]: INFO : files: op(25): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Sep 6 00:22:50.352873 ignition[833]: INFO : files: op(25): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Sep 6 00:22:50.352873 ignition[833]: INFO : files: op(26): [started] setting preset to enabled for "oem-gce.service" Sep 6 00:22:50.352873 ignition[833]: INFO : files: op(26): [finished] setting preset to enabled for "oem-gce.service" Sep 6 00:22:50.352873 ignition[833]: INFO : files: op(27): [started] setting preset to enabled for "oem-gce-enable-oslogin.service" Sep 6 00:22:50.352873 ignition[833]: INFO : 
files: op(27): [finished] setting preset to enabled for "oem-gce-enable-oslogin.service" Sep 6 00:22:50.352873 ignition[833]: INFO : files: createResultFile: createFiles: op(28): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 6 00:22:50.352873 ignition[833]: INFO : files: createResultFile: createFiles: op(28): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 6 00:22:50.352873 ignition[833]: INFO : files: files passed Sep 6 00:22:50.352873 ignition[833]: INFO : Ignition finished successfully Sep 6 00:22:50.359000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:50.492000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:49.912713 systemd[1]: mnt-oem187771071.mount: Deactivated successfully. Sep 6 00:22:49.925977 systemd[1]: Finished ignition-files.service. Sep 6 00:22:50.726527 initrd-setup-root-after-ignition[856]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 6 00:22:49.952385 systemd[1]: Starting initrd-setup-root-after-ignition.service... Sep 6 00:22:49.983578 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Sep 6 00:22:49.984754 systemd[1]: Starting ignition-quench.service... Sep 6 00:22:50.798000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:50.009987 systemd[1]: Finished initrd-setup-root-after-ignition.service. Sep 6 00:22:50.036204 systemd[1]: ignition-quench.service: Deactivated successfully. 
Sep 6 00:22:50.831000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:50.036381 systemd[1]: Finished ignition-quench.service. Sep 6 00:22:50.850000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:50.082793 systemd[1]: Reached target ignition-complete.target. Sep 6 00:22:50.865000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:50.167652 systemd[1]: Starting initrd-parse-etc.service... Sep 6 00:22:50.883412 ignition[871]: INFO : Ignition 2.14.0 Sep 6 00:22:50.883412 ignition[871]: INFO : Stage: umount Sep 6 00:22:50.883412 ignition[871]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 6 00:22:50.883412 ignition[871]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Sep 6 00:22:50.923000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:50.195106 systemd[1]: initrd-parse-etc.service: Deactivated successfully. 
Sep 6 00:22:50.961531 ignition[871]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Sep 6 00:22:50.961531 ignition[871]: INFO : umount: umount passed Sep 6 00:22:50.961531 ignition[871]: INFO : Ignition finished successfully Sep 6 00:22:50.969000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:50.989000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:51.005000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:50.195305 systemd[1]: Finished initrd-parse-etc.service. Sep 6 00:22:51.020000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:50.220777 systemd[1]: Reached target initrd-fs.target. Sep 6 00:22:51.037000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:50.273570 systemd[1]: Reached target initrd.target. Sep 6 00:22:51.052000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:50.307716 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. 
Sep 6 00:22:51.067000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:50.309447 systemd[1]: Starting dracut-pre-pivot.service... Sep 6 00:22:50.341937 systemd[1]: Finished dracut-pre-pivot.service. Sep 6 00:22:51.102000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:50.362247 systemd[1]: Starting initrd-cleanup.service... Sep 6 00:22:50.396517 systemd[1]: Stopped target nss-lookup.target. Sep 6 00:22:50.415741 systemd[1]: Stopped target remote-cryptsetup.target. Sep 6 00:22:50.435781 systemd[1]: Stopped target timers.target. Sep 6 00:22:50.453718 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 6 00:22:50.453936 systemd[1]: Stopped dracut-pre-pivot.service. Sep 6 00:22:50.494000 systemd[1]: Stopped target initrd.target. Sep 6 00:22:51.240563 kernel: kauditd_printk_skb: 15 callbacks suppressed Sep 6 00:22:51.240603 kernel: audit: type=1131 audit(1757118171.206:58): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:51.206000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:50.524801 systemd[1]: Stopped target basic.target. Sep 6 00:22:51.275545 kernel: audit: type=1131 audit(1757118171.247:59): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:22:51.247000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:50.557822 systemd[1]: Stopped target ignition-complete.target. Sep 6 00:22:50.590848 systemd[1]: Stopped target ignition-diskful.target. Sep 6 00:22:50.603840 systemd[1]: Stopped target initrd-root-device.target. Sep 6 00:22:50.643755 systemd[1]: Stopped target remote-fs.target. Sep 6 00:22:51.352614 kernel: audit: type=1131 audit(1757118171.323:60): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:51.323000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:50.679749 systemd[1]: Stopped target remote-fs-pre.target. Sep 6 00:22:51.390516 kernel: audit: type=1131 audit(1757118171.360:61): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:51.360000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:50.703702 systemd[1]: Stopped target sysinit.target. Sep 6 00:22:51.456492 kernel: audit: type=1130 audit(1757118171.398:62): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:22:51.456535 kernel: audit: type=1131 audit(1757118171.398:63): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:51.456552 kernel: audit: type=1334 audit(1757118171.400:64): prog-id=6 op=UNLOAD Sep 6 00:22:51.398000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:51.398000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:51.400000 audit: BPF prog-id=6 op=UNLOAD Sep 6 00:22:50.719691 systemd[1]: Stopped target local-fs.target. Sep 6 00:22:50.734705 systemd[1]: Stopped target local-fs-pre.target. Sep 6 00:22:50.757709 systemd[1]: Stopped target swap.target. Sep 6 00:22:51.523424 kernel: audit: type=1131 audit(1757118171.494:65): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:51.494000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:50.774628 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 6 00:22:51.560441 kernel: audit: type=1131 audit(1757118171.530:66): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:22:51.530000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:50.774861 systemd[1]: Stopped dracut-pre-mount.service. Sep 6 00:22:51.597389 kernel: audit: type=1131 audit(1757118171.568:67): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:51.568000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:50.799840 systemd[1]: Stopped target cryptsetup.target. Sep 6 00:22:50.814669 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 6 00:22:50.814900 systemd[1]: Stopped dracut-initqueue.service. Sep 6 00:22:51.626000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:50.833182 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 6 00:22:50.833428 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Sep 6 00:22:50.851844 systemd[1]: ignition-files.service: Deactivated successfully. Sep 6 00:22:51.676000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:50.852055 systemd[1]: Stopped ignition-files.service. Sep 6 00:22:51.693000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:22:50.868443 systemd[1]: Stopping ignition-mount.service... Sep 6 00:22:51.709000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:50.905405 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 6 00:22:50.905715 systemd[1]: Stopped kmod-static-nodes.service. Sep 6 00:22:50.926456 systemd[1]: Stopping sysroot-boot.service... Sep 6 00:22:51.751000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:50.952399 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 6 00:22:51.766000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:50.952733 systemd[1]: Stopped systemd-udev-trigger.service. Sep 6 00:22:51.785000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:51.785000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:50.970790 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 6 00:22:50.971004 systemd[1]: Stopped dracut-pre-trigger.service. Sep 6 00:22:51.823000 audit: BPF prog-id=8 op=UNLOAD Sep 6 00:22:51.823000 audit: BPF prog-id=7 op=UNLOAD Sep 6 00:22:50.994606 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Sep 6 00:22:51.827000 audit: BPF prog-id=5 op=UNLOAD Sep 6 00:22:51.827000 audit: BPF prog-id=4 op=UNLOAD Sep 6 00:22:51.827000 audit: BPF prog-id=3 op=UNLOAD Sep 6 00:22:50.995964 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 6 00:22:51.859767 systemd-journald[189]: Received SIGTERM from PID 1 (n/a). Sep 6 00:22:50.996097 systemd[1]: Stopped ignition-mount.service. Sep 6 00:22:51.868453 iscsid[699]: iscsid shutting down. Sep 6 00:22:51.007235 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 6 00:22:51.007382 systemd[1]: Stopped sysroot-boot.service. Sep 6 00:22:51.022333 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 6 00:22:51.022503 systemd[1]: Stopped ignition-disks.service. Sep 6 00:22:51.038571 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 6 00:22:51.038666 systemd[1]: Stopped ignition-kargs.service. Sep 6 00:22:51.053554 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 6 00:22:51.053767 systemd[1]: Stopped ignition-fetch.service. Sep 6 00:22:51.068765 systemd[1]: Stopped target network.target. Sep 6 00:22:51.087491 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 6 00:22:51.087606 systemd[1]: Stopped ignition-fetch-offline.service. Sep 6 00:22:51.103615 systemd[1]: Stopped target paths.target. Sep 6 00:22:51.117533 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 6 00:22:51.121316 systemd[1]: Stopped systemd-ask-password-console.path. Sep 6 00:22:51.133477 systemd[1]: Stopped target slices.target. Sep 6 00:22:51.146443 systemd[1]: Stopped target sockets.target. Sep 6 00:22:51.161517 systemd[1]: iscsid.socket: Deactivated successfully. Sep 6 00:22:51.161588 systemd[1]: Closed iscsid.socket. Sep 6 00:22:51.175569 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 6 00:22:51.175633 systemd[1]: Closed iscsiuio.socket. Sep 6 00:22:51.191512 systemd[1]: ignition-setup.service: Deactivated successfully. 
Sep 6 00:22:51.191627 systemd[1]: Stopped ignition-setup.service. Sep 6 00:22:51.234608 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 6 00:22:51.234711 systemd[1]: Stopped initrd-setup-root.service. Sep 6 00:22:51.249079 systemd[1]: Stopping systemd-networkd.service... Sep 6 00:22:51.271386 systemd-networkd[688]: eth0: DHCPv6 lease lost Sep 6 00:22:51.868000 audit: BPF prog-id=9 op=UNLOAD Sep 6 00:22:51.282677 systemd[1]: Stopping systemd-resolved.service... Sep 6 00:22:51.307156 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 6 00:22:51.307367 systemd[1]: Stopped systemd-resolved.service. Sep 6 00:22:51.346758 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 6 00:22:51.346924 systemd[1]: Stopped systemd-networkd.service. Sep 6 00:22:51.362082 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 6 00:22:51.362265 systemd[1]: Finished initrd-cleanup.service. Sep 6 00:22:51.401598 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 6 00:22:51.401670 systemd[1]: Closed systemd-networkd.socket. Sep 6 00:22:51.466607 systemd[1]: Stopping network-cleanup.service... Sep 6 00:22:51.480334 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 6 00:22:51.480478 systemd[1]: Stopped parse-ip-for-networkd.service. Sep 6 00:22:51.495566 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 6 00:22:51.495684 systemd[1]: Stopped systemd-sysctl.service. Sep 6 00:22:51.553308 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 6 00:22:51.553392 systemd[1]: Stopped systemd-modules-load.service. Sep 6 00:22:51.569831 systemd[1]: Stopping systemd-udevd.service... Sep 6 00:22:51.606287 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 6 00:22:51.607001 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 6 00:22:51.607304 systemd[1]: Stopped systemd-udevd.service. 
Sep 6 00:22:51.629398 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 6 00:22:51.629478 systemd[1]: Closed systemd-udevd-control.socket. Sep 6 00:22:51.642479 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 6 00:22:51.642558 systemd[1]: Closed systemd-udevd-kernel.socket. Sep 6 00:22:51.660426 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 6 00:22:51.660532 systemd[1]: Stopped dracut-pre-udev.service. Sep 6 00:22:51.677534 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 6 00:22:51.677620 systemd[1]: Stopped dracut-cmdline.service. Sep 6 00:22:51.694513 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 6 00:22:51.694605 systemd[1]: Stopped dracut-cmdline-ask.service. Sep 6 00:22:51.711958 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Sep 6 00:22:51.736377 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 6 00:22:51.736609 systemd[1]: Stopped systemd-vconsole-setup.service. Sep 6 00:22:51.753146 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 6 00:22:51.753324 systemd[1]: Stopped network-cleanup.service. Sep 6 00:22:51.767879 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 6 00:22:51.768000 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Sep 6 00:22:51.786782 systemd[1]: Reached target initrd-switch-root.target. Sep 6 00:22:51.804727 systemd[1]: Starting initrd-switch-root.service... Sep 6 00:22:51.822983 systemd[1]: Switching root. Sep 6 00:22:51.871758 systemd-journald[189]: Journal stopped Sep 6 00:22:56.569547 kernel: SELinux: Class mctp_socket not defined in policy. Sep 6 00:22:56.569669 kernel: SELinux: Class anon_inode not defined in policy. 
Sep 6 00:22:56.569704 kernel: SELinux: the above unknown classes and permissions will be allowed Sep 6 00:22:56.569747 kernel: SELinux: policy capability network_peer_controls=1 Sep 6 00:22:56.569771 kernel: SELinux: policy capability open_perms=1 Sep 6 00:22:56.569799 kernel: SELinux: policy capability extended_socket_class=1 Sep 6 00:22:56.569828 kernel: SELinux: policy capability always_check_network=0 Sep 6 00:22:56.569851 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 6 00:22:56.569874 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 6 00:22:56.569897 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 6 00:22:56.569923 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 6 00:22:56.569950 systemd[1]: Successfully loaded SELinux policy in 115.497ms. Sep 6 00:22:56.570001 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.963ms. Sep 6 00:22:56.570027 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Sep 6 00:22:56.570052 systemd[1]: Detected virtualization kvm. Sep 6 00:22:56.570080 systemd[1]: Detected architecture x86-64. Sep 6 00:22:56.570104 systemd[1]: Detected first boot. Sep 6 00:22:56.570128 systemd[1]: Initializing machine ID from VM UUID. Sep 6 00:22:56.570152 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Sep 6 00:22:56.570201 systemd[1]: Populated /etc with preset unit settings. Sep 6 00:22:56.570232 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Sep 6 00:22:56.570265 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 6 00:22:56.570296 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 6 00:22:56.570327 systemd[1]: Queued start job for default target multi-user.target. Sep 6 00:22:56.570351 systemd[1]: Unnecessary job was removed for dev-sda6.device. Sep 6 00:22:56.570376 systemd[1]: Created slice system-addon\x2dconfig.slice. Sep 6 00:22:56.570400 systemd[1]: Created slice system-addon\x2drun.slice. Sep 6 00:22:56.570424 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Sep 6 00:22:56.570457 systemd[1]: Created slice system-getty.slice. Sep 6 00:22:56.570481 systemd[1]: Created slice system-modprobe.slice. Sep 6 00:22:56.570509 systemd[1]: Created slice system-serial\x2dgetty.slice. Sep 6 00:22:56.570537 systemd[1]: Created slice system-system\x2dcloudinit.slice. Sep 6 00:22:56.570561 systemd[1]: Created slice system-systemd\x2dfsck.slice. Sep 6 00:22:56.570585 systemd[1]: Created slice user.slice. Sep 6 00:22:56.570609 systemd[1]: Started systemd-ask-password-console.path. Sep 6 00:22:56.570632 systemd[1]: Started systemd-ask-password-wall.path. Sep 6 00:22:56.570657 systemd[1]: Set up automount boot.automount. Sep 6 00:22:56.570681 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Sep 6 00:22:56.570705 systemd[1]: Reached target integritysetup.target. Sep 6 00:22:56.570733 systemd[1]: Reached target remote-cryptsetup.target. Sep 6 00:22:56.570763 systemd[1]: Reached target remote-fs.target. Sep 6 00:22:56.570788 systemd[1]: Reached target slices.target. Sep 6 00:22:56.570812 systemd[1]: Reached target swap.target. Sep 6 00:22:56.570835 systemd[1]: Reached target torcx.target. 
Sep 6 00:22:56.570859 systemd[1]: Reached target veritysetup.target. Sep 6 00:22:56.570886 systemd[1]: Listening on systemd-coredump.socket. Sep 6 00:22:56.570910 systemd[1]: Listening on systemd-initctl.socket. Sep 6 00:22:56.570938 systemd[1]: Listening on systemd-journald-audit.socket. Sep 6 00:22:56.570965 systemd[1]: Listening on systemd-journald-dev-log.socket. Sep 6 00:22:56.570996 systemd[1]: Listening on systemd-journald.socket. Sep 6 00:22:56.571022 systemd[1]: Listening on systemd-networkd.socket. Sep 6 00:22:56.571047 systemd[1]: Listening on systemd-udevd-control.socket. Sep 6 00:22:56.571073 systemd[1]: Listening on systemd-udevd-kernel.socket. Sep 6 00:22:56.571097 systemd[1]: Listening on systemd-userdbd.socket. Sep 6 00:22:56.571128 systemd[1]: Mounting dev-hugepages.mount... Sep 6 00:22:56.571156 systemd[1]: Mounting dev-mqueue.mount... Sep 6 00:22:56.599579 systemd[1]: Mounting media.mount... Sep 6 00:22:56.599617 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 6 00:22:56.599642 systemd[1]: Mounting sys-kernel-debug.mount... Sep 6 00:22:56.599671 systemd[1]: Mounting sys-kernel-tracing.mount... Sep 6 00:22:56.599695 systemd[1]: Mounting tmp.mount... Sep 6 00:22:56.599719 systemd[1]: Starting flatcar-tmpfiles.service... Sep 6 00:22:56.599744 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 6 00:22:56.599768 systemd[1]: Starting kmod-static-nodes.service... Sep 6 00:22:56.599792 systemd[1]: Starting modprobe@configfs.service... Sep 6 00:22:56.599816 systemd[1]: Starting modprobe@dm_mod.service... Sep 6 00:22:56.599844 systemd[1]: Starting modprobe@drm.service... Sep 6 00:22:56.599869 systemd[1]: Starting modprobe@efi_pstore.service... Sep 6 00:22:56.599894 systemd[1]: Starting modprobe@fuse.service... Sep 6 00:22:56.599918 systemd[1]: Starting modprobe@loop.service... 
Sep 6 00:22:56.599942 kernel: fuse: init (API version 7.34) Sep 6 00:22:56.599976 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 6 00:22:56.600001 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Sep 6 00:22:56.600025 kernel: loop: module loaded Sep 6 00:22:56.600050 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Sep 6 00:22:56.600078 systemd[1]: Starting systemd-journald.service... Sep 6 00:22:56.600101 systemd[1]: Starting systemd-modules-load.service... Sep 6 00:22:56.600126 kernel: kauditd_printk_skb: 27 callbacks suppressed Sep 6 00:22:56.600153 kernel: audit: type=1305 audit(1757118176.565:88): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Sep 6 00:22:56.600196 systemd[1]: Starting systemd-network-generator.service... Sep 6 00:22:56.600223 kernel: audit: type=1300 audit(1757118176.565:88): arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7ffffa381b40 a2=4000 a3=7ffffa381bdc items=0 ppid=1 pid=1034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:56.600255 systemd-journald[1034]: Journal started Sep 6 00:22:56.600359 systemd-journald[1034]: Runtime Journal (/run/log/journal/081536fed85ea05d0a4bc94f94089e24) is 8.0M, max 148.8M, 140.8M free. 
Sep 6 00:22:56.132000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Sep 6 00:22:56.132000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Sep 6 00:22:56.565000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Sep 6 00:22:56.565000 audit[1034]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7ffffa381b40 a2=4000 a3=7ffffa381bdc items=0 ppid=1 pid=1034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:56.565000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Sep 6 00:22:56.638937 kernel: audit: type=1327 audit(1757118176.565:88): proctitle="/usr/lib/systemd/systemd-journald" Sep 6 00:22:56.659197 systemd[1]: Starting systemd-remount-fs.service... Sep 6 00:22:56.676220 systemd[1]: Starting systemd-udev-trigger.service... Sep 6 00:22:56.702186 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 6 00:22:56.702288 systemd[1]: Started systemd-journald.service. Sep 6 00:22:56.736383 kernel: audit: type=1130 audit(1757118176.712:89): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:56.712000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:22:56.715469 systemd[1]: Mounted dev-hugepages.mount. Sep 6 00:22:56.743567 systemd[1]: Mounted dev-mqueue.mount. Sep 6 00:22:56.750528 systemd[1]: Mounted media.mount. Sep 6 00:22:56.757560 systemd[1]: Mounted sys-kernel-debug.mount. Sep 6 00:22:56.766439 systemd[1]: Mounted sys-kernel-tracing.mount. Sep 6 00:22:56.775582 systemd[1]: Mounted tmp.mount. Sep 6 00:22:56.783784 systemd[1]: Finished flatcar-tmpfiles.service. Sep 6 00:22:56.791000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:56.793134 systemd[1]: Finished kmod-static-nodes.service. Sep 6 00:22:56.815218 kernel: audit: type=1130 audit(1757118176.791:90): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:56.822000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:56.824135 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 6 00:22:56.824491 systemd[1]: Finished modprobe@configfs.service. Sep 6 00:22:56.846210 kernel: audit: type=1130 audit(1757118176.822:91): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:56.853000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:22:56.855100 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 6 00:22:56.855423 systemd[1]: Finished modprobe@dm_mod.service. Sep 6 00:22:56.899199 kernel: audit: type=1130 audit(1757118176.853:92): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:56.899322 kernel: audit: type=1131 audit(1757118176.853:93): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:56.853000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:56.906000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:56.908035 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 6 00:22:56.908344 systemd[1]: Finished modprobe@drm.service. Sep 6 00:22:56.906000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:56.951571 kernel: audit: type=1130 audit(1757118176.906:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:22:56.951679 kernel: audit: type=1131 audit(1757118176.906:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:56.959000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:56.959000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:56.960881 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 6 00:22:56.961190 systemd[1]: Finished modprobe@efi_pstore.service. Sep 6 00:22:56.969000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:56.969000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:56.970966 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 6 00:22:56.971291 systemd[1]: Finished modprobe@fuse.service. Sep 6 00:22:56.978000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:56.978000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:22:56.979812 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 6 00:22:56.980108 systemd[1]: Finished modprobe@loop.service. Sep 6 00:22:56.987000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:56.987000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:56.988900 systemd[1]: Finished systemd-modules-load.service. Sep 6 00:22:56.996000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:56.997820 systemd[1]: Finished systemd-network-generator.service. Sep 6 00:22:57.005000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:57.006874 systemd[1]: Finished systemd-remount-fs.service. Sep 6 00:22:57.014000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:57.015928 systemd[1]: Finished systemd-udev-trigger.service. Sep 6 00:22:57.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:57.025089 systemd[1]: Reached target network-pre.target. 
Sep 6 00:22:57.036098 systemd[1]: Mounting sys-fs-fuse-connections.mount... Sep 6 00:22:57.047257 systemd[1]: Mounting sys-kernel-config.mount... Sep 6 00:22:57.054352 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 6 00:22:57.058086 systemd[1]: Starting systemd-hwdb-update.service... Sep 6 00:22:57.067591 systemd[1]: Starting systemd-journal-flush.service... Sep 6 00:22:57.075385 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 6 00:22:57.077498 systemd[1]: Starting systemd-random-seed.service... Sep 6 00:22:57.084976 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 6 00:22:57.087231 systemd[1]: Starting systemd-sysctl.service... Sep 6 00:22:57.091487 systemd-journald[1034]: Time spent on flushing to /var/log/journal/081536fed85ea05d0a4bc94f94089e24 is 77.365ms for 1107 entries. Sep 6 00:22:57.091487 systemd-journald[1034]: System Journal (/var/log/journal/081536fed85ea05d0a4bc94f94089e24) is 8.0M, max 584.8M, 576.8M free. Sep 6 00:22:57.194975 systemd-journald[1034]: Received client request to flush runtime journal. Sep 6 00:22:57.151000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:57.160000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:57.105492 systemd[1]: Starting systemd-sysusers.service... Sep 6 00:22:57.114522 systemd[1]: Starting systemd-udev-settle.service... Sep 6 00:22:57.125644 systemd[1]: Mounted sys-fs-fuse-connections.mount. 
Sep 6 00:22:57.196095 udevadm[1057]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Sep 6 00:22:57.134577 systemd[1]: Mounted sys-kernel-config.mount. Sep 6 00:22:57.143875 systemd[1]: Finished systemd-random-seed.service. Sep 6 00:22:57.152892 systemd[1]: Finished systemd-sysctl.service. Sep 6 00:22:57.165056 systemd[1]: Reached target first-boot-complete.target. Sep 6 00:22:57.195147 systemd[1]: Finished systemd-sysusers.service. Sep 6 00:22:57.202000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:57.204155 systemd[1]: Finished systemd-journal-flush.service. Sep 6 00:22:57.211000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:57.214840 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Sep 6 00:22:57.275991 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Sep 6 00:22:57.283000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:57.830523 systemd[1]: Finished systemd-hwdb-update.service. Sep 6 00:22:57.838000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:57.841299 systemd[1]: Starting systemd-udevd.service... Sep 6 00:22:57.867418 systemd-udevd[1066]: Using default interface naming scheme 'v252'. 
Sep 6 00:22:57.923931 systemd[1]: Started systemd-udevd.service. Sep 6 00:22:57.931000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:57.936602 systemd[1]: Starting systemd-networkd.service... Sep 6 00:22:57.951975 systemd[1]: Starting systemd-userdbd.service... Sep 6 00:22:58.003471 systemd[1]: Found device dev-ttyS0.device. Sep 6 00:22:58.059493 systemd[1]: Started systemd-userdbd.service. Sep 6 00:22:58.067000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:58.134188 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Sep 6 00:22:58.217190 kernel: ACPI: button: Power Button [PWRF] Sep 6 00:22:58.217868 systemd-networkd[1079]: lo: Link UP Sep 6 00:22:58.217880 systemd-networkd[1079]: lo: Gained carrier Sep 6 00:22:58.218838 systemd-networkd[1079]: Enumeration completed Sep 6 00:22:58.219122 systemd[1]: Started systemd-networkd.service. Sep 6 00:22:58.220803 systemd-networkd[1079]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 6 00:22:58.223245 systemd-networkd[1079]: eth0: Link UP Sep 6 00:22:58.223264 systemd-networkd[1079]: eth0: Gained carrier Sep 6 00:22:58.226000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:22:58.236408 systemd-networkd[1079]: eth0: Overlong DHCP hostname received, shortened from 'ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d.c.flatcar-212911.internal' to 'ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d' Sep 6 00:22:58.236443 systemd-networkd[1079]: eth0: DHCPv4 address 10.128.0.81/32, gateway 10.128.0.1 acquired from 169.254.169.254 Sep 6 00:22:58.251208 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3 Sep 6 00:22:58.228000 audit[1069]: AVC avc: denied { confidentiality } for pid=1069 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Sep 6 00:22:58.228000 audit[1069]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=5618ac64f5f0 a1=338ec a2=7f4016519bc5 a3=5 items=110 ppid=1066 pid=1069 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:58.228000 audit: CWD cwd="/" Sep 6 00:22:58.228000 audit: PATH item=0 name=(null) inode=40 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:58.228000 audit: PATH item=1 name=(null) inode=14784 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:58.228000 audit: PATH item=2 name=(null) inode=14784 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:58.228000 audit: PATH item=3 name=(null) inode=14785 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 
00:22:58.228000 audit: PATH item=4 name=(null) inode=14784 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:58.228000 audit: PATH item=5 name=(null) inode=14786 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:58.228000 audit: PATH item=6 name=(null) inode=14784 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:58.228000 audit: PATH item=7 name=(null) inode=14787 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:58.228000 audit: PATH item=8 name=(null) inode=14787 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:58.228000 audit: PATH item=9 name=(null) inode=14788 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:58.228000 audit: PATH item=10 name=(null) inode=14787 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:58.228000 audit: PATH item=11 name=(null) inode=14789 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:58.228000 audit: PATH item=12 name=(null) inode=14787 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:58.228000 audit: PATH item=13 name=(null) 
inode=14790 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:58.228000 audit: PATH item=14 name=(null) inode=14787 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:58.228000 audit: PATH item=15 name=(null) inode=14791 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:58.228000 audit: PATH item=16 name=(null) inode=14787 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:58.228000 audit: PATH item=17 name=(null) inode=14792 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:58.228000 audit: PATH item=18 name=(null) inode=14784 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:58.228000 audit: PATH item=19 name=(null) inode=14793 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:58.228000 audit: PATH item=20 name=(null) inode=14793 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:58.228000 audit: PATH item=21 name=(null) inode=14794 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:58.228000 audit: PATH item=22 name=(null) inode=14793 dev=00:0b mode=040750 ouid=0 ogid=0 
rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:58.228000 audit: PATH item=23 name=(null) inode=14795 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:58.228000 audit: PATH item=24 name=(null) inode=14793 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:58.228000 audit: PATH item=25 name=(null) inode=14796 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:58.228000 audit: PATH item=26 name=(null) inode=14793 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:58.228000 audit: PATH item=27 name=(null) inode=14797 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:58.228000 audit: PATH item=28 name=(null) inode=14793 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:58.228000 audit: PATH item=29 name=(null) inode=14798 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:58.228000 audit: PATH item=30 name=(null) inode=14784 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:58.228000 audit: PATH item=31 name=(null) inode=14799 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:58.228000 audit: PATH item=32 name=(null) inode=14799 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:58.228000 audit: PATH item=33 name=(null) inode=14800 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:58.228000 audit: PATH item=34 name=(null) inode=14799 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:58.228000 audit: PATH item=35 name=(null) inode=14801 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:58.228000 audit: PATH item=36 name=(null) inode=14799 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:58.228000 audit: PATH item=37 name=(null) inode=14802 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:58.228000 audit: PATH item=38 name=(null) inode=14799 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:58.228000 audit: PATH item=39 name=(null) inode=14803 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:58.228000 audit: PATH item=40 name=(null) inode=14799 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Sep 6 00:22:58.228000 audit: PATH item=41 name=(null) inode=14804 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:58.228000 audit: PATH item=42 name=(null) inode=14784 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:58.228000 audit: PATH item=43 name=(null) inode=14805 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:58.228000 audit: PATH item=44 name=(null) inode=14805 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:58.228000 audit: PATH item=45 name=(null) inode=14806 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:58.228000 audit: PATH item=46 name=(null) inode=14805 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:58.228000 audit: PATH item=47 name=(null) inode=14807 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:58.228000 audit: PATH item=48 name=(null) inode=14805 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:58.228000 audit: PATH item=49 name=(null) inode=14808 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:58.228000 
audit: PATH item=50 name=(null) inode=14805 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:58.228000 audit: PATH item=51 name=(null) inode=14809 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:58.228000 audit: PATH item=52 name=(null) inode=14805 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:58.228000 audit: PATH item=53 name=(null) inode=14810 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:58.228000 audit: PATH item=54 name=(null) inode=40 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:58.228000 audit: PATH item=55 name=(null) inode=14811 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:58.228000 audit: PATH item=56 name=(null) inode=14811 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:58.228000 audit: PATH item=57 name=(null) inode=14812 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:58.228000 audit: PATH item=58 name=(null) inode=14811 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:58.228000 audit: PATH item=59 name=(null) inode=14813 dev=00:0b 
mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:58.228000 audit: PATH item=60 name=(null) inode=14811 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:58.228000 audit: PATH item=61 name=(null) inode=14814 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:58.228000 audit: PATH item=62 name=(null) inode=14814 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:58.228000 audit: PATH item=63 name=(null) inode=14815 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:58.228000 audit: PATH item=64 name=(null) inode=14814 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:58.228000 audit: PATH item=65 name=(null) inode=14816 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:58.228000 audit: PATH item=66 name=(null) inode=14814 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:58.228000 audit: PATH item=67 name=(null) inode=14817 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:58.228000 audit: PATH item=68 name=(null) inode=14814 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:58.228000 audit: PATH item=69 name=(null) inode=14818 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:58.228000 audit: PATH item=70 name=(null) inode=14814 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:58.228000 audit: PATH item=71 name=(null) inode=14819 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:58.228000 audit: PATH item=72 name=(null) inode=14811 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:58.228000 audit: PATH item=73 name=(null) inode=14820 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:58.266244 kernel: EDAC MC: Ver: 3.0.0 Sep 6 00:22:58.228000 audit: PATH item=74 name=(null) inode=14820 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:58.228000 audit: PATH item=75 name=(null) inode=14821 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:58.228000 audit: PATH item=76 name=(null) inode=14820 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:58.228000 audit: PATH item=77 name=(null) inode=14822 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:58.228000 audit: PATH item=78 name=(null) inode=14820 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:58.228000 audit: PATH item=79 name=(null) inode=14823 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:58.228000 audit: PATH item=80 name=(null) inode=14820 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:58.228000 audit: PATH item=81 name=(null) inode=14824 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:58.228000 audit: PATH item=82 name=(null) inode=14820 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:58.228000 audit: PATH item=83 name=(null) inode=14825 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:58.228000 audit: PATH item=84 name=(null) inode=14811 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:58.228000 audit: PATH item=85 name=(null) inode=14826 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:58.228000 audit: PATH item=86 name=(null) inode=14826 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:58.228000 audit: PATH item=87 name=(null) inode=14827 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:58.228000 audit: PATH item=88 name=(null) inode=14826 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:58.228000 audit: PATH item=89 name=(null) inode=14828 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:58.228000 audit: PATH item=90 name=(null) inode=14826 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:58.228000 audit: PATH item=91 name=(null) inode=14829 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:58.228000 audit: PATH item=92 name=(null) inode=14826 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:58.228000 audit: PATH item=93 name=(null) inode=14830 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:58.228000 audit: PATH item=94 name=(null) inode=14826 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:58.228000 audit: PATH item=95 name=(null) inode=14831 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 Sep 6 00:22:58.228000 audit: PATH item=96 name=(null) inode=14811 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:58.228000 audit: PATH item=97 name=(null) inode=14832 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:58.228000 audit: PATH item=98 name=(null) inode=14832 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:58.228000 audit: PATH item=99 name=(null) inode=14833 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:58.228000 audit: PATH item=100 name=(null) inode=14832 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:58.228000 audit: PATH item=101 name=(null) inode=14834 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:58.228000 audit: PATH item=102 name=(null) inode=14832 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:58.228000 audit: PATH item=103 name=(null) inode=14835 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:58.228000 audit: PATH item=104 name=(null) inode=14832 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:58.228000 audit: PATH 
item=105 name=(null) inode=14836 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:58.228000 audit: PATH item=106 name=(null) inode=14832 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:58.228000 audit: PATH item=107 name=(null) inode=14837 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:58.228000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:58.228000 audit: PATH item=109 name=(null) inode=14838 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:58.228000 audit: PROCTITLE proctitle="(udev-worker)" Sep 6 00:22:58.272195 kernel: ACPI: button: Sleep Button [SLPF] Sep 6 00:22:58.305195 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Sep 6 00:22:58.363267 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Sep 6 00:22:58.370220 kernel: mousedev: PS/2 mouse device common for all mice Sep 6 00:22:58.398567 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Sep 6 00:22:58.407955 systemd[1]: Finished systemd-udev-settle.service. Sep 6 00:22:58.415000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:58.418421 systemd[1]: Starting lvm2-activation-early.service... 
Sep 6 00:22:58.446719 lvm[1104]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 6 00:22:58.480895 systemd[1]: Finished lvm2-activation-early.service. Sep 6 00:22:58.488000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:58.489780 systemd[1]: Reached target cryptsetup.target. Sep 6 00:22:58.500134 systemd[1]: Starting lvm2-activation.service... Sep 6 00:22:58.507522 lvm[1106]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 6 00:22:58.533872 systemd[1]: Finished lvm2-activation.service. Sep 6 00:22:58.541000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:58.542776 systemd[1]: Reached target local-fs-pre.target. Sep 6 00:22:58.551357 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 6 00:22:58.551412 systemd[1]: Reached target local-fs.target. Sep 6 00:22:58.560394 systemd[1]: Reached target machines.target. Sep 6 00:22:58.571181 systemd[1]: Starting ldconfig.service... Sep 6 00:22:58.579440 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 6 00:22:58.579546 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 00:22:58.581433 systemd[1]: Starting systemd-boot-update.service... Sep 6 00:22:58.589326 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Sep 6 00:22:58.601271 systemd[1]: Starting systemd-machine-id-commit.service... 
Sep 6 00:22:58.603900 systemd[1]: Starting systemd-sysext.service... Sep 6 00:22:58.604705 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1109 (bootctl) Sep 6 00:22:58.606966 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Sep 6 00:22:58.623397 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Sep 6 00:22:58.632000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:58.641749 systemd[1]: Unmounting usr-share-oem.mount... Sep 6 00:22:58.651254 systemd[1]: usr-share-oem.mount: Deactivated successfully. Sep 6 00:22:58.651588 systemd[1]: Unmounted usr-share-oem.mount. Sep 6 00:22:58.682632 kernel: loop0: detected capacity change from 0 to 221472 Sep 6 00:22:58.790699 systemd-fsck[1121]: fsck.fat 4.2 (2021-01-31) Sep 6 00:22:58.790699 systemd-fsck[1121]: /dev/sda1: 790 files, 120761/258078 clusters Sep 6 00:22:58.793537 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Sep 6 00:22:58.802000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:58.805643 systemd[1]: Mounting boot.mount... Sep 6 00:22:58.824862 systemd[1]: Mounted boot.mount. Sep 6 00:22:58.857481 systemd[1]: Finished systemd-boot-update.service. Sep 6 00:22:58.866000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:22:59.026526 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 6 00:22:59.088200 kernel: loop1: detected capacity change from 0 to 221472 Sep 6 00:22:59.128026 (sd-sysext)[1131]: Using extensions 'kubernetes'. Sep 6 00:22:59.128750 (sd-sysext)[1131]: Merged extensions into '/usr'. Sep 6 00:22:59.160153 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 6 00:22:59.162787 systemd[1]: Mounting usr-share-oem.mount... Sep 6 00:22:59.170737 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 6 00:22:59.173900 systemd[1]: Starting modprobe@dm_mod.service... Sep 6 00:22:59.185142 systemd[1]: Starting modprobe@efi_pstore.service... Sep 6 00:22:59.194930 systemd[1]: Starting modprobe@loop.service... Sep 6 00:22:59.202571 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 6 00:22:59.202861 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 00:22:59.203096 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 6 00:22:59.210003 systemd[1]: Mounted usr-share-oem.mount. Sep 6 00:22:59.220993 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 6 00:22:59.222627 systemd[1]: Finished systemd-machine-id-commit.service. Sep 6 00:22:59.230000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:59.232276 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 6 00:22:59.232503 systemd[1]: Finished modprobe@dm_mod.service. 
Sep 6 00:22:59.240000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:59.240000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:59.242086 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 6 00:22:59.242370 systemd[1]: Finished modprobe@efi_pstore.service. Sep 6 00:22:59.250000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:59.250000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:59.252126 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 6 00:22:59.252426 systemd[1]: Finished modprobe@loop.service. Sep 6 00:22:59.260000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:59.260000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:59.262317 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Sep 6 00:22:59.262470 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 6 00:22:59.264192 systemd[1]: Finished systemd-sysext.service. Sep 6 00:22:59.271000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:59.275075 systemd[1]: Starting ensure-sysext.service... Sep 6 00:22:59.284184 systemd[1]: Starting systemd-tmpfiles-setup.service... Sep 6 00:22:59.295872 systemd[1]: Reloading. Sep 6 00:22:59.308421 systemd-tmpfiles[1146]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Sep 6 00:22:59.313050 systemd-tmpfiles[1146]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 6 00:22:59.317894 systemd-tmpfiles[1146]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 6 00:22:59.324981 ldconfig[1108]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 6 00:22:59.411467 /usr/lib/systemd/system-generators/torcx-generator[1167]: time="2025-09-06T00:22:59Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 6 00:22:59.411520 /usr/lib/systemd/system-generators/torcx-generator[1167]: time="2025-09-06T00:22:59Z" level=info msg="torcx already run" Sep 6 00:22:59.614219 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 6 00:22:59.614250 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. 
Support for MemoryLimit= will be removed soon. Sep 6 00:22:59.639120 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 6 00:22:59.729782 systemd[1]: Finished ldconfig.service. Sep 6 00:22:59.737000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:59.740200 systemd[1]: Finished systemd-tmpfiles-setup.service. Sep 6 00:22:59.748000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:59.755497 systemd[1]: Starting audit-rules.service... Sep 6 00:22:59.764459 systemd[1]: Starting clean-ca-certificates.service... Sep 6 00:22:59.773951 systemd[1]: Starting oem-gce-enable-oslogin.service... Sep 6 00:22:59.784998 systemd[1]: Starting systemd-journal-catalog-update.service... Sep 6 00:22:59.796652 systemd[1]: Starting systemd-resolved.service... Sep 6 00:22:59.807992 systemd[1]: Starting systemd-timesyncd.service... Sep 6 00:22:59.818040 systemd[1]: Starting systemd-update-utmp.service... Sep 6 00:22:59.828464 systemd[1]: Finished clean-ca-certificates.service. Sep 6 00:22:59.835000 audit[1243]: SYSTEM_BOOT pid=1243 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Sep 6 00:22:59.837000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:22:59.839334 systemd[1]: oem-gce-enable-oslogin.service: Deactivated successfully. Sep 6 00:22:59.840354 systemd[1]: Finished oem-gce-enable-oslogin.service. Sep 6 00:22:59.848000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=oem-gce-enable-oslogin comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:59.848000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=oem-gce-enable-oslogin comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:59.861067 systemd[1]: Finished systemd-journal-catalog-update.service. Sep 6 00:22:59.868000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Sep 6 00:22:59.868000 audit[1250]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd71e95c60 a2=420 a3=0 items=0 ppid=1218 pid=1250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:59.868000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Sep 6 00:22:59.869838 augenrules[1250]: No rules Sep 6 00:22:59.872794 systemd[1]: Finished audit-rules.service. Sep 6 00:22:59.881382 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 6 00:22:59.882015 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 6 00:22:59.884700 systemd[1]: Starting modprobe@dm_mod.service... Sep 6 00:22:59.893999 systemd[1]: Starting modprobe@efi_pstore.service... Sep 6 00:22:59.902850 systemd[1]: Starting modprobe@loop.service... 
Sep 6 00:22:59.913093 systemd[1]: Starting oem-gce-enable-oslogin.service... Sep 6 00:22:59.922414 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 6 00:22:59.922716 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 00:22:59.924769 enable-oslogin[1264]: /etc/pam.d/sshd already exists. Not enabling OS Login Sep 6 00:22:59.925978 systemd[1]: Starting systemd-update-done.service... Sep 6 00:22:59.933347 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 6 00:22:59.933617 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 6 00:22:59.936569 systemd[1]: Finished systemd-update-utmp.service. Sep 6 00:22:59.946236 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 6 00:22:59.946560 systemd[1]: Finished modprobe@dm_mod.service. Sep 6 00:22:59.956154 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 6 00:22:59.956482 systemd[1]: Finished modprobe@efi_pstore.service. Sep 6 00:22:59.966219 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 6 00:22:59.966528 systemd[1]: Finished modprobe@loop.service. Sep 6 00:22:59.976092 systemd[1]: oem-gce-enable-oslogin.service: Deactivated successfully. Sep 6 00:22:59.976520 systemd[1]: Finished oem-gce-enable-oslogin.service. Sep 6 00:22:59.986428 systemd[1]: Finished systemd-update-done.service. Sep 6 00:22:59.999847 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 6 00:23:00.000458 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 6 00:23:00.002921 systemd[1]: Starting modprobe@dm_mod.service... 
Sep 6 00:23:00.013033 systemd[1]: Starting modprobe@efi_pstore.service... Sep 6 00:23:00.022607 systemd[1]: Starting modprobe@loop.service... Sep 6 00:23:00.031838 systemd[1]: Starting oem-gce-enable-oslogin.service... Sep 6 00:23:00.040373 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 6 00:23:00.040635 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 00:23:00.040854 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 6 00:23:00.041034 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 6 00:23:00.043001 enable-oslogin[1276]: /etc/pam.d/sshd already exists. Not enabling OS Login Sep 6 00:23:00.043386 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 6 00:23:00.043663 systemd[1]: Finished modprobe@dm_mod.service. Sep 6 00:23:00.053132 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 6 00:23:00.053446 systemd[1]: Finished modprobe@efi_pstore.service. Sep 6 00:23:00.063105 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 6 00:23:00.063410 systemd[1]: Finished modprobe@loop.service. Sep 6 00:23:00.073100 systemd[1]: oem-gce-enable-oslogin.service: Deactivated successfully. Sep 6 00:23:00.073503 systemd[1]: Finished oem-gce-enable-oslogin.service. Sep 6 00:23:00.083283 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 6 00:23:00.083480 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 6 00:23:00.088866 systemd[1]: Started systemd-timesyncd.service. 
Sep 6 00:23:00.091342 systemd-timesyncd[1238]: Contacted time server 169.254.169.254:123 (169.254.169.254). Sep 6 00:23:00.091441 systemd-timesyncd[1238]: Initial clock synchronization to Sat 2025-09-06 00:23:00.405609 UTC. Sep 6 00:23:00.098517 systemd-resolved[1233]: Positive Trust Anchors: Sep 6 00:23:00.098849 systemd[1]: Reached target time-set.target. Sep 6 00:23:00.099205 systemd-resolved[1233]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 6 00:23:00.099325 systemd-resolved[1233]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Sep 6 00:23:00.107551 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 6 00:23:00.108045 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 6 00:23:00.110370 systemd[1]: Starting modprobe@dm_mod.service... Sep 6 00:23:00.119349 systemd[1]: Starting modprobe@drm.service... Sep 6 00:23:00.129454 systemd[1]: Starting modprobe@efi_pstore.service... Sep 6 00:23:00.138497 systemd[1]: Starting modprobe@loop.service... Sep 6 00:23:00.141812 systemd-resolved[1233]: Defaulting to hostname 'linux'. Sep 6 00:23:00.148062 systemd[1]: Starting oem-gce-enable-oslogin.service... Sep 6 00:23:00.156049 enable-oslogin[1288]: /etc/pam.d/sshd already exists. Not enabling OS Login Sep 6 00:23:00.157532 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Sep 6 00:23:00.157860 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 00:23:00.160782 systemd[1]: Starting systemd-networkd-wait-online.service... Sep 6 00:23:00.169427 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 6 00:23:00.169694 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 6 00:23:00.172274 systemd[1]: Started systemd-resolved.service. Sep 6 00:23:00.182240 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 6 00:23:00.182531 systemd[1]: Finished modprobe@dm_mod.service. Sep 6 00:23:00.192003 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 6 00:23:00.192307 systemd[1]: Finished modprobe@drm.service. Sep 6 00:23:00.201093 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 6 00:23:00.201399 systemd[1]: Finished modprobe@efi_pstore.service. Sep 6 00:23:00.211086 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 6 00:23:00.211403 systemd[1]: Finished modprobe@loop.service. Sep 6 00:23:00.221262 systemd[1]: oem-gce-enable-oslogin.service: Deactivated successfully. Sep 6 00:23:00.221621 systemd[1]: Finished oem-gce-enable-oslogin.service. Sep 6 00:23:00.231418 systemd[1]: Reached target network.target. Sep 6 00:23:00.240448 systemd[1]: Reached target nss-lookup.target. Sep 6 00:23:00.249431 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 6 00:23:00.249506 systemd[1]: Reached target sysinit.target. Sep 6 00:23:00.251364 systemd-networkd[1079]: eth0: Gained IPv6LL Sep 6 00:23:00.258547 systemd[1]: Started motdgen.path. 
Sep 6 00:23:00.266491 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Sep 6 00:23:00.276777 systemd[1]: Started logrotate.timer. Sep 6 00:23:00.284622 systemd[1]: Started mdadm.timer. Sep 6 00:23:00.291373 systemd[1]: Started systemd-tmpfiles-clean.timer. Sep 6 00:23:00.300380 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 6 00:23:00.300456 systemd[1]: Reached target paths.target. Sep 6 00:23:00.307387 systemd[1]: Reached target timers.target. Sep 6 00:23:00.315140 systemd[1]: Listening on dbus.socket. Sep 6 00:23:00.324240 systemd[1]: Starting docker.socket... Sep 6 00:23:00.333788 systemd[1]: Listening on sshd.socket. Sep 6 00:23:00.341510 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 00:23:00.341651 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 6 00:23:00.342703 systemd[1]: Finished ensure-sysext.service. Sep 6 00:23:00.351905 systemd[1]: Finished systemd-networkd-wait-online.service. Sep 6 00:23:00.363630 systemd[1]: Listening on docker.socket. Sep 6 00:23:00.371602 systemd[1]: Reached target network-online.target. Sep 6 00:23:00.380348 systemd[1]: Reached target sockets.target. Sep 6 00:23:00.388365 systemd[1]: Reached target basic.target. Sep 6 00:23:00.395608 systemd[1]: System is tainted: cgroupsv1 Sep 6 00:23:00.395700 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Sep 6 00:23:00.395742 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Sep 6 00:23:00.397505 systemd[1]: Starting containerd.service... Sep 6 00:23:00.407022 systemd[1]: Starting coreos-metadata-sshkeys@core.service... 
Sep 6 00:23:00.418741 systemd[1]: Starting dbus.service... Sep 6 00:23:00.428610 systemd[1]: Starting enable-oem-cloudinit.service... Sep 6 00:23:00.439781 systemd[1]: Starting extend-filesystems.service... Sep 6 00:23:00.458239 jq[1300]: false Sep 6 00:23:00.449482 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Sep 6 00:23:00.452321 systemd[1]: Starting kubelet.service... Sep 6 00:23:00.460086 systemd[1]: Starting motdgen.service... Sep 6 00:23:00.469537 systemd[1]: Starting oem-gce.service... Sep 6 00:23:00.479591 systemd[1]: Starting prepare-helm.service... Sep 6 00:23:00.488632 systemd[1]: Starting ssh-key-proc-cmdline.service... Sep 6 00:23:00.498603 systemd[1]: Starting sshd-keygen.service... Sep 6 00:23:00.509241 systemd[1]: Starting systemd-logind.service... Sep 6 00:23:00.516347 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 00:23:00.516476 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionSecurity=!tpm2). Sep 6 00:23:00.518847 systemd[1]: Starting update-engine.service... Sep 6 00:23:00.528456 systemd[1]: Starting update-ssh-keys-after-ignition.service... Sep 6 00:23:00.541338 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 6 00:23:00.541841 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Sep 6 00:23:00.549024 jq[1324]: true Sep 6 00:23:00.561434 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 6 00:23:00.561859 systemd[1]: Finished ssh-key-proc-cmdline.service. 
Sep 6 00:23:00.608676 extend-filesystems[1301]: Found loop1 Sep 6 00:23:00.616405 mkfs.ext4[1338]: mke2fs 1.46.5 (30-Dec-2021) Sep 6 00:23:00.620186 mkfs.ext4[1338]: Discarding device blocks: done Sep 6 00:23:00.620186 mkfs.ext4[1338]: Creating filesystem with 262144 4k blocks and 65536 inodes Sep 6 00:23:00.620186 mkfs.ext4[1338]: Filesystem UUID: 9e235e0e-40df-4def-8c28-d3e8baf20cd5 Sep 6 00:23:00.620186 mkfs.ext4[1338]: Superblock backups stored on blocks: Sep 6 00:23:00.620186 mkfs.ext4[1338]: 32768, 98304, 163840, 229376 Sep 6 00:23:00.620186 mkfs.ext4[1338]: Allocating group tables: done Sep 6 00:23:00.620186 mkfs.ext4[1338]: Writing inode tables: done Sep 6 00:23:00.620186 mkfs.ext4[1338]: Creating journal (8192 blocks): done Sep 6 00:23:00.623622 jq[1331]: true Sep 6 00:23:00.627495 extend-filesystems[1301]: Found sda Sep 6 00:23:00.637426 extend-filesystems[1301]: Found sda1 Sep 6 00:23:00.637426 extend-filesystems[1301]: Found sda2 Sep 6 00:23:00.637426 extend-filesystems[1301]: Found sda3 Sep 6 00:23:00.659471 mkfs.ext4[1338]: Writing superblocks and filesystem accounting information: done Sep 6 00:23:00.660653 extend-filesystems[1301]: Found usr Sep 6 00:23:00.668423 extend-filesystems[1301]: Found sda4 Sep 6 00:23:00.668423 extend-filesystems[1301]: Found sda6 Sep 6 00:23:00.668423 extend-filesystems[1301]: Found sda7 Sep 6 00:23:00.668423 extend-filesystems[1301]: Found sda9 Sep 6 00:23:00.668423 extend-filesystems[1301]: Checking size of /dev/sda9 Sep 6 00:23:00.673064 systemd[1]: motdgen.service: Deactivated successfully. Sep 6 00:23:00.673550 systemd[1]: Finished motdgen.service. 
Sep 6 00:23:00.742000 dbus-daemon[1299]: [system] SELinux support is enabled Sep 6 00:23:00.742392 systemd[1]: Started dbus.service. Sep 6 00:23:00.747226 umount[1355]: umount: /var/lib/flatcar-oem-gce.img: not mounted. Sep 6 00:23:00.755686 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 6 00:23:00.755773 systemd[1]: Reached target system-config.target. Sep 6 00:23:00.765442 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 6 00:23:00.765484 systemd[1]: Reached target user-config.target. Sep 6 00:23:00.768695 update_engine[1323]: I0906 00:23:00.768625 1323 main.cc:92] Flatcar Update Engine starting Sep 6 00:23:00.771870 extend-filesystems[1301]: Resized partition /dev/sda9 Sep 6 00:23:00.798216 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks Sep 6 00:23:00.798308 kernel: loop2: detected capacity change from 0 to 2097152 Sep 6 00:23:00.798344 extend-filesystems[1369]: resize2fs 1.46.5 (30-Dec-2021) Sep 6 00:23:00.807379 tar[1329]: linux-amd64/helm Sep 6 00:23:00.805803 dbus-daemon[1299]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1079 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Sep 6 00:23:00.825475 kernel: EXT4-fs (sda9): resized filesystem to 2538491 Sep 6 00:23:00.826652 dbus-daemon[1299]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 6 00:23:00.865938 systemd[1]: Starting systemd-hostnamed.service... 
Sep 6 00:23:00.869526 update_engine[1323]: I0906 00:23:00.835930 1323 update_check_scheduler.cc:74] Next update check in 5m56s Sep 6 00:23:00.869661 env[1335]: time="2025-09-06T00:23:00.866727575Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Sep 6 00:23:00.875537 systemd[1]: Started update-engine.service. Sep 6 00:23:00.878607 extend-filesystems[1369]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Sep 6 00:23:00.878607 extend-filesystems[1369]: old_desc_blocks = 1, new_desc_blocks = 2 Sep 6 00:23:00.878607 extend-filesystems[1369]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long. Sep 6 00:23:00.967396 kernel: EXT4-fs (loop2): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Sep 6 00:23:00.885015 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 6 00:23:00.967644 extend-filesystems[1301]: Resized filesystem in /dev/sda9 Sep 6 00:23:00.976405 bash[1375]: Updated "/home/core/.ssh/authorized_keys" Sep 6 00:23:00.885499 systemd[1]: Finished extend-filesystems.service. Sep 6 00:23:00.905513 systemd[1]: Finished update-ssh-keys-after-ignition.service. Sep 6 00:23:00.921244 systemd[1]: Started locksmithd.service. Sep 6 00:23:01.044651 env[1335]: time="2025-09-06T00:23:01.044461950Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 6 00:23:01.044832 env[1335]: time="2025-09-06T00:23:01.044738942Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 6 00:23:01.081157 coreos-metadata[1298]: Sep 06 00:23:01.080 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Sep 6 00:23:01.086837 env[1335]: time="2025-09-06T00:23:01.086775775Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.190-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 6 00:23:01.088140 env[1335]: time="2025-09-06T00:23:01.088094794Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 6 00:23:01.088789 env[1335]: time="2025-09-06T00:23:01.088749778Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 6 00:23:01.089159 env[1335]: time="2025-09-06T00:23:01.089126670Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 6 00:23:01.089344 env[1335]: time="2025-09-06T00:23:01.089313872Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Sep 6 00:23:01.089800 env[1335]: time="2025-09-06T00:23:01.089764936Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 6 00:23:01.092110 env[1335]: time="2025-09-06T00:23:01.092073467Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 6 00:23:01.094481 coreos-metadata[1298]: Sep 06 00:23:01.094 INFO Fetch failed with 404: resource not found Sep 6 00:23:01.094740 coreos-metadata[1298]: Sep 06 00:23:01.094 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Sep 6 00:23:01.097294 env[1335]: time="2025-09-06T00:23:01.097235766Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Sep 6 00:23:01.101245 coreos-metadata[1298]: Sep 06 00:23:01.101 INFO Fetch successful Sep 6 00:23:01.101497 coreos-metadata[1298]: Sep 06 00:23:01.101 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Sep 6 00:23:01.106639 env[1335]: time="2025-09-06T00:23:01.106574718Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 6 00:23:01.106888 env[1335]: time="2025-09-06T00:23:01.106858087Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 6 00:23:01.107345 coreos-metadata[1298]: Sep 06 00:23:01.107 INFO Fetch failed with 404: resource not found Sep 6 00:23:01.107581 coreos-metadata[1298]: Sep 06 00:23:01.107 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Sep 6 00:23:01.109042 env[1335]: time="2025-09-06T00:23:01.108964680Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Sep 6 00:23:01.110911 env[1335]: time="2025-09-06T00:23:01.110867299Z" level=info msg="metadata content store policy set" policy=shared Sep 6 00:23:01.111599 coreos-metadata[1298]: Sep 06 00:23:01.111 INFO Fetch failed with 404: resource not found Sep 6 00:23:01.111828 coreos-metadata[1298]: Sep 06 00:23:01.111 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Sep 6 00:23:01.113448 coreos-metadata[1298]: Sep 06 00:23:01.113 INFO Fetch successful Sep 6 00:23:01.116169 systemd-logind[1321]: Watching system buttons on /dev/input/event1 (Power Button) Sep 6 00:23:01.116242 systemd-logind[1321]: Watching system buttons on /dev/input/event2 (Sleep Button) Sep 6 00:23:01.116282 systemd-logind[1321]: Watching 
system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 6 00:23:01.122461 unknown[1298]: wrote ssh authorized keys file for user: core Sep 6 00:23:01.126310 systemd-logind[1321]: New seat seat0. Sep 6 00:23:01.126499 env[1335]: time="2025-09-06T00:23:01.126441768Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 6 00:23:01.126594 env[1335]: time="2025-09-06T00:23:01.126498322Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 6 00:23:01.126594 env[1335]: time="2025-09-06T00:23:01.126522092Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 6 00:23:01.126928 env[1335]: time="2025-09-06T00:23:01.126893297Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 6 00:23:01.127017 env[1335]: time="2025-09-06T00:23:01.126937433Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 6 00:23:01.127017 env[1335]: time="2025-09-06T00:23:01.126965415Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 6 00:23:01.127017 env[1335]: time="2025-09-06T00:23:01.126991086Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 6 00:23:01.127170 env[1335]: time="2025-09-06T00:23:01.127015718Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 6 00:23:01.127170 env[1335]: time="2025-09-06T00:23:01.127038587Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Sep 6 00:23:01.127170 env[1335]: time="2025-09-06T00:23:01.127065165Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." 
type=io.containerd.service.v1 Sep 6 00:23:01.127170 env[1335]: time="2025-09-06T00:23:01.127088978Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 6 00:23:01.127170 env[1335]: time="2025-09-06T00:23:01.127115794Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 6 00:23:01.127448 env[1335]: time="2025-09-06T00:23:01.127330928Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 6 00:23:01.127514 env[1335]: time="2025-09-06T00:23:01.127467056Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 6 00:23:01.128050 env[1335]: time="2025-09-06T00:23:01.128014460Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 6 00:23:01.128145 env[1335]: time="2025-09-06T00:23:01.128071852Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 6 00:23:01.128145 env[1335]: time="2025-09-06T00:23:01.128098568Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 6 00:23:01.128285 env[1335]: time="2025-09-06T00:23:01.128175511Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 6 00:23:01.128285 env[1335]: time="2025-09-06T00:23:01.128232925Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 6 00:23:01.128285 env[1335]: time="2025-09-06T00:23:01.128255990Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 6 00:23:01.128424 env[1335]: time="2025-09-06T00:23:01.128297753Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Sep 6 00:23:01.128424 env[1335]: time="2025-09-06T00:23:01.128334346Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 6 00:23:01.128424 env[1335]: time="2025-09-06T00:23:01.128360481Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 6 00:23:01.128424 env[1335]: time="2025-09-06T00:23:01.128382376Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 6 00:23:01.128424 env[1335]: time="2025-09-06T00:23:01.128406795Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 6 00:23:01.128681 env[1335]: time="2025-09-06T00:23:01.128433106Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 6 00:23:01.128681 env[1335]: time="2025-09-06T00:23:01.128628219Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 6 00:23:01.128681 env[1335]: time="2025-09-06T00:23:01.128654362Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 6 00:23:01.128681 env[1335]: time="2025-09-06T00:23:01.128676848Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 6 00:23:01.128867 env[1335]: time="2025-09-06T00:23:01.128698810Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 6 00:23:01.128867 env[1335]: time="2025-09-06T00:23:01.128726740Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Sep 6 00:23:01.128867 env[1335]: time="2025-09-06T00:23:01.128746817Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 Sep 6 00:23:01.128867 env[1335]: time="2025-09-06T00:23:01.128777598Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Sep 6 00:23:01.128867 env[1335]: time="2025-09-06T00:23:01.128829712Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 6 00:23:01.134065 systemd[1]: Started systemd-logind.service. Sep 6 00:23:01.135999 env[1335]: time="2025-09-06T00:23:01.129158926Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 
DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 6 00:23:01.139489 env[1335]: time="2025-09-06T00:23:01.136000180Z" level=info msg="Connect containerd service" Sep 6 00:23:01.139489 env[1335]: time="2025-09-06T00:23:01.136076834Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 6 00:23:01.151438 env[1335]: time="2025-09-06T00:23:01.151030188Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 6 00:23:01.151594 env[1335]: time="2025-09-06T00:23:01.151462883Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 6 00:23:01.151594 env[1335]: time="2025-09-06T00:23:01.151543056Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 6 00:23:01.151757 systemd[1]: Started containerd.service. 
Sep 6 00:23:01.152083 env[1335]: time="2025-09-06T00:23:01.151944081Z" level=info msg="Start subscribing containerd event" Sep 6 00:23:01.160516 env[1335]: time="2025-09-06T00:23:01.160448878Z" level=info msg="Start recovering state" Sep 6 00:23:01.160669 env[1335]: time="2025-09-06T00:23:01.160600743Z" level=info msg="Start event monitor" Sep 6 00:23:01.160669 env[1335]: time="2025-09-06T00:23:01.160623376Z" level=info msg="Start snapshots syncer" Sep 6 00:23:01.160669 env[1335]: time="2025-09-06T00:23:01.160638424Z" level=info msg="Start cni network conf syncer for default" Sep 6 00:23:01.160669 env[1335]: time="2025-09-06T00:23:01.160650921Z" level=info msg="Start streaming server" Sep 6 00:23:01.160956 env[1335]: time="2025-09-06T00:23:01.160921348Z" level=info msg="containerd successfully booted in 0.305855s" Sep 6 00:23:01.174768 update-ssh-keys[1400]: Updated "/home/core/.ssh/authorized_keys" Sep 6 00:23:01.176110 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Sep 6 00:23:01.349628 dbus-daemon[1299]: [system] Successfully activated service 'org.freedesktop.hostname1' Sep 6 00:23:01.349874 systemd[1]: Started systemd-hostnamed.service. Sep 6 00:23:01.350768 dbus-daemon[1299]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1378 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Sep 6 00:23:01.364856 systemd[1]: Starting polkit.service... Sep 6 00:23:01.486070 polkitd[1417]: Started polkitd version 121 Sep 6 00:23:01.508992 polkitd[1417]: Loading rules from directory /etc/polkit-1/rules.d Sep 6 00:23:01.509108 polkitd[1417]: Loading rules from directory /usr/share/polkit-1/rules.d Sep 6 00:23:01.513361 polkitd[1417]: Finished loading, compiling and executing 2 rules Sep 6 00:23:01.515469 dbus-daemon[1299]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Sep 6 00:23:01.515723 systemd[1]: Started polkit.service. 
Sep 6 00:23:01.516713 polkitd[1417]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Sep 6 00:23:01.556920 systemd-hostnamed[1378]: Hostname set to (transient) Sep 6 00:23:01.560842 systemd-resolved[1233]: System hostname changed to 'ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d'. Sep 6 00:23:02.489017 tar[1329]: linux-amd64/LICENSE Sep 6 00:23:02.489964 tar[1329]: linux-amd64/README.md Sep 6 00:23:02.514535 systemd[1]: Finished prepare-helm.service. Sep 6 00:23:03.350849 systemd[1]: Started kubelet.service. Sep 6 00:23:04.228672 locksmithd[1385]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 6 00:23:04.828581 kubelet[1434]: E0906 00:23:04.828519 1434 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 6 00:23:04.831493 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 6 00:23:04.831815 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 6 00:23:06.242681 sshd_keygen[1340]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 6 00:23:06.310074 systemd[1]: Finished sshd-keygen.service. Sep 6 00:23:06.321916 systemd[1]: Starting issuegen.service... Sep 6 00:23:06.337159 systemd[1]: issuegen.service: Deactivated successfully. Sep 6 00:23:06.337613 systemd[1]: Finished issuegen.service. Sep 6 00:23:06.347967 systemd[1]: Starting systemd-user-sessions.service... Sep 6 00:23:06.358872 systemd[1]: Finished systemd-user-sessions.service. Sep 6 00:23:06.370992 systemd[1]: Started getty@tty1.service. Sep 6 00:23:06.380984 systemd[1]: Started serial-getty@ttyS0.service. Sep 6 00:23:06.389794 systemd[1]: Reached target getty.target. 
Sep 6 00:23:07.631095 systemd[1]: var-lib-flatcar\x2doem\x2dgce.mount: Deactivated successfully. Sep 6 00:23:09.726238 kernel: loop2: detected capacity change from 0 to 2097152 Sep 6 00:23:09.743746 systemd-nspawn[1463]: Spawning container oem-gce on /var/lib/flatcar-oem-gce.img. Sep 6 00:23:09.743746 systemd-nspawn[1463]: Press ^] three times within 1s to kill container. Sep 6 00:23:09.757219 kernel: EXT4-fs (loop2): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Sep 6 00:23:09.760600 systemd[1]: Created slice system-sshd.slice. Sep 6 00:23:09.769402 systemd[1]: Started sshd@0-10.128.0.81:22-139.178.89.65:47584.service. Sep 6 00:23:09.845791 systemd[1]: Started oem-gce.service. Sep 6 00:23:09.854884 systemd[1]: Reached target multi-user.target. Sep 6 00:23:09.865618 systemd[1]: Starting systemd-update-utmp-runlevel.service... Sep 6 00:23:09.879142 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Sep 6 00:23:09.879595 systemd[1]: Finished systemd-update-utmp-runlevel.service. Sep 6 00:23:09.894153 systemd[1]: Startup finished in 11.452s (kernel) + 17.763s (userspace) = 29.215s. Sep 6 00:23:09.921048 systemd-nspawn[1463]: + '[' -e /etc/default/instance_configs.cfg.template ']' Sep 6 00:23:09.921048 systemd-nspawn[1463]: + echo -e '[InstanceSetup]\nset_host_keys = false' Sep 6 00:23:09.921394 systemd-nspawn[1463]: + /usr/bin/google_instance_setup Sep 6 00:23:10.107663 sshd[1468]: Accepted publickey for core from 139.178.89.65 port 47584 ssh2: RSA SHA256:O4b1lx+UphQ1XQCPwsrjL8IoqrnWSgynNYcpg4eKVRo Sep 6 00:23:10.112945 sshd[1468]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:23:10.137029 systemd[1]: Created slice user-500.slice. Sep 6 00:23:10.139138 systemd[1]: Starting user-runtime-dir@500.service... Sep 6 00:23:10.145269 systemd-logind[1321]: New session 1 of user core. Sep 6 00:23:10.163062 systemd[1]: Finished user-runtime-dir@500.service. 
Sep 6 00:23:10.165638 systemd[1]: Starting user@500.service... Sep 6 00:23:10.188909 (systemd)[1476]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:23:10.358516 systemd[1476]: Queued start job for default target default.target. Sep 6 00:23:10.360032 systemd[1476]: Reached target paths.target. Sep 6 00:23:10.360077 systemd[1476]: Reached target sockets.target. Sep 6 00:23:10.360102 systemd[1476]: Reached target timers.target. Sep 6 00:23:10.360124 systemd[1476]: Reached target basic.target. Sep 6 00:23:10.360401 systemd[1]: Started user@500.service. Sep 6 00:23:10.362197 systemd[1]: Started session-1.scope. Sep 6 00:23:10.365853 systemd[1476]: Reached target default.target. Sep 6 00:23:10.366917 systemd[1476]: Startup finished in 163ms. Sep 6 00:23:10.592546 systemd[1]: Started sshd@1-10.128.0.81:22-139.178.89.65:54864.service. Sep 6 00:23:10.749217 instance-setup[1472]: INFO Running google_set_multiqueue. Sep 6 00:23:10.765536 instance-setup[1472]: INFO Set channels for eth0 to 2. Sep 6 00:23:10.769710 instance-setup[1472]: INFO Setting /proc/irq/27/smp_affinity_list to 0 for device virtio1. Sep 6 00:23:10.771458 instance-setup[1472]: INFO /proc/irq/27/smp_affinity_list: real affinity 0 Sep 6 00:23:10.771685 instance-setup[1472]: INFO Setting /proc/irq/28/smp_affinity_list to 0 for device virtio1. Sep 6 00:23:10.773822 instance-setup[1472]: INFO /proc/irq/28/smp_affinity_list: real affinity 0 Sep 6 00:23:10.774357 instance-setup[1472]: INFO Setting /proc/irq/29/smp_affinity_list to 1 for device virtio1. Sep 6 00:23:10.776199 instance-setup[1472]: INFO /proc/irq/29/smp_affinity_list: real affinity 1 Sep 6 00:23:10.776792 instance-setup[1472]: INFO Setting /proc/irq/30/smp_affinity_list to 1 for device virtio1. 
Sep 6 00:23:10.778411 instance-setup[1472]: INFO /proc/irq/30/smp_affinity_list: real affinity 1 Sep 6 00:23:10.792751 instance-setup[1472]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Sep 6 00:23:10.792924 instance-setup[1472]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Sep 6 00:23:10.854324 systemd-nspawn[1463]: + /usr/bin/google_metadata_script_runner --script-type startup Sep 6 00:23:10.920614 sshd[1487]: Accepted publickey for core from 139.178.89.65 port 54864 ssh2: RSA SHA256:O4b1lx+UphQ1XQCPwsrjL8IoqrnWSgynNYcpg4eKVRo Sep 6 00:23:10.921796 sshd[1487]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:23:10.933640 systemd[1]: Started session-2.scope. Sep 6 00:23:10.936453 systemd-logind[1321]: New session 2 of user core. Sep 6 00:23:11.145937 sshd[1487]: pam_unix(sshd:session): session closed for user core Sep 6 00:23:11.151915 systemd[1]: sshd@1-10.128.0.81:22-139.178.89.65:54864.service: Deactivated successfully. Sep 6 00:23:11.153546 systemd[1]: session-2.scope: Deactivated successfully. Sep 6 00:23:11.156827 systemd-logind[1321]: Session 2 logged out. Waiting for processes to exit. Sep 6 00:23:11.164246 systemd-logind[1321]: Removed session 2. Sep 6 00:23:11.190134 systemd[1]: Started sshd@2-10.128.0.81:22-139.178.89.65:54878.service. Sep 6 00:23:11.265608 startup-script[1517]: INFO Starting startup scripts. Sep 6 00:23:11.280540 startup-script[1517]: INFO No startup scripts found in metadata. Sep 6 00:23:11.280735 startup-script[1517]: INFO Finished running startup scripts. Sep 6 00:23:11.329242 systemd-nspawn[1463]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Sep 6 00:23:11.329242 systemd-nspawn[1463]: + daemon_pids=() Sep 6 00:23:11.329605 systemd-nspawn[1463]: + for d in accounts clock_skew network Sep 6 00:23:11.329605 systemd-nspawn[1463]: + daemon_pids+=($!) 
Sep 6 00:23:11.329776 systemd-nspawn[1463]: + for d in accounts clock_skew network Sep 6 00:23:11.330096 systemd-nspawn[1463]: + daemon_pids+=($!) Sep 6 00:23:11.330271 systemd-nspawn[1463]: + for d in accounts clock_skew network Sep 6 00:23:11.330471 systemd-nspawn[1463]: + daemon_pids+=($!) Sep 6 00:23:11.330584 systemd-nspawn[1463]: + NOTIFY_SOCKET=/run/systemd/notify Sep 6 00:23:11.330584 systemd-nspawn[1463]: + /usr/bin/systemd-notify --ready Sep 6 00:23:11.330883 systemd-nspawn[1463]: + /usr/bin/google_accounts_daemon Sep 6 00:23:11.331236 systemd-nspawn[1463]: + /usr/bin/google_network_daemon Sep 6 00:23:11.331701 systemd-nspawn[1463]: + /usr/bin/google_clock_skew_daemon Sep 6 00:23:11.400475 systemd-nspawn[1463]: + wait -n 36 37 38 Sep 6 00:23:11.502958 sshd[1525]: Accepted publickey for core from 139.178.89.65 port 54878 ssh2: RSA SHA256:O4b1lx+UphQ1XQCPwsrjL8IoqrnWSgynNYcpg4eKVRo Sep 6 00:23:11.504118 sshd[1525]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:23:11.512529 systemd-logind[1321]: New session 3 of user core. Sep 6 00:23:11.513529 systemd[1]: Started session-3.scope. Sep 6 00:23:11.721635 sshd[1525]: pam_unix(sshd:session): session closed for user core Sep 6 00:23:11.728202 systemd[1]: sshd@2-10.128.0.81:22-139.178.89.65:54878.service: Deactivated successfully. Sep 6 00:23:11.729655 systemd[1]: session-3.scope: Deactivated successfully. Sep 6 00:23:11.732660 systemd-logind[1321]: Session 3 logged out. Waiting for processes to exit. Sep 6 00:23:11.734938 systemd-logind[1321]: Removed session 3. Sep 6 00:23:11.768602 systemd[1]: Started sshd@3-10.128.0.81:22-139.178.89.65:54886.service. 
Sep 6 00:23:12.094631 sshd[1536]: Accepted publickey for core from 139.178.89.65 port 54886 ssh2: RSA SHA256:O4b1lx+UphQ1XQCPwsrjL8IoqrnWSgynNYcpg4eKVRo Sep 6 00:23:12.096559 sshd[1536]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:23:12.105794 systemd-logind[1321]: New session 4 of user core. Sep 6 00:23:12.106871 systemd[1]: Started session-4.scope. Sep 6 00:23:12.158557 google-clock-skew[1528]: INFO Starting Google Clock Skew daemon. Sep 6 00:23:12.186952 google-clock-skew[1528]: INFO Clock drift token has changed: 0. Sep 6 00:23:12.199092 systemd-nspawn[1463]: hwclock: Cannot access the Hardware Clock via any known method. Sep 6 00:23:12.200707 google-clock-skew[1528]: WARNING Failed to sync system time with hardware clock. Sep 6 00:23:12.200918 systemd-nspawn[1463]: hwclock: Use the --verbose option to see the details of our search for an access method. Sep 6 00:23:12.238140 groupadd[1547]: group added to /etc/group: name=google-sudoers, GID=1000 Sep 6 00:23:12.242103 groupadd[1547]: group added to /etc/gshadow: name=google-sudoers Sep 6 00:23:12.250038 groupadd[1547]: new group: name=google-sudoers, GID=1000 Sep 6 00:23:12.281497 google-accounts[1527]: INFO Starting Google Accounts daemon. Sep 6 00:23:12.305597 google-networking[1529]: INFO Starting Google Networking daemon. Sep 6 00:23:12.316280 google-accounts[1527]: WARNING OS Login not installed. Sep 6 00:23:12.317826 google-accounts[1527]: INFO Creating a new user account for 0. Sep 6 00:23:12.321501 sshd[1536]: pam_unix(sshd:session): session closed for user core Sep 6 00:23:12.327100 systemd[1]: sshd@3-10.128.0.81:22-139.178.89.65:54886.service: Deactivated successfully. Sep 6 00:23:12.328436 systemd[1]: session-4.scope: Deactivated successfully. Sep 6 00:23:12.329719 systemd-logind[1321]: Session 4 logged out. Waiting for processes to exit. Sep 6 00:23:12.331347 systemd-logind[1321]: Removed session 4. 
Sep 6 00:23:12.332652 systemd-nspawn[1463]: useradd: invalid user name '0': use --badname to ignore Sep 6 00:23:12.333515 google-accounts[1527]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. Sep 6 00:23:12.366124 systemd[1]: Started sshd@4-10.128.0.81:22-139.178.89.65:54900.service. Sep 6 00:23:12.664895 sshd[1561]: Accepted publickey for core from 139.178.89.65 port 54900 ssh2: RSA SHA256:O4b1lx+UphQ1XQCPwsrjL8IoqrnWSgynNYcpg4eKVRo Sep 6 00:23:12.667064 sshd[1561]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:23:12.674138 systemd[1]: Started session-5.scope. Sep 6 00:23:12.674487 systemd-logind[1321]: New session 5 of user core. Sep 6 00:23:12.865894 sudo[1565]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 6 00:23:12.866397 sudo[1565]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 6 00:23:12.902058 systemd[1]: Starting docker.service... 
Sep 6 00:23:12.954027 env[1575]: time="2025-09-06T00:23:12.953596692Z" level=info msg="Starting up" Sep 6 00:23:12.956070 env[1575]: time="2025-09-06T00:23:12.956037621Z" level=info msg="parsed scheme: \"unix\"" module=grpc Sep 6 00:23:12.956764 env[1575]: time="2025-09-06T00:23:12.956709755Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Sep 6 00:23:12.956764 env[1575]: time="2025-09-06T00:23:12.956756471Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Sep 6 00:23:12.956938 env[1575]: time="2025-09-06T00:23:12.956777756Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Sep 6 00:23:12.959254 env[1575]: time="2025-09-06T00:23:12.959220270Z" level=info msg="parsed scheme: \"unix\"" module=grpc Sep 6 00:23:12.959254 env[1575]: time="2025-09-06T00:23:12.959245519Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Sep 6 00:23:12.959421 env[1575]: time="2025-09-06T00:23:12.959271600Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Sep 6 00:23:12.959421 env[1575]: time="2025-09-06T00:23:12.959285580Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Sep 6 00:23:13.539621 env[1575]: time="2025-09-06T00:23:13.539545555Z" level=warning msg="Your kernel does not support cgroup blkio weight" Sep 6 00:23:13.539621 env[1575]: time="2025-09-06T00:23:13.539586712Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Sep 6 00:23:13.540058 env[1575]: time="2025-09-06T00:23:13.539970639Z" level=info msg="Loading containers: start." 
Sep 6 00:23:13.733218 kernel: Initializing XFRM netlink socket Sep 6 00:23:13.783484 env[1575]: time="2025-09-06T00:23:13.783410657Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Sep 6 00:23:13.877353 systemd-networkd[1079]: docker0: Link UP Sep 6 00:23:13.898516 env[1575]: time="2025-09-06T00:23:13.898430793Z" level=info msg="Loading containers: done." Sep 6 00:23:13.920880 env[1575]: time="2025-09-06T00:23:13.920798028Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 6 00:23:13.921295 env[1575]: time="2025-09-06T00:23:13.921250671Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Sep 6 00:23:13.921476 env[1575]: time="2025-09-06T00:23:13.921440750Z" level=info msg="Daemon has completed initialization" Sep 6 00:23:13.946361 systemd[1]: Started docker.service. Sep 6 00:23:13.960340 env[1575]: time="2025-09-06T00:23:13.960240264Z" level=info msg="API listen on /run/docker.sock" Sep 6 00:23:14.859029 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 6 00:23:14.859400 systemd[1]: Stopped kubelet.service. Sep 6 00:23:14.862816 systemd[1]: Starting kubelet.service... Sep 6 00:23:14.949714 env[1335]: time="2025-09-06T00:23:14.949610419Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\"" Sep 6 00:23:15.136085 systemd[1]: Started kubelet.service. 
Sep 6 00:23:15.208212 kubelet[1703]: E0906 00:23:15.208136 1703 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 6 00:23:15.212308 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 6 00:23:15.212650 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 6 00:23:15.579123 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount195595591.mount: Deactivated successfully. Sep 6 00:23:17.435445 env[1335]: time="2025-09-06T00:23:17.435326058Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:23:17.438814 env[1335]: time="2025-09-06T00:23:17.438747735Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:23:17.441650 env[1335]: time="2025-09-06T00:23:17.441588891Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:23:17.444291 env[1335]: time="2025-09-06T00:23:17.444241138Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:23:17.445396 env[1335]: time="2025-09-06T00:23:17.445344786Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\" returns image reference 
\"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\"" Sep 6 00:23:17.446251 env[1335]: time="2025-09-06T00:23:17.446212389Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\"" Sep 6 00:23:19.413867 env[1335]: time="2025-09-06T00:23:19.413793194Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:23:19.417095 env[1335]: time="2025-09-06T00:23:19.417042643Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:23:19.421481 env[1335]: time="2025-09-06T00:23:19.421408121Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:23:19.423759 env[1335]: time="2025-09-06T00:23:19.423696113Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:23:19.425152 env[1335]: time="2025-09-06T00:23:19.425095102Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\" returns image reference \"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\"" Sep 6 00:23:19.426034 env[1335]: time="2025-09-06T00:23:19.425997180Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\"" Sep 6 00:23:20.782212 env[1335]: time="2025-09-06T00:23:20.782133945Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:23:20.785298 
env[1335]: time="2025-09-06T00:23:20.785229714Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:23:20.788288 env[1335]: time="2025-09-06T00:23:20.788241989Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:23:20.795741 env[1335]: time="2025-09-06T00:23:20.795689634Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:23:20.796938 env[1335]: time="2025-09-06T00:23:20.796873815Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\" returns image reference \"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\"" Sep 6 00:23:20.797697 env[1335]: time="2025-09-06T00:23:20.797662203Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\"" Sep 6 00:23:22.017327 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1303610849.mount: Deactivated successfully. 
Sep 6 00:23:22.814713 env[1335]: time="2025-09-06T00:23:22.814623160Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:23:22.817639 env[1335]: time="2025-09-06T00:23:22.817560591Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:23:22.820086 env[1335]: time="2025-09-06T00:23:22.820009179Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:23:22.822519 env[1335]: time="2025-09-06T00:23:22.822451329Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:23:22.823467 env[1335]: time="2025-09-06T00:23:22.823417273Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\" returns image reference \"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\"" Sep 6 00:23:22.824306 env[1335]: time="2025-09-06T00:23:22.824243323Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 6 00:23:23.270949 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount205635772.mount: Deactivated successfully. 
Sep 6 00:23:24.698973 env[1335]: time="2025-09-06T00:23:24.698909625Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:23:24.707382 env[1335]: time="2025-09-06T00:23:24.707302916Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:23:24.709991 env[1335]: time="2025-09-06T00:23:24.709939970Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:23:24.712972 env[1335]: time="2025-09-06T00:23:24.712924362Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:23:24.714776 env[1335]: time="2025-09-06T00:23:24.714706343Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 6 00:23:24.715518 env[1335]: time="2025-09-06T00:23:24.715479314Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 6 00:23:25.186149 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3439379957.mount: Deactivated successfully. 
Sep 6 00:23:25.190310 env[1335]: time="2025-09-06T00:23:25.190248617Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:23:25.192994 env[1335]: time="2025-09-06T00:23:25.192935390Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:23:25.195699 env[1335]: time="2025-09-06T00:23:25.195658553Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:23:25.197694 env[1335]: time="2025-09-06T00:23:25.197637883Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:23:25.198509 env[1335]: time="2025-09-06T00:23:25.198451087Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 6 00:23:25.199343 env[1335]: time="2025-09-06T00:23:25.199288591Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 6 00:23:25.358895 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 6 00:23:25.359245 systemd[1]: Stopped kubelet.service. Sep 6 00:23:25.361771 systemd[1]: Starting kubelet.service... Sep 6 00:23:25.697682 systemd[1]: Started kubelet.service. 
Sep 6 00:23:25.758123 kubelet[1718]: E0906 00:23:25.758075 1718 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 6 00:23:25.760513 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 6 00:23:25.760838 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 6 00:23:25.921502 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3335883661.mount: Deactivated successfully. Sep 6 00:23:28.776795 env[1335]: time="2025-09-06T00:23:28.776714385Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:23:28.780033 env[1335]: time="2025-09-06T00:23:28.779980124Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:23:28.782713 env[1335]: time="2025-09-06T00:23:28.782669220Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:23:28.785466 env[1335]: time="2025-09-06T00:23:28.785425572Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:23:28.786610 env[1335]: time="2025-09-06T00:23:28.786570570Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Sep 6 00:23:31.566958 
systemd[1]: systemd-hostnamed.service: Deactivated successfully. Sep 6 00:23:32.351077 systemd[1]: Stopped kubelet.service. Sep 6 00:23:32.355077 systemd[1]: Starting kubelet.service... Sep 6 00:23:32.401106 systemd[1]: Reloading. Sep 6 00:23:32.530429 /usr/lib/systemd/system-generators/torcx-generator[1773]: time="2025-09-06T00:23:32Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 6 00:23:32.534305 /usr/lib/systemd/system-generators/torcx-generator[1773]: time="2025-09-06T00:23:32Z" level=info msg="torcx already run" Sep 6 00:23:32.696883 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 6 00:23:32.696912 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 6 00:23:32.722811 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 6 00:23:32.859652 systemd[1]: Started kubelet.service. Sep 6 00:23:32.862751 systemd[1]: Stopping kubelet.service... Sep 6 00:23:32.864931 systemd[1]: kubelet.service: Deactivated successfully. Sep 6 00:23:32.865546 systemd[1]: Stopped kubelet.service. Sep 6 00:23:32.869239 systemd[1]: Starting kubelet.service... Sep 6 00:23:33.183435 systemd[1]: Started kubelet.service. Sep 6 00:23:33.251648 kubelet[1838]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 6 00:23:33.252100 kubelet[1838]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 6 00:23:33.252272 kubelet[1838]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 6 00:23:33.252489 kubelet[1838]: I0906 00:23:33.252451 1838 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 6 00:23:33.878016 kubelet[1838]: I0906 00:23:33.877937 1838 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 6 00:23:33.878016 kubelet[1838]: I0906 00:23:33.877991 1838 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 6 00:23:33.878536 kubelet[1838]: I0906 00:23:33.878489 1838 server.go:934] "Client rotation is on, will bootstrap in background" Sep 6 00:23:33.952212 kubelet[1838]: E0906 00:23:33.952117 1838 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.128.0.81:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.128.0.81:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:23:33.953524 kubelet[1838]: I0906 00:23:33.953476 1838 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 6 00:23:33.967557 kubelet[1838]: E0906 00:23:33.967505 1838 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 6 00:23:33.967557 kubelet[1838]: I0906 00:23:33.967554 1838 server.go:1408] 
"CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 6 00:23:33.974375 kubelet[1838]: I0906 00:23:33.974332 1838 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 6 00:23:33.976634 kubelet[1838]: I0906 00:23:33.976581 1838 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 6 00:23:33.976885 kubelet[1838]: I0906 00:23:33.976823 1838 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 6 00:23:33.977196 kubelet[1838]: I0906 00:23:33.976871 1838 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":
null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Sep 6 00:23:33.977419 kubelet[1838]: I0906 00:23:33.977204 1838 topology_manager.go:138] "Creating topology manager with none policy" Sep 6 00:23:33.977419 kubelet[1838]: I0906 00:23:33.977225 1838 container_manager_linux.go:300] "Creating device plugin manager" Sep 6 00:23:33.977419 kubelet[1838]: I0906 00:23:33.977390 1838 state_mem.go:36] "Initialized new in-memory state store" Sep 6 00:23:33.986457 kubelet[1838]: W0906 00:23:33.986385 1838 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.81:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d&limit=500&resourceVersion=0": dial tcp 10.128.0.81:6443: connect: connection refused Sep 6 00:23:33.986753 kubelet[1838]: E0906 00:23:33.986697 1838 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.128.0.81:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d&limit=500&resourceVersion=0\": dial tcp 10.128.0.81:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:23:33.986753 kubelet[1838]: I0906 00:23:33.986545 1838 kubelet.go:408] "Attempting to sync node with API server" Sep 6 00:23:33.986940 kubelet[1838]: I0906 00:23:33.986760 1838 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 6 00:23:33.986940 kubelet[1838]: I0906 00:23:33.986807 1838 kubelet.go:314] "Adding apiserver pod source" Sep 6 00:23:33.986940 kubelet[1838]: 
I0906 00:23:33.986834 1838 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 6 00:23:33.991862 kubelet[1838]: W0906 00:23:33.991457 1838 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.81:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.128.0.81:6443: connect: connection refused Sep 6 00:23:33.991862 kubelet[1838]: E0906 00:23:33.991536 1838 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.128.0.81:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.81:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:23:33.992090 kubelet[1838]: I0906 00:23:33.991982 1838 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 6 00:23:33.993314 kubelet[1838]: I0906 00:23:33.992702 1838 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 6 00:23:33.993314 kubelet[1838]: W0906 00:23:33.992789 1838 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 6 00:23:33.998186 kubelet[1838]: I0906 00:23:33.998126 1838 server.go:1274] "Started kubelet" Sep 6 00:23:34.021519 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Sep 6 00:23:34.021743 kubelet[1838]: I0906 00:23:34.021709 1838 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 6 00:23:34.027180 kubelet[1838]: E0906 00:23:34.024901 1838 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.81:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.81:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d.186289b4bdb9a442 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d,UID:ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d,},FirstTimestamp:2025-09-06 00:23:33.998093378 +0000 UTC m=+0.804748557,LastTimestamp:2025-09-06 00:23:33.998093378 +0000 UTC m=+0.804748557,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d,}" Sep 6 00:23:34.031381 kubelet[1838]: I0906 00:23:34.031307 1838 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 6 00:23:34.031702 kubelet[1838]: I0906 00:23:34.031654 1838 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 6 00:23:34.032025 kubelet[1838]: E0906 00:23:34.031996 1838 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d\" not found" Sep 6 00:23:34.033841 kubelet[1838]: I0906 00:23:34.033810 1838 server.go:449] "Adding debug handlers to kubelet server" Sep 6 00:23:34.036366 kubelet[1838]: I0906 00:23:34.036309 1838 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 6 00:23:34.036510 kubelet[1838]: 
I0906 00:23:34.036389 1838 reconciler.go:26] "Reconciler: start to sync state" Sep 6 00:23:34.038567 kubelet[1838]: I0906 00:23:34.038516 1838 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 6 00:23:34.038878 kubelet[1838]: I0906 00:23:34.038849 1838 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 6 00:23:34.039665 kubelet[1838]: I0906 00:23:34.039631 1838 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 6 00:23:34.041056 kubelet[1838]: I0906 00:23:34.041005 1838 factory.go:221] Registration of the systemd container factory successfully Sep 6 00:23:34.041298 kubelet[1838]: I0906 00:23:34.041197 1838 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 6 00:23:34.044208 kubelet[1838]: E0906 00:23:34.042601 1838 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.81:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d?timeout=10s\": dial tcp 10.128.0.81:6443: connect: connection refused" interval="200ms" Sep 6 00:23:34.044208 kubelet[1838]: W0906 00:23:34.043054 1838 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.81:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.81:6443: connect: connection refused Sep 6 00:23:34.044208 kubelet[1838]: E0906 00:23:34.043095 1838 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://10.128.0.81:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.81:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:23:34.050775 kubelet[1838]: I0906 00:23:34.050714 1838 factory.go:221] Registration of the containerd container factory successfully Sep 6 00:23:34.053462 kubelet[1838]: E0906 00:23:34.053432 1838 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 6 00:23:34.071512 kubelet[1838]: I0906 00:23:34.071439 1838 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 6 00:23:34.073118 kubelet[1838]: I0906 00:23:34.073063 1838 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 6 00:23:34.073118 kubelet[1838]: I0906 00:23:34.073099 1838 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 6 00:23:34.073357 kubelet[1838]: I0906 00:23:34.073133 1838 kubelet.go:2321] "Starting kubelet main sync loop" Sep 6 00:23:34.073357 kubelet[1838]: E0906 00:23:34.073232 1838 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 6 00:23:34.095976 kubelet[1838]: W0906 00:23:34.095884 1838 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.81:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.81:6443: connect: connection refused Sep 6 00:23:34.096233 kubelet[1838]: E0906 00:23:34.096002 1838 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.128.0.81:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.81:6443: connect: connection refused" logger="UnhandledError" Sep 6 
00:23:34.108140 kubelet[1838]: I0906 00:23:34.108080 1838 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 6 00:23:34.108140 kubelet[1838]: I0906 00:23:34.108113 1838 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 6 00:23:34.108140 kubelet[1838]: I0906 00:23:34.108144 1838 state_mem.go:36] "Initialized new in-memory state store" Sep 6 00:23:34.111013 kubelet[1838]: I0906 00:23:34.110963 1838 policy_none.go:49] "None policy: Start" Sep 6 00:23:34.112225 kubelet[1838]: I0906 00:23:34.112193 1838 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 6 00:23:34.112382 kubelet[1838]: I0906 00:23:34.112233 1838 state_mem.go:35] "Initializing new in-memory state store" Sep 6 00:23:34.119365 kubelet[1838]: I0906 00:23:34.119317 1838 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 6 00:23:34.119557 kubelet[1838]: I0906 00:23:34.119537 1838 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 6 00:23:34.119642 kubelet[1838]: I0906 00:23:34.119560 1838 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 6 00:23:34.121435 kubelet[1838]: I0906 00:23:34.121389 1838 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 6 00:23:34.123305 kubelet[1838]: E0906 00:23:34.123256 1838 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d\" not found" Sep 6 00:23:34.225392 kubelet[1838]: I0906 00:23:34.224759 1838 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d" Sep 6 00:23:34.225392 kubelet[1838]: E0906 00:23:34.225237 1838 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.128.0.81:6443/api/v1/nodes\": dial tcp 10.128.0.81:6443: connect: connection refused" 
node="ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d" Sep 6 00:23:34.244132 kubelet[1838]: E0906 00:23:34.244068 1838 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.81:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d?timeout=10s\": dial tcp 10.128.0.81:6443: connect: connection refused" interval="400ms" Sep 6 00:23:34.337515 kubelet[1838]: I0906 00:23:34.337457 1838 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8d46af0a9e23c4bf4941847b1aef7746-k8s-certs\") pod \"kube-controller-manager-ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d\" (UID: \"8d46af0a9e23c4bf4941847b1aef7746\") " pod="kube-system/kube-controller-manager-ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d" Sep 6 00:23:34.338094 kubelet[1838]: I0906 00:23:34.337527 1838 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/281ebb4575bdd8096d2ceb0fcd2c7a3d-k8s-certs\") pod \"kube-apiserver-ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d\" (UID: \"281ebb4575bdd8096d2ceb0fcd2c7a3d\") " pod="kube-system/kube-apiserver-ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d" Sep 6 00:23:34.338094 kubelet[1838]: I0906 00:23:34.337564 1838 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/281ebb4575bdd8096d2ceb0fcd2c7a3d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d\" (UID: \"281ebb4575bdd8096d2ceb0fcd2c7a3d\") " pod="kube-system/kube-apiserver-ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d" Sep 6 00:23:34.338094 kubelet[1838]: I0906 00:23:34.337608 1838 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/281ebb4575bdd8096d2ceb0fcd2c7a3d-ca-certs\") pod \"kube-apiserver-ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d\" (UID: \"281ebb4575bdd8096d2ceb0fcd2c7a3d\") " pod="kube-system/kube-apiserver-ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d" Sep 6 00:23:34.338094 kubelet[1838]: I0906 00:23:34.337637 1838 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8d46af0a9e23c4bf4941847b1aef7746-ca-certs\") pod \"kube-controller-manager-ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d\" (UID: \"8d46af0a9e23c4bf4941847b1aef7746\") " pod="kube-system/kube-controller-manager-ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d" Sep 6 00:23:34.338323 kubelet[1838]: I0906 00:23:34.337677 1838 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8d46af0a9e23c4bf4941847b1aef7746-flexvolume-dir\") pod \"kube-controller-manager-ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d\" (UID: \"8d46af0a9e23c4bf4941847b1aef7746\") " pod="kube-system/kube-controller-manager-ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d" Sep 6 00:23:34.338323 kubelet[1838]: I0906 00:23:34.337708 1838 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8d46af0a9e23c4bf4941847b1aef7746-kubeconfig\") pod \"kube-controller-manager-ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d\" (UID: \"8d46af0a9e23c4bf4941847b1aef7746\") " pod="kube-system/kube-controller-manager-ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d" Sep 6 00:23:34.338323 kubelet[1838]: I0906 00:23:34.337741 1838 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8d46af0a9e23c4bf4941847b1aef7746-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d\" (UID: \"8d46af0a9e23c4bf4941847b1aef7746\") " pod="kube-system/kube-controller-manager-ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d" Sep 6 00:23:34.338323 kubelet[1838]: I0906 00:23:34.337771 1838 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ceae2bd7ee7052ac2771fcda1cbebb9e-kubeconfig\") pod \"kube-scheduler-ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d\" (UID: \"ceae2bd7ee7052ac2771fcda1cbebb9e\") " pod="kube-system/kube-scheduler-ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d" Sep 6 00:23:34.430433 kubelet[1838]: I0906 00:23:34.430391 1838 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d" Sep 6 00:23:34.430941 kubelet[1838]: E0906 00:23:34.430875 1838 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.128.0.81:6443/api/v1/nodes\": dial tcp 10.128.0.81:6443: connect: connection refused" node="ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d" Sep 6 00:23:34.492086 env[1335]: time="2025-09-06T00:23:34.491919743Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d,Uid:8d46af0a9e23c4bf4941847b1aef7746,Namespace:kube-system,Attempt:0,}" Sep 6 00:23:34.499410 env[1335]: time="2025-09-06T00:23:34.499301644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d,Uid:ceae2bd7ee7052ac2771fcda1cbebb9e,Namespace:kube-system,Attempt:0,}" Sep 6 00:23:34.504559 env[1335]: time="2025-09-06T00:23:34.503886630Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d,Uid:281ebb4575bdd8096d2ceb0fcd2c7a3d,Namespace:kube-system,Attempt:0,}" Sep 6 00:23:34.645467 kubelet[1838]: E0906 00:23:34.645387 1838 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.81:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d?timeout=10s\": dial tcp 10.128.0.81:6443: connect: connection refused" interval="800ms" Sep 6 00:23:34.838259 kubelet[1838]: I0906 00:23:34.837735 1838 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d" Sep 6 00:23:34.838664 kubelet[1838]: E0906 00:23:34.838605 1838 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.128.0.81:6443/api/v1/nodes\": dial tcp 10.128.0.81:6443: connect: connection refused" node="ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d" Sep 6 00:23:34.901535 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3609517474.mount: Deactivated successfully. 
Sep 6 00:23:34.911996 env[1335]: time="2025-09-06T00:23:34.911900957Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:23:34.916609 env[1335]: time="2025-09-06T00:23:34.916546825Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:23:34.918666 env[1335]: time="2025-09-06T00:23:34.918568299Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:23:34.920131 env[1335]: time="2025-09-06T00:23:34.920076194Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:23:34.924440 env[1335]: time="2025-09-06T00:23:34.924384806Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:23:34.928812 env[1335]: time="2025-09-06T00:23:34.928757913Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:23:34.930884 env[1335]: time="2025-09-06T00:23:34.930818057Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:23:34.932388 env[1335]: time="2025-09-06T00:23:34.932333159Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 
00:23:34.934369 env[1335]: time="2025-09-06T00:23:34.934317092Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:23:34.935384 env[1335]: time="2025-09-06T00:23:34.935331943Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:23:34.936693 env[1335]: time="2025-09-06T00:23:34.936654564Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:23:34.940843 env[1335]: time="2025-09-06T00:23:34.940776977Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:23:35.011267 env[1335]: time="2025-09-06T00:23:35.010564986Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:23:35.011267 env[1335]: time="2025-09-06T00:23:35.010803053Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:23:35.011267 env[1335]: time="2025-09-06T00:23:35.010825214Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:23:35.011267 env[1335]: time="2025-09-06T00:23:35.011092999Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ee297196ae8d0df05977863c39a475f0994c90b7062ea8dbb2ff368fed42cdcb pid=1878 runtime=io.containerd.runc.v2 Sep 6 00:23:35.030380 env[1335]: time="2025-09-06T00:23:35.030264193Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:23:35.030380 env[1335]: time="2025-09-06T00:23:35.030325057Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:23:35.030699 env[1335]: time="2025-09-06T00:23:35.030343590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:23:35.030699 env[1335]: time="2025-09-06T00:23:35.030536185Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/11d407fbe7e397c5289f37133df29716b1e771eba380afab615a459a6331de75 pid=1909 runtime=io.containerd.runc.v2 Sep 6 00:23:35.033328 env[1335]: time="2025-09-06T00:23:35.033204385Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:23:35.033328 env[1335]: time="2025-09-06T00:23:35.033273499Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:23:35.033679 env[1335]: time="2025-09-06T00:23:35.033598143Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:23:35.034095 env[1335]: time="2025-09-06T00:23:35.034017195Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/14c649ab9181157ffa66c7b83093c2791366d5364d8f94e186bfad66cff997f0 pid=1895 runtime=io.containerd.runc.v2 Sep 6 00:23:35.065962 kubelet[1838]: W0906 00:23:35.065751 1838 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.81:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d&limit=500&resourceVersion=0": dial tcp 10.128.0.81:6443: connect: connection refused Sep 6 00:23:35.065962 kubelet[1838]: E0906 00:23:35.065887 1838 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.128.0.81:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d&limit=500&resourceVersion=0\": dial tcp 10.128.0.81:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:23:35.095284 kubelet[1838]: W0906 00:23:35.092457 1838 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.81:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.81:6443: connect: connection refused Sep 6 00:23:35.095284 kubelet[1838]: E0906 00:23:35.092522 1838 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.128.0.81:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.81:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:23:35.202442 env[1335]: time="2025-09-06T00:23:35.202377610Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d,Uid:ceae2bd7ee7052ac2771fcda1cbebb9e,Namespace:kube-system,Attempt:0,} returns sandbox id \"11d407fbe7e397c5289f37133df29716b1e771eba380afab615a459a6331de75\"" Sep 6 00:23:35.207291 env[1335]: time="2025-09-06T00:23:35.207225868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d,Uid:281ebb4575bdd8096d2ceb0fcd2c7a3d,Namespace:kube-system,Attempt:0,} returns sandbox id \"ee297196ae8d0df05977863c39a475f0994c90b7062ea8dbb2ff368fed42cdcb\"" Sep 6 00:23:35.207886 kubelet[1838]: E0906 00:23:35.207841 1838 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-3510-3-8-nightly-20250905-2100-2c9755ec5393ed" Sep 6 00:23:35.209862 kubelet[1838]: E0906 00:23:35.209819 1838 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-3510-3-8-nightly-20250905-2100-2c9755ec5393ed" Sep 6 00:23:35.211766 env[1335]: time="2025-09-06T00:23:35.211710513Z" level=info msg="CreateContainer within sandbox \"ee297196ae8d0df05977863c39a475f0994c90b7062ea8dbb2ff368fed42cdcb\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 6 00:23:35.213403 env[1335]: time="2025-09-06T00:23:35.213352479Z" level=info msg="CreateContainer within sandbox \"11d407fbe7e397c5289f37133df29716b1e771eba380afab615a459a6331de75\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 6 00:23:35.235774 env[1335]: time="2025-09-06T00:23:35.235712644Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d,Uid:8d46af0a9e23c4bf4941847b1aef7746,Namespace:kube-system,Attempt:0,} returns sandbox id \"14c649ab9181157ffa66c7b83093c2791366d5364d8f94e186bfad66cff997f0\"" Sep 6 00:23:35.238610 kubelet[1838]: E0906 00:23:35.237961 1838 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-3510-3-8-nightly-20250905-2100-2c975" Sep 6 00:23:35.240440 env[1335]: time="2025-09-06T00:23:35.240391540Z" level=info msg="CreateContainer within sandbox \"14c649ab9181157ffa66c7b83093c2791366d5364d8f94e186bfad66cff997f0\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 6 00:23:35.244032 env[1335]: time="2025-09-06T00:23:35.243966296Z" level=info msg="CreateContainer within sandbox \"ee297196ae8d0df05977863c39a475f0994c90b7062ea8dbb2ff368fed42cdcb\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a1b6671e6e39a7f184569f450595337b857403323e7745ba9a8562c65defb932\"" Sep 6 00:23:35.245264 env[1335]: time="2025-09-06T00:23:35.245226262Z" level=info msg="StartContainer for \"a1b6671e6e39a7f184569f450595337b857403323e7745ba9a8562c65defb932\"" Sep 6 00:23:35.255910 env[1335]: time="2025-09-06T00:23:35.255841132Z" level=info msg="CreateContainer within sandbox \"11d407fbe7e397c5289f37133df29716b1e771eba380afab615a459a6331de75\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ed0d3993bd955c99604a35cca33ae16a0809a49d02e9c8b59c9701ec2e125499\"" Sep 6 00:23:35.257999 env[1335]: time="2025-09-06T00:23:35.257945880Z" level=info msg="StartContainer for \"ed0d3993bd955c99604a35cca33ae16a0809a49d02e9c8b59c9701ec2e125499\"" Sep 6 00:23:35.265927 env[1335]: time="2025-09-06T00:23:35.265862079Z" level=info msg="CreateContainer within sandbox 
\"14c649ab9181157ffa66c7b83093c2791366d5364d8f94e186bfad66cff997f0\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"fc594caa02fe40c0a71cad15806262105f792107b19b9c210e4c0dbad53cf818\"" Sep 6 00:23:35.267000 env[1335]: time="2025-09-06T00:23:35.266951176Z" level=info msg="StartContainer for \"fc594caa02fe40c0a71cad15806262105f792107b19b9c210e4c0dbad53cf818\"" Sep 6 00:23:35.388453 env[1335]: time="2025-09-06T00:23:35.388386779Z" level=info msg="StartContainer for \"a1b6671e6e39a7f184569f450595337b857403323e7745ba9a8562c65defb932\" returns successfully" Sep 6 00:23:35.422754 kubelet[1838]: W0906 00:23:35.422662 1838 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.81:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.128.0.81:6443: connect: connection refused Sep 6 00:23:35.423365 kubelet[1838]: E0906 00:23:35.422779 1838 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.128.0.81:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.81:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:23:35.432297 kubelet[1838]: W0906 00:23:35.432214 1838 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.81:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.81:6443: connect: connection refused Sep 6 00:23:35.432600 kubelet[1838]: E0906 00:23:35.432566 1838 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.128.0.81:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.81:6443: connect: connection refused" 
logger="UnhandledError" Sep 6 00:23:35.446831 kubelet[1838]: E0906 00:23:35.446723 1838 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.81:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d?timeout=10s\": dial tcp 10.128.0.81:6443: connect: connection refused" interval="1.6s" Sep 6 00:23:35.462450 env[1335]: time="2025-09-06T00:23:35.462382447Z" level=info msg="StartContainer for \"ed0d3993bd955c99604a35cca33ae16a0809a49d02e9c8b59c9701ec2e125499\" returns successfully" Sep 6 00:23:35.512408 env[1335]: time="2025-09-06T00:23:35.512339661Z" level=info msg="StartContainer for \"fc594caa02fe40c0a71cad15806262105f792107b19b9c210e4c0dbad53cf818\" returns successfully" Sep 6 00:23:35.645078 kubelet[1838]: I0906 00:23:35.644947 1838 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d" Sep 6 00:23:38.628686 kubelet[1838]: E0906 00:23:38.628629 1838 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d\" not found" node="ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d" Sep 6 00:23:38.756748 kubelet[1838]: I0906 00:23:38.756702 1838 kubelet_node_status.go:75] "Successfully registered node" node="ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d" Sep 6 00:23:38.989522 kubelet[1838]: I0906 00:23:38.989395 1838 apiserver.go:52] "Watching apiserver" Sep 6 00:23:39.037562 kubelet[1838]: I0906 00:23:39.037518 1838 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 6 00:23:41.173134 systemd[1]: Reloading. 
Sep 6 00:23:41.219109 kubelet[1838]: W0906 00:23:41.219072 1838 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters] Sep 6 00:23:41.294760 /usr/lib/systemd/system-generators/torcx-generator[2123]: time="2025-09-06T00:23:41Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 6 00:23:41.299275 /usr/lib/systemd/system-generators/torcx-generator[2123]: time="2025-09-06T00:23:41Z" level=info msg="torcx already run" Sep 6 00:23:41.412415 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 6 00:23:41.412445 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 6 00:23:41.437862 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 6 00:23:41.580784 systemd[1]: Stopping kubelet.service... Sep 6 00:23:41.606957 systemd[1]: kubelet.service: Deactivated successfully. Sep 6 00:23:41.607772 systemd[1]: Stopped kubelet.service. Sep 6 00:23:41.612854 systemd[1]: Starting kubelet.service... Sep 6 00:23:41.890178 systemd[1]: Started kubelet.service. Sep 6 00:23:41.995792 kubelet[2181]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 6 00:23:41.996361 kubelet[2181]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 6 00:23:41.996440 kubelet[2181]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 6 00:23:41.996619 kubelet[2181]: I0906 00:23:41.996571 2181 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 6 00:23:42.005138 kubelet[2181]: I0906 00:23:42.005074 2181 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 6 00:23:42.005138 kubelet[2181]: I0906 00:23:42.005112 2181 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 6 00:23:42.005534 kubelet[2181]: I0906 00:23:42.005496 2181 server.go:934] "Client rotation is on, will bootstrap in background" Sep 6 00:23:42.007199 kubelet[2181]: I0906 00:23:42.007127 2181 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 6 00:23:42.009823 kubelet[2181]: I0906 00:23:42.009746 2181 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 6 00:23:42.014424 kubelet[2181]: E0906 00:23:42.014392 2181 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 6 00:23:42.014566 kubelet[2181]: I0906 00:23:42.014552 2181 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
Sep 6 00:23:42.018274 kubelet[2181]: I0906 00:23:42.018245 2181 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 6 00:23:42.019303 kubelet[2181]: I0906 00:23:42.019280 2181 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 6 00:23:42.019673 kubelet[2181]: I0906 00:23:42.019638 2181 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 6 00:23:42.020237 kubelet[2181]: I0906 00:23:42.019798 2181 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerP
olicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Sep 6 00:23:42.020504 kubelet[2181]: I0906 00:23:42.020484 2181 topology_manager.go:138] "Creating topology manager with none policy" Sep 6 00:23:42.020621 kubelet[2181]: I0906 00:23:42.020606 2181 container_manager_linux.go:300] "Creating device plugin manager" Sep 6 00:23:42.020753 kubelet[2181]: I0906 00:23:42.020737 2181 state_mem.go:36] "Initialized new in-memory state store" Sep 6 00:23:42.020984 kubelet[2181]: I0906 00:23:42.020970 2181 kubelet.go:408] "Attempting to sync node with API server" Sep 6 00:23:42.021097 kubelet[2181]: I0906 00:23:42.021083 2181 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 6 00:23:42.021329 kubelet[2181]: I0906 00:23:42.021312 2181 kubelet.go:314] "Adding apiserver pod source" Sep 6 00:23:42.021451 kubelet[2181]: I0906 00:23:42.021437 2181 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 6 00:23:42.025278 kubelet[2181]: I0906 00:23:42.025189 2181 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 6 00:23:42.026688 kubelet[2181]: I0906 00:23:42.026657 2181 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 6 00:23:42.027538 kubelet[2181]: I0906 00:23:42.027511 2181 server.go:1274] "Started kubelet" Sep 6 00:23:42.052253 kubelet[2181]: I0906 00:23:42.052217 2181 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 6 00:23:42.056457 kubelet[2181]: E0906 00:23:42.056396 2181 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 6 00:23:42.056683 kubelet[2181]: I0906 00:23:42.056628 2181 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 6 00:23:42.058111 kubelet[2181]: I0906 00:23:42.058066 2181 server.go:449] "Adding debug handlers to kubelet server" Sep 6 00:23:42.059608 kubelet[2181]: I0906 00:23:42.059562 2181 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 6 00:23:42.059866 kubelet[2181]: I0906 00:23:42.059840 2181 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 6 00:23:42.061827 kubelet[2181]: I0906 00:23:42.061801 2181 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 6 00:23:42.064669 kubelet[2181]: I0906 00:23:42.064643 2181 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 6 00:23:42.065065 kubelet[2181]: E0906 00:23:42.065036 2181 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d\" not found" Sep 6 00:23:42.065807 kubelet[2181]: I0906 00:23:42.065785 2181 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 6 00:23:42.066089 kubelet[2181]: I0906 00:23:42.066071 2181 reconciler.go:26] "Reconciler: start to sync state" Sep 6 00:23:42.074120 kubelet[2181]: I0906 00:23:42.074048 2181 factory.go:221] Registration of the systemd container factory successfully Sep 6 00:23:42.074319 kubelet[2181]: I0906 00:23:42.074229 2181 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 6 00:23:42.080769 kubelet[2181]: I0906 00:23:42.080684 2181 factory.go:221] Registration of the 
containerd container factory successfully Sep 6 00:23:42.084408 kubelet[2181]: I0906 00:23:42.084356 2181 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 6 00:23:42.086910 kubelet[2181]: I0906 00:23:42.086878 2181 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 6 00:23:42.087080 kubelet[2181]: I0906 00:23:42.087062 2181 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 6 00:23:42.087252 kubelet[2181]: I0906 00:23:42.087235 2181 kubelet.go:2321] "Starting kubelet main sync loop" Sep 6 00:23:42.087434 kubelet[2181]: E0906 00:23:42.087390 2181 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 6 00:23:42.175967 kubelet[2181]: I0906 00:23:42.173616 2181 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 6 00:23:42.176244 kubelet[2181]: I0906 00:23:42.176215 2181 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 6 00:23:42.176399 kubelet[2181]: I0906 00:23:42.176381 2181 state_mem.go:36] "Initialized new in-memory state store" Sep 6 00:23:42.176733 kubelet[2181]: I0906 00:23:42.176710 2181 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 6 00:23:42.176891 kubelet[2181]: I0906 00:23:42.176851 2181 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 6 00:23:42.176998 kubelet[2181]: I0906 00:23:42.176982 2181 policy_none.go:49] "None policy: Start" Sep 6 00:23:42.178146 kubelet[2181]: I0906 00:23:42.178123 2181 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 6 00:23:42.178346 kubelet[2181]: I0906 00:23:42.178329 2181 state_mem.go:35] "Initializing new in-memory state store" Sep 6 00:23:42.178693 kubelet[2181]: I0906 00:23:42.178671 2181 state_mem.go:75] "Updated machine memory state" Sep 6 00:23:42.179983 sudo[2212]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 6 
00:23:42.180599 sudo[2212]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Sep 6 00:23:42.181705 kubelet[2181]: I0906 00:23:42.181682 2181 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 6 00:23:42.182054 kubelet[2181]: I0906 00:23:42.182034 2181 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 6 00:23:42.182235 kubelet[2181]: I0906 00:23:42.182188 2181 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 6 00:23:42.195273 kubelet[2181]: I0906 00:23:42.195244 2181 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 6 00:23:42.205609 kubelet[2181]: W0906 00:23:42.205547 2181 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters] Sep 6 00:23:42.206082 kubelet[2181]: E0906 00:23:42.206007 2181 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d\" already exists" pod="kube-system/kube-scheduler-ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d" Sep 6 00:23:42.206824 kubelet[2181]: W0906 00:23:42.206770 2181 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters] Sep 6 00:23:42.207985 kubelet[2181]: W0906 00:23:42.207959 2181 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters] Sep 6 00:23:42.267574 kubelet[2181]: I0906 00:23:42.267523 2181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/281ebb4575bdd8096d2ceb0fcd2c7a3d-ca-certs\") pod 
\"kube-apiserver-ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d\" (UID: \"281ebb4575bdd8096d2ceb0fcd2c7a3d\") " pod="kube-system/kube-apiserver-ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d" Sep 6 00:23:42.267843 kubelet[2181]: I0906 00:23:42.267815 2181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/281ebb4575bdd8096d2ceb0fcd2c7a3d-k8s-certs\") pod \"kube-apiserver-ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d\" (UID: \"281ebb4575bdd8096d2ceb0fcd2c7a3d\") " pod="kube-system/kube-apiserver-ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d" Sep 6 00:23:42.268038 kubelet[2181]: I0906 00:23:42.268008 2181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8d46af0a9e23c4bf4941847b1aef7746-flexvolume-dir\") pod \"kube-controller-manager-ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d\" (UID: \"8d46af0a9e23c4bf4941847b1aef7746\") " pod="kube-system/kube-controller-manager-ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d" Sep 6 00:23:42.268197 kubelet[2181]: I0906 00:23:42.268175 2181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8d46af0a9e23c4bf4941847b1aef7746-kubeconfig\") pod \"kube-controller-manager-ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d\" (UID: \"8d46af0a9e23c4bf4941847b1aef7746\") " pod="kube-system/kube-controller-manager-ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d" Sep 6 00:23:42.268361 kubelet[2181]: I0906 00:23:42.268336 2181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8d46af0a9e23c4bf4941847b1aef7746-usr-share-ca-certificates\") pod 
\"kube-controller-manager-ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d\" (UID: \"8d46af0a9e23c4bf4941847b1aef7746\") " pod="kube-system/kube-controller-manager-ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d" Sep 6 00:23:42.268506 kubelet[2181]: I0906 00:23:42.268485 2181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ceae2bd7ee7052ac2771fcda1cbebb9e-kubeconfig\") pod \"kube-scheduler-ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d\" (UID: \"ceae2bd7ee7052ac2771fcda1cbebb9e\") " pod="kube-system/kube-scheduler-ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d" Sep 6 00:23:42.268640 kubelet[2181]: I0906 00:23:42.268619 2181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/281ebb4575bdd8096d2ceb0fcd2c7a3d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d\" (UID: \"281ebb4575bdd8096d2ceb0fcd2c7a3d\") " pod="kube-system/kube-apiserver-ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d" Sep 6 00:23:42.268769 kubelet[2181]: I0906 00:23:42.268748 2181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8d46af0a9e23c4bf4941847b1aef7746-ca-certs\") pod \"kube-controller-manager-ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d\" (UID: \"8d46af0a9e23c4bf4941847b1aef7746\") " pod="kube-system/kube-controller-manager-ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d" Sep 6 00:23:42.268911 kubelet[2181]: I0906 00:23:42.268891 2181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8d46af0a9e23c4bf4941847b1aef7746-k8s-certs\") pod 
\"kube-controller-manager-ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d\" (UID: \"8d46af0a9e23c4bf4941847b1aef7746\") " pod="kube-system/kube-controller-manager-ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d" Sep 6 00:23:42.298505 kubelet[2181]: I0906 00:23:42.298463 2181 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d" Sep 6 00:23:42.309838 kubelet[2181]: I0906 00:23:42.309798 2181 kubelet_node_status.go:111] "Node was previously registered" node="ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d" Sep 6 00:23:42.310027 kubelet[2181]: I0906 00:23:42.309903 2181 kubelet_node_status.go:75] "Successfully registered node" node="ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d" Sep 6 00:23:42.931122 sudo[2212]: pam_unix(sudo:session): session closed for user root Sep 6 00:23:43.032718 kubelet[2181]: I0906 00:23:43.032668 2181 apiserver.go:52] "Watching apiserver" Sep 6 00:23:43.066442 kubelet[2181]: I0906 00:23:43.066393 2181 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 6 00:23:43.176781 kubelet[2181]: I0906 00:23:43.176699 2181 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d" podStartSLOduration=2.176654749 podStartE2EDuration="2.176654749s" podCreationTimestamp="2025-09-06 00:23:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:23:43.16474022 +0000 UTC m=+1.253854464" watchObservedRunningTime="2025-09-06 00:23:43.176654749 +0000 UTC m=+1.265768988" Sep 6 00:23:43.194096 kubelet[2181]: I0906 00:23:43.193920 2181 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d" podStartSLOduration=1.19389623 
podStartE2EDuration="1.19389623s" podCreationTimestamp="2025-09-06 00:23:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:23:43.179383206 +0000 UTC m=+1.268497451" watchObservedRunningTime="2025-09-06 00:23:43.19389623 +0000 UTC m=+1.283010471" Sep 6 00:23:43.194856 kubelet[2181]: I0906 00:23:43.194796 2181 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d" podStartSLOduration=1.194774527 podStartE2EDuration="1.194774527s" podCreationTimestamp="2025-09-06 00:23:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:23:43.191289965 +0000 UTC m=+1.280404216" watchObservedRunningTime="2025-09-06 00:23:43.194774527 +0000 UTC m=+1.283888772" Sep 6 00:23:45.086342 sudo[1565]: pam_unix(sudo:session): session closed for user root Sep 6 00:23:45.130674 sshd[1561]: pam_unix(sshd:session): session closed for user core Sep 6 00:23:45.136390 systemd[1]: sshd@4-10.128.0.81:22-139.178.89.65:54900.service: Deactivated successfully. Sep 6 00:23:45.138783 systemd[1]: session-5.scope: Deactivated successfully. Sep 6 00:23:45.138784 systemd-logind[1321]: Session 5 logged out. Waiting for processes to exit. Sep 6 00:23:45.141054 systemd-logind[1321]: Removed session 5. Sep 6 00:23:45.679719 update_engine[1323]: I0906 00:23:45.679631 1323 update_attempter.cc:509] Updating boot flags... Sep 6 00:23:47.830124 kubelet[2181]: I0906 00:23:47.830064 2181 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 6 00:23:47.830959 env[1335]: time="2025-09-06T00:23:47.830843149Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Sep 6 00:23:47.831542 kubelet[2181]: I0906 00:23:47.831294 2181 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 6 00:23:48.816641 kubelet[2181]: W0906 00:23:48.816589 2181 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d' and this object Sep 6 00:23:48.817207 kubelet[2181]: W0906 00:23:48.817036 2181 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d' and this object Sep 6 00:23:48.817418 kubelet[2181]: E0906 00:23:48.817381 2181 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d' and this object" logger="UnhandledError" Sep 6 00:23:48.817641 kubelet[2181]: E0906 00:23:48.817612 2181 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d\" cannot list resource \"configmaps\" in API group \"\" in the 
namespace \"kube-system\": no relationship found between node 'ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d' and this object" logger="UnhandledError" Sep 6 00:23:48.819424 kubelet[2181]: I0906 00:23:48.819383 2181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/01de7dc3-7676-44ab-865d-11e682f78ba8-kube-proxy\") pod \"kube-proxy-9xqw5\" (UID: \"01de7dc3-7676-44ab-865d-11e682f78ba8\") " pod="kube-system/kube-proxy-9xqw5" Sep 6 00:23:48.819708 kubelet[2181]: I0906 00:23:48.819682 2181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/01de7dc3-7676-44ab-865d-11e682f78ba8-xtables-lock\") pod \"kube-proxy-9xqw5\" (UID: \"01de7dc3-7676-44ab-865d-11e682f78ba8\") " pod="kube-system/kube-proxy-9xqw5" Sep 6 00:23:48.819930 kubelet[2181]: I0906 00:23:48.819905 2181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/01de7dc3-7676-44ab-865d-11e682f78ba8-lib-modules\") pod \"kube-proxy-9xqw5\" (UID: \"01de7dc3-7676-44ab-865d-11e682f78ba8\") " pod="kube-system/kube-proxy-9xqw5" Sep 6 00:23:48.820121 kubelet[2181]: I0906 00:23:48.820093 2181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25289\" (UniqueName: \"kubernetes.io/projected/01de7dc3-7676-44ab-865d-11e682f78ba8-kube-api-access-25289\") pod \"kube-proxy-9xqw5\" (UID: \"01de7dc3-7676-44ab-865d-11e682f78ba8\") " pod="kube-system/kube-proxy-9xqw5" Sep 6 00:23:48.920699 kubelet[2181]: I0906 00:23:48.920636 2181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/28f463ed-0cb8-48ce-988a-aaffa74730c9-xtables-lock\") pod \"cilium-kp7dz\" (UID: 
\"28f463ed-0cb8-48ce-988a-aaffa74730c9\") " pod="kube-system/cilium-kp7dz" Sep 6 00:23:48.921626 kubelet[2181]: I0906 00:23:48.921574 2181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/28f463ed-0cb8-48ce-988a-aaffa74730c9-etc-cni-netd\") pod \"cilium-kp7dz\" (UID: \"28f463ed-0cb8-48ce-988a-aaffa74730c9\") " pod="kube-system/cilium-kp7dz" Sep 6 00:23:48.921812 kubelet[2181]: I0906 00:23:48.921791 2181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/28f463ed-0cb8-48ce-988a-aaffa74730c9-lib-modules\") pod \"cilium-kp7dz\" (UID: \"28f463ed-0cb8-48ce-988a-aaffa74730c9\") " pod="kube-system/cilium-kp7dz" Sep 6 00:23:48.922031 kubelet[2181]: I0906 00:23:48.921994 2181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/28f463ed-0cb8-48ce-988a-aaffa74730c9-cilium-run\") pod \"cilium-kp7dz\" (UID: \"28f463ed-0cb8-48ce-988a-aaffa74730c9\") " pod="kube-system/cilium-kp7dz" Sep 6 00:23:48.922323 kubelet[2181]: I0906 00:23:48.922289 2181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/28f463ed-0cb8-48ce-988a-aaffa74730c9-bpf-maps\") pod \"cilium-kp7dz\" (UID: \"28f463ed-0cb8-48ce-988a-aaffa74730c9\") " pod="kube-system/cilium-kp7dz" Sep 6 00:23:48.922601 kubelet[2181]: I0906 00:23:48.922571 2181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/28f463ed-0cb8-48ce-988a-aaffa74730c9-hostproc\") pod \"cilium-kp7dz\" (UID: \"28f463ed-0cb8-48ce-988a-aaffa74730c9\") " pod="kube-system/cilium-kp7dz" Sep 6 00:23:48.922995 kubelet[2181]: I0906 00:23:48.922857 2181 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/28f463ed-0cb8-48ce-988a-aaffa74730c9-cilium-config-path\") pod \"cilium-kp7dz\" (UID: \"28f463ed-0cb8-48ce-988a-aaffa74730c9\") " pod="kube-system/cilium-kp7dz" Sep 6 00:23:48.923154 kubelet[2181]: I0906 00:23:48.923133 2181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/28f463ed-0cb8-48ce-988a-aaffa74730c9-host-proc-sys-net\") pod \"cilium-kp7dz\" (UID: \"28f463ed-0cb8-48ce-988a-aaffa74730c9\") " pod="kube-system/cilium-kp7dz" Sep 6 00:23:48.923343 kubelet[2181]: I0906 00:23:48.923321 2181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/28f463ed-0cb8-48ce-988a-aaffa74730c9-host-proc-sys-kernel\") pod \"cilium-kp7dz\" (UID: \"28f463ed-0cb8-48ce-988a-aaffa74730c9\") " pod="kube-system/cilium-kp7dz" Sep 6 00:23:48.923710 kubelet[2181]: I0906 00:23:48.923590 2181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/28f463ed-0cb8-48ce-988a-aaffa74730c9-hubble-tls\") pod \"cilium-kp7dz\" (UID: \"28f463ed-0cb8-48ce-988a-aaffa74730c9\") " pod="kube-system/cilium-kp7dz" Sep 6 00:23:48.923902 kubelet[2181]: I0906 00:23:48.923871 2181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/28f463ed-0cb8-48ce-988a-aaffa74730c9-cilium-cgroup\") pod \"cilium-kp7dz\" (UID: \"28f463ed-0cb8-48ce-988a-aaffa74730c9\") " pod="kube-system/cilium-kp7dz" Sep 6 00:23:48.924300 kubelet[2181]: I0906 00:23:48.924263 2181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/28f463ed-0cb8-48ce-988a-aaffa74730c9-cni-path\") pod \"cilium-kp7dz\" (UID: \"28f463ed-0cb8-48ce-988a-aaffa74730c9\") " pod="kube-system/cilium-kp7dz" Sep 6 00:23:48.924683 kubelet[2181]: I0906 00:23:48.924647 2181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/28f463ed-0cb8-48ce-988a-aaffa74730c9-clustermesh-secrets\") pod \"cilium-kp7dz\" (UID: \"28f463ed-0cb8-48ce-988a-aaffa74730c9\") " pod="kube-system/cilium-kp7dz" Sep 6 00:23:48.925001 kubelet[2181]: I0906 00:23:48.924970 2181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lz2g\" (UniqueName: \"kubernetes.io/projected/28f463ed-0cb8-48ce-988a-aaffa74730c9-kube-api-access-4lz2g\") pod \"cilium-kp7dz\" (UID: \"28f463ed-0cb8-48ce-988a-aaffa74730c9\") " pod="kube-system/cilium-kp7dz" Sep 6 00:23:49.026504 kubelet[2181]: I0906 00:23:49.026445 2181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmqcr\" (UniqueName: \"kubernetes.io/projected/8b11815a-be38-404f-9321-32219122ff53-kube-api-access-bmqcr\") pod \"cilium-operator-5d85765b45-pv8sf\" (UID: \"8b11815a-be38-404f-9321-32219122ff53\") " pod="kube-system/cilium-operator-5d85765b45-pv8sf" Sep 6 00:23:49.027960 kubelet[2181]: I0906 00:23:49.027393 2181 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Sep 6 00:23:49.029992 kubelet[2181]: I0906 00:23:49.029954 2181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8b11815a-be38-404f-9321-32219122ff53-cilium-config-path\") pod \"cilium-operator-5d85765b45-pv8sf\" (UID: \"8b11815a-be38-404f-9321-32219122ff53\") " pod="kube-system/cilium-operator-5d85765b45-pv8sf" Sep 6 00:23:49.924483 kubelet[2181]: E0906 00:23:49.924422 2181 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition Sep 6 00:23:49.926065 kubelet[2181]: E0906 00:23:49.924623 2181 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01de7dc3-7676-44ab-865d-11e682f78ba8-kube-proxy podName:01de7dc3-7676-44ab-865d-11e682f78ba8 nodeName:}" failed. No retries permitted until 2025-09-06 00:23:50.424534017 +0000 UTC m=+8.513648243 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/01de7dc3-7676-44ab-865d-11e682f78ba8-kube-proxy") pod "kube-proxy-9xqw5" (UID: "01de7dc3-7676-44ab-865d-11e682f78ba8") : failed to sync configmap cache: timed out waiting for the condition Sep 6 00:23:49.938211 kubelet[2181]: E0906 00:23:49.938092 2181 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Sep 6 00:23:49.938211 kubelet[2181]: E0906 00:23:49.938148 2181 projected.go:194] Error preparing data for projected volume kube-api-access-25289 for pod kube-system/kube-proxy-9xqw5: failed to sync configmap cache: timed out waiting for the condition Sep 6 00:23:49.940125 kubelet[2181]: E0906 00:23:49.938729 2181 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/01de7dc3-7676-44ab-865d-11e682f78ba8-kube-api-access-25289 podName:01de7dc3-7676-44ab-865d-11e682f78ba8 nodeName:}" failed. No retries permitted until 2025-09-06 00:23:50.438312781 +0000 UTC m=+8.527427023 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-25289" (UniqueName: "kubernetes.io/projected/01de7dc3-7676-44ab-865d-11e682f78ba8-kube-api-access-25289") pod "kube-proxy-9xqw5" (UID: "01de7dc3-7676-44ab-865d-11e682f78ba8") : failed to sync configmap cache: timed out waiting for the condition Sep 6 00:23:50.045306 kubelet[2181]: E0906 00:23:50.045240 2181 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Sep 6 00:23:50.045306 kubelet[2181]: E0906 00:23:50.045296 2181 projected.go:194] Error preparing data for projected volume kube-api-access-4lz2g for pod kube-system/cilium-kp7dz: failed to sync configmap cache: timed out waiting for the condition Sep 6 00:23:50.045599 kubelet[2181]: E0906 00:23:50.045385 2181 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/28f463ed-0cb8-48ce-988a-aaffa74730c9-kube-api-access-4lz2g podName:28f463ed-0cb8-48ce-988a-aaffa74730c9 nodeName:}" failed. No retries permitted until 2025-09-06 00:23:50.54535975 +0000 UTC m=+8.634473992 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-4lz2g" (UniqueName: "kubernetes.io/projected/28f463ed-0cb8-48ce-988a-aaffa74730c9-kube-api-access-4lz2g") pod "cilium-kp7dz" (UID: "28f463ed-0cb8-48ce-988a-aaffa74730c9") : failed to sync configmap cache: timed out waiting for the condition Sep 6 00:23:50.161009 env[1335]: time="2025-09-06T00:23:50.160933650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-pv8sf,Uid:8b11815a-be38-404f-9321-32219122ff53,Namespace:kube-system,Attempt:0,}" Sep 6 00:23:50.611527 env[1335]: time="2025-09-06T00:23:50.611457787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9xqw5,Uid:01de7dc3-7676-44ab-865d-11e682f78ba8,Namespace:kube-system,Attempt:0,}" Sep 6 00:23:50.639958 env[1335]: time="2025-09-06T00:23:50.639590358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kp7dz,Uid:28f463ed-0cb8-48ce-988a-aaffa74730c9,Namespace:kube-system,Attempt:0,}" Sep 6 00:23:50.848939 env[1335]: time="2025-09-06T00:23:50.848834279Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:23:50.849251 env[1335]: time="2025-09-06T00:23:50.848894424Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:23:50.849251 env[1335]: time="2025-09-06T00:23:50.848914355Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:23:50.849823 env[1335]: time="2025-09-06T00:23:50.849762544Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d3138b8cd316c35de3afb6b69f437fdad7e928ee241b9a4a42eaab20d5774043 pid=2279 runtime=io.containerd.runc.v2 Sep 6 00:23:50.884426 env[1335]: time="2025-09-06T00:23:50.880362936Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:23:50.884426 env[1335]: time="2025-09-06T00:23:50.880436237Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:23:50.884426 env[1335]: time="2025-09-06T00:23:50.880459119Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:23:50.884426 env[1335]: time="2025-09-06T00:23:50.880766144Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d9672f960fc95a2c994ade737d652a0429ee5851736653fea94745e7dda26478 pid=2305 runtime=io.containerd.runc.v2 Sep 6 00:23:50.907046 env[1335]: time="2025-09-06T00:23:50.902495260Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:23:50.907046 env[1335]: time="2025-09-06T00:23:50.902617428Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:23:50.907046 env[1335]: time="2025-09-06T00:23:50.902662347Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:23:50.907046 env[1335]: time="2025-09-06T00:23:50.902924140Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/aeea022d7b23b8f322dbc2554e62e631555394b2c6fcdff51d5461c1269c09f9 pid=2324 runtime=io.containerd.runc.v2 Sep 6 00:23:50.991706 env[1335]: time="2025-09-06T00:23:50.990974534Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-pv8sf,Uid:8b11815a-be38-404f-9321-32219122ff53,Namespace:kube-system,Attempt:0,} returns sandbox id \"d3138b8cd316c35de3afb6b69f437fdad7e928ee241b9a4a42eaab20d5774043\"" Sep 6 00:23:51.000374 env[1335]: time="2025-09-06T00:23:50.994997673Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 6 00:23:51.008015 env[1335]: time="2025-09-06T00:23:51.006782124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9xqw5,Uid:01de7dc3-7676-44ab-865d-11e682f78ba8,Namespace:kube-system,Attempt:0,} returns sandbox id \"d9672f960fc95a2c994ade737d652a0429ee5851736653fea94745e7dda26478\"" Sep 6 00:23:51.013757 env[1335]: time="2025-09-06T00:23:51.012407842Z" level=info msg="CreateContainer within sandbox \"d9672f960fc95a2c994ade737d652a0429ee5851736653fea94745e7dda26478\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 6 00:23:51.041535 env[1335]: time="2025-09-06T00:23:51.041388474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kp7dz,Uid:28f463ed-0cb8-48ce-988a-aaffa74730c9,Namespace:kube-system,Attempt:0,} returns sandbox id \"aeea022d7b23b8f322dbc2554e62e631555394b2c6fcdff51d5461c1269c09f9\"" Sep 6 00:23:51.052901 env[1335]: time="2025-09-06T00:23:51.052845034Z" level=info msg="CreateContainer within sandbox \"d9672f960fc95a2c994ade737d652a0429ee5851736653fea94745e7dda26478\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns 
container id \"2dde2403bdd983ee69f5bd6619a2f045bd9467021bd90ff9d2678de41ab1f36e\"" Sep 6 00:23:51.055558 env[1335]: time="2025-09-06T00:23:51.055508228Z" level=info msg="StartContainer for \"2dde2403bdd983ee69f5bd6619a2f045bd9467021bd90ff9d2678de41ab1f36e\"" Sep 6 00:23:51.142460 env[1335]: time="2025-09-06T00:23:51.142327441Z" level=info msg="StartContainer for \"2dde2403bdd983ee69f5bd6619a2f045bd9467021bd90ff9d2678de41ab1f36e\" returns successfully" Sep 6 00:23:51.191745 kubelet[2181]: I0906 00:23:51.191672 2181 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9xqw5" podStartSLOduration=3.191642322 podStartE2EDuration="3.191642322s" podCreationTimestamp="2025-09-06 00:23:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:23:51.190356222 +0000 UTC m=+9.279470472" watchObservedRunningTime="2025-09-06 00:23:51.191642322 +0000 UTC m=+9.280756573" Sep 6 00:23:52.392577 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3971696806.mount: Deactivated successfully. 
Sep 6 00:23:53.414538 env[1335]: time="2025-09-06T00:23:53.414443626Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:23:53.418658 env[1335]: time="2025-09-06T00:23:53.418565651Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:23:53.421711 env[1335]: time="2025-09-06T00:23:53.421649041Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:23:53.422521 env[1335]: time="2025-09-06T00:23:53.422422917Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Sep 6 00:23:53.426337 env[1335]: time="2025-09-06T00:23:53.426252615Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 6 00:23:53.428503 env[1335]: time="2025-09-06T00:23:53.428453235Z" level=info msg="CreateContainer within sandbox \"d3138b8cd316c35de3afb6b69f437fdad7e928ee241b9a4a42eaab20d5774043\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 6 00:23:53.451503 env[1335]: time="2025-09-06T00:23:53.451424911Z" level=info msg="CreateContainer within sandbox \"d3138b8cd316c35de3afb6b69f437fdad7e928ee241b9a4a42eaab20d5774043\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id 
\"3395c392e839cc83597b15725eac7fb903f6f5c3e31b17e78251e0e639a24978\"" Sep 6 00:23:53.454752 env[1335]: time="2025-09-06T00:23:53.454688734Z" level=info msg="StartContainer for \"3395c392e839cc83597b15725eac7fb903f6f5c3e31b17e78251e0e639a24978\"" Sep 6 00:23:53.568480 env[1335]: time="2025-09-06T00:23:53.568408929Z" level=info msg="StartContainer for \"3395c392e839cc83597b15725eac7fb903f6f5c3e31b17e78251e0e639a24978\" returns successfully" Sep 6 00:23:54.442524 systemd[1]: run-containerd-runc-k8s.io-3395c392e839cc83597b15725eac7fb903f6f5c3e31b17e78251e0e639a24978-runc.fOi6eZ.mount: Deactivated successfully. Sep 6 00:24:00.828645 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3535495412.mount: Deactivated successfully. Sep 6 00:24:04.307719 env[1335]: time="2025-09-06T00:24:04.307631463Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:24:04.310470 env[1335]: time="2025-09-06T00:24:04.310415980Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:24:04.312832 env[1335]: time="2025-09-06T00:24:04.312786346Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:24:04.313784 env[1335]: time="2025-09-06T00:24:04.313737446Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Sep 6 00:24:04.317488 env[1335]: time="2025-09-06T00:24:04.317446656Z" 
level=info msg="CreateContainer within sandbox \"aeea022d7b23b8f322dbc2554e62e631555394b2c6fcdff51d5461c1269c09f9\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 6 00:24:04.342249 env[1335]: time="2025-09-06T00:24:04.342180265Z" level=info msg="CreateContainer within sandbox \"aeea022d7b23b8f322dbc2554e62e631555394b2c6fcdff51d5461c1269c09f9\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"01dd7217afbfbac486bc8dc8689756ad26bbe6fb98139b816815d49f0a2416c6\"" Sep 6 00:24:04.344488 env[1335]: time="2025-09-06T00:24:04.343120055Z" level=info msg="StartContainer for \"01dd7217afbfbac486bc8dc8689756ad26bbe6fb98139b816815d49f0a2416c6\"" Sep 6 00:24:04.435661 env[1335]: time="2025-09-06T00:24:04.435589494Z" level=info msg="StartContainer for \"01dd7217afbfbac486bc8dc8689756ad26bbe6fb98139b816815d49f0a2416c6\" returns successfully" Sep 6 00:24:05.236411 kubelet[2181]: I0906 00:24:05.236243 2181 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-pv8sf" podStartSLOduration=14.804985116 podStartE2EDuration="17.236218193s" podCreationTimestamp="2025-09-06 00:23:48 +0000 UTC" firstStartedPulling="2025-09-06 00:23:50.993138332 +0000 UTC m=+9.082252556" lastFinishedPulling="2025-09-06 00:23:53.424371385 +0000 UTC m=+11.513485633" observedRunningTime="2025-09-06 00:23:54.290797267 +0000 UTC m=+12.379911518" watchObservedRunningTime="2025-09-06 00:24:05.236218193 +0000 UTC m=+23.325332443" Sep 6 00:24:05.330779 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-01dd7217afbfbac486bc8dc8689756ad26bbe6fb98139b816815d49f0a2416c6-rootfs.mount: Deactivated successfully. 
Sep 6 00:24:06.839918 env[1335]: time="2025-09-06T00:24:06.839811093Z" level=info msg="shim disconnected" id=01dd7217afbfbac486bc8dc8689756ad26bbe6fb98139b816815d49f0a2416c6 Sep 6 00:24:06.839918 env[1335]: time="2025-09-06T00:24:06.839909815Z" level=warning msg="cleaning up after shim disconnected" id=01dd7217afbfbac486bc8dc8689756ad26bbe6fb98139b816815d49f0a2416c6 namespace=k8s.io Sep 6 00:24:06.839918 env[1335]: time="2025-09-06T00:24:06.839928201Z" level=info msg="cleaning up dead shim" Sep 6 00:24:06.855024 env[1335]: time="2025-09-06T00:24:06.854943587Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:24:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2646 runtime=io.containerd.runc.v2\n" Sep 6 00:24:07.226612 env[1335]: time="2025-09-06T00:24:07.226539151Z" level=info msg="CreateContainer within sandbox \"aeea022d7b23b8f322dbc2554e62e631555394b2c6fcdff51d5461c1269c09f9\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 6 00:24:07.251101 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2878116903.mount: Deactivated successfully. Sep 6 00:24:07.261791 env[1335]: time="2025-09-06T00:24:07.261690109Z" level=info msg="CreateContainer within sandbox \"aeea022d7b23b8f322dbc2554e62e631555394b2c6fcdff51d5461c1269c09f9\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"de594238dc087b440654b689e1e419cc9de92d4ce9c475d3c1d9492f95fae01b\"" Sep 6 00:24:07.269659 env[1335]: time="2025-09-06T00:24:07.269571719Z" level=info msg="StartContainer for \"de594238dc087b440654b689e1e419cc9de92d4ce9c475d3c1d9492f95fae01b\"" Sep 6 00:24:07.383115 env[1335]: time="2025-09-06T00:24:07.380713370Z" level=info msg="StartContainer for \"de594238dc087b440654b689e1e419cc9de92d4ce9c475d3c1d9492f95fae01b\" returns successfully" Sep 6 00:24:07.400693 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 6 00:24:07.402206 systemd[1]: Stopped systemd-sysctl.service. 
Sep 6 00:24:07.402753 systemd[1]: Stopping systemd-sysctl.service... Sep 6 00:24:07.407755 systemd[1]: Starting systemd-sysctl.service... Sep 6 00:24:07.433024 systemd[1]: Finished systemd-sysctl.service. Sep 6 00:24:07.461436 env[1335]: time="2025-09-06T00:24:07.461218881Z" level=info msg="shim disconnected" id=de594238dc087b440654b689e1e419cc9de92d4ce9c475d3c1d9492f95fae01b Sep 6 00:24:07.461436 env[1335]: time="2025-09-06T00:24:07.461322314Z" level=warning msg="cleaning up after shim disconnected" id=de594238dc087b440654b689e1e419cc9de92d4ce9c475d3c1d9492f95fae01b namespace=k8s.io Sep 6 00:24:07.461436 env[1335]: time="2025-09-06T00:24:07.461342196Z" level=info msg="cleaning up dead shim" Sep 6 00:24:07.477676 env[1335]: time="2025-09-06T00:24:07.476814379Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:24:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2711 runtime=io.containerd.runc.v2\n" Sep 6 00:24:08.233952 env[1335]: time="2025-09-06T00:24:08.233386791Z" level=info msg="CreateContainer within sandbox \"aeea022d7b23b8f322dbc2554e62e631555394b2c6fcdff51d5461c1269c09f9\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 6 00:24:08.248377 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-de594238dc087b440654b689e1e419cc9de92d4ce9c475d3c1d9492f95fae01b-rootfs.mount: Deactivated successfully. 
Sep 6 00:24:08.281750 env[1335]: time="2025-09-06T00:24:08.281648594Z" level=info msg="CreateContainer within sandbox \"aeea022d7b23b8f322dbc2554e62e631555394b2c6fcdff51d5461c1269c09f9\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7cb7a09b1bfabb1bdb2865b2bafcff73d4f08be83587edafbdd50d18cd10621e\"" Sep 6 00:24:08.285559 env[1335]: time="2025-09-06T00:24:08.282998769Z" level=info msg="StartContainer for \"7cb7a09b1bfabb1bdb2865b2bafcff73d4f08be83587edafbdd50d18cd10621e\"" Sep 6 00:24:08.395735 env[1335]: time="2025-09-06T00:24:08.395658849Z" level=info msg="StartContainer for \"7cb7a09b1bfabb1bdb2865b2bafcff73d4f08be83587edafbdd50d18cd10621e\" returns successfully" Sep 6 00:24:08.444562 env[1335]: time="2025-09-06T00:24:08.444458599Z" level=info msg="shim disconnected" id=7cb7a09b1bfabb1bdb2865b2bafcff73d4f08be83587edafbdd50d18cd10621e Sep 6 00:24:08.444562 env[1335]: time="2025-09-06T00:24:08.444547481Z" level=warning msg="cleaning up after shim disconnected" id=7cb7a09b1bfabb1bdb2865b2bafcff73d4f08be83587edafbdd50d18cd10621e namespace=k8s.io Sep 6 00:24:08.444562 env[1335]: time="2025-09-06T00:24:08.444565193Z" level=info msg="cleaning up dead shim" Sep 6 00:24:08.457661 env[1335]: time="2025-09-06T00:24:08.457595851Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:24:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2771 runtime=io.containerd.runc.v2\n" Sep 6 00:24:09.237279 env[1335]: time="2025-09-06T00:24:09.237206708Z" level=info msg="CreateContainer within sandbox \"aeea022d7b23b8f322dbc2554e62e631555394b2c6fcdff51d5461c1269c09f9\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 6 00:24:09.246871 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7cb7a09b1bfabb1bdb2865b2bafcff73d4f08be83587edafbdd50d18cd10621e-rootfs.mount: Deactivated successfully. 
Sep 6 00:24:09.283478 env[1335]: time="2025-09-06T00:24:09.283418693Z" level=info msg="CreateContainer within sandbox \"aeea022d7b23b8f322dbc2554e62e631555394b2c6fcdff51d5461c1269c09f9\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1327d429e31974f7406f1f3c04513b5b170cf8e2f1f77e171c0b49e363ce0d2b\"" Sep 6 00:24:09.286956 env[1335]: time="2025-09-06T00:24:09.286896396Z" level=info msg="StartContainer for \"1327d429e31974f7406f1f3c04513b5b170cf8e2f1f77e171c0b49e363ce0d2b\"" Sep 6 00:24:09.389328 env[1335]: time="2025-09-06T00:24:09.389259898Z" level=info msg="StartContainer for \"1327d429e31974f7406f1f3c04513b5b170cf8e2f1f77e171c0b49e363ce0d2b\" returns successfully" Sep 6 00:24:09.425009 env[1335]: time="2025-09-06T00:24:09.424904762Z" level=info msg="shim disconnected" id=1327d429e31974f7406f1f3c04513b5b170cf8e2f1f77e171c0b49e363ce0d2b Sep 6 00:24:09.425009 env[1335]: time="2025-09-06T00:24:09.424972803Z" level=warning msg="cleaning up after shim disconnected" id=1327d429e31974f7406f1f3c04513b5b170cf8e2f1f77e171c0b49e363ce0d2b namespace=k8s.io Sep 6 00:24:09.425009 env[1335]: time="2025-09-06T00:24:09.424989872Z" level=info msg="cleaning up dead shim" Sep 6 00:24:09.436550 env[1335]: time="2025-09-06T00:24:09.436476361Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:24:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2828 runtime=io.containerd.runc.v2\n" Sep 6 00:24:10.244968 env[1335]: time="2025-09-06T00:24:10.244902082Z" level=info msg="CreateContainer within sandbox \"aeea022d7b23b8f322dbc2554e62e631555394b2c6fcdff51d5461c1269c09f9\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 6 00:24:10.250986 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1327d429e31974f7406f1f3c04513b5b170cf8e2f1f77e171c0b49e363ce0d2b-rootfs.mount: Deactivated successfully. 
Sep 6 00:24:10.281047 env[1335]: time="2025-09-06T00:24:10.280968199Z" level=info msg="CreateContainer within sandbox \"aeea022d7b23b8f322dbc2554e62e631555394b2c6fcdff51d5461c1269c09f9\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"bca5e4ba2f3e7640d0f59d46c5247335acca81601b88f6b78bec94d7d29ed8fe\"" Sep 6 00:24:10.282652 env[1335]: time="2025-09-06T00:24:10.282581265Z" level=info msg="StartContainer for \"bca5e4ba2f3e7640d0f59d46c5247335acca81601b88f6b78bec94d7d29ed8fe\"" Sep 6 00:24:10.400197 env[1335]: time="2025-09-06T00:24:10.398424254Z" level=info msg="StartContainer for \"bca5e4ba2f3e7640d0f59d46c5247335acca81601b88f6b78bec94d7d29ed8fe\" returns successfully" Sep 6 00:24:10.591389 kubelet[2181]: I0906 00:24:10.590735 2181 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 6 00:24:10.742550 kubelet[2181]: I0906 00:24:10.742489 2181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7f45104e-db4d-4e03-8998-39337e087052-config-volume\") pod \"coredns-7c65d6cfc9-52t2f\" (UID: \"7f45104e-db4d-4e03-8998-39337e087052\") " pod="kube-system/coredns-7c65d6cfc9-52t2f" Sep 6 00:24:10.742910 kubelet[2181]: I0906 00:24:10.742864 2181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbnp8\" (UniqueName: \"kubernetes.io/projected/3be96d7f-960a-4623-96f6-14d9362b0915-kube-api-access-wbnp8\") pod \"coredns-7c65d6cfc9-5njxt\" (UID: \"3be96d7f-960a-4623-96f6-14d9362b0915\") " pod="kube-system/coredns-7c65d6cfc9-5njxt" Sep 6 00:24:10.743120 kubelet[2181]: I0906 00:24:10.743089 2181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mg5hx\" (UniqueName: \"kubernetes.io/projected/7f45104e-db4d-4e03-8998-39337e087052-kube-api-access-mg5hx\") pod \"coredns-7c65d6cfc9-52t2f\" (UID: 
\"7f45104e-db4d-4e03-8998-39337e087052\") " pod="kube-system/coredns-7c65d6cfc9-52t2f" Sep 6 00:24:10.743339 kubelet[2181]: I0906 00:24:10.743307 2181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3be96d7f-960a-4623-96f6-14d9362b0915-config-volume\") pod \"coredns-7c65d6cfc9-5njxt\" (UID: \"3be96d7f-960a-4623-96f6-14d9362b0915\") " pod="kube-system/coredns-7c65d6cfc9-5njxt" Sep 6 00:24:10.973660 env[1335]: time="2025-09-06T00:24:10.973580298Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-52t2f,Uid:7f45104e-db4d-4e03-8998-39337e087052,Namespace:kube-system,Attempt:0,}" Sep 6 00:24:11.015602 env[1335]: time="2025-09-06T00:24:11.010757544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-5njxt,Uid:3be96d7f-960a-4623-96f6-14d9362b0915,Namespace:kube-system,Attempt:0,}" Sep 6 00:24:11.255105 systemd[1]: run-containerd-runc-k8s.io-bca5e4ba2f3e7640d0f59d46c5247335acca81601b88f6b78bec94d7d29ed8fe-runc.z6hr6m.mount: Deactivated successfully. 
Sep 6 00:24:12.823029 systemd-networkd[1079]: cilium_host: Link UP Sep 6 00:24:12.825350 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Sep 6 00:24:12.823317 systemd-networkd[1079]: cilium_net: Link UP Sep 6 00:24:12.834936 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Sep 6 00:24:12.834421 systemd-networkd[1079]: cilium_net: Gained carrier Sep 6 00:24:12.835557 systemd-networkd[1079]: cilium_host: Gained carrier Sep 6 00:24:12.840922 systemd-networkd[1079]: cilium_net: Gained IPv6LL Sep 6 00:24:13.006006 systemd-networkd[1079]: cilium_host: Gained IPv6LL Sep 6 00:24:13.006591 systemd-networkd[1079]: cilium_vxlan: Link UP Sep 6 00:24:13.006600 systemd-networkd[1079]: cilium_vxlan: Gained carrier Sep 6 00:24:13.313223 kernel: NET: Registered PF_ALG protocol family Sep 6 00:24:14.213464 systemd-networkd[1079]: lxc_health: Link UP Sep 6 00:24:14.236204 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Sep 6 00:24:14.233724 systemd-networkd[1079]: lxc_health: Gained carrier Sep 6 00:24:14.562412 systemd-networkd[1079]: lxc7a65d287bbb9: Link UP Sep 6 00:24:14.572196 kernel: eth0: renamed from tmpb7380 Sep 6 00:24:14.585846 systemd-networkd[1079]: lxccfd70a861288: Link UP Sep 6 00:24:14.599209 kernel: eth0: renamed from tmp5f0fd Sep 6 00:24:14.640643 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc7a65d287bbb9: link becomes ready Sep 6 00:24:14.628737 systemd-networkd[1079]: lxc7a65d287bbb9: Gained carrier Sep 6 00:24:14.651530 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxccfd70a861288: link becomes ready Sep 6 00:24:14.660454 systemd-networkd[1079]: lxccfd70a861288: Gained carrier Sep 6 00:24:14.698475 kubelet[2181]: I0906 00:24:14.697791 2181 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-kp7dz" podStartSLOduration=13.425302907 podStartE2EDuration="26.69776656s" podCreationTimestamp="2025-09-06 00:23:48 +0000 UTC" firstStartedPulling="2025-09-06 00:23:51.043235249 
+0000 UTC m=+9.132349479" lastFinishedPulling="2025-09-06 00:24:04.315698891 +0000 UTC m=+22.404813132" observedRunningTime="2025-09-06 00:24:11.279143352 +0000 UTC m=+29.368257608" watchObservedRunningTime="2025-09-06 00:24:14.69776656 +0000 UTC m=+32.786880806" Sep 6 00:24:15.003396 systemd-networkd[1079]: cilium_vxlan: Gained IPv6LL Sep 6 00:24:15.579498 systemd-networkd[1079]: lxc_health: Gained IPv6LL Sep 6 00:24:15.899458 systemd-networkd[1079]: lxccfd70a861288: Gained IPv6LL Sep 6 00:24:16.603498 systemd-networkd[1079]: lxc7a65d287bbb9: Gained IPv6LL Sep 6 00:24:19.841291 env[1335]: time="2025-09-06T00:24:19.819257307Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:24:19.841291 env[1335]: time="2025-09-06T00:24:19.819360935Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:24:19.841291 env[1335]: time="2025-09-06T00:24:19.819411906Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:24:19.841291 env[1335]: time="2025-09-06T00:24:19.820491687Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5f0fd234f5758cedff706f85bf9d8930997225a6245f82c9c108542fb720abf9 pid=3378 runtime=io.containerd.runc.v2 Sep 6 00:24:19.861574 systemd[1]: run-containerd-runc-k8s.io-5f0fd234f5758cedff706f85bf9d8930997225a6245f82c9c108542fb720abf9-runc.wVbPB4.mount: Deactivated successfully. Sep 6 00:24:19.910018 env[1335]: time="2025-09-06T00:24:19.909596393Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:24:19.910018 env[1335]: time="2025-09-06T00:24:19.909673167Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:24:19.910018 env[1335]: time="2025-09-06T00:24:19.909941365Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:24:19.910711 env[1335]: time="2025-09-06T00:24:19.910633511Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b738081987f13810cbb1e144fbbacf3389dd955dbd1003b0b9a3d26b9edf26f1 pid=3410 runtime=io.containerd.runc.v2 Sep 6 00:24:20.007287 env[1335]: time="2025-09-06T00:24:20.007221582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-5njxt,Uid:3be96d7f-960a-4623-96f6-14d9362b0915,Namespace:kube-system,Attempt:0,} returns sandbox id \"5f0fd234f5758cedff706f85bf9d8930997225a6245f82c9c108542fb720abf9\"" Sep 6 00:24:20.016425 env[1335]: time="2025-09-06T00:24:20.016370033Z" level=info msg="CreateContainer within sandbox \"5f0fd234f5758cedff706f85bf9d8930997225a6245f82c9c108542fb720abf9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 6 00:24:20.036878 env[1335]: time="2025-09-06T00:24:20.036813615Z" level=info msg="CreateContainer within sandbox \"5f0fd234f5758cedff706f85bf9d8930997225a6245f82c9c108542fb720abf9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"123b954ecab46e8cc42738b5468067635e2ef7c62e9f30747fff894a78223d48\"" Sep 6 00:24:20.047873 env[1335]: time="2025-09-06T00:24:20.043482825Z" level=info msg="StartContainer for \"123b954ecab46e8cc42738b5468067635e2ef7c62e9f30747fff894a78223d48\"" Sep 6 00:24:20.150954 env[1335]: time="2025-09-06T00:24:20.150894075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-52t2f,Uid:7f45104e-db4d-4e03-8998-39337e087052,Namespace:kube-system,Attempt:0,} returns sandbox id \"b738081987f13810cbb1e144fbbacf3389dd955dbd1003b0b9a3d26b9edf26f1\"" Sep 6 00:24:20.154478 env[1335]: 
time="2025-09-06T00:24:20.154423604Z" level=info msg="CreateContainer within sandbox \"b738081987f13810cbb1e144fbbacf3389dd955dbd1003b0b9a3d26b9edf26f1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 6 00:24:20.186409 env[1335]: time="2025-09-06T00:24:20.186343222Z" level=info msg="CreateContainer within sandbox \"b738081987f13810cbb1e144fbbacf3389dd955dbd1003b0b9a3d26b9edf26f1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"53888a8e592445fb91469f1178e1e91c3d52f06ce2cb8f4a04e00a7de8d7f837\"" Sep 6 00:24:20.190823 env[1335]: time="2025-09-06T00:24:20.190047124Z" level=info msg="StartContainer for \"53888a8e592445fb91469f1178e1e91c3d52f06ce2cb8f4a04e00a7de8d7f837\"" Sep 6 00:24:20.207348 env[1335]: time="2025-09-06T00:24:20.206823820Z" level=info msg="StartContainer for \"123b954ecab46e8cc42738b5468067635e2ef7c62e9f30747fff894a78223d48\" returns successfully" Sep 6 00:24:20.309277 kubelet[2181]: I0906 00:24:20.309198 2181 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-5njxt" podStartSLOduration=32.309157136 podStartE2EDuration="32.309157136s" podCreationTimestamp="2025-09-06 00:23:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:24:20.309025361 +0000 UTC m=+38.398139635" watchObservedRunningTime="2025-09-06 00:24:20.309157136 +0000 UTC m=+38.398271387" Sep 6 00:24:20.331759 env[1335]: time="2025-09-06T00:24:20.331689388Z" level=info msg="StartContainer for \"53888a8e592445fb91469f1178e1e91c3d52f06ce2cb8f4a04e00a7de8d7f837\" returns successfully" Sep 6 00:24:20.832028 systemd[1]: run-containerd-runc-k8s.io-b738081987f13810cbb1e144fbbacf3389dd955dbd1003b0b9a3d26b9edf26f1-runc.xQRQFl.mount: Deactivated successfully. 
Sep 6 00:24:21.315986 kubelet[2181]: I0906 00:24:21.315896 2181 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-52t2f" podStartSLOduration=33.315859638 podStartE2EDuration="33.315859638s" podCreationTimestamp="2025-09-06 00:23:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:24:21.31583591 +0000 UTC m=+39.404950159" watchObservedRunningTime="2025-09-06 00:24:21.315859638 +0000 UTC m=+39.404973889" Sep 6 00:24:59.984598 systemd[1]: Started sshd@5-10.128.0.81:22-139.178.89.65:41632.service. Sep 6 00:25:00.290401 sshd[3546]: Accepted publickey for core from 139.178.89.65 port 41632 ssh2: RSA SHA256:O4b1lx+UphQ1XQCPwsrjL8IoqrnWSgynNYcpg4eKVRo Sep 6 00:25:00.292552 sshd[3546]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:25:00.302011 systemd[1]: Started session-6.scope. Sep 6 00:25:00.302508 systemd-logind[1321]: New session 6 of user core. Sep 6 00:25:00.735399 sshd[3546]: pam_unix(sshd:session): session closed for user core Sep 6 00:25:00.740752 systemd[1]: sshd@5-10.128.0.81:22-139.178.89.65:41632.service: Deactivated successfully. Sep 6 00:25:00.743725 systemd[1]: session-6.scope: Deactivated successfully. Sep 6 00:25:00.744910 systemd-logind[1321]: Session 6 logged out. Waiting for processes to exit. Sep 6 00:25:00.747797 systemd-logind[1321]: Removed session 6. Sep 6 00:25:05.781418 systemd[1]: Started sshd@6-10.128.0.81:22-139.178.89.65:41638.service. Sep 6 00:25:06.082282 sshd[3562]: Accepted publickey for core from 139.178.89.65 port 41638 ssh2: RSA SHA256:O4b1lx+UphQ1XQCPwsrjL8IoqrnWSgynNYcpg4eKVRo Sep 6 00:25:06.084350 sshd[3562]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:25:06.094885 systemd[1]: Started session-7.scope. Sep 6 00:25:06.095757 systemd-logind[1321]: New session 7 of user core. 
Sep 6 00:25:06.397379 sshd[3562]: pam_unix(sshd:session): session closed for user core Sep 6 00:25:06.402885 systemd[1]: sshd@6-10.128.0.81:22-139.178.89.65:41638.service: Deactivated successfully. Sep 6 00:25:06.405235 systemd-logind[1321]: Session 7 logged out. Waiting for processes to exit. Sep 6 00:25:06.405401 systemd[1]: session-7.scope: Deactivated successfully. Sep 6 00:25:06.408304 systemd-logind[1321]: Removed session 7. Sep 6 00:25:11.443875 systemd[1]: Started sshd@7-10.128.0.81:22-139.178.89.65:34534.service. Sep 6 00:25:11.747419 sshd[3575]: Accepted publickey for core from 139.178.89.65 port 34534 ssh2: RSA SHA256:O4b1lx+UphQ1XQCPwsrjL8IoqrnWSgynNYcpg4eKVRo Sep 6 00:25:11.749687 sshd[3575]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:25:11.758447 systemd[1]: Started session-8.scope. Sep 6 00:25:11.759447 systemd-logind[1321]: New session 8 of user core. Sep 6 00:25:12.046066 sshd[3575]: pam_unix(sshd:session): session closed for user core Sep 6 00:25:12.051429 systemd[1]: sshd@7-10.128.0.81:22-139.178.89.65:34534.service: Deactivated successfully. Sep 6 00:25:12.053635 systemd[1]: session-8.scope: Deactivated successfully. Sep 6 00:25:12.054507 systemd-logind[1321]: Session 8 logged out. Waiting for processes to exit. Sep 6 00:25:12.056282 systemd-logind[1321]: Removed session 8. Sep 6 00:25:17.093002 systemd[1]: Started sshd@8-10.128.0.81:22-139.178.89.65:34542.service. Sep 6 00:25:17.393976 sshd[3589]: Accepted publickey for core from 139.178.89.65 port 34542 ssh2: RSA SHA256:O4b1lx+UphQ1XQCPwsrjL8IoqrnWSgynNYcpg4eKVRo Sep 6 00:25:17.396460 sshd[3589]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:25:17.404911 systemd[1]: Started session-9.scope. Sep 6 00:25:17.405966 systemd-logind[1321]: New session 9 of user core. 
Sep 6 00:25:17.714983 sshd[3589]: pam_unix(sshd:session): session closed for user core Sep 6 00:25:17.720752 systemd[1]: sshd@8-10.128.0.81:22-139.178.89.65:34542.service: Deactivated successfully. Sep 6 00:25:17.722932 systemd[1]: session-9.scope: Deactivated successfully. Sep 6 00:25:17.722954 systemd-logind[1321]: Session 9 logged out. Waiting for processes to exit. Sep 6 00:25:17.724950 systemd-logind[1321]: Removed session 9. Sep 6 00:25:22.761276 systemd[1]: Started sshd@9-10.128.0.81:22-139.178.89.65:60336.service. Sep 6 00:25:23.061420 sshd[3605]: Accepted publickey for core from 139.178.89.65 port 60336 ssh2: RSA SHA256:O4b1lx+UphQ1XQCPwsrjL8IoqrnWSgynNYcpg4eKVRo Sep 6 00:25:23.064024 sshd[3605]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:25:23.072577 systemd-logind[1321]: New session 10 of user core. Sep 6 00:25:23.072992 systemd[1]: Started session-10.scope. Sep 6 00:25:23.388500 sshd[3605]: pam_unix(sshd:session): session closed for user core Sep 6 00:25:23.395535 systemd[1]: sshd@9-10.128.0.81:22-139.178.89.65:60336.service: Deactivated successfully. Sep 6 00:25:23.398201 systemd[1]: session-10.scope: Deactivated successfully. Sep 6 00:25:23.399756 systemd-logind[1321]: Session 10 logged out. Waiting for processes to exit. Sep 6 00:25:23.401867 systemd-logind[1321]: Removed session 10. Sep 6 00:25:23.432028 systemd[1]: Started sshd@10-10.128.0.81:22-139.178.89.65:60340.service. Sep 6 00:25:23.731066 sshd[3619]: Accepted publickey for core from 139.178.89.65 port 60340 ssh2: RSA SHA256:O4b1lx+UphQ1XQCPwsrjL8IoqrnWSgynNYcpg4eKVRo Sep 6 00:25:23.733936 sshd[3619]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:25:23.741962 systemd-logind[1321]: New session 11 of user core. Sep 6 00:25:23.743024 systemd[1]: Started session-11.scope. Sep 6 00:25:24.086764 sshd[3619]: pam_unix(sshd:session): session closed for user core Sep 6 00:25:24.099316 systemd-logind[1321]: Session 11 logged out. 
Waiting for processes to exit. Sep 6 00:25:24.100626 systemd[1]: sshd@10-10.128.0.81:22-139.178.89.65:60340.service: Deactivated successfully. Sep 6 00:25:24.102118 systemd[1]: session-11.scope: Deactivated successfully. Sep 6 00:25:24.103459 systemd-logind[1321]: Removed session 11. Sep 6 00:25:24.131994 systemd[1]: Started sshd@11-10.128.0.81:22-139.178.89.65:60344.service. Sep 6 00:25:24.441297 sshd[3630]: Accepted publickey for core from 139.178.89.65 port 60344 ssh2: RSA SHA256:O4b1lx+UphQ1XQCPwsrjL8IoqrnWSgynNYcpg4eKVRo Sep 6 00:25:24.443467 sshd[3630]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:25:24.451000 systemd[1]: Started session-12.scope. Sep 6 00:25:24.451834 systemd-logind[1321]: New session 12 of user core. Sep 6 00:25:24.739958 sshd[3630]: pam_unix(sshd:session): session closed for user core Sep 6 00:25:24.745850 systemd[1]: sshd@11-10.128.0.81:22-139.178.89.65:60344.service: Deactivated successfully. Sep 6 00:25:24.747317 systemd[1]: session-12.scope: Deactivated successfully. Sep 6 00:25:24.748269 systemd-logind[1321]: Session 12 logged out. Waiting for processes to exit. Sep 6 00:25:24.749734 systemd-logind[1321]: Removed session 12. Sep 6 00:25:29.785338 systemd[1]: Started sshd@12-10.128.0.81:22-139.178.89.65:60352.service. Sep 6 00:25:30.081131 sshd[3643]: Accepted publickey for core from 139.178.89.65 port 60352 ssh2: RSA SHA256:O4b1lx+UphQ1XQCPwsrjL8IoqrnWSgynNYcpg4eKVRo Sep 6 00:25:30.082940 sshd[3643]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:25:30.091182 systemd[1]: Started session-13.scope. Sep 6 00:25:30.091776 systemd-logind[1321]: New session 13 of user core. Sep 6 00:25:30.375014 sshd[3643]: pam_unix(sshd:session): session closed for user core Sep 6 00:25:30.380922 systemd[1]: sshd@12-10.128.0.81:22-139.178.89.65:60352.service: Deactivated successfully. Sep 6 00:25:30.384005 systemd[1]: session-13.scope: Deactivated successfully. 
Sep 6 00:25:30.387206 systemd-logind[1321]: Session 13 logged out. Waiting for processes to exit. Sep 6 00:25:30.391727 systemd-logind[1321]: Removed session 13. Sep 6 00:25:35.422223 systemd[1]: Started sshd@13-10.128.0.81:22-139.178.89.65:47932.service. Sep 6 00:25:35.720710 sshd[3658]: Accepted publickey for core from 139.178.89.65 port 47932 ssh2: RSA SHA256:O4b1lx+UphQ1XQCPwsrjL8IoqrnWSgynNYcpg4eKVRo Sep 6 00:25:35.723319 sshd[3658]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:25:35.731810 systemd[1]: Started session-14.scope. Sep 6 00:25:35.732492 systemd-logind[1321]: New session 14 of user core. Sep 6 00:25:36.025908 sshd[3658]: pam_unix(sshd:session): session closed for user core Sep 6 00:25:36.031509 systemd[1]: sshd@13-10.128.0.81:22-139.178.89.65:47932.service: Deactivated successfully. Sep 6 00:25:36.033915 systemd[1]: session-14.scope: Deactivated successfully. Sep 6 00:25:36.033954 systemd-logind[1321]: Session 14 logged out. Waiting for processes to exit. Sep 6 00:25:36.036125 systemd-logind[1321]: Removed session 14. Sep 6 00:25:36.072542 systemd[1]: Started sshd@14-10.128.0.81:22-139.178.89.65:47944.service. Sep 6 00:25:36.375678 sshd[3671]: Accepted publickey for core from 139.178.89.65 port 47944 ssh2: RSA SHA256:O4b1lx+UphQ1XQCPwsrjL8IoqrnWSgynNYcpg4eKVRo Sep 6 00:25:36.378774 sshd[3671]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:25:36.387580 systemd[1]: Started session-15.scope. Sep 6 00:25:36.388401 systemd-logind[1321]: New session 15 of user core. Sep 6 00:25:36.761821 sshd[3671]: pam_unix(sshd:session): session closed for user core Sep 6 00:25:36.767610 systemd[1]: sshd@14-10.128.0.81:22-139.178.89.65:47944.service: Deactivated successfully. Sep 6 00:25:36.771052 systemd[1]: session-15.scope: Deactivated successfully. Sep 6 00:25:36.771736 systemd-logind[1321]: Session 15 logged out. Waiting for processes to exit. 
Sep 6 00:25:36.775449 systemd-logind[1321]: Removed session 15. Sep 6 00:25:36.808049 systemd[1]: Started sshd@15-10.128.0.81:22-139.178.89.65:47952.service. Sep 6 00:25:37.108849 sshd[3681]: Accepted publickey for core from 139.178.89.65 port 47952 ssh2: RSA SHA256:O4b1lx+UphQ1XQCPwsrjL8IoqrnWSgynNYcpg4eKVRo Sep 6 00:25:37.110735 sshd[3681]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:25:37.118269 systemd[1]: Started session-16.scope. Sep 6 00:25:37.119634 systemd-logind[1321]: New session 16 of user core. Sep 6 00:25:38.931486 sshd[3681]: pam_unix(sshd:session): session closed for user core Sep 6 00:25:38.937802 systemd[1]: sshd@15-10.128.0.81:22-139.178.89.65:47952.service: Deactivated successfully. Sep 6 00:25:38.939246 systemd-logind[1321]: Session 16 logged out. Waiting for processes to exit. Sep 6 00:25:38.940034 systemd[1]: session-16.scope: Deactivated successfully. Sep 6 00:25:38.943089 systemd-logind[1321]: Removed session 16. Sep 6 00:25:38.977815 systemd[1]: Started sshd@16-10.128.0.81:22-139.178.89.65:47966.service. Sep 6 00:25:39.282058 sshd[3699]: Accepted publickey for core from 139.178.89.65 port 47966 ssh2: RSA SHA256:O4b1lx+UphQ1XQCPwsrjL8IoqrnWSgynNYcpg4eKVRo Sep 6 00:25:39.284711 sshd[3699]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:25:39.293462 systemd[1]: Started session-17.scope. Sep 6 00:25:39.294568 systemd-logind[1321]: New session 17 of user core. Sep 6 00:25:39.716750 sshd[3699]: pam_unix(sshd:session): session closed for user core Sep 6 00:25:39.721689 systemd[1]: sshd@16-10.128.0.81:22-139.178.89.65:47966.service: Deactivated successfully. Sep 6 00:25:39.723815 systemd[1]: session-17.scope: Deactivated successfully. Sep 6 00:25:39.723870 systemd-logind[1321]: Session 17 logged out. Waiting for processes to exit. Sep 6 00:25:39.726016 systemd-logind[1321]: Removed session 17. 
Sep 6 00:25:39.762911 systemd[1]: Started sshd@17-10.128.0.81:22-139.178.89.65:47968.service. Sep 6 00:25:40.063291 sshd[3710]: Accepted publickey for core from 139.178.89.65 port 47968 ssh2: RSA SHA256:O4b1lx+UphQ1XQCPwsrjL8IoqrnWSgynNYcpg4eKVRo Sep 6 00:25:40.065282 sshd[3710]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:25:40.073289 systemd[1]: Started session-18.scope. Sep 6 00:25:40.074516 systemd-logind[1321]: New session 18 of user core. Sep 6 00:25:40.359023 sshd[3710]: pam_unix(sshd:session): session closed for user core Sep 6 00:25:40.363493 systemd[1]: sshd@17-10.128.0.81:22-139.178.89.65:47968.service: Deactivated successfully. Sep 6 00:25:40.364881 systemd[1]: session-18.scope: Deactivated successfully. Sep 6 00:25:40.366835 systemd-logind[1321]: Session 18 logged out. Waiting for processes to exit. Sep 6 00:25:40.368410 systemd-logind[1321]: Removed session 18. Sep 6 00:25:45.406249 systemd[1]: Started sshd@18-10.128.0.81:22-139.178.89.65:43360.service. Sep 6 00:25:45.706284 sshd[3724]: Accepted publickey for core from 139.178.89.65 port 43360 ssh2: RSA SHA256:O4b1lx+UphQ1XQCPwsrjL8IoqrnWSgynNYcpg4eKVRo Sep 6 00:25:45.708861 sshd[3724]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:25:45.717592 systemd[1]: Started session-19.scope. Sep 6 00:25:45.718685 systemd-logind[1321]: New session 19 of user core. Sep 6 00:25:46.009762 sshd[3724]: pam_unix(sshd:session): session closed for user core Sep 6 00:25:46.015487 systemd[1]: sshd@18-10.128.0.81:22-139.178.89.65:43360.service: Deactivated successfully. Sep 6 00:25:46.018106 systemd[1]: session-19.scope: Deactivated successfully. Sep 6 00:25:46.018186 systemd-logind[1321]: Session 19 logged out. Waiting for processes to exit. Sep 6 00:25:46.021193 systemd-logind[1321]: Removed session 19. Sep 6 00:25:51.056526 systemd[1]: Started sshd@19-10.128.0.81:22-139.178.89.65:42562.service. 
Sep 6 00:25:51.356755 sshd[3740]: Accepted publickey for core from 139.178.89.65 port 42562 ssh2: RSA SHA256:O4b1lx+UphQ1XQCPwsrjL8IoqrnWSgynNYcpg4eKVRo Sep 6 00:25:51.357872 sshd[3740]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:25:51.365744 systemd[1]: Started session-20.scope. Sep 6 00:25:51.366930 systemd-logind[1321]: New session 20 of user core. Sep 6 00:25:51.649272 sshd[3740]: pam_unix(sshd:session): session closed for user core Sep 6 00:25:51.654912 systemd[1]: sshd@19-10.128.0.81:22-139.178.89.65:42562.service: Deactivated successfully. Sep 6 00:25:51.657356 systemd[1]: session-20.scope: Deactivated successfully. Sep 6 00:25:51.658212 systemd-logind[1321]: Session 20 logged out. Waiting for processes to exit. Sep 6 00:25:51.660060 systemd-logind[1321]: Removed session 20. Sep 6 00:25:56.695631 systemd[1]: Started sshd@20-10.128.0.81:22-139.178.89.65:42570.service. Sep 6 00:25:56.992753 sshd[3754]: Accepted publickey for core from 139.178.89.65 port 42570 ssh2: RSA SHA256:O4b1lx+UphQ1XQCPwsrjL8IoqrnWSgynNYcpg4eKVRo Sep 6 00:25:56.995544 sshd[3754]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:25:57.004636 systemd[1]: Started session-21.scope. Sep 6 00:25:57.006062 systemd-logind[1321]: New session 21 of user core. Sep 6 00:25:57.295100 sshd[3754]: pam_unix(sshd:session): session closed for user core Sep 6 00:25:57.300055 systemd[1]: sshd@20-10.128.0.81:22-139.178.89.65:42570.service: Deactivated successfully. Sep 6 00:25:57.302723 systemd[1]: session-21.scope: Deactivated successfully. Sep 6 00:25:57.303803 systemd-logind[1321]: Session 21 logged out. Waiting for processes to exit. Sep 6 00:25:57.306750 systemd-logind[1321]: Removed session 21. Sep 6 00:26:02.341190 systemd[1]: Started sshd@21-10.128.0.81:22-139.178.89.65:48664.service. 
Sep 6 00:26:02.640884 sshd[3767]: Accepted publickey for core from 139.178.89.65 port 48664 ssh2: RSA SHA256:O4b1lx+UphQ1XQCPwsrjL8IoqrnWSgynNYcpg4eKVRo Sep 6 00:26:02.643122 sshd[3767]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:26:02.650689 systemd[1]: Started session-22.scope. Sep 6 00:26:02.651971 systemd-logind[1321]: New session 22 of user core. Sep 6 00:26:02.931560 sshd[3767]: pam_unix(sshd:session): session closed for user core Sep 6 00:26:02.936332 systemd[1]: sshd@21-10.128.0.81:22-139.178.89.65:48664.service: Deactivated successfully. Sep 6 00:26:02.937955 systemd-logind[1321]: Session 22 logged out. Waiting for processes to exit. Sep 6 00:26:02.937965 systemd[1]: session-22.scope: Deactivated successfully. Sep 6 00:26:02.940430 systemd-logind[1321]: Removed session 22. Sep 6 00:26:02.976456 systemd[1]: Started sshd@22-10.128.0.81:22-139.178.89.65:48672.service. Sep 6 00:26:03.271247 sshd[3780]: Accepted publickey for core from 139.178.89.65 port 48672 ssh2: RSA SHA256:O4b1lx+UphQ1XQCPwsrjL8IoqrnWSgynNYcpg4eKVRo Sep 6 00:26:03.273507 sshd[3780]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:26:03.280218 systemd-logind[1321]: New session 23 of user core. Sep 6 00:26:03.281038 systemd[1]: Started session-23.scope. Sep 6 00:26:05.197119 env[1335]: time="2025-09-06T00:26:05.196821415Z" level=info msg="StopContainer for \"3395c392e839cc83597b15725eac7fb903f6f5c3e31b17e78251e0e639a24978\" with timeout 30 (s)" Sep 6 00:26:05.199794 env[1335]: time="2025-09-06T00:26:05.197518186Z" level=info msg="Stop container \"3395c392e839cc83597b15725eac7fb903f6f5c3e31b17e78251e0e639a24978\" with signal terminated" Sep 6 00:26:05.198853 systemd[1]: run-containerd-runc-k8s.io-bca5e4ba2f3e7640d0f59d46c5247335acca81601b88f6b78bec94d7d29ed8fe-runc.gOVxVF.mount: Deactivated successfully. 
Sep 6 00:26:05.240466 env[1335]: time="2025-09-06T00:26:05.240386022Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 6 00:26:05.250639 env[1335]: time="2025-09-06T00:26:05.250583520Z" level=info msg="StopContainer for \"bca5e4ba2f3e7640d0f59d46c5247335acca81601b88f6b78bec94d7d29ed8fe\" with timeout 2 (s)" Sep 6 00:26:05.251011 env[1335]: time="2025-09-06T00:26:05.250970762Z" level=info msg="Stop container \"bca5e4ba2f3e7640d0f59d46c5247335acca81601b88f6b78bec94d7d29ed8fe\" with signal terminated" Sep 6 00:26:05.261215 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3395c392e839cc83597b15725eac7fb903f6f5c3e31b17e78251e0e639a24978-rootfs.mount: Deactivated successfully. Sep 6 00:26:05.272130 systemd-networkd[1079]: lxc_health: Link DOWN Sep 6 00:26:05.272143 systemd-networkd[1079]: lxc_health: Lost carrier Sep 6 00:26:05.305557 env[1335]: time="2025-09-06T00:26:05.303005123Z" level=info msg="shim disconnected" id=3395c392e839cc83597b15725eac7fb903f6f5c3e31b17e78251e0e639a24978 Sep 6 00:26:05.305557 env[1335]: time="2025-09-06T00:26:05.303076832Z" level=warning msg="cleaning up after shim disconnected" id=3395c392e839cc83597b15725eac7fb903f6f5c3e31b17e78251e0e639a24978 namespace=k8s.io Sep 6 00:26:05.305557 env[1335]: time="2025-09-06T00:26:05.303097837Z" level=info msg="cleaning up dead shim" Sep 6 00:26:05.336624 env[1335]: time="2025-09-06T00:26:05.336569787Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:26:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3838 runtime=io.containerd.runc.v2\n" Sep 6 00:26:05.339146 env[1335]: time="2025-09-06T00:26:05.339088312Z" level=info msg="StopContainer for \"3395c392e839cc83597b15725eac7fb903f6f5c3e31b17e78251e0e639a24978\" returns successfully" Sep 6 00:26:05.343696 
systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bca5e4ba2f3e7640d0f59d46c5247335acca81601b88f6b78bec94d7d29ed8fe-rootfs.mount: Deactivated successfully. Sep 6 00:26:05.345623 env[1335]: time="2025-09-06T00:26:05.345577483Z" level=info msg="StopPodSandbox for \"d3138b8cd316c35de3afb6b69f437fdad7e928ee241b9a4a42eaab20d5774043\"" Sep 6 00:26:05.345907 env[1335]: time="2025-09-06T00:26:05.345871997Z" level=info msg="Container to stop \"3395c392e839cc83597b15725eac7fb903f6f5c3e31b17e78251e0e639a24978\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 00:26:05.351635 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d3138b8cd316c35de3afb6b69f437fdad7e928ee241b9a4a42eaab20d5774043-shm.mount: Deactivated successfully. Sep 6 00:26:05.358698 env[1335]: time="2025-09-06T00:26:05.358639489Z" level=info msg="shim disconnected" id=bca5e4ba2f3e7640d0f59d46c5247335acca81601b88f6b78bec94d7d29ed8fe Sep 6 00:26:05.359052 env[1335]: time="2025-09-06T00:26:05.359005011Z" level=warning msg="cleaning up after shim disconnected" id=bca5e4ba2f3e7640d0f59d46c5247335acca81601b88f6b78bec94d7d29ed8fe namespace=k8s.io Sep 6 00:26:05.359225 env[1335]: time="2025-09-06T00:26:05.359197396Z" level=info msg="cleaning up dead shim" Sep 6 00:26:05.388457 env[1335]: time="2025-09-06T00:26:05.388380908Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:26:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3870 runtime=io.containerd.runc.v2\n" Sep 6 00:26:05.391474 env[1335]: time="2025-09-06T00:26:05.391357489Z" level=info msg="StopContainer for \"bca5e4ba2f3e7640d0f59d46c5247335acca81601b88f6b78bec94d7d29ed8fe\" returns successfully" Sep 6 00:26:05.392078 env[1335]: time="2025-09-06T00:26:05.392032779Z" level=info msg="StopPodSandbox for \"aeea022d7b23b8f322dbc2554e62e631555394b2c6fcdff51d5461c1269c09f9\"" Sep 6 00:26:05.392253 env[1335]: time="2025-09-06T00:26:05.392121022Z" level=info msg="Container to stop 
\"01dd7217afbfbac486bc8dc8689756ad26bbe6fb98139b816815d49f0a2416c6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 00:26:05.392253 env[1335]: time="2025-09-06T00:26:05.392146078Z" level=info msg="Container to stop \"1327d429e31974f7406f1f3c04513b5b170cf8e2f1f77e171c0b49e363ce0d2b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 00:26:05.392253 env[1335]: time="2025-09-06T00:26:05.392185382Z" level=info msg="Container to stop \"bca5e4ba2f3e7640d0f59d46c5247335acca81601b88f6b78bec94d7d29ed8fe\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 00:26:05.392253 env[1335]: time="2025-09-06T00:26:05.392207556Z" level=info msg="Container to stop \"de594238dc087b440654b689e1e419cc9de92d4ce9c475d3c1d9492f95fae01b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 00:26:05.392253 env[1335]: time="2025-09-06T00:26:05.392225507Z" level=info msg="Container to stop \"7cb7a09b1bfabb1bdb2865b2bafcff73d4f08be83587edafbdd50d18cd10621e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 00:26:05.404492 env[1335]: time="2025-09-06T00:26:05.404429683Z" level=info msg="shim disconnected" id=d3138b8cd316c35de3afb6b69f437fdad7e928ee241b9a4a42eaab20d5774043 Sep 6 00:26:05.405310 env[1335]: time="2025-09-06T00:26:05.405235467Z" level=warning msg="cleaning up after shim disconnected" id=d3138b8cd316c35de3afb6b69f437fdad7e928ee241b9a4a42eaab20d5774043 namespace=k8s.io Sep 6 00:26:05.405594 env[1335]: time="2025-09-06T00:26:05.405566739Z" level=info msg="cleaning up dead shim" Sep 6 00:26:05.432603 env[1335]: time="2025-09-06T00:26:05.432545581Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:26:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3904 runtime=io.containerd.runc.v2\n" Sep 6 00:26:05.433935 env[1335]: time="2025-09-06T00:26:05.433878237Z" level=info msg="TearDown network for sandbox 
\"d3138b8cd316c35de3afb6b69f437fdad7e928ee241b9a4a42eaab20d5774043\" successfully" Sep 6 00:26:05.434248 env[1335]: time="2025-09-06T00:26:05.434192204Z" level=info msg="StopPodSandbox for \"d3138b8cd316c35de3afb6b69f437fdad7e928ee241b9a4a42eaab20d5774043\" returns successfully" Sep 6 00:26:05.462868 env[1335]: time="2025-09-06T00:26:05.460946648Z" level=info msg="shim disconnected" id=aeea022d7b23b8f322dbc2554e62e631555394b2c6fcdff51d5461c1269c09f9 Sep 6 00:26:05.462868 env[1335]: time="2025-09-06T00:26:05.461020070Z" level=warning msg="cleaning up after shim disconnected" id=aeea022d7b23b8f322dbc2554e62e631555394b2c6fcdff51d5461c1269c09f9 namespace=k8s.io Sep 6 00:26:05.462868 env[1335]: time="2025-09-06T00:26:05.461042643Z" level=info msg="cleaning up dead shim" Sep 6 00:26:05.473574 env[1335]: time="2025-09-06T00:26:05.473507875Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:26:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3931 runtime=io.containerd.runc.v2\n" Sep 6 00:26:05.474216 env[1335]: time="2025-09-06T00:26:05.474145813Z" level=info msg="TearDown network for sandbox \"aeea022d7b23b8f322dbc2554e62e631555394b2c6fcdff51d5461c1269c09f9\" successfully" Sep 6 00:26:05.474374 env[1335]: time="2025-09-06T00:26:05.474213821Z" level=info msg="StopPodSandbox for \"aeea022d7b23b8f322dbc2554e62e631555394b2c6fcdff51d5461c1269c09f9\" returns successfully" Sep 6 00:26:05.506150 kubelet[2181]: I0906 00:26:05.506074 2181 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8b11815a-be38-404f-9321-32219122ff53-cilium-config-path\") pod \"8b11815a-be38-404f-9321-32219122ff53\" (UID: \"8b11815a-be38-404f-9321-32219122ff53\") " Sep 6 00:26:05.506933 kubelet[2181]: I0906 00:26:05.506152 2181 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bmqcr\" (UniqueName: 
\"kubernetes.io/projected/8b11815a-be38-404f-9321-32219122ff53-kube-api-access-bmqcr\") pod \"8b11815a-be38-404f-9321-32219122ff53\" (UID: \"8b11815a-be38-404f-9321-32219122ff53\") " Sep 6 00:26:05.511064 kubelet[2181]: I0906 00:26:05.510994 2181 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b11815a-be38-404f-9321-32219122ff53-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8b11815a-be38-404f-9321-32219122ff53" (UID: "8b11815a-be38-404f-9321-32219122ff53"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 6 00:26:05.511498 kubelet[2181]: I0906 00:26:05.511453 2181 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b11815a-be38-404f-9321-32219122ff53-kube-api-access-bmqcr" (OuterVolumeSpecName: "kube-api-access-bmqcr") pod "8b11815a-be38-404f-9321-32219122ff53" (UID: "8b11815a-be38-404f-9321-32219122ff53"). InnerVolumeSpecName "kube-api-access-bmqcr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 6 00:26:05.570613 kubelet[2181]: I0906 00:26:05.570575 2181 scope.go:117] "RemoveContainer" containerID="bca5e4ba2f3e7640d0f59d46c5247335acca81601b88f6b78bec94d7d29ed8fe" Sep 6 00:26:05.574296 env[1335]: time="2025-09-06T00:26:05.574115806Z" level=info msg="RemoveContainer for \"bca5e4ba2f3e7640d0f59d46c5247335acca81601b88f6b78bec94d7d29ed8fe\"" Sep 6 00:26:05.582877 env[1335]: time="2025-09-06T00:26:05.582804090Z" level=info msg="RemoveContainer for \"bca5e4ba2f3e7640d0f59d46c5247335acca81601b88f6b78bec94d7d29ed8fe\" returns successfully" Sep 6 00:26:05.583659 kubelet[2181]: I0906 00:26:05.583608 2181 scope.go:117] "RemoveContainer" containerID="1327d429e31974f7406f1f3c04513b5b170cf8e2f1f77e171c0b49e363ce0d2b" Sep 6 00:26:05.587029 env[1335]: time="2025-09-06T00:26:05.585825686Z" level=info msg="RemoveContainer for \"1327d429e31974f7406f1f3c04513b5b170cf8e2f1f77e171c0b49e363ce0d2b\"" Sep 6 00:26:05.591905 env[1335]: time="2025-09-06T00:26:05.591862212Z" level=info msg="RemoveContainer for \"1327d429e31974f7406f1f3c04513b5b170cf8e2f1f77e171c0b49e363ce0d2b\" returns successfully" Sep 6 00:26:05.592443 kubelet[2181]: I0906 00:26:05.592293 2181 scope.go:117] "RemoveContainer" containerID="7cb7a09b1bfabb1bdb2865b2bafcff73d4f08be83587edafbdd50d18cd10621e" Sep 6 00:26:05.594190 env[1335]: time="2025-09-06T00:26:05.594131787Z" level=info msg="RemoveContainer for \"7cb7a09b1bfabb1bdb2865b2bafcff73d4f08be83587edafbdd50d18cd10621e\"" Sep 6 00:26:05.602551 env[1335]: time="2025-09-06T00:26:05.602475050Z" level=info msg="RemoveContainer for \"7cb7a09b1bfabb1bdb2865b2bafcff73d4f08be83587edafbdd50d18cd10621e\" returns successfully" Sep 6 00:26:05.602779 kubelet[2181]: I0906 00:26:05.602747 2181 scope.go:117] "RemoveContainer" containerID="de594238dc087b440654b689e1e419cc9de92d4ce9c475d3c1d9492f95fae01b" Sep 6 00:26:05.604452 env[1335]: time="2025-09-06T00:26:05.604404925Z" level=info msg="RemoveContainer for 
\"de594238dc087b440654b689e1e419cc9de92d4ce9c475d3c1d9492f95fae01b\"" Sep 6 00:26:05.608313 kubelet[2181]: I0906 00:26:05.607242 2181 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/28f463ed-0cb8-48ce-988a-aaffa74730c9-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "28f463ed-0cb8-48ce-988a-aaffa74730c9" (UID: "28f463ed-0cb8-48ce-988a-aaffa74730c9"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:26:05.608313 kubelet[2181]: I0906 00:26:05.607320 2181 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/28f463ed-0cb8-48ce-988a-aaffa74730c9-cilium-run\") pod \"28f463ed-0cb8-48ce-988a-aaffa74730c9\" (UID: \"28f463ed-0cb8-48ce-988a-aaffa74730c9\") " Sep 6 00:26:05.608313 kubelet[2181]: I0906 00:26:05.607414 2181 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/28f463ed-0cb8-48ce-988a-aaffa74730c9-hostproc" (OuterVolumeSpecName: "hostproc") pod "28f463ed-0cb8-48ce-988a-aaffa74730c9" (UID: "28f463ed-0cb8-48ce-988a-aaffa74730c9"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:26:05.608313 kubelet[2181]: I0906 00:26:05.607370 2181 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/28f463ed-0cb8-48ce-988a-aaffa74730c9-hostproc\") pod \"28f463ed-0cb8-48ce-988a-aaffa74730c9\" (UID: \"28f463ed-0cb8-48ce-988a-aaffa74730c9\") " Sep 6 00:26:05.608313 kubelet[2181]: I0906 00:26:05.607498 2181 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/28f463ed-0cb8-48ce-988a-aaffa74730c9-cilium-cgroup\") pod \"28f463ed-0cb8-48ce-988a-aaffa74730c9\" (UID: \"28f463ed-0cb8-48ce-988a-aaffa74730c9\") " Sep 6 00:26:05.608313 kubelet[2181]: I0906 00:26:05.607525 2181 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/28f463ed-0cb8-48ce-988a-aaffa74730c9-etc-cni-netd\") pod \"28f463ed-0cb8-48ce-988a-aaffa74730c9\" (UID: \"28f463ed-0cb8-48ce-988a-aaffa74730c9\") " Sep 6 00:26:05.608730 kubelet[2181]: I0906 00:26:05.607564 2181 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/28f463ed-0cb8-48ce-988a-aaffa74730c9-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "28f463ed-0cb8-48ce-988a-aaffa74730c9" (UID: "28f463ed-0cb8-48ce-988a-aaffa74730c9"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:26:05.608730 kubelet[2181]: I0906 00:26:05.607609 2181 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/28f463ed-0cb8-48ce-988a-aaffa74730c9-lib-modules\") pod \"28f463ed-0cb8-48ce-988a-aaffa74730c9\" (UID: \"28f463ed-0cb8-48ce-988a-aaffa74730c9\") " Sep 6 00:26:05.608730 kubelet[2181]: I0906 00:26:05.607664 2181 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/28f463ed-0cb8-48ce-988a-aaffa74730c9-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "28f463ed-0cb8-48ce-988a-aaffa74730c9" (UID: "28f463ed-0cb8-48ce-988a-aaffa74730c9"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:26:05.608730 kubelet[2181]: I0906 00:26:05.607707 2181 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/28f463ed-0cb8-48ce-988a-aaffa74730c9-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "28f463ed-0cb8-48ce-988a-aaffa74730c9" (UID: "28f463ed-0cb8-48ce-988a-aaffa74730c9"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:26:05.608730 kubelet[2181]: I0906 00:26:05.607635 2181 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/28f463ed-0cb8-48ce-988a-aaffa74730c9-bpf-maps\") pod \"28f463ed-0cb8-48ce-988a-aaffa74730c9\" (UID: \"28f463ed-0cb8-48ce-988a-aaffa74730c9\") " Sep 6 00:26:05.609015 kubelet[2181]: I0906 00:26:05.607768 2181 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/28f463ed-0cb8-48ce-988a-aaffa74730c9-hubble-tls\") pod \"28f463ed-0cb8-48ce-988a-aaffa74730c9\" (UID: \"28f463ed-0cb8-48ce-988a-aaffa74730c9\") " Sep 6 00:26:05.609015 kubelet[2181]: I0906 00:26:05.608219 2181 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/28f463ed-0cb8-48ce-988a-aaffa74730c9-cni-path\") pod \"28f463ed-0cb8-48ce-988a-aaffa74730c9\" (UID: \"28f463ed-0cb8-48ce-988a-aaffa74730c9\") " Sep 6 00:26:05.609015 kubelet[2181]: I0906 00:26:05.608274 2181 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/28f463ed-0cb8-48ce-988a-aaffa74730c9-xtables-lock\") pod \"28f463ed-0cb8-48ce-988a-aaffa74730c9\" (UID: \"28f463ed-0cb8-48ce-988a-aaffa74730c9\") " Sep 6 00:26:05.609015 kubelet[2181]: I0906 00:26:05.608309 2181 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4lz2g\" (UniqueName: \"kubernetes.io/projected/28f463ed-0cb8-48ce-988a-aaffa74730c9-kube-api-access-4lz2g\") pod \"28f463ed-0cb8-48ce-988a-aaffa74730c9\" (UID: \"28f463ed-0cb8-48ce-988a-aaffa74730c9\") " Sep 6 00:26:05.609015 kubelet[2181]: I0906 00:26:05.608360 2181 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/28f463ed-0cb8-48ce-988a-aaffa74730c9-host-proc-sys-kernel\") pod \"28f463ed-0cb8-48ce-988a-aaffa74730c9\" (UID: \"28f463ed-0cb8-48ce-988a-aaffa74730c9\") " Sep 6 00:26:05.609015 kubelet[2181]: I0906 00:26:05.608386 2181 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/28f463ed-0cb8-48ce-988a-aaffa74730c9-host-proc-sys-net\") pod \"28f463ed-0cb8-48ce-988a-aaffa74730c9\" (UID: \"28f463ed-0cb8-48ce-988a-aaffa74730c9\") " Sep 6 00:26:05.609368 kubelet[2181]: I0906 00:26:05.608447 2181 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/28f463ed-0cb8-48ce-988a-aaffa74730c9-cilium-config-path\") pod \"28f463ed-0cb8-48ce-988a-aaffa74730c9\" (UID: \"28f463ed-0cb8-48ce-988a-aaffa74730c9\") " Sep 6 00:26:05.609368 kubelet[2181]: I0906 00:26:05.608482 2181 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/28f463ed-0cb8-48ce-988a-aaffa74730c9-clustermesh-secrets\") pod \"28f463ed-0cb8-48ce-988a-aaffa74730c9\" (UID: \"28f463ed-0cb8-48ce-988a-aaffa74730c9\") " Sep 6 00:26:05.609368 kubelet[2181]: I0906 00:26:05.608571 2181 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bmqcr\" (UniqueName: \"kubernetes.io/projected/8b11815a-be38-404f-9321-32219122ff53-kube-api-access-bmqcr\") on node \"ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d\" DevicePath \"\"" Sep 6 00:26:05.609368 kubelet[2181]: I0906 00:26:05.608615 2181 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/28f463ed-0cb8-48ce-988a-aaffa74730c9-cilium-run\") on node \"ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d\" DevicePath \"\"" Sep 6 00:26:05.609368 kubelet[2181]: I0906 00:26:05.608633 2181 reconciler_common.go:293] "Volume detached for 
volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/28f463ed-0cb8-48ce-988a-aaffa74730c9-hostproc\") on node \"ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d\" DevicePath \"\"" Sep 6 00:26:05.609368 kubelet[2181]: I0906 00:26:05.608651 2181 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/28f463ed-0cb8-48ce-988a-aaffa74730c9-cilium-cgroup\") on node \"ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d\" DevicePath \"\"" Sep 6 00:26:05.609368 kubelet[2181]: I0906 00:26:05.608688 2181 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8b11815a-be38-404f-9321-32219122ff53-cilium-config-path\") on node \"ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d\" DevicePath \"\"" Sep 6 00:26:05.609829 kubelet[2181]: I0906 00:26:05.608707 2181 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/28f463ed-0cb8-48ce-988a-aaffa74730c9-etc-cni-netd\") on node \"ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d\" DevicePath \"\"" Sep 6 00:26:05.609829 kubelet[2181]: I0906 00:26:05.608726 2181 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/28f463ed-0cb8-48ce-988a-aaffa74730c9-lib-modules\") on node \"ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d\" DevicePath \"\"" Sep 6 00:26:05.609829 kubelet[2181]: I0906 00:26:05.607802 2181 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/28f463ed-0cb8-48ce-988a-aaffa74730c9-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "28f463ed-0cb8-48ce-988a-aaffa74730c9" (UID: "28f463ed-0cb8-48ce-988a-aaffa74730c9"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:26:05.609829 kubelet[2181]: I0906 00:26:05.609575 2181 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/28f463ed-0cb8-48ce-988a-aaffa74730c9-cni-path" (OuterVolumeSpecName: "cni-path") pod "28f463ed-0cb8-48ce-988a-aaffa74730c9" (UID: "28f463ed-0cb8-48ce-988a-aaffa74730c9"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:26:05.609829 kubelet[2181]: I0906 00:26:05.609621 2181 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/28f463ed-0cb8-48ce-988a-aaffa74730c9-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "28f463ed-0cb8-48ce-988a-aaffa74730c9" (UID: "28f463ed-0cb8-48ce-988a-aaffa74730c9"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:26:05.611017 env[1335]: time="2025-09-06T00:26:05.610893460Z" level=info msg="RemoveContainer for \"de594238dc087b440654b689e1e419cc9de92d4ce9c475d3c1d9492f95fae01b\" returns successfully" Sep 6 00:26:05.612260 kubelet[2181]: I0906 00:26:05.612212 2181 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/28f463ed-0cb8-48ce-988a-aaffa74730c9-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "28f463ed-0cb8-48ce-988a-aaffa74730c9" (UID: "28f463ed-0cb8-48ce-988a-aaffa74730c9"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:26:05.612380 kubelet[2181]: I0906 00:26:05.612279 2181 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/28f463ed-0cb8-48ce-988a-aaffa74730c9-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "28f463ed-0cb8-48ce-988a-aaffa74730c9" (UID: "28f463ed-0cb8-48ce-988a-aaffa74730c9"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:26:05.615974 kubelet[2181]: I0906 00:26:05.615934 2181 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28f463ed-0cb8-48ce-988a-aaffa74730c9-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "28f463ed-0cb8-48ce-988a-aaffa74730c9" (UID: "28f463ed-0cb8-48ce-988a-aaffa74730c9"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 6 00:26:05.616229 kubelet[2181]: I0906 00:26:05.616204 2181 scope.go:117] "RemoveContainer" containerID="01dd7217afbfbac486bc8dc8689756ad26bbe6fb98139b816815d49f0a2416c6" Sep 6 00:26:05.619145 env[1335]: time="2025-09-06T00:26:05.619096903Z" level=info msg="RemoveContainer for \"01dd7217afbfbac486bc8dc8689756ad26bbe6fb98139b816815d49f0a2416c6\"" Sep 6 00:26:05.621364 kubelet[2181]: I0906 00:26:05.621298 2181 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28f463ed-0cb8-48ce-988a-aaffa74730c9-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "28f463ed-0cb8-48ce-988a-aaffa74730c9" (UID: "28f463ed-0cb8-48ce-988a-aaffa74730c9"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 6 00:26:05.624306 env[1335]: time="2025-09-06T00:26:05.624245077Z" level=info msg="RemoveContainer for \"01dd7217afbfbac486bc8dc8689756ad26bbe6fb98139b816815d49f0a2416c6\" returns successfully" Sep 6 00:26:05.624738 kubelet[2181]: I0906 00:26:05.624711 2181 scope.go:117] "RemoveContainer" containerID="bca5e4ba2f3e7640d0f59d46c5247335acca81601b88f6b78bec94d7d29ed8fe" Sep 6 00:26:05.625253 env[1335]: time="2025-09-06T00:26:05.625122408Z" level=error msg="ContainerStatus for \"bca5e4ba2f3e7640d0f59d46c5247335acca81601b88f6b78bec94d7d29ed8fe\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bca5e4ba2f3e7640d0f59d46c5247335acca81601b88f6b78bec94d7d29ed8fe\": not found" Sep 6 00:26:05.625486 kubelet[2181]: E0906 00:26:05.625410 2181 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bca5e4ba2f3e7640d0f59d46c5247335acca81601b88f6b78bec94d7d29ed8fe\": not found" containerID="bca5e4ba2f3e7640d0f59d46c5247335acca81601b88f6b78bec94d7d29ed8fe" Sep 6 00:26:05.625644 kubelet[2181]: I0906 00:26:05.625502 2181 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bca5e4ba2f3e7640d0f59d46c5247335acca81601b88f6b78bec94d7d29ed8fe"} err="failed to get container status \"bca5e4ba2f3e7640d0f59d46c5247335acca81601b88f6b78bec94d7d29ed8fe\": rpc error: code = NotFound desc = an error occurred when try to find container \"bca5e4ba2f3e7640d0f59d46c5247335acca81601b88f6b78bec94d7d29ed8fe\": not found" Sep 6 00:26:05.625765 kubelet[2181]: I0906 00:26:05.625648 2181 scope.go:117] "RemoveContainer" containerID="1327d429e31974f7406f1f3c04513b5b170cf8e2f1f77e171c0b49e363ce0d2b" Sep 6 00:26:05.625971 env[1335]: time="2025-09-06T00:26:05.625884281Z" level=error msg="ContainerStatus for \"1327d429e31974f7406f1f3c04513b5b170cf8e2f1f77e171c0b49e363ce0d2b\" 
failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1327d429e31974f7406f1f3c04513b5b170cf8e2f1f77e171c0b49e363ce0d2b\": not found" Sep 6 00:26:05.626142 kubelet[2181]: E0906 00:26:05.626110 2181 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1327d429e31974f7406f1f3c04513b5b170cf8e2f1f77e171c0b49e363ce0d2b\": not found" containerID="1327d429e31974f7406f1f3c04513b5b170cf8e2f1f77e171c0b49e363ce0d2b" Sep 6 00:26:05.626265 kubelet[2181]: I0906 00:26:05.626151 2181 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1327d429e31974f7406f1f3c04513b5b170cf8e2f1f77e171c0b49e363ce0d2b"} err="failed to get container status \"1327d429e31974f7406f1f3c04513b5b170cf8e2f1f77e171c0b49e363ce0d2b\": rpc error: code = NotFound desc = an error occurred when try to find container \"1327d429e31974f7406f1f3c04513b5b170cf8e2f1f77e171c0b49e363ce0d2b\": not found" Sep 6 00:26:05.626265 kubelet[2181]: I0906 00:26:05.626210 2181 scope.go:117] "RemoveContainer" containerID="7cb7a09b1bfabb1bdb2865b2bafcff73d4f08be83587edafbdd50d18cd10621e" Sep 6 00:26:05.626523 env[1335]: time="2025-09-06T00:26:05.626430482Z" level=error msg="ContainerStatus for \"7cb7a09b1bfabb1bdb2865b2bafcff73d4f08be83587edafbdd50d18cd10621e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7cb7a09b1bfabb1bdb2865b2bafcff73d4f08be83587edafbdd50d18cd10621e\": not found" Sep 6 00:26:05.626672 kubelet[2181]: E0906 00:26:05.626643 2181 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7cb7a09b1bfabb1bdb2865b2bafcff73d4f08be83587edafbdd50d18cd10621e\": not found" containerID="7cb7a09b1bfabb1bdb2865b2bafcff73d4f08be83587edafbdd50d18cd10621e" Sep 6 00:26:05.626781 kubelet[2181]: I0906 00:26:05.626680 2181 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7cb7a09b1bfabb1bdb2865b2bafcff73d4f08be83587edafbdd50d18cd10621e"} err="failed to get container status \"7cb7a09b1bfabb1bdb2865b2bafcff73d4f08be83587edafbdd50d18cd10621e\": rpc error: code = NotFound desc = an error occurred when try to find container \"7cb7a09b1bfabb1bdb2865b2bafcff73d4f08be83587edafbdd50d18cd10621e\": not found" Sep 6 00:26:05.626781 kubelet[2181]: I0906 00:26:05.626705 2181 scope.go:117] "RemoveContainer" containerID="de594238dc087b440654b689e1e419cc9de92d4ce9c475d3c1d9492f95fae01b" Sep 6 00:26:05.627008 env[1335]: time="2025-09-06T00:26:05.626930909Z" level=error msg="ContainerStatus for \"de594238dc087b440654b689e1e419cc9de92d4ce9c475d3c1d9492f95fae01b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"de594238dc087b440654b689e1e419cc9de92d4ce9c475d3c1d9492f95fae01b\": not found" Sep 6 00:26:05.627188 kubelet[2181]: E0906 00:26:05.627135 2181 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"de594238dc087b440654b689e1e419cc9de92d4ce9c475d3c1d9492f95fae01b\": not found" containerID="de594238dc087b440654b689e1e419cc9de92d4ce9c475d3c1d9492f95fae01b" Sep 6 00:26:05.627277 kubelet[2181]: I0906 00:26:05.627192 2181 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"de594238dc087b440654b689e1e419cc9de92d4ce9c475d3c1d9492f95fae01b"} err="failed to get container status \"de594238dc087b440654b689e1e419cc9de92d4ce9c475d3c1d9492f95fae01b\": rpc error: code = NotFound desc = an error occurred when try to find container \"de594238dc087b440654b689e1e419cc9de92d4ce9c475d3c1d9492f95fae01b\": not found" Sep 6 00:26:05.627277 kubelet[2181]: I0906 00:26:05.627217 2181 scope.go:117] "RemoveContainer" containerID="01dd7217afbfbac486bc8dc8689756ad26bbe6fb98139b816815d49f0a2416c6" 
Sep 6 00:26:05.627525 env[1335]: time="2025-09-06T00:26:05.627418927Z" level=error msg="ContainerStatus for \"01dd7217afbfbac486bc8dc8689756ad26bbe6fb98139b816815d49f0a2416c6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"01dd7217afbfbac486bc8dc8689756ad26bbe6fb98139b816815d49f0a2416c6\": not found" Sep 6 00:26:05.627646 kubelet[2181]: E0906 00:26:05.627613 2181 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"01dd7217afbfbac486bc8dc8689756ad26bbe6fb98139b816815d49f0a2416c6\": not found" containerID="01dd7217afbfbac486bc8dc8689756ad26bbe6fb98139b816815d49f0a2416c6" Sep 6 00:26:05.627743 kubelet[2181]: I0906 00:26:05.627655 2181 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"01dd7217afbfbac486bc8dc8689756ad26bbe6fb98139b816815d49f0a2416c6"} err="failed to get container status \"01dd7217afbfbac486bc8dc8689756ad26bbe6fb98139b816815d49f0a2416c6\": rpc error: code = NotFound desc = an error occurred when try to find container \"01dd7217afbfbac486bc8dc8689756ad26bbe6fb98139b816815d49f0a2416c6\": not found" Sep 6 00:26:05.627743 kubelet[2181]: I0906 00:26:05.627679 2181 scope.go:117] "RemoveContainer" containerID="3395c392e839cc83597b15725eac7fb903f6f5c3e31b17e78251e0e639a24978" Sep 6 00:26:05.629510 env[1335]: time="2025-09-06T00:26:05.629455725Z" level=info msg="RemoveContainer for \"3395c392e839cc83597b15725eac7fb903f6f5c3e31b17e78251e0e639a24978\"" Sep 6 00:26:05.632509 kubelet[2181]: I0906 00:26:05.632462 2181 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28f463ed-0cb8-48ce-988a-aaffa74730c9-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "28f463ed-0cb8-48ce-988a-aaffa74730c9" (UID: "28f463ed-0cb8-48ce-988a-aaffa74730c9"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 6 00:26:05.637442 env[1335]: time="2025-09-06T00:26:05.637380706Z" level=info msg="RemoveContainer for \"3395c392e839cc83597b15725eac7fb903f6f5c3e31b17e78251e0e639a24978\" returns successfully" Sep 6 00:26:05.639126 kubelet[2181]: I0906 00:26:05.639094 2181 scope.go:117] "RemoveContainer" containerID="3395c392e839cc83597b15725eac7fb903f6f5c3e31b17e78251e0e639a24978" Sep 6 00:26:05.640006 kubelet[2181]: I0906 00:26:05.639929 2181 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28f463ed-0cb8-48ce-988a-aaffa74730c9-kube-api-access-4lz2g" (OuterVolumeSpecName: "kube-api-access-4lz2g") pod "28f463ed-0cb8-48ce-988a-aaffa74730c9" (UID: "28f463ed-0cb8-48ce-988a-aaffa74730c9"). InnerVolumeSpecName "kube-api-access-4lz2g". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 6 00:26:05.640147 env[1335]: time="2025-09-06T00:26:05.640058224Z" level=error msg="ContainerStatus for \"3395c392e839cc83597b15725eac7fb903f6f5c3e31b17e78251e0e639a24978\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3395c392e839cc83597b15725eac7fb903f6f5c3e31b17e78251e0e639a24978\": not found" Sep 6 00:26:05.640538 kubelet[2181]: E0906 00:26:05.640460 2181 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3395c392e839cc83597b15725eac7fb903f6f5c3e31b17e78251e0e639a24978\": not found" containerID="3395c392e839cc83597b15725eac7fb903f6f5c3e31b17e78251e0e639a24978" Sep 6 00:26:05.640538 kubelet[2181]: I0906 00:26:05.640503 2181 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3395c392e839cc83597b15725eac7fb903f6f5c3e31b17e78251e0e639a24978"} err="failed to get container status \"3395c392e839cc83597b15725eac7fb903f6f5c3e31b17e78251e0e639a24978\": rpc error: code = NotFound desc = an error occurred when 
try to find container \"3395c392e839cc83597b15725eac7fb903f6f5c3e31b17e78251e0e639a24978\": not found" Sep 6 00:26:05.709505 kubelet[2181]: I0906 00:26:05.709421 2181 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/28f463ed-0cb8-48ce-988a-aaffa74730c9-xtables-lock\") on node \"ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d\" DevicePath \"\"" Sep 6 00:26:05.709505 kubelet[2181]: I0906 00:26:05.709483 2181 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/28f463ed-0cb8-48ce-988a-aaffa74730c9-bpf-maps\") on node \"ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d\" DevicePath \"\"" Sep 6 00:26:05.709505 kubelet[2181]: I0906 00:26:05.709505 2181 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/28f463ed-0cb8-48ce-988a-aaffa74730c9-hubble-tls\") on node \"ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d\" DevicePath \"\"" Sep 6 00:26:05.709862 kubelet[2181]: I0906 00:26:05.709523 2181 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/28f463ed-0cb8-48ce-988a-aaffa74730c9-cni-path\") on node \"ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d\" DevicePath \"\"" Sep 6 00:26:05.709862 kubelet[2181]: I0906 00:26:05.709547 2181 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4lz2g\" (UniqueName: \"kubernetes.io/projected/28f463ed-0cb8-48ce-988a-aaffa74730c9-kube-api-access-4lz2g\") on node \"ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d\" DevicePath \"\"" Sep 6 00:26:05.709862 kubelet[2181]: I0906 00:26:05.709565 2181 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/28f463ed-0cb8-48ce-988a-aaffa74730c9-host-proc-sys-kernel\") on node \"ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d\" DevicePath \"\"" Sep 6 
00:26:05.709862 kubelet[2181]: I0906 00:26:05.709582 2181 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/28f463ed-0cb8-48ce-988a-aaffa74730c9-cilium-config-path\") on node \"ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d\" DevicePath \"\"" Sep 6 00:26:05.709862 kubelet[2181]: I0906 00:26:05.709600 2181 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/28f463ed-0cb8-48ce-988a-aaffa74730c9-host-proc-sys-net\") on node \"ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d\" DevicePath \"\"" Sep 6 00:26:05.709862 kubelet[2181]: I0906 00:26:05.709624 2181 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/28f463ed-0cb8-48ce-988a-aaffa74730c9-clustermesh-secrets\") on node \"ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d\" DevicePath \"\"" Sep 6 00:26:06.091129 kubelet[2181]: I0906 00:26:06.091062 2181 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28f463ed-0cb8-48ce-988a-aaffa74730c9" path="/var/lib/kubelet/pods/28f463ed-0cb8-48ce-988a-aaffa74730c9/volumes" Sep 6 00:26:06.091971 kubelet[2181]: I0906 00:26:06.091923 2181 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b11815a-be38-404f-9321-32219122ff53" path="/var/lib/kubelet/pods/8b11815a-be38-404f-9321-32219122ff53/volumes" Sep 6 00:26:06.185031 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aeea022d7b23b8f322dbc2554e62e631555394b2c6fcdff51d5461c1269c09f9-rootfs.mount: Deactivated successfully. Sep 6 00:26:06.185284 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-aeea022d7b23b8f322dbc2554e62e631555394b2c6fcdff51d5461c1269c09f9-shm.mount: Deactivated successfully. 
Sep 6 00:26:06.185458 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d3138b8cd316c35de3afb6b69f437fdad7e928ee241b9a4a42eaab20d5774043-rootfs.mount: Deactivated successfully. Sep 6 00:26:06.185684 systemd[1]: var-lib-kubelet-pods-28f463ed\x2d0cb8\x2d48ce\x2d988a\x2daaffa74730c9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4lz2g.mount: Deactivated successfully. Sep 6 00:26:06.185863 systemd[1]: var-lib-kubelet-pods-8b11815a\x2dbe38\x2d404f\x2d9321\x2d32219122ff53-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbmqcr.mount: Deactivated successfully. Sep 6 00:26:06.186023 systemd[1]: var-lib-kubelet-pods-28f463ed\x2d0cb8\x2d48ce\x2d988a\x2daaffa74730c9-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 6 00:26:06.186212 systemd[1]: var-lib-kubelet-pods-28f463ed\x2d0cb8\x2d48ce\x2d988a\x2daaffa74730c9-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 6 00:26:07.132612 sshd[3780]: pam_unix(sshd:session): session closed for user core Sep 6 00:26:07.137010 systemd[1]: sshd@22-10.128.0.81:22-139.178.89.65:48672.service: Deactivated successfully. Sep 6 00:26:07.140416 systemd[1]: session-23.scope: Deactivated successfully. Sep 6 00:26:07.141698 systemd-logind[1321]: Session 23 logged out. Waiting for processes to exit. Sep 6 00:26:07.143608 systemd-logind[1321]: Removed session 23. Sep 6 00:26:07.177379 systemd[1]: Started sshd@23-10.128.0.81:22-139.178.89.65:48684.service. 
Sep 6 00:26:07.222806 kubelet[2181]: E0906 00:26:07.222737 2181 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 6 00:26:07.477340 sshd[3953]: Accepted publickey for core from 139.178.89.65 port 48684 ssh2: RSA SHA256:O4b1lx+UphQ1XQCPwsrjL8IoqrnWSgynNYcpg4eKVRo Sep 6 00:26:07.479413 sshd[3953]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:26:07.486471 systemd-logind[1321]: New session 24 of user core. Sep 6 00:26:07.487468 systemd[1]: Started session-24.scope. Sep 6 00:26:08.448084 kubelet[2181]: E0906 00:26:08.448021 2181 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="28f463ed-0cb8-48ce-988a-aaffa74730c9" containerName="mount-cgroup" Sep 6 00:26:08.448084 kubelet[2181]: E0906 00:26:08.448074 2181 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="28f463ed-0cb8-48ce-988a-aaffa74730c9" containerName="apply-sysctl-overwrites" Sep 6 00:26:08.448084 kubelet[2181]: E0906 00:26:08.448089 2181 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="28f463ed-0cb8-48ce-988a-aaffa74730c9" containerName="mount-bpf-fs" Sep 6 00:26:08.448084 kubelet[2181]: E0906 00:26:08.448101 2181 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="28f463ed-0cb8-48ce-988a-aaffa74730c9" containerName="cilium-agent" Sep 6 00:26:08.449073 kubelet[2181]: E0906 00:26:08.448113 2181 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8b11815a-be38-404f-9321-32219122ff53" containerName="cilium-operator" Sep 6 00:26:08.449073 kubelet[2181]: E0906 00:26:08.448124 2181 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="28f463ed-0cb8-48ce-988a-aaffa74730c9" containerName="clean-cilium-state" Sep 6 00:26:08.449073 kubelet[2181]: I0906 00:26:08.448186 2181 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="8b11815a-be38-404f-9321-32219122ff53" containerName="cilium-operator" Sep 6 00:26:08.449073 kubelet[2181]: I0906 00:26:08.448200 2181 memory_manager.go:354] "RemoveStaleState removing state" podUID="28f463ed-0cb8-48ce-988a-aaffa74730c9" containerName="cilium-agent" Sep 6 00:26:08.473104 sshd[3953]: pam_unix(sshd:session): session closed for user core Sep 6 00:26:08.478895 systemd[1]: sshd@23-10.128.0.81:22-139.178.89.65:48684.service: Deactivated successfully. Sep 6 00:26:08.480904 systemd[1]: session-24.scope: Deactivated successfully. Sep 6 00:26:08.486326 systemd-logind[1321]: Session 24 logged out. Waiting for processes to exit. Sep 6 00:26:08.487629 kubelet[2181]: W0906 00:26:08.487187 2181 reflector.go:561] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d' and this object Sep 6 00:26:08.487629 kubelet[2181]: E0906 00:26:08.487251 2181 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d' and this object" logger="UnhandledError" Sep 6 00:26:08.487629 kubelet[2181]: W0906 00:26:08.487468 2181 reflector.go:561] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d" cannot list resource "secrets" in API group "" in the namespace "kube-system": no 
relationship found between node 'ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d' and this object Sep 6 00:26:08.487629 kubelet[2181]: E0906 00:26:08.487494 2181 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-ipsec-keys\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-ipsec-keys\" is forbidden: User \"system:node:ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d' and this object" logger="UnhandledError" Sep 6 00:26:08.488019 kubelet[2181]: W0906 00:26:08.487553 2181 reflector.go:561] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d' and this object Sep 6 00:26:08.488019 kubelet[2181]: E0906 00:26:08.487580 2181 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d' and this object" logger="UnhandledError" Sep 6 00:26:08.488019 kubelet[2181]: W0906 00:26:08.487638 2181 reflector.go:561] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d" cannot list resource "configmaps" in API group "" in the namespace 
"kube-system": no relationship found between node 'ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d' and this object Sep 6 00:26:08.488019 kubelet[2181]: E0906 00:26:08.487668 2181 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d' and this object" logger="UnhandledError" Sep 6 00:26:08.490824 systemd-logind[1321]: Removed session 24. Sep 6 00:26:08.522868 systemd[1]: Started sshd@24-10.128.0.81:22-139.178.89.65:48690.service. Sep 6 00:26:08.533584 kubelet[2181]: I0906 00:26:08.533527 2181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0c53e12d-5b63-43a6-9c17-b959e21d7ffe-cilium-run\") pod \"cilium-c89nm\" (UID: \"0c53e12d-5b63-43a6-9c17-b959e21d7ffe\") " pod="kube-system/cilium-c89nm" Sep 6 00:26:08.544190 kubelet[2181]: I0906 00:26:08.541560 2181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0c53e12d-5b63-43a6-9c17-b959e21d7ffe-bpf-maps\") pod \"cilium-c89nm\" (UID: \"0c53e12d-5b63-43a6-9c17-b959e21d7ffe\") " pod="kube-system/cilium-c89nm" Sep 6 00:26:08.544190 kubelet[2181]: I0906 00:26:08.541695 2181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0c53e12d-5b63-43a6-9c17-b959e21d7ffe-hostproc\") pod \"cilium-c89nm\" (UID: \"0c53e12d-5b63-43a6-9c17-b959e21d7ffe\") " pod="kube-system/cilium-c89nm" Sep 6 00:26:08.544190 kubelet[2181]: I0906 00:26:08.541736 2181 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0c53e12d-5b63-43a6-9c17-b959e21d7ffe-etc-cni-netd\") pod \"cilium-c89nm\" (UID: \"0c53e12d-5b63-43a6-9c17-b959e21d7ffe\") " pod="kube-system/cilium-c89nm" Sep 6 00:26:08.544190 kubelet[2181]: I0906 00:26:08.541770 2181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0c53e12d-5b63-43a6-9c17-b959e21d7ffe-lib-modules\") pod \"cilium-c89nm\" (UID: \"0c53e12d-5b63-43a6-9c17-b959e21d7ffe\") " pod="kube-system/cilium-c89nm" Sep 6 00:26:08.544190 kubelet[2181]: I0906 00:26:08.541809 2181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0c53e12d-5b63-43a6-9c17-b959e21d7ffe-cilium-ipsec-secrets\") pod \"cilium-c89nm\" (UID: \"0c53e12d-5b63-43a6-9c17-b959e21d7ffe\") " pod="kube-system/cilium-c89nm" Sep 6 00:26:08.544190 kubelet[2181]: I0906 00:26:08.541844 2181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0c53e12d-5b63-43a6-9c17-b959e21d7ffe-host-proc-sys-kernel\") pod \"cilium-c89nm\" (UID: \"0c53e12d-5b63-43a6-9c17-b959e21d7ffe\") " pod="kube-system/cilium-c89nm" Sep 6 00:26:08.544691 kubelet[2181]: I0906 00:26:08.541880 2181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0c53e12d-5b63-43a6-9c17-b959e21d7ffe-cilium-cgroup\") pod \"cilium-c89nm\" (UID: \"0c53e12d-5b63-43a6-9c17-b959e21d7ffe\") " pod="kube-system/cilium-c89nm" Sep 6 00:26:08.544691 kubelet[2181]: I0906 00:26:08.541914 2181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" 
(UniqueName: \"kubernetes.io/secret/0c53e12d-5b63-43a6-9c17-b959e21d7ffe-clustermesh-secrets\") pod \"cilium-c89nm\" (UID: \"0c53e12d-5b63-43a6-9c17-b959e21d7ffe\") " pod="kube-system/cilium-c89nm" Sep 6 00:26:08.544691 kubelet[2181]: I0906 00:26:08.541953 2181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0c53e12d-5b63-43a6-9c17-b959e21d7ffe-hubble-tls\") pod \"cilium-c89nm\" (UID: \"0c53e12d-5b63-43a6-9c17-b959e21d7ffe\") " pod="kube-system/cilium-c89nm" Sep 6 00:26:08.544691 kubelet[2181]: I0906 00:26:08.541984 2181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0c53e12d-5b63-43a6-9c17-b959e21d7ffe-cni-path\") pod \"cilium-c89nm\" (UID: \"0c53e12d-5b63-43a6-9c17-b959e21d7ffe\") " pod="kube-system/cilium-c89nm" Sep 6 00:26:08.544691 kubelet[2181]: I0906 00:26:08.542019 2181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0c53e12d-5b63-43a6-9c17-b959e21d7ffe-xtables-lock\") pod \"cilium-c89nm\" (UID: \"0c53e12d-5b63-43a6-9c17-b959e21d7ffe\") " pod="kube-system/cilium-c89nm" Sep 6 00:26:08.544691 kubelet[2181]: I0906 00:26:08.542055 2181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0c53e12d-5b63-43a6-9c17-b959e21d7ffe-cilium-config-path\") pod \"cilium-c89nm\" (UID: \"0c53e12d-5b63-43a6-9c17-b959e21d7ffe\") " pod="kube-system/cilium-c89nm" Sep 6 00:26:08.545009 kubelet[2181]: I0906 00:26:08.542136 2181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0c53e12d-5b63-43a6-9c17-b959e21d7ffe-host-proc-sys-net\") pod \"cilium-c89nm\" (UID: 
\"0c53e12d-5b63-43a6-9c17-b959e21d7ffe\") " pod="kube-system/cilium-c89nm" Sep 6 00:26:08.545009 kubelet[2181]: I0906 00:26:08.542196 2181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlq9d\" (UniqueName: \"kubernetes.io/projected/0c53e12d-5b63-43a6-9c17-b959e21d7ffe-kube-api-access-mlq9d\") pod \"cilium-c89nm\" (UID: \"0c53e12d-5b63-43a6-9c17-b959e21d7ffe\") " pod="kube-system/cilium-c89nm" Sep 6 00:26:08.856582 sshd[3964]: Accepted publickey for core from 139.178.89.65 port 48690 ssh2: RSA SHA256:O4b1lx+UphQ1XQCPwsrjL8IoqrnWSgynNYcpg4eKVRo Sep 6 00:26:08.857934 sshd[3964]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:26:08.865751 systemd[1]: Started session-25.scope. Sep 6 00:26:08.866428 systemd-logind[1321]: New session 25 of user core. Sep 6 00:26:09.152879 kubelet[2181]: E0906 00:26:09.152812 2181 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[cilium-config-path cilium-ipsec-secrets clustermesh-secrets hubble-tls], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-c89nm" podUID="0c53e12d-5b63-43a6-9c17-b959e21d7ffe" Sep 6 00:26:09.180327 sshd[3964]: pam_unix(sshd:session): session closed for user core Sep 6 00:26:09.186003 systemd-logind[1321]: Session 25 logged out. Waiting for processes to exit. Sep 6 00:26:09.186876 systemd[1]: sshd@24-10.128.0.81:22-139.178.89.65:48690.service: Deactivated successfully. Sep 6 00:26:09.189016 systemd[1]: session-25.scope: Deactivated successfully. Sep 6 00:26:09.190920 systemd-logind[1321]: Removed session 25. Sep 6 00:26:09.224294 systemd[1]: Started sshd@25-10.128.0.81:22-139.178.89.65:48696.service. 
Sep 6 00:26:09.528984 sshd[3979]: Accepted publickey for core from 139.178.89.65 port 48696 ssh2: RSA SHA256:O4b1lx+UphQ1XQCPwsrjL8IoqrnWSgynNYcpg4eKVRo Sep 6 00:26:09.530771 sshd[3979]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:26:09.541484 systemd[1]: Started session-26.scope. Sep 6 00:26:09.543273 systemd-logind[1321]: New session 26 of user core. Sep 6 00:26:09.644287 kubelet[2181]: E0906 00:26:09.644221 2181 secret.go:189] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition Sep 6 00:26:09.645028 kubelet[2181]: E0906 00:26:09.644347 2181 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0c53e12d-5b63-43a6-9c17-b959e21d7ffe-clustermesh-secrets podName:0c53e12d-5b63-43a6-9c17-b959e21d7ffe nodeName:}" failed. No retries permitted until 2025-09-06 00:26:10.144320869 +0000 UTC m=+148.233435111 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/0c53e12d-5b63-43a6-9c17-b959e21d7ffe-clustermesh-secrets") pod "cilium-c89nm" (UID: "0c53e12d-5b63-43a6-9c17-b959e21d7ffe") : failed to sync secret cache: timed out waiting for the condition Sep 6 00:26:09.652676 kubelet[2181]: I0906 00:26:09.652610 2181 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0c53e12d-5b63-43a6-9c17-b959e21d7ffe-host-proc-sys-net\") pod \"0c53e12d-5b63-43a6-9c17-b959e21d7ffe\" (UID: \"0c53e12d-5b63-43a6-9c17-b959e21d7ffe\") " Sep 6 00:26:09.652676 kubelet[2181]: I0906 00:26:09.652676 2181 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0c53e12d-5b63-43a6-9c17-b959e21d7ffe-lib-modules\") pod \"0c53e12d-5b63-43a6-9c17-b959e21d7ffe\" (UID: \"0c53e12d-5b63-43a6-9c17-b959e21d7ffe\") " Sep 6 00:26:09.652970 kubelet[2181]: 
I0906 00:26:09.652704 2181 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0c53e12d-5b63-43a6-9c17-b959e21d7ffe-hostproc\") pod \"0c53e12d-5b63-43a6-9c17-b959e21d7ffe\" (UID: \"0c53e12d-5b63-43a6-9c17-b959e21d7ffe\") " Sep 6 00:26:09.652970 kubelet[2181]: I0906 00:26:09.652775 2181 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0c53e12d-5b63-43a6-9c17-b959e21d7ffe-host-proc-sys-kernel\") pod \"0c53e12d-5b63-43a6-9c17-b959e21d7ffe\" (UID: \"0c53e12d-5b63-43a6-9c17-b959e21d7ffe\") " Sep 6 00:26:09.652970 kubelet[2181]: I0906 00:26:09.652811 2181 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0c53e12d-5b63-43a6-9c17-b959e21d7ffe-cilium-run\") pod \"0c53e12d-5b63-43a6-9c17-b959e21d7ffe\" (UID: \"0c53e12d-5b63-43a6-9c17-b959e21d7ffe\") " Sep 6 00:26:09.652970 kubelet[2181]: I0906 00:26:09.652846 2181 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mlq9d\" (UniqueName: \"kubernetes.io/projected/0c53e12d-5b63-43a6-9c17-b959e21d7ffe-kube-api-access-mlq9d\") pod \"0c53e12d-5b63-43a6-9c17-b959e21d7ffe\" (UID: \"0c53e12d-5b63-43a6-9c17-b959e21d7ffe\") " Sep 6 00:26:09.652970 kubelet[2181]: I0906 00:26:09.652873 2181 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0c53e12d-5b63-43a6-9c17-b959e21d7ffe-hubble-tls\") pod \"0c53e12d-5b63-43a6-9c17-b959e21d7ffe\" (UID: \"0c53e12d-5b63-43a6-9c17-b959e21d7ffe\") " Sep 6 00:26:09.652970 kubelet[2181]: I0906 00:26:09.652911 2181 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0c53e12d-5b63-43a6-9c17-b959e21d7ffe-cilium-config-path\") pod 
\"0c53e12d-5b63-43a6-9c17-b959e21d7ffe\" (UID: \"0c53e12d-5b63-43a6-9c17-b959e21d7ffe\") " Sep 6 00:26:09.653356 kubelet[2181]: I0906 00:26:09.652941 2181 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0c53e12d-5b63-43a6-9c17-b959e21d7ffe-bpf-maps\") pod \"0c53e12d-5b63-43a6-9c17-b959e21d7ffe\" (UID: \"0c53e12d-5b63-43a6-9c17-b959e21d7ffe\") " Sep 6 00:26:09.653356 kubelet[2181]: I0906 00:26:09.652967 2181 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0c53e12d-5b63-43a6-9c17-b959e21d7ffe-cni-path\") pod \"0c53e12d-5b63-43a6-9c17-b959e21d7ffe\" (UID: \"0c53e12d-5b63-43a6-9c17-b959e21d7ffe\") " Sep 6 00:26:09.653356 kubelet[2181]: I0906 00:26:09.653002 2181 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0c53e12d-5b63-43a6-9c17-b959e21d7ffe-cilium-cgroup\") pod \"0c53e12d-5b63-43a6-9c17-b959e21d7ffe\" (UID: \"0c53e12d-5b63-43a6-9c17-b959e21d7ffe\") " Sep 6 00:26:09.653356 kubelet[2181]: I0906 00:26:09.653033 2181 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0c53e12d-5b63-43a6-9c17-b959e21d7ffe-xtables-lock\") pod \"0c53e12d-5b63-43a6-9c17-b959e21d7ffe\" (UID: \"0c53e12d-5b63-43a6-9c17-b959e21d7ffe\") " Sep 6 00:26:09.653356 kubelet[2181]: I0906 00:26:09.653062 2181 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0c53e12d-5b63-43a6-9c17-b959e21d7ffe-etc-cni-netd\") pod \"0c53e12d-5b63-43a6-9c17-b959e21d7ffe\" (UID: \"0c53e12d-5b63-43a6-9c17-b959e21d7ffe\") " Sep 6 00:26:09.653356 kubelet[2181]: I0906 00:26:09.653227 2181 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/0c53e12d-5b63-43a6-9c17-b959e21d7ffe-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "0c53e12d-5b63-43a6-9c17-b959e21d7ffe" (UID: "0c53e12d-5b63-43a6-9c17-b959e21d7ffe"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:26:09.653669 kubelet[2181]: I0906 00:26:09.653275 2181 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c53e12d-5b63-43a6-9c17-b959e21d7ffe-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "0c53e12d-5b63-43a6-9c17-b959e21d7ffe" (UID: "0c53e12d-5b63-43a6-9c17-b959e21d7ffe"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:26:09.653669 kubelet[2181]: I0906 00:26:09.653303 2181 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c53e12d-5b63-43a6-9c17-b959e21d7ffe-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "0c53e12d-5b63-43a6-9c17-b959e21d7ffe" (UID: "0c53e12d-5b63-43a6-9c17-b959e21d7ffe"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:26:09.653669 kubelet[2181]: I0906 00:26:09.653328 2181 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c53e12d-5b63-43a6-9c17-b959e21d7ffe-hostproc" (OuterVolumeSpecName: "hostproc") pod "0c53e12d-5b63-43a6-9c17-b959e21d7ffe" (UID: "0c53e12d-5b63-43a6-9c17-b959e21d7ffe"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:26:09.653669 kubelet[2181]: I0906 00:26:09.653351 2181 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c53e12d-5b63-43a6-9c17-b959e21d7ffe-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "0c53e12d-5b63-43a6-9c17-b959e21d7ffe" (UID: "0c53e12d-5b63-43a6-9c17-b959e21d7ffe"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:26:09.653669 kubelet[2181]: I0906 00:26:09.653376 2181 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c53e12d-5b63-43a6-9c17-b959e21d7ffe-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "0c53e12d-5b63-43a6-9c17-b959e21d7ffe" (UID: "0c53e12d-5b63-43a6-9c17-b959e21d7ffe"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:26:09.654191 kubelet[2181]: I0906 00:26:09.654130 2181 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c53e12d-5b63-43a6-9c17-b959e21d7ffe-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "0c53e12d-5b63-43a6-9c17-b959e21d7ffe" (UID: "0c53e12d-5b63-43a6-9c17-b959e21d7ffe"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:26:09.655264 kubelet[2181]: I0906 00:26:09.654393 2181 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c53e12d-5b63-43a6-9c17-b959e21d7ffe-cni-path" (OuterVolumeSpecName: "cni-path") pod "0c53e12d-5b63-43a6-9c17-b959e21d7ffe" (UID: "0c53e12d-5b63-43a6-9c17-b959e21d7ffe"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:26:09.655463 kubelet[2181]: I0906 00:26:09.654418 2181 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c53e12d-5b63-43a6-9c17-b959e21d7ffe-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "0c53e12d-5b63-43a6-9c17-b959e21d7ffe" (UID: "0c53e12d-5b63-43a6-9c17-b959e21d7ffe"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:26:09.655680 kubelet[2181]: I0906 00:26:09.654438 2181 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c53e12d-5b63-43a6-9c17-b959e21d7ffe-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "0c53e12d-5b63-43a6-9c17-b959e21d7ffe" (UID: "0c53e12d-5b63-43a6-9c17-b959e21d7ffe"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:26:09.658244 kubelet[2181]: I0906 00:26:09.658134 2181 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c53e12d-5b63-43a6-9c17-b959e21d7ffe-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0c53e12d-5b63-43a6-9c17-b959e21d7ffe" (UID: "0c53e12d-5b63-43a6-9c17-b959e21d7ffe"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 6 00:26:09.664823 systemd[1]: var-lib-kubelet-pods-0c53e12d\x2d5b63\x2d43a6\x2d9c17\x2db959e21d7ffe-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 6 00:26:09.665951 kubelet[2181]: I0906 00:26:09.665899 2181 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c53e12d-5b63-43a6-9c17-b959e21d7ffe-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "0c53e12d-5b63-43a6-9c17-b959e21d7ffe" (UID: "0c53e12d-5b63-43a6-9c17-b959e21d7ffe"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 6 00:26:09.673234 kubelet[2181]: I0906 00:26:09.672273 2181 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c53e12d-5b63-43a6-9c17-b959e21d7ffe-kube-api-access-mlq9d" (OuterVolumeSpecName: "kube-api-access-mlq9d") pod "0c53e12d-5b63-43a6-9c17-b959e21d7ffe" (UID: "0c53e12d-5b63-43a6-9c17-b959e21d7ffe"). InnerVolumeSpecName "kube-api-access-mlq9d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 6 00:26:09.672472 systemd[1]: var-lib-kubelet-pods-0c53e12d\x2d5b63\x2d43a6\x2d9c17\x2db959e21d7ffe-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmlq9d.mount: Deactivated successfully. Sep 6 00:26:09.756137 kubelet[2181]: I0906 00:26:09.754363 2181 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0c53e12d-5b63-43a6-9c17-b959e21d7ffe-cilium-ipsec-secrets\") pod \"0c53e12d-5b63-43a6-9c17-b959e21d7ffe\" (UID: \"0c53e12d-5b63-43a6-9c17-b959e21d7ffe\") " Sep 6 00:26:09.756137 kubelet[2181]: I0906 00:26:09.754470 2181 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0c53e12d-5b63-43a6-9c17-b959e21d7ffe-etc-cni-netd\") on node \"ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d\" DevicePath \"\"" Sep 6 00:26:09.756137 kubelet[2181]: I0906 00:26:09.754501 2181 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0c53e12d-5b63-43a6-9c17-b959e21d7ffe-cilium-cgroup\") on node \"ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d\" DevicePath \"\"" Sep 6 00:26:09.756137 kubelet[2181]: I0906 00:26:09.754529 2181 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0c53e12d-5b63-43a6-9c17-b959e21d7ffe-xtables-lock\") on node \"ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d\" DevicePath \"\"" Sep 6 00:26:09.756137 kubelet[2181]: I0906 00:26:09.754546 2181 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0c53e12d-5b63-43a6-9c17-b959e21d7ffe-host-proc-sys-net\") on node \"ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d\" DevicePath \"\"" Sep 6 00:26:09.756137 kubelet[2181]: I0906 00:26:09.754565 2181 reconciler_common.go:293] "Volume detached for volume 
\"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0c53e12d-5b63-43a6-9c17-b959e21d7ffe-lib-modules\") on node \"ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d\" DevicePath \"\"" Sep 6 00:26:09.756137 kubelet[2181]: I0906 00:26:09.754581 2181 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0c53e12d-5b63-43a6-9c17-b959e21d7ffe-hostproc\") on node \"ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d\" DevicePath \"\"" Sep 6 00:26:09.756752 kubelet[2181]: I0906 00:26:09.754598 2181 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0c53e12d-5b63-43a6-9c17-b959e21d7ffe-host-proc-sys-kernel\") on node \"ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d\" DevicePath \"\"" Sep 6 00:26:09.756752 kubelet[2181]: I0906 00:26:09.754614 2181 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0c53e12d-5b63-43a6-9c17-b959e21d7ffe-cilium-run\") on node \"ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d\" DevicePath \"\"" Sep 6 00:26:09.756752 kubelet[2181]: I0906 00:26:09.754631 2181 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mlq9d\" (UniqueName: \"kubernetes.io/projected/0c53e12d-5b63-43a6-9c17-b959e21d7ffe-kube-api-access-mlq9d\") on node \"ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d\" DevicePath \"\"" Sep 6 00:26:09.756752 kubelet[2181]: I0906 00:26:09.754647 2181 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0c53e12d-5b63-43a6-9c17-b959e21d7ffe-hubble-tls\") on node \"ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d\" DevicePath \"\"" Sep 6 00:26:09.756752 kubelet[2181]: I0906 00:26:09.754665 2181 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/0c53e12d-5b63-43a6-9c17-b959e21d7ffe-cilium-config-path\") on node \"ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d\" DevicePath \"\"" Sep 6 00:26:09.756752 kubelet[2181]: I0906 00:26:09.754682 2181 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0c53e12d-5b63-43a6-9c17-b959e21d7ffe-bpf-maps\") on node \"ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d\" DevicePath \"\"" Sep 6 00:26:09.756752 kubelet[2181]: I0906 00:26:09.754698 2181 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0c53e12d-5b63-43a6-9c17-b959e21d7ffe-cni-path\") on node \"ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d\" DevicePath \"\"" Sep 6 00:26:09.763019 kubelet[2181]: I0906 00:26:09.762952 2181 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c53e12d-5b63-43a6-9c17-b959e21d7ffe-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "0c53e12d-5b63-43a6-9c17-b959e21d7ffe" (UID: "0c53e12d-5b63-43a6-9c17-b959e21d7ffe"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 6 00:26:09.765637 systemd[1]: var-lib-kubelet-pods-0c53e12d\x2d5b63\x2d43a6\x2d9c17\x2db959e21d7ffe-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. 
Sep 6 00:26:09.855398 kubelet[2181]: I0906 00:26:09.855234 2181 reconciler_common.go:293] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0c53e12d-5b63-43a6-9c17-b959e21d7ffe-cilium-ipsec-secrets\") on node \"ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d\" DevicePath \"\"" Sep 6 00:26:10.259363 kubelet[2181]: I0906 00:26:10.259285 2181 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0c53e12d-5b63-43a6-9c17-b959e21d7ffe-clustermesh-secrets\") pod \"0c53e12d-5b63-43a6-9c17-b959e21d7ffe\" (UID: \"0c53e12d-5b63-43a6-9c17-b959e21d7ffe\") " Sep 6 00:26:10.267804 kubelet[2181]: I0906 00:26:10.265402 2181 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c53e12d-5b63-43a6-9c17-b959e21d7ffe-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "0c53e12d-5b63-43a6-9c17-b959e21d7ffe" (UID: "0c53e12d-5b63-43a6-9c17-b959e21d7ffe"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 6 00:26:10.267417 systemd[1]: var-lib-kubelet-pods-0c53e12d\x2d5b63\x2d43a6\x2d9c17\x2db959e21d7ffe-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Sep 6 00:26:10.359818 kubelet[2181]: I0906 00:26:10.359728 2181 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0c53e12d-5b63-43a6-9c17-b959e21d7ffe-clustermesh-secrets\") on node \"ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d\" DevicePath \"\"" Sep 6 00:26:10.763026 kubelet[2181]: I0906 00:26:10.762942 2181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bb60a39b-6260-49f4-a15e-de768878148b-xtables-lock\") pod \"cilium-p9pn6\" (UID: \"bb60a39b-6260-49f4-a15e-de768878148b\") " pod="kube-system/cilium-p9pn6" Sep 6 00:26:10.763026 kubelet[2181]: I0906 00:26:10.763036 2181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bb60a39b-6260-49f4-a15e-de768878148b-lib-modules\") pod \"cilium-p9pn6\" (UID: \"bb60a39b-6260-49f4-a15e-de768878148b\") " pod="kube-system/cilium-p9pn6" Sep 6 00:26:10.763872 kubelet[2181]: I0906 00:26:10.763078 2181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xr7bs\" (UniqueName: \"kubernetes.io/projected/bb60a39b-6260-49f4-a15e-de768878148b-kube-api-access-xr7bs\") pod \"cilium-p9pn6\" (UID: \"bb60a39b-6260-49f4-a15e-de768878148b\") " pod="kube-system/cilium-p9pn6" Sep 6 00:26:10.763872 kubelet[2181]: I0906 00:26:10.763138 2181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bb60a39b-6260-49f4-a15e-de768878148b-hubble-tls\") pod \"cilium-p9pn6\" (UID: \"bb60a39b-6260-49f4-a15e-de768878148b\") " pod="kube-system/cilium-p9pn6" Sep 6 00:26:10.763872 kubelet[2181]: I0906 00:26:10.763207 2181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" 
(UniqueName: \"kubernetes.io/configmap/bb60a39b-6260-49f4-a15e-de768878148b-cilium-config-path\") pod \"cilium-p9pn6\" (UID: \"bb60a39b-6260-49f4-a15e-de768878148b\") " pod="kube-system/cilium-p9pn6" Sep 6 00:26:10.763872 kubelet[2181]: I0906 00:26:10.763240 2181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bb60a39b-6260-49f4-a15e-de768878148b-cilium-cgroup\") pod \"cilium-p9pn6\" (UID: \"bb60a39b-6260-49f4-a15e-de768878148b\") " pod="kube-system/cilium-p9pn6" Sep 6 00:26:10.763872 kubelet[2181]: I0906 00:26:10.763275 2181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bb60a39b-6260-49f4-a15e-de768878148b-cni-path\") pod \"cilium-p9pn6\" (UID: \"bb60a39b-6260-49f4-a15e-de768878148b\") " pod="kube-system/cilium-p9pn6" Sep 6 00:26:10.763872 kubelet[2181]: I0906 00:26:10.763313 2181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/bb60a39b-6260-49f4-a15e-de768878148b-cilium-ipsec-secrets\") pod \"cilium-p9pn6\" (UID: \"bb60a39b-6260-49f4-a15e-de768878148b\") " pod="kube-system/cilium-p9pn6" Sep 6 00:26:10.764092 kubelet[2181]: I0906 00:26:10.763344 2181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bb60a39b-6260-49f4-a15e-de768878148b-host-proc-sys-kernel\") pod \"cilium-p9pn6\" (UID: \"bb60a39b-6260-49f4-a15e-de768878148b\") " pod="kube-system/cilium-p9pn6" Sep 6 00:26:10.764092 kubelet[2181]: I0906 00:26:10.763395 2181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bb60a39b-6260-49f4-a15e-de768878148b-cilium-run\") pod \"cilium-p9pn6\" (UID: 
\"bb60a39b-6260-49f4-a15e-de768878148b\") " pod="kube-system/cilium-p9pn6" Sep 6 00:26:10.764092 kubelet[2181]: I0906 00:26:10.763425 2181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bb60a39b-6260-49f4-a15e-de768878148b-bpf-maps\") pod \"cilium-p9pn6\" (UID: \"bb60a39b-6260-49f4-a15e-de768878148b\") " pod="kube-system/cilium-p9pn6" Sep 6 00:26:10.764092 kubelet[2181]: I0906 00:26:10.763468 2181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bb60a39b-6260-49f4-a15e-de768878148b-hostproc\") pod \"cilium-p9pn6\" (UID: \"bb60a39b-6260-49f4-a15e-de768878148b\") " pod="kube-system/cilium-p9pn6" Sep 6 00:26:10.764092 kubelet[2181]: I0906 00:26:10.763503 2181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bb60a39b-6260-49f4-a15e-de768878148b-clustermesh-secrets\") pod \"cilium-p9pn6\" (UID: \"bb60a39b-6260-49f4-a15e-de768878148b\") " pod="kube-system/cilium-p9pn6" Sep 6 00:26:10.764092 kubelet[2181]: I0906 00:26:10.763533 2181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bb60a39b-6260-49f4-a15e-de768878148b-etc-cni-netd\") pod \"cilium-p9pn6\" (UID: \"bb60a39b-6260-49f4-a15e-de768878148b\") " pod="kube-system/cilium-p9pn6" Sep 6 00:26:10.764413 kubelet[2181]: I0906 00:26:10.763568 2181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bb60a39b-6260-49f4-a15e-de768878148b-host-proc-sys-net\") pod \"cilium-p9pn6\" (UID: \"bb60a39b-6260-49f4-a15e-de768878148b\") " pod="kube-system/cilium-p9pn6" Sep 6 00:26:10.960297 env[1335]: 
time="2025-09-06T00:26:10.959648125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-p9pn6,Uid:bb60a39b-6260-49f4-a15e-de768878148b,Namespace:kube-system,Attempt:0,}" Sep 6 00:26:11.016868 env[1335]: time="2025-09-06T00:26:11.009822317Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:26:11.016868 env[1335]: time="2025-09-06T00:26:11.009890168Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:26:11.016868 env[1335]: time="2025-09-06T00:26:11.009912100Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:26:11.016868 env[1335]: time="2025-09-06T00:26:11.010191507Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f896f45c269ed89965cc327f5aa9f017114b4d07424a84424d8f283550d45113 pid=4007 runtime=io.containerd.runc.v2 Sep 6 00:26:11.237239 env[1335]: time="2025-09-06T00:26:11.237084757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-p9pn6,Uid:bb60a39b-6260-49f4-a15e-de768878148b,Namespace:kube-system,Attempt:0,} returns sandbox id \"f896f45c269ed89965cc327f5aa9f017114b4d07424a84424d8f283550d45113\"" Sep 6 00:26:11.242619 env[1335]: time="2025-09-06T00:26:11.242550695Z" level=info msg="CreateContainer within sandbox \"f896f45c269ed89965cc327f5aa9f017114b4d07424a84424d8f283550d45113\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 6 00:26:11.258462 env[1335]: time="2025-09-06T00:26:11.258385283Z" level=info msg="CreateContainer within sandbox \"f896f45c269ed89965cc327f5aa9f017114b4d07424a84424d8f283550d45113\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"33718be2f9690172e876724e15e7f643e1108a98ead233b05c64e568a9f25317\"" Sep 6 00:26:11.259704 env[1335]: 
time="2025-09-06T00:26:11.259658369Z" level=info msg="StartContainer for \"33718be2f9690172e876724e15e7f643e1108a98ead233b05c64e568a9f25317\"" Sep 6 00:26:11.354801 env[1335]: time="2025-09-06T00:26:11.353969764Z" level=info msg="StartContainer for \"33718be2f9690172e876724e15e7f643e1108a98ead233b05c64e568a9f25317\" returns successfully" Sep 6 00:26:11.402969 env[1335]: time="2025-09-06T00:26:11.402892271Z" level=info msg="shim disconnected" id=33718be2f9690172e876724e15e7f643e1108a98ead233b05c64e568a9f25317 Sep 6 00:26:11.402969 env[1335]: time="2025-09-06T00:26:11.402971215Z" level=warning msg="cleaning up after shim disconnected" id=33718be2f9690172e876724e15e7f643e1108a98ead233b05c64e568a9f25317 namespace=k8s.io Sep 6 00:26:11.403437 env[1335]: time="2025-09-06T00:26:11.402986844Z" level=info msg="cleaning up dead shim" Sep 6 00:26:11.416328 env[1335]: time="2025-09-06T00:26:11.416270253Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:26:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4092 runtime=io.containerd.runc.v2\n" Sep 6 00:26:11.607873 env[1335]: time="2025-09-06T00:26:11.607702415Z" level=info msg="CreateContainer within sandbox \"f896f45c269ed89965cc327f5aa9f017114b4d07424a84424d8f283550d45113\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 6 00:26:11.632818 env[1335]: time="2025-09-06T00:26:11.632728194Z" level=info msg="CreateContainer within sandbox \"f896f45c269ed89965cc327f5aa9f017114b4d07424a84424d8f283550d45113\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b9992787ca61acbdb4ac5fa87c5ac4e8b28fd8e1cafa9be79da04cad850dfb35\"" Sep 6 00:26:11.633854 env[1335]: time="2025-09-06T00:26:11.633807853Z" level=info msg="StartContainer for \"b9992787ca61acbdb4ac5fa87c5ac4e8b28fd8e1cafa9be79da04cad850dfb35\"" Sep 6 00:26:11.712995 env[1335]: time="2025-09-06T00:26:11.710042631Z" level=info msg="StartContainer for 
\"b9992787ca61acbdb4ac5fa87c5ac4e8b28fd8e1cafa9be79da04cad850dfb35\" returns successfully" Sep 6 00:26:11.747958 env[1335]: time="2025-09-06T00:26:11.746935256Z" level=info msg="shim disconnected" id=b9992787ca61acbdb4ac5fa87c5ac4e8b28fd8e1cafa9be79da04cad850dfb35 Sep 6 00:26:11.747958 env[1335]: time="2025-09-06T00:26:11.747002280Z" level=warning msg="cleaning up after shim disconnected" id=b9992787ca61acbdb4ac5fa87c5ac4e8b28fd8e1cafa9be79da04cad850dfb35 namespace=k8s.io Sep 6 00:26:11.747958 env[1335]: time="2025-09-06T00:26:11.747018973Z" level=info msg="cleaning up dead shim" Sep 6 00:26:11.764484 env[1335]: time="2025-09-06T00:26:11.764427907Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:26:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4154 runtime=io.containerd.runc.v2\n" Sep 6 00:26:12.096277 kubelet[2181]: I0906 00:26:12.096225 2181 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c53e12d-5b63-43a6-9c17-b959e21d7ffe" path="/var/lib/kubelet/pods/0c53e12d-5b63-43a6-9c17-b959e21d7ffe/volumes" Sep 6 00:26:12.224133 kubelet[2181]: E0906 00:26:12.224088 2181 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 6 00:26:12.606613 env[1335]: time="2025-09-06T00:26:12.605532896Z" level=info msg="CreateContainer within sandbox \"f896f45c269ed89965cc327f5aa9f017114b4d07424a84424d8f283550d45113\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 6 00:26:12.635710 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount188250127.mount: Deactivated successfully. 
Sep 6 00:26:12.649897 env[1335]: time="2025-09-06T00:26:12.649799335Z" level=info msg="CreateContainer within sandbox \"f896f45c269ed89965cc327f5aa9f017114b4d07424a84424d8f283550d45113\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"75a91510ebe0c40d8ab4dbffc18b010145da19886257ec3cc44f4640017f197d\"" Sep 6 00:26:12.650670 env[1335]: time="2025-09-06T00:26:12.650568680Z" level=info msg="StartContainer for \"75a91510ebe0c40d8ab4dbffc18b010145da19886257ec3cc44f4640017f197d\"" Sep 6 00:26:12.765232 env[1335]: time="2025-09-06T00:26:12.760604356Z" level=info msg="StartContainer for \"75a91510ebe0c40d8ab4dbffc18b010145da19886257ec3cc44f4640017f197d\" returns successfully" Sep 6 00:26:12.802068 env[1335]: time="2025-09-06T00:26:12.801998901Z" level=info msg="shim disconnected" id=75a91510ebe0c40d8ab4dbffc18b010145da19886257ec3cc44f4640017f197d Sep 6 00:26:12.802518 env[1335]: time="2025-09-06T00:26:12.802075287Z" level=warning msg="cleaning up after shim disconnected" id=75a91510ebe0c40d8ab4dbffc18b010145da19886257ec3cc44f4640017f197d namespace=k8s.io Sep 6 00:26:12.802518 env[1335]: time="2025-09-06T00:26:12.802092260Z" level=info msg="cleaning up dead shim" Sep 6 00:26:12.814556 env[1335]: time="2025-09-06T00:26:12.814496966Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:26:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4212 runtime=io.containerd.runc.v2\n" Sep 6 00:26:12.874827 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-75a91510ebe0c40d8ab4dbffc18b010145da19886257ec3cc44f4640017f197d-rootfs.mount: Deactivated successfully. 
Sep 6 00:26:13.609947 env[1335]: time="2025-09-06T00:26:13.609883522Z" level=info msg="CreateContainer within sandbox \"f896f45c269ed89965cc327f5aa9f017114b4d07424a84424d8f283550d45113\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 6 00:26:13.633954 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2339093409.mount: Deactivated successfully. Sep 6 00:26:13.649618 env[1335]: time="2025-09-06T00:26:13.649457423Z" level=info msg="CreateContainer within sandbox \"f896f45c269ed89965cc327f5aa9f017114b4d07424a84424d8f283550d45113\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"23594525cc2d590d1da35418cf8d79a586e2b0c4dc70a741ae101f265999520b\"" Sep 6 00:26:13.651759 env[1335]: time="2025-09-06T00:26:13.651704529Z" level=info msg="StartContainer for \"23594525cc2d590d1da35418cf8d79a586e2b0c4dc70a741ae101f265999520b\"" Sep 6 00:26:13.736116 env[1335]: time="2025-09-06T00:26:13.736051224Z" level=info msg="StartContainer for \"23594525cc2d590d1da35418cf8d79a586e2b0c4dc70a741ae101f265999520b\" returns successfully" Sep 6 00:26:13.764533 env[1335]: time="2025-09-06T00:26:13.764466859Z" level=info msg="shim disconnected" id=23594525cc2d590d1da35418cf8d79a586e2b0c4dc70a741ae101f265999520b Sep 6 00:26:13.764533 env[1335]: time="2025-09-06T00:26:13.764531940Z" level=warning msg="cleaning up after shim disconnected" id=23594525cc2d590d1da35418cf8d79a586e2b0c4dc70a741ae101f265999520b namespace=k8s.io Sep 6 00:26:13.764533 env[1335]: time="2025-09-06T00:26:13.764546711Z" level=info msg="cleaning up dead shim" Sep 6 00:26:13.777258 env[1335]: time="2025-09-06T00:26:13.777202218Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:26:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4270 runtime=io.containerd.runc.v2\n" Sep 6 00:26:13.874994 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-23594525cc2d590d1da35418cf8d79a586e2b0c4dc70a741ae101f265999520b-rootfs.mount: 
Deactivated successfully. Sep 6 00:26:14.617321 env[1335]: time="2025-09-06T00:26:14.616619095Z" level=info msg="CreateContainer within sandbox \"f896f45c269ed89965cc327f5aa9f017114b4d07424a84424d8f283550d45113\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 6 00:26:14.645461 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4110046227.mount: Deactivated successfully. Sep 6 00:26:14.657007 env[1335]: time="2025-09-06T00:26:14.656943929Z" level=info msg="CreateContainer within sandbox \"f896f45c269ed89965cc327f5aa9f017114b4d07424a84424d8f283550d45113\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"19c602c5210512f2b705bd928c93f3dd5ab9d7b5ed683bd94f11afda2c9ac201\"" Sep 6 00:26:14.657976 env[1335]: time="2025-09-06T00:26:14.657937950Z" level=info msg="StartContainer for \"19c602c5210512f2b705bd928c93f3dd5ab9d7b5ed683bd94f11afda2c9ac201\"" Sep 6 00:26:14.743531 env[1335]: time="2025-09-06T00:26:14.743445317Z" level=info msg="StartContainer for \"19c602c5210512f2b705bd928c93f3dd5ab9d7b5ed683bd94f11afda2c9ac201\" returns successfully" Sep 6 00:26:15.226232 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Sep 6 00:26:15.971010 kubelet[2181]: I0906 00:26:15.970929 2181 setters.go:600] "Node became not ready" node="ci-3510-3-8-nightly-20250905-2100-2c9755ec5393edc8923d" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-06T00:26:15Z","lastTransitionTime":"2025-09-06T00:26:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 6 00:26:16.056458 systemd[1]: run-containerd-runc-k8s.io-19c602c5210512f2b705bd928c93f3dd5ab9d7b5ed683bd94f11afda2c9ac201-runc.xNWxXS.mount: Deactivated successfully. 
Sep 6 00:26:18.526087 systemd-networkd[1079]: lxc_health: Link UP Sep 6 00:26:18.538201 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Sep 6 00:26:18.542468 systemd-networkd[1079]: lxc_health: Gained carrier Sep 6 00:26:18.994822 kubelet[2181]: I0906 00:26:18.994632 2181 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-p9pn6" podStartSLOduration=8.994575691 podStartE2EDuration="8.994575691s" podCreationTimestamp="2025-09-06 00:26:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:26:15.651871607 +0000 UTC m=+153.740985885" watchObservedRunningTime="2025-09-06 00:26:18.994575691 +0000 UTC m=+157.083689939" Sep 6 00:26:20.443947 systemd-networkd[1079]: lxc_health: Gained IPv6LL Sep 6 00:26:22.819850 systemd[1]: run-containerd-runc-k8s.io-19c602c5210512f2b705bd928c93f3dd5ab9d7b5ed683bd94f11afda2c9ac201-runc.XzoWyU.mount: Deactivated successfully. Sep 6 00:26:25.181333 systemd[1]: run-containerd-runc-k8s.io-19c602c5210512f2b705bd928c93f3dd5ab9d7b5ed683bd94f11afda2c9ac201-runc.ry3jap.mount: Deactivated successfully. Sep 6 00:26:25.330783 sshd[3979]: pam_unix(sshd:session): session closed for user core Sep 6 00:26:25.336100 systemd[1]: sshd@25-10.128.0.81:22-139.178.89.65:48696.service: Deactivated successfully. Sep 6 00:26:25.337535 systemd[1]: session-26.scope: Deactivated successfully. Sep 6 00:26:25.338277 systemd-logind[1321]: Session 26 logged out. Waiting for processes to exit. Sep 6 00:26:25.340223 systemd-logind[1321]: Removed session 26.