Aug 13 01:09:40.082169 kernel: Linux version 5.15.189-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Tue Aug 12 23:01:50 -00 2025 Aug 13 01:09:40.082212 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=8f8aacd9fbcdd713563d390e899e90bedf5577e4b1b261b4e57687d87edd6b57 Aug 13 01:09:40.082230 kernel: BIOS-provided physical RAM map: Aug 13 01:09:40.082243 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved Aug 13 01:09:40.082255 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable Aug 13 01:09:40.082268 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved Aug 13 01:09:40.082288 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable Aug 13 01:09:40.082317 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved Aug 13 01:09:40.082331 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bd27bfff] usable Aug 13 01:09:40.082345 kernel: BIOS-e820: [mem 0x00000000bd27c000-0x00000000bd285fff] ACPI data Aug 13 01:09:40.082358 kernel: BIOS-e820: [mem 0x00000000bd286000-0x00000000bf8ecfff] usable Aug 13 01:09:40.082371 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bfb6cfff] reserved Aug 13 01:09:40.082384 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data Aug 13 01:09:40.082398 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS Aug 13 01:09:40.082420 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable Aug 13 01:09:40.082434 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved Aug 13 01:09:40.082457 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] 
usable Aug 13 01:09:40.082472 kernel: NX (Execute Disable) protection: active Aug 13 01:09:40.082486 kernel: efi: EFI v2.70 by EDK II Aug 13 01:09:40.082501 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000 RNG=0xbfb73018 TPMEventLog=0xbd27c018 Aug 13 01:09:40.082516 kernel: random: crng init done Aug 13 01:09:40.082530 kernel: SMBIOS 2.4 present. Aug 13 01:09:40.082549 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 05/07/2025 Aug 13 01:09:40.082564 kernel: Hypervisor detected: KVM Aug 13 01:09:40.082578 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Aug 13 01:09:40.082592 kernel: kvm-clock: cpu 0, msr 11b19e001, primary cpu clock Aug 13 01:09:40.082607 kernel: kvm-clock: using sched offset of 12603420444 cycles Aug 13 01:09:40.082623 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Aug 13 01:09:40.082638 kernel: tsc: Detected 2299.998 MHz processor Aug 13 01:09:40.082654 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Aug 13 01:09:40.082669 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Aug 13 01:09:40.082684 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000 Aug 13 01:09:40.082703 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Aug 13 01:09:40.082718 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000 Aug 13 01:09:40.082732 kernel: Using GB pages for direct mapping Aug 13 01:09:40.082746 kernel: Secure boot disabled Aug 13 01:09:40.082761 kernel: ACPI: Early table checksum verification disabled Aug 13 01:09:40.082776 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google) Aug 13 01:09:40.082790 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013) Aug 13 01:09:40.082806 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001) Aug 13 01:09:40.082833 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google 
GOOGDSDT 00000001 GOOG 00000001) Aug 13 01:09:40.082848 kernel: ACPI: FACS 0x00000000BFBF2000 000040 Aug 13 01:09:40.082864 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20241212) Aug 13 01:09:40.082881 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001) Aug 13 01:09:40.082897 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001) Aug 13 01:09:40.082913 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001) Aug 13 01:09:40.082933 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001) Aug 13 01:09:40.082949 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001) Aug 13 01:09:40.082965 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3] Aug 13 01:09:40.082981 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63] Aug 13 01:09:40.082997 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f] Aug 13 01:09:40.083013 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315] Aug 13 01:09:40.083028 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033] Aug 13 01:09:40.083045 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7] Aug 13 01:09:40.083061 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075] Aug 13 01:09:40.083083 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f] Aug 13 01:09:40.083099 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027] Aug 13 01:09:40.083114 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Aug 13 01:09:40.083129 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Aug 13 01:09:40.083146 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Aug 13 01:09:40.083162 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff] Aug 13 01:09:40.083177 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 
0x100000000-0x21fffffff] Aug 13 01:09:40.083193 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff] Aug 13 01:09:40.083210 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff] Aug 13 01:09:40.083230 kernel: NODE_DATA(0) allocated [mem 0x21fffa000-0x21fffffff] Aug 13 01:09:40.083247 kernel: Zone ranges: Aug 13 01:09:40.083263 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Aug 13 01:09:40.083278 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Aug 13 01:09:40.083294 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff] Aug 13 01:09:40.083325 kernel: Movable zone start for each node Aug 13 01:09:40.083341 kernel: Early memory node ranges Aug 13 01:09:40.083357 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff] Aug 13 01:09:40.083373 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff] Aug 13 01:09:40.083394 kernel: node 0: [mem 0x0000000000100000-0x00000000bd27bfff] Aug 13 01:09:40.083410 kernel: node 0: [mem 0x00000000bd286000-0x00000000bf8ecfff] Aug 13 01:09:40.083426 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff] Aug 13 01:09:40.083442 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff] Aug 13 01:09:40.083475 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff] Aug 13 01:09:40.083492 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Aug 13 01:09:40.083507 kernel: On node 0, zone DMA: 11 pages in unavailable ranges Aug 13 01:09:40.083523 kernel: On node 0, zone DMA: 104 pages in unavailable ranges Aug 13 01:09:40.083539 kernel: On node 0, zone DMA32: 10 pages in unavailable ranges Aug 13 01:09:40.083560 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Aug 13 01:09:40.083576 kernel: On node 0, zone Normal: 32 pages in unavailable ranges Aug 13 01:09:40.083592 kernel: ACPI: PM-Timer IO Port: 0xb008 Aug 13 01:09:40.083608 kernel: ACPI: LAPIC_NMI 
(acpi_id[0xff] dfl dfl lint[0x1]) Aug 13 01:09:40.083624 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Aug 13 01:09:40.083640 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Aug 13 01:09:40.083657 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Aug 13 01:09:40.083673 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Aug 13 01:09:40.083688 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Aug 13 01:09:40.083708 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Aug 13 01:09:40.083724 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Aug 13 01:09:40.083740 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Aug 13 01:09:40.083757 kernel: Booting paravirtualized kernel on KVM Aug 13 01:09:40.083773 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Aug 13 01:09:40.083788 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1 Aug 13 01:09:40.083802 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576 Aug 13 01:09:40.083817 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152 Aug 13 01:09:40.083832 kernel: pcpu-alloc: [0] 0 1 Aug 13 01:09:40.083850 kernel: kvm-guest: PV spinlocks enabled Aug 13 01:09:40.083865 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Aug 13 01:09:40.083881 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 1932270 Aug 13 01:09:40.083896 kernel: Policy zone: Normal Aug 13 01:09:40.083914 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=8f8aacd9fbcdd713563d390e899e90bedf5577e4b1b261b4e57687d87edd6b57 Aug 13 01:09:40.083930 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Aug 13 01:09:40.083946 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Aug 13 01:09:40.083961 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Aug 13 01:09:40.083977 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Aug 13 01:09:40.083997 kernel: Memory: 7515428K/7860544K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47488K init, 4092K bss, 344856K reserved, 0K cma-reserved) Aug 13 01:09:40.084013 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Aug 13 01:09:40.084029 kernel: Kernel/User page tables isolation: enabled Aug 13 01:09:40.084045 kernel: ftrace: allocating 34608 entries in 136 pages Aug 13 01:09:40.084062 kernel: ftrace: allocated 136 pages with 2 groups Aug 13 01:09:40.084079 kernel: rcu: Hierarchical RCU implementation. Aug 13 01:09:40.084097 kernel: rcu: RCU event tracing is enabled. Aug 13 01:09:40.084114 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Aug 13 01:09:40.084135 kernel: Rude variant of Tasks RCU enabled. Aug 13 01:09:40.084166 kernel: Tracing variant of Tasks RCU enabled. Aug 13 01:09:40.084184 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Aug 13 01:09:40.084207 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Aug 13 01:09:40.084225 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Aug 13 01:09:40.084242 kernel: Console: colour dummy device 80x25 Aug 13 01:09:40.084259 kernel: printk: console [ttyS0] enabled Aug 13 01:09:40.084277 kernel: ACPI: Core revision 20210730 Aug 13 01:09:40.084294 kernel: APIC: Switch to symmetric I/O mode setup Aug 13 01:09:40.084618 kernel: x2apic enabled Aug 13 01:09:40.084641 kernel: Switched APIC routing to physical x2apic. Aug 13 01:09:40.084658 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1 Aug 13 01:09:40.084810 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Aug 13 01:09:40.084827 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998) Aug 13 01:09:40.084844 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024 Aug 13 01:09:40.084861 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4 Aug 13 01:09:40.084877 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Aug 13 01:09:40.085025 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Aug 13 01:09:40.085041 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Aug 13 01:09:40.085058 kernel: Spectre V2 : Mitigation: IBRS Aug 13 01:09:40.085075 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Aug 13 01:09:40.085092 kernel: RETBleed: Mitigation: IBRS Aug 13 01:09:40.085236 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Aug 13 01:09:40.085252 kernel: Spectre V2 : User space: Mitigation: STIBP via seccomp and prctl Aug 13 01:09:40.085269 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Aug 13 01:09:40.085286 kernel: MDS: Mitigation: Clear CPU buffers Aug 13 
01:09:40.085429 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Aug 13 01:09:40.085460 kernel: ITS: Mitigation: Aligned branch/return thunks Aug 13 01:09:40.085478 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Aug 13 01:09:40.085496 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Aug 13 01:09:40.085513 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Aug 13 01:09:40.085530 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Aug 13 01:09:40.085547 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Aug 13 01:09:40.085565 kernel: Freeing SMP alternatives memory: 32K Aug 13 01:09:40.085582 kernel: pid_max: default: 32768 minimum: 301 Aug 13 01:09:40.085604 kernel: LSM: Security Framework initializing Aug 13 01:09:40.085621 kernel: SELinux: Initializing. Aug 13 01:09:40.085637 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Aug 13 01:09:40.085655 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Aug 13 01:09:40.085673 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0) Aug 13 01:09:40.085690 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. Aug 13 01:09:40.085706 kernel: signal: max sigframe size: 1776 Aug 13 01:09:40.085724 kernel: rcu: Hierarchical SRCU implementation. Aug 13 01:09:40.085741 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Aug 13 01:09:40.085762 kernel: smp: Bringing up secondary CPUs ... Aug 13 01:09:40.085779 kernel: x86: Booting SMP configuration: Aug 13 01:09:40.085796 kernel: .... node #0, CPUs: #1 Aug 13 01:09:40.085813 kernel: kvm-clock: cpu 1, msr 11b19e041, secondary cpu clock Aug 13 01:09:40.085830 kernel: Transient Scheduler Attacks: MDS CPU bug present and SMT on, data leak possible. 
See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Aug 13 01:09:40.085849 kernel: Transient Scheduler Attacks: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Aug 13 01:09:40.085866 kernel: smp: Brought up 1 node, 2 CPUs Aug 13 01:09:40.085883 kernel: smpboot: Max logical packages: 1 Aug 13 01:09:40.085905 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Aug 13 01:09:40.085923 kernel: devtmpfs: initialized Aug 13 01:09:40.085940 kernel: x86/mm: Memory block size: 128MB Aug 13 01:09:40.085958 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes) Aug 13 01:09:40.085975 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Aug 13 01:09:40.085992 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Aug 13 01:09:40.086009 kernel: pinctrl core: initialized pinctrl subsystem Aug 13 01:09:40.086027 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Aug 13 01:09:40.086043 kernel: audit: initializing netlink subsys (disabled) Aug 13 01:09:40.086063 kernel: audit: type=2000 audit(1755047379.093:1): state=initialized audit_enabled=0 res=1 Aug 13 01:09:40.086079 kernel: thermal_sys: Registered thermal governor 'step_wise' Aug 13 01:09:40.086095 kernel: thermal_sys: Registered thermal governor 'user_space' Aug 13 01:09:40.086112 kernel: cpuidle: using governor menu Aug 13 01:09:40.086129 kernel: ACPI: bus type PCI registered Aug 13 01:09:40.086145 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Aug 13 01:09:40.086162 kernel: dca service started, version 1.12.1 Aug 13 01:09:40.086179 kernel: PCI: Using configuration type 1 for base access Aug 13 01:09:40.086198 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Aug 13 01:09:40.086218 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Aug 13 01:09:40.086234 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Aug 13 01:09:40.086250 kernel: ACPI: Added _OSI(Module Device) Aug 13 01:09:40.086266 kernel: ACPI: Added _OSI(Processor Device) Aug 13 01:09:40.086282 kernel: ACPI: Added _OSI(Processor Aggregator Device) Aug 13 01:09:40.086345 kernel: ACPI: Added _OSI(Linux-Dell-Video) Aug 13 01:09:40.086363 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Aug 13 01:09:40.086379 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Aug 13 01:09:40.086396 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Aug 13 01:09:40.086416 kernel: ACPI: Interpreter enabled Aug 13 01:09:40.086433 kernel: ACPI: PM: (supports S0 S3 S5) Aug 13 01:09:40.086458 kernel: ACPI: Using IOAPIC for interrupt routing Aug 13 01:09:40.086475 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Aug 13 01:09:40.086491 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Aug 13 01:09:40.086508 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Aug 13 01:09:40.086761 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Aug 13 01:09:40.086936 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge. 
Aug 13 01:09:40.086963 kernel: PCI host bridge to bus 0000:00 Aug 13 01:09:40.087126 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Aug 13 01:09:40.087279 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Aug 13 01:09:40.087461 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Aug 13 01:09:40.087617 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window] Aug 13 01:09:40.087770 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Aug 13 01:09:40.087967 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Aug 13 01:09:40.088158 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 Aug 13 01:09:40.092556 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Aug 13 01:09:40.092758 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Aug 13 01:09:40.092946 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 Aug 13 01:09:40.093121 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f] Aug 13 01:09:40.093603 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f] Aug 13 01:09:40.094100 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Aug 13 01:09:40.094538 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f] Aug 13 01:09:40.094716 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f] Aug 13 01:09:40.094899 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 Aug 13 01:09:40.095073 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f] Aug 13 01:09:40.095245 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f] Aug 13 01:09:40.095268 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Aug 13 01:09:40.095294 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Aug 13 01:09:40.100993 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Aug 13 01:09:40.101015 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Aug 13 01:09:40.101033 kernel: ACPI: PCI: 
Interrupt link LNKS configured for IRQ 9 Aug 13 01:09:40.101052 kernel: iommu: Default domain type: Translated Aug 13 01:09:40.101070 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Aug 13 01:09:40.101088 kernel: vgaarb: loaded Aug 13 01:09:40.101106 kernel: pps_core: LinuxPPS API ver. 1 registered Aug 13 01:09:40.101123 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Aug 13 01:09:40.101149 kernel: PTP clock support registered Aug 13 01:09:40.101167 kernel: Registered efivars operations Aug 13 01:09:40.101185 kernel: PCI: Using ACPI for IRQ routing Aug 13 01:09:40.101204 kernel: PCI: pci_cache_line_size set to 64 bytes Aug 13 01:09:40.101221 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff] Aug 13 01:09:40.101240 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff] Aug 13 01:09:40.101257 kernel: e820: reserve RAM buffer [mem 0xbd27c000-0xbfffffff] Aug 13 01:09:40.101275 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff] Aug 13 01:09:40.101292 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff] Aug 13 01:09:40.101335 kernel: clocksource: Switched to clocksource kvm-clock Aug 13 01:09:40.101352 kernel: VFS: Disk quotas dquot_6.6.0 Aug 13 01:09:40.101370 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Aug 13 01:09:40.101388 kernel: pnp: PnP ACPI init Aug 13 01:09:40.101407 kernel: pnp: PnP ACPI: found 7 devices Aug 13 01:09:40.101424 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Aug 13 01:09:40.101442 kernel: NET: Registered PF_INET protocol family Aug 13 01:09:40.101469 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Aug 13 01:09:40.101491 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Aug 13 01:09:40.101510 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Aug 13 01:09:40.101528 kernel: TCP established hash table 
entries: 65536 (order: 7, 524288 bytes, linear) Aug 13 01:09:40.101546 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear) Aug 13 01:09:40.101564 kernel: TCP: Hash tables configured (established 65536 bind 65536) Aug 13 01:09:40.101582 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Aug 13 01:09:40.101600 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Aug 13 01:09:40.101618 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Aug 13 01:09:40.101636 kernel: NET: Registered PF_XDP protocol family Aug 13 01:09:40.101839 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Aug 13 01:09:40.101998 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Aug 13 01:09:40.102151 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Aug 13 01:09:40.102563 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window] Aug 13 01:09:40.103039 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Aug 13 01:09:40.103068 kernel: PCI: CLS 0 bytes, default 64 Aug 13 01:09:40.103215 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Aug 13 01:09:40.103241 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB) Aug 13 01:09:40.103259 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Aug 13 01:09:40.103277 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Aug 13 01:09:40.103421 kernel: clocksource: Switched to clocksource tsc Aug 13 01:09:40.103441 kernel: Initialise system trusted keyrings Aug 13 01:09:40.103466 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Aug 13 01:09:40.103483 kernel: Key type asymmetric registered Aug 13 01:09:40.103500 kernel: Asymmetric key parser 'x509' registered Aug 13 01:09:40.103518 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Aug 13 01:09:40.103542 
kernel: io scheduler mq-deadline registered Aug 13 01:09:40.103560 kernel: io scheduler kyber registered Aug 13 01:09:40.103578 kernel: io scheduler bfq registered Aug 13 01:09:40.103596 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Aug 13 01:09:40.103615 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Aug 13 01:09:40.103804 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver Aug 13 01:09:40.103830 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Aug 13 01:09:40.104004 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver Aug 13 01:09:40.104028 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Aug 13 01:09:40.104206 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver Aug 13 01:09:40.104229 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Aug 13 01:09:40.104248 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Aug 13 01:09:40.104266 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Aug 13 01:09:40.104284 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A Aug 13 01:09:40.104317 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A Aug 13 01:09:40.104507 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0) Aug 13 01:09:40.104534 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Aug 13 01:09:40.104556 kernel: i8042: Warning: Keylock active Aug 13 01:09:40.104574 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Aug 13 01:09:40.104592 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Aug 13 01:09:40.104772 kernel: rtc_cmos 00:00: RTC can wake from S4 Aug 13 01:09:40.104932 kernel: rtc_cmos 00:00: registered as rtc0 Aug 13 01:09:40.105090 kernel: rtc_cmos 00:00: setting system clock to 2025-08-13T01:09:39 UTC (1755047379) Aug 13 01:09:40.105245 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Aug 13 01:09:40.105267 kernel: intel_pstate: CPU model 
not supported Aug 13 01:09:40.105289 kernel: pstore: Registered efi as persistent store backend Aug 13 01:09:40.105322 kernel: NET: Registered PF_INET6 protocol family Aug 13 01:09:40.105340 kernel: Segment Routing with IPv6 Aug 13 01:09:40.105358 kernel: In-situ OAM (IOAM) with IPv6 Aug 13 01:09:40.105376 kernel: NET: Registered PF_PACKET protocol family Aug 13 01:09:40.105394 kernel: Key type dns_resolver registered Aug 13 01:09:40.105412 kernel: IPI shorthand broadcast: enabled Aug 13 01:09:40.105429 kernel: sched_clock: Marking stable (753281655, 125862352)->(911321860, -32177853) Aug 13 01:09:40.105457 kernel: registered taskstats version 1 Aug 13 01:09:40.105480 kernel: Loading compiled-in X.509 certificates Aug 13 01:09:40.105497 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Aug 13 01:09:40.105515 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.189-flatcar: 1d5a64b5798e654719a8bd91d683e7e9894bd433' Aug 13 01:09:40.105533 kernel: Key type .fscrypt registered Aug 13 01:09:40.105550 kernel: Key type fscrypt-provisioning registered Aug 13 01:09:40.105568 kernel: pstore: Using crash dump compression: deflate Aug 13 01:09:40.105586 kernel: ima: Allocated hash algorithm: sha1 Aug 13 01:09:40.105604 kernel: ima: No architecture policies found Aug 13 01:09:40.105622 kernel: clk: Disabling unused clocks Aug 13 01:09:40.105644 kernel: Freeing unused kernel image (initmem) memory: 47488K Aug 13 01:09:40.105661 kernel: Write protecting the kernel read-only data: 28672k Aug 13 01:09:40.105679 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Aug 13 01:09:40.105697 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K Aug 13 01:09:40.105714 kernel: Run /init as init process Aug 13 01:09:40.105732 kernel: with arguments: Aug 13 01:09:40.105750 kernel: /init Aug 13 01:09:40.105767 kernel: with environment: Aug 13 01:09:40.105785 kernel: HOME=/ Aug 13 01:09:40.105806 kernel: 
TERM=linux Aug 13 01:09:40.105823 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Aug 13 01:09:40.105845 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Aug 13 01:09:40.105868 systemd[1]: Detected virtualization kvm. Aug 13 01:09:40.105887 systemd[1]: Detected architecture x86-64. Aug 13 01:09:40.105905 systemd[1]: Running in initrd. Aug 13 01:09:40.105923 systemd[1]: No hostname configured, using default hostname. Aug 13 01:09:40.105945 systemd[1]: Hostname set to . Aug 13 01:09:40.105964 systemd[1]: Initializing machine ID from VM UUID. Aug 13 01:09:40.105983 systemd[1]: Queued start job for default target initrd.target. Aug 13 01:09:40.106002 systemd[1]: Started systemd-ask-password-console.path. Aug 13 01:09:40.106020 systemd[1]: Reached target cryptsetup.target. Aug 13 01:09:40.106038 systemd[1]: Reached target paths.target. Aug 13 01:09:40.106057 systemd[1]: Reached target slices.target. Aug 13 01:09:40.106075 systemd[1]: Reached target swap.target. Aug 13 01:09:40.106096 systemd[1]: Reached target timers.target. Aug 13 01:09:40.106116 systemd[1]: Listening on iscsid.socket. Aug 13 01:09:40.106135 systemd[1]: Listening on iscsiuio.socket. Aug 13 01:09:40.106154 systemd[1]: Listening on systemd-journald-audit.socket. Aug 13 01:09:40.106173 systemd[1]: Listening on systemd-journald-dev-log.socket. Aug 13 01:09:40.106191 systemd[1]: Listening on systemd-journald.socket. Aug 13 01:09:40.106210 systemd[1]: Listening on systemd-networkd.socket. Aug 13 01:09:40.106229 systemd[1]: Listening on systemd-udevd-control.socket. Aug 13 01:09:40.106251 systemd[1]: Listening on systemd-udevd-kernel.socket. Aug 13 01:09:40.106271 systemd[1]: Reached target sockets.target. 
Aug 13 01:09:40.124234 systemd[1]: Starting kmod-static-nodes.service... Aug 13 01:09:40.124271 systemd[1]: Finished network-cleanup.service. Aug 13 01:09:40.124292 systemd[1]: Starting systemd-fsck-usr.service... Aug 13 01:09:40.124327 systemd[1]: Starting systemd-journald.service... Aug 13 01:09:40.124347 systemd[1]: Starting systemd-modules-load.service... Aug 13 01:09:40.124370 systemd[1]: Starting systemd-resolved.service... Aug 13 01:09:40.124388 systemd[1]: Starting systemd-vconsole-setup.service... Aug 13 01:09:40.124406 systemd[1]: Finished kmod-static-nodes.service. Aug 13 01:09:40.124424 kernel: audit: type=1130 audit(1755047380.087:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:09:40.124454 systemd[1]: Finished systemd-fsck-usr.service. Aug 13 01:09:40.124475 kernel: audit: type=1130 audit(1755047380.094:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:09:40.124495 systemd[1]: Finished systemd-vconsole-setup.service. Aug 13 01:09:40.124514 kernel: audit: type=1130 audit(1755047380.105:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:09:40.124537 systemd[1]: Starting dracut-cmdline-ask.service... Aug 13 01:09:40.124556 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Aug 13 01:09:40.124582 systemd-journald[190]: Journal started Aug 13 01:09:40.124681 systemd-journald[190]: Runtime Journal (/run/log/journal/70efdf7cf8309d9d0712260f2c22b0ca) is 8.0M, max 148.8M, 140.8M free. 
Aug 13 01:09:40.087000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:40.094000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:40.105000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:40.118533 systemd-modules-load[191]: Inserted module 'overlay'
Aug 13 01:09:40.136576 systemd[1]: Started systemd-journald.service.
Aug 13 01:09:40.136771 kernel: audit: type=1130 audit(1755047380.131:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:40.131000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:40.138000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:40.143341 kernel: audit: type=1130 audit(1755047380.138:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:40.139232 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Aug 13 01:09:40.170694 systemd[1]: Finished dracut-cmdline-ask.service.
Aug 13 01:09:40.201597 kernel: audit: type=1130 audit(1755047380.173:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:40.201636 kernel: audit: type=1130 audit(1755047380.194:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:40.173000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:40.194000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:40.175851 systemd[1]: Starting dracut-cmdline.service...
Aug 13 01:09:40.208454 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Aug 13 01:09:40.186875 systemd-resolved[192]: Positive Trust Anchors:
Aug 13 01:09:40.212460 dracut-cmdline[206]: dracut-dracut-053
Aug 13 01:09:40.217426 kernel: Bridge firewalling registered
Aug 13 01:09:40.186895 systemd-resolved[192]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 13 01:09:40.221429 dracut-cmdline[206]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=8f8aacd9fbcdd713563d390e899e90bedf5577e4b1b261b4e57687d87edd6b57
Aug 13 01:09:40.186957 systemd-resolved[192]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Aug 13 01:09:40.191628 systemd-resolved[192]: Defaulting to hostname 'linux'.
Aug 13 01:09:40.193714 systemd[1]: Started systemd-resolved.service.
Aug 13 01:09:40.254444 kernel: SCSI subsystem initialized
Aug 13 01:09:40.195515 systemd[1]: Reached target nss-lookup.target.
Aug 13 01:09:40.216650 systemd-modules-load[191]: Inserted module 'br_netfilter'
Aug 13 01:09:40.275825 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Aug 13 01:09:40.275900 kernel: device-mapper: uevent: version 1.0.3
Aug 13 01:09:40.278859 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Aug 13 01:09:40.283811 systemd-modules-load[191]: Inserted module 'dm_multipath'
Aug 13 01:09:40.285727 systemd[1]: Finished systemd-modules-load.service.
Aug 13 01:09:40.292000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:40.294644 systemd[1]: Starting systemd-sysctl.service...
Aug 13 01:09:40.306459 kernel: audit: type=1130 audit(1755047380.292:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:40.306728 systemd[1]: Finished systemd-sysctl.service.
Aug 13 01:09:40.319470 kernel: audit: type=1130 audit(1755047380.309:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:40.309000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:40.328332 kernel: Loading iSCSI transport class v2.0-870.
Aug 13 01:09:40.349343 kernel: iscsi: registered transport (tcp)
Aug 13 01:09:40.377660 kernel: iscsi: registered transport (qla4xxx)
Aug 13 01:09:40.377760 kernel: QLogic iSCSI HBA Driver
Aug 13 01:09:40.422907 systemd[1]: Finished dracut-cmdline.service.
Aug 13 01:09:40.421000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:40.425144 systemd[1]: Starting dracut-pre-udev.service...
Aug 13 01:09:40.482365 kernel: raid6: avx2x4 gen() 18418 MB/s
Aug 13 01:09:40.499346 kernel: raid6: avx2x4 xor() 7825 MB/s
Aug 13 01:09:40.516342 kernel: raid6: avx2x2 gen() 18394 MB/s
Aug 13 01:09:40.533339 kernel: raid6: avx2x2 xor() 18593 MB/s
Aug 13 01:09:40.550357 kernel: raid6: avx2x1 gen() 14261 MB/s
Aug 13 01:09:40.567346 kernel: raid6: avx2x1 xor() 15994 MB/s
Aug 13 01:09:40.584374 kernel: raid6: sse2x4 gen() 11062 MB/s
Aug 13 01:09:40.601347 kernel: raid6: sse2x4 xor() 6610 MB/s
Aug 13 01:09:40.618342 kernel: raid6: sse2x2 gen() 11988 MB/s
Aug 13 01:09:40.635347 kernel: raid6: sse2x2 xor() 7429 MB/s
Aug 13 01:09:40.652335 kernel: raid6: sse2x1 gen() 10558 MB/s
Aug 13 01:09:40.669841 kernel: raid6: sse2x1 xor() 5196 MB/s
Aug 13 01:09:40.669877 kernel: raid6: using algorithm avx2x4 gen() 18418 MB/s
Aug 13 01:09:40.669898 kernel: raid6: .... xor() 7825 MB/s, rmw enabled
Aug 13 01:09:40.670622 kernel: raid6: using avx2x2 recovery algorithm
Aug 13 01:09:40.686343 kernel: xor: automatically using best checksumming function avx
Aug 13 01:09:40.795440 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
Aug 13 01:09:40.807126 systemd[1]: Finished dracut-pre-udev.service.
Aug 13 01:09:40.809000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:40.810000 audit: BPF prog-id=7 op=LOAD
Aug 13 01:09:40.810000 audit: BPF prog-id=8 op=LOAD
Aug 13 01:09:40.812249 systemd[1]: Starting systemd-udevd.service...
Aug 13 01:09:40.829844 systemd-udevd[389]: Using default interface naming scheme 'v252'.
Aug 13 01:09:40.837078 systemd[1]: Started systemd-udevd.service.
Aug 13 01:09:40.837000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:40.843830 systemd[1]: Starting dracut-pre-trigger.service...
Aug 13 01:09:40.864383 dracut-pre-trigger[396]: rd.md=0: removing MD RAID activation
Aug 13 01:09:40.904313 systemd[1]: Finished dracut-pre-trigger.service.
Aug 13 01:09:40.907000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:40.910080 systemd[1]: Starting systemd-udev-trigger.service...
Aug 13 01:09:40.975537 systemd[1]: Finished systemd-udev-trigger.service.
Aug 13 01:09:40.978000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:41.063325 kernel: scsi host0: Virtio SCSI HBA
Aug 13 01:09:41.070358 kernel: cryptd: max_cpu_qlen set to 1000
Aug 13 01:09:41.089334 kernel: AVX2 version of gcm_enc/dec engaged.
Aug 13 01:09:41.100387 kernel: AES CTR mode by8 optimization enabled
Aug 13 01:09:41.111327 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6
Aug 13 01:09:41.206497 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB)
Aug 13 01:09:41.221487 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks
Aug 13 01:09:41.221664 kernel: sd 0:0:1:0: [sda] Write Protect is off
Aug 13 01:09:41.221806 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08
Aug 13 01:09:41.221944 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Aug 13 01:09:41.222080 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Aug 13 01:09:41.222095 kernel: GPT:17805311 != 25165823
Aug 13 01:09:41.222117 kernel: GPT:Alternate GPT header not at the end of the disk.
Aug 13 01:09:41.222130 kernel: GPT:17805311 != 25165823
Aug 13 01:09:41.222143 kernel: GPT: Use GNU Parted to correct GPT errors.
Aug 13 01:09:41.222156 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Aug 13 01:09:41.222171 kernel: sd 0:0:1:0: [sda] Attached SCSI disk
Aug 13 01:09:41.263389 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Aug 13 01:09:41.269494 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (440)
Aug 13 01:09:41.273460 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Aug 13 01:09:41.284811 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Aug 13 01:09:41.304000 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Aug 13 01:09:41.313614 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Aug 13 01:09:41.315017 systemd[1]: Starting disk-uuid.service...
Aug 13 01:09:41.338340 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Aug 13 01:09:41.338540 disk-uuid[517]: Primary Header is updated.
Aug 13 01:09:41.338540 disk-uuid[517]: Secondary Entries is updated.
Aug 13 01:09:41.338540 disk-uuid[517]: Secondary Header is updated.
Aug 13 01:09:42.364047 disk-uuid[518]: The operation has completed successfully.
Aug 13 01:09:42.370451 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Aug 13 01:09:42.430244 systemd[1]: disk-uuid.service: Deactivated successfully.
Aug 13 01:09:42.436000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:42.436000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:42.430411 systemd[1]: Finished disk-uuid.service.
Aug 13 01:09:42.448809 systemd[1]: Starting verity-setup.service...
Aug 13 01:09:42.477809 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Aug 13 01:09:42.550846 systemd[1]: Found device dev-mapper-usr.device.
Aug 13 01:09:42.552550 systemd[1]: Mounting sysusr-usr.mount...
Aug 13 01:09:42.571930 systemd[1]: Finished verity-setup.service.
Aug 13 01:09:42.585000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:42.654333 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Aug 13 01:09:42.655105 systemd[1]: Mounted sysusr-usr.mount.
Aug 13 01:09:42.662706 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Aug 13 01:09:42.699523 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 01:09:42.699554 kernel: BTRFS info (device sda6): using free space tree
Aug 13 01:09:42.699569 kernel: BTRFS info (device sda6): has skinny extents
Aug 13 01:09:42.663718 systemd[1]: Starting ignition-setup.service...
Aug 13 01:09:42.713328 kernel: BTRFS info (device sda6): enabling ssd optimizations
Aug 13 01:09:42.720645 systemd[1]: Starting parse-ip-for-networkd.service...
Aug 13 01:09:42.732501 systemd[1]: mnt-oem.mount: Deactivated successfully.
Aug 13 01:09:42.751956 systemd[1]: Finished ignition-setup.service.
Aug 13 01:09:42.763000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:42.765869 systemd[1]: Starting ignition-fetch-offline.service...
Aug 13 01:09:42.836675 systemd[1]: Finished parse-ip-for-networkd.service.
Aug 13 01:09:42.844000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:42.845000 audit: BPF prog-id=9 op=LOAD
Aug 13 01:09:42.847916 systemd[1]: Starting systemd-networkd.service...
Aug 13 01:09:42.885703 systemd-networkd[692]: lo: Link UP
Aug 13 01:09:42.885716 systemd-networkd[692]: lo: Gained carrier
Aug 13 01:09:42.900000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:42.887135 systemd-networkd[692]: Enumeration completed
Aug 13 01:09:42.887328 systemd[1]: Started systemd-networkd.service.
Aug 13 01:09:42.887603 systemd-networkd[692]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 13 01:09:42.937000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:42.890062 systemd-networkd[692]: eth0: Link UP
Aug 13 01:09:42.962451 iscsid[701]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Aug 13 01:09:42.962451 iscsid[701]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log
Aug 13 01:09:42.962451 iscsid[701]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier].
Aug 13 01:09:42.962451 iscsid[701]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Aug 13 01:09:42.962451 iscsid[701]: If using hardware iscsi like qla4xxx this message can be ignored.
Aug 13 01:09:42.962451 iscsid[701]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Aug 13 01:09:42.962451 iscsid[701]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Aug 13 01:09:43.064000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:42.890070 systemd-networkd[692]: eth0: Gained carrier
Aug 13 01:09:43.083000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:43.013365 ignition[620]: Ignition 2.14.0
Aug 13 01:09:42.900446 systemd-networkd[692]: eth0: DHCPv4 address 10.128.0.44/32, gateway 10.128.0.1 acquired from 169.254.169.254
Aug 13 01:09:43.013382 ignition[620]: Stage: fetch-offline
Aug 13 01:09:42.901769 systemd[1]: Reached target network.target.
Aug 13 01:09:43.013459 ignition[620]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Aug 13 01:09:42.917591 systemd[1]: Starting iscsiuio.service...
Aug 13 01:09:43.013500 ignition[620]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6
Aug 13 01:09:42.930635 systemd[1]: Started iscsiuio.service.
Aug 13 01:09:43.163000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:43.029915 ignition[620]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Aug 13 01:09:43.179000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:42.940004 systemd[1]: Starting iscsid.service...
Aug 13 01:09:43.030150 ignition[620]: parsed url from cmdline: ""
Aug 13 01:09:43.046696 systemd[1]: Started iscsid.service.
Aug 13 01:09:43.030157 ignition[620]: no config URL provided
Aug 13 01:09:43.065882 systemd[1]: Finished ignition-fetch-offline.service.
Aug 13 01:09:43.030177 ignition[620]: reading system config file "/usr/lib/ignition/user.ign"
Aug 13 01:09:43.085809 systemd[1]: Starting dracut-initqueue.service...
Aug 13 01:09:43.030190 ignition[620]: no config at "/usr/lib/ignition/user.ign"
Aug 13 01:09:43.281000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:43.106597 systemd[1]: Starting ignition-fetch.service...
Aug 13 01:09:43.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:43.030199 ignition[620]: failed to fetch config: resource requires networking
Aug 13 01:09:43.140239 unknown[711]: fetched base config from "system"
Aug 13 01:09:43.030400 ignition[620]: Ignition finished successfully
Aug 13 01:09:43.336000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:43.140249 unknown[711]: fetched base config from "system"
Aug 13 01:09:43.117874 ignition[711]: Ignition 2.14.0
Aug 13 01:09:43.140260 unknown[711]: fetched user config from "gcp"
Aug 13 01:09:43.117885 ignition[711]: Stage: fetch
Aug 13 01:09:43.149875 systemd[1]: Finished dracut-initqueue.service.
Aug 13 01:09:43.118013 ignition[711]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Aug 13 01:09:43.164801 systemd[1]: Finished ignition-fetch.service.
Aug 13 01:09:43.118043 ignition[711]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6
Aug 13 01:09:43.180730 systemd[1]: Reached target remote-fs-pre.target.
Aug 13 01:09:43.128792 ignition[711]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Aug 13 01:09:43.198460 systemd[1]: Reached target remote-cryptsetup.target.
Aug 13 01:09:43.129005 ignition[711]: parsed url from cmdline: ""
Aug 13 01:09:43.213451 systemd[1]: Reached target remote-fs.target.
Aug 13 01:09:43.129010 ignition[711]: no config URL provided
Aug 13 01:09:43.227627 systemd[1]: Starting dracut-pre-mount.service...
Aug 13 01:09:43.129017 ignition[711]: reading system config file "/usr/lib/ignition/user.ign"
Aug 13 01:09:43.250529 systemd[1]: Starting ignition-kargs.service...
Aug 13 01:09:43.129027 ignition[711]: no config at "/usr/lib/ignition/user.ign"
Aug 13 01:09:43.260968 systemd[1]: Finished dracut-pre-mount.service.
Aug 13 01:09:43.129064 ignition[711]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1
Aug 13 01:09:43.282830 systemd[1]: Finished ignition-kargs.service.
Aug 13 01:09:43.132487 ignition[711]: GET result: OK
Aug 13 01:09:43.299673 systemd[1]: Starting ignition-disks.service...
Aug 13 01:09:43.132576 ignition[711]: parsing config with SHA512: 8165e982384c13257f759e3c18693f4c3f3767c3773ba6b45de1f78e9a0036bdee318d4b52ef8e531cf55c2b17b9e352cc08f78ff23f84f6c1ef7ccc266a9b2a
Aug 13 01:09:43.322922 systemd[1]: Finished ignition-disks.service.
Aug 13 01:09:43.141173 ignition[711]: fetch: fetch complete
Aug 13 01:09:43.337834 systemd[1]: Reached target initrd-root-device.target.
Aug 13 01:09:43.141180 ignition[711]: fetch: fetch passed
Aug 13 01:09:43.353464 systemd[1]: Reached target local-fs-pre.target.
Aug 13 01:09:43.141233 ignition[711]: Ignition finished successfully
Aug 13 01:09:43.366460 systemd[1]: Reached target local-fs.target.
Aug 13 01:09:43.263944 ignition[722]: Ignition 2.14.0
Aug 13 01:09:43.378459 systemd[1]: Reached target sysinit.target.
Aug 13 01:09:43.263952 ignition[722]: Stage: kargs
Aug 13 01:09:43.391434 systemd[1]: Reached target basic.target.
Aug 13 01:09:43.264086 ignition[722]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Aug 13 01:09:43.403637 systemd[1]: Starting systemd-fsck-root.service...
Aug 13 01:09:43.264117 ignition[722]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6
Aug 13 01:09:43.271193 ignition[722]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Aug 13 01:09:43.272538 ignition[722]: kargs: kargs passed
Aug 13 01:09:43.272588 ignition[722]: Ignition finished successfully
Aug 13 01:09:43.311691 ignition[728]: Ignition 2.14.0
Aug 13 01:09:43.311702 ignition[728]: Stage: disks
Aug 13 01:09:43.311835 ignition[728]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Aug 13 01:09:43.311867 ignition[728]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6
Aug 13 01:09:43.319551 ignition[728]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Aug 13 01:09:43.320934 ignition[728]: disks: disks passed
Aug 13 01:09:43.320984 ignition[728]: Ignition finished successfully
Aug 13 01:09:43.450673 systemd-fsck[736]: ROOT: clean, 629/1628000 files, 124064/1617920 blocks
Aug 13 01:09:43.646254 systemd[1]: Finished systemd-fsck-root.service.
Aug 13 01:09:43.659000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:43.661640 systemd[1]: Mounting sysroot.mount...
Aug 13 01:09:43.686349 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Aug 13 01:09:43.687037 systemd[1]: Mounted sysroot.mount.
Aug 13 01:09:43.694610 systemd[1]: Reached target initrd-root-fs.target.
Aug 13 01:09:43.704800 systemd[1]: Mounting sysroot-usr.mount...
Aug 13 01:09:43.716109 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Aug 13 01:09:43.716160 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Aug 13 01:09:43.716194 systemd[1]: Reached target ignition-diskful.target.
Aug 13 01:09:43.735807 systemd[1]: Mounted sysroot-usr.mount.
Aug 13 01:09:43.820554 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (742)
Aug 13 01:09:43.820602 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 01:09:43.820625 kernel: BTRFS info (device sda6): using free space tree
Aug 13 01:09:43.820648 kernel: BTRFS info (device sda6): has skinny extents
Aug 13 01:09:43.820677 kernel: BTRFS info (device sda6): enabling ssd optimizations
Aug 13 01:09:43.757441 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Aug 13 01:09:43.810623 systemd[1]: Starting initrd-setup-root.service...
Aug 13 01:09:43.851547 initrd-setup-root[765]: cut: /sysroot/etc/passwd: No such file or directory
Aug 13 01:09:43.830075 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Aug 13 01:09:43.870471 initrd-setup-root[773]: cut: /sysroot/etc/group: No such file or directory
Aug 13 01:09:43.880469 initrd-setup-root[781]: cut: /sysroot/etc/shadow: No such file or directory
Aug 13 01:09:43.890442 initrd-setup-root[789]: cut: /sysroot/etc/gshadow: No such file or directory
Aug 13 01:09:43.903771 systemd[1]: Finished initrd-setup-root.service.
Aug 13 01:09:43.902000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:43.904995 systemd[1]: Starting ignition-mount.service...
Aug 13 01:09:43.932407 systemd[1]: Starting sysroot-boot.service...
Aug 13 01:09:43.940526 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
Aug 13 01:09:43.940657 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
Aug 13 01:09:43.967625 ignition[807]: INFO : Ignition 2.14.0
Aug 13 01:09:43.967625 ignition[807]: INFO : Stage: mount
Aug 13 01:09:43.967625 ignition[807]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Aug 13 01:09:43.967625 ignition[807]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6
Aug 13 01:09:44.067213 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (817)
Aug 13 01:09:44.067245 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 01:09:44.067270 kernel: BTRFS info (device sda6): using free space tree
Aug 13 01:09:44.067285 kernel: BTRFS info (device sda6): has skinny extents
Aug 13 01:09:44.067329 kernel: BTRFS info (device sda6): enabling ssd optimizations
Aug 13 01:09:43.982000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:43.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:44.067473 ignition[807]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Aug 13 01:09:44.067473 ignition[807]: INFO : mount: mount passed
Aug 13 01:09:44.067473 ignition[807]: INFO : Ignition finished successfully
Aug 13 01:09:43.972607 systemd[1]: Finished sysroot-boot.service.
Aug 13 01:09:43.983735 systemd[1]: Finished ignition-mount.service.
Aug 13 01:09:43.999548 systemd[1]: Starting ignition-files.service...
Aug 13 01:09:44.013544 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Aug 13 01:09:44.138490 ignition[836]: INFO : Ignition 2.14.0
Aug 13 01:09:44.138490 ignition[836]: INFO : Stage: files
Aug 13 01:09:44.138490 ignition[836]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Aug 13 01:09:44.138490 ignition[836]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6
Aug 13 01:09:44.138490 ignition[836]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Aug 13 01:09:44.138490 ignition[836]: DEBUG : files: compiled without relabeling support, skipping
Aug 13 01:09:44.138490 ignition[836]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Aug 13 01:09:44.138490 ignition[836]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Aug 13 01:09:44.138490 ignition[836]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Aug 13 01:09:44.138490 ignition[836]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Aug 13 01:09:44.138490 ignition[836]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Aug 13 01:09:44.138490 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/hosts"
Aug 13 01:09:44.138490 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(3): oem config not found in "/usr/share/oem", looking on oem partition
Aug 13 01:09:44.088748 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Aug 13 01:09:44.301438 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1347630520"
Aug 13 01:09:44.301438 ignition[836]: CRITICAL : files: createFilesystemsFiles: createFiles: op(3): op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1347630520": device or resource busy
Aug 13 01:09:44.301438 ignition[836]: ERROR : files: createFilesystemsFiles: createFiles: op(3): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1347630520", trying btrfs: device or resource busy
Aug 13 01:09:44.301438 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1347630520"
Aug 13 01:09:44.301438 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1347630520"
Aug 13 01:09:44.301438 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(6): [started] unmounting "/mnt/oem1347630520"
Aug 13 01:09:44.301438 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(6): [finished] unmounting "/mnt/oem1347630520"
Aug 13 01:09:44.301438 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/hosts"
Aug 13 01:09:44.301438 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Aug 13 01:09:44.301438 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Aug 13 01:09:44.129800 unknown[836]: wrote ssh authorized keys file for user: core
Aug 13 01:09:44.478449 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK
Aug 13 01:09:44.917529 systemd-networkd[692]: eth0: Gained IPv6LL
Aug 13 01:09:46.854620 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Aug 13 01:09:46.871487 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Aug 13 01:09:46.871487 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Aug 13 01:09:47.046269 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK
Aug 13 01:09:47.207636 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Aug 13 01:09:47.223484 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/profile.d/google-cloud-sdk.sh"
Aug 13 01:09:47.223484 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(9): oem config not found in "/usr/share/oem", looking on oem partition
Aug 13 01:09:47.266450 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(9): op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2224894298"
Aug 13 01:09:47.266450 ignition[836]: CRITICAL : files: createFilesystemsFiles: createFiles: op(9): op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2224894298": device or resource busy
Aug 13 01:09:47.266450 ignition[836]: ERROR : files: createFilesystemsFiles: createFiles: op(9): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2224894298", trying btrfs: device or resource busy
Aug 13 01:09:47.266450 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(9): op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2224894298"
Aug 13 01:09:47.266450 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(9): op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2224894298"
Aug 13 01:09:47.266450 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(9): op(c): [started] unmounting "/mnt/oem2224894298"
Aug 13 01:09:47.266450 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(9): op(c): [finished] unmounting "/mnt/oem2224894298"
Aug 13 01:09:47.266450 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/profile.d/google-cloud-sdk.sh"
Aug 13 01:09:47.266450 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/install.sh"
Aug 13 01:09:47.266450 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/install.sh"
Aug 13 01:09:47.266450 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nginx.yaml"
Aug 13 01:09:47.266450 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nginx.yaml"
Aug 13 01:09:47.266450 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 13 01:09:47.266450 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 13 01:09:47.266450 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 13 01:09:47.238999 systemd[1]: mnt-oem2224894298.mount: Deactivated successfully.
Aug 13 01:09:47.516445 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 13 01:09:47.516445 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(11): [started] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 01:09:47.516445 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(11): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 01:09:47.516445 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(12): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Aug 13 01:09:47.516445 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(12): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Aug 13 01:09:47.516445 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(13): [started] writing file "/sysroot/etc/systemd/system/oem-gce-enable-oslogin.service"
Aug 13 01:09:47.516445 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(13): oem config not found in "/usr/share/oem", looking on oem partition
Aug 13 01:09:47.516445 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(14): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem886324527"
Aug 13 01:09:47.516445 ignition[836]: CRITICAL : files: createFilesystemsFiles: createFiles: op(13): op(14): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem886324527": device or resource busy
Aug 13 01:09:47.516445 ignition[836]: ERROR : files: createFilesystemsFiles: createFiles: op(13): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem886324527", trying btrfs: device or resource busy
Aug 13 01:09:47.516445 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(15): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem886324527"
Aug 13 01:09:47.516445 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(15): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem886324527"
Aug 13 01:09:47.516445 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(16): [started] unmounting "/mnt/oem886324527"
Aug 13 01:09:47.516445 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(16): [finished] unmounting "/mnt/oem886324527"
Aug 13 01:09:47.255238 systemd[1]: mnt-oem886324527.mount: Deactivated successfully.
Aug 13 01:09:47.767484 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(13): [finished] writing file "/sysroot/etc/systemd/system/oem-gce-enable-oslogin.service"
Aug 13 01:09:47.767484 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(17): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Aug 13 01:09:47.767484 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(17): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Aug 13 01:09:47.767484 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(17): GET result: OK
Aug 13 01:09:48.037646 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(17): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Aug 13 01:09:48.037646 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(18): [started] writing file "/sysroot/etc/systemd/system/oem-gce.service"
Aug 13 01:09:48.073493 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(18): oem config not found in "/usr/share/oem", looking on oem partition
Aug 13 01:09:48.073493 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(18): op(19): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3236680851"
Aug 13 01:09:48.073493 ignition[836]: CRITICAL : files: createFilesystemsFiles: createFiles: op(18): op(19): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3236680851": device or resource busy
Aug 13 01:09:48.073493 ignition[836]: ERROR : files: createFilesystemsFiles: createFiles: op(18): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3236680851", trying btrfs: device or resource busy
Aug 13 01:09:48.073493 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(18): op(1a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3236680851"
Aug 13 01:09:48.073493 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(18): op(1a): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3236680851"
Aug 13 01:09:48.073493 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(18): op(1b): [started] unmounting "/mnt/oem3236680851"
Aug 13 01:09:48.073493 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(18): op(1b): [finished] unmounting "/mnt/oem3236680851"
Aug 13 01:09:48.073493 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(18): [finished] writing file "/sysroot/etc/systemd/system/oem-gce.service"
Aug 13 01:09:48.073493 ignition[836]: INFO : files: op(1c): [started] processing unit "oem-gce.service"
Aug 13 01:09:48.073493 ignition[836]: INFO : files: op(1c): [finished] processing unit "oem-gce.service"
Aug 13 01:09:48.073493 ignition[836]: INFO : files: op(1d): [started] processing unit "oem-gce-enable-oslogin.service"
Aug 13 01:09:48.073493 ignition[836]: INFO : files: op(1d): [finished] processing unit "oem-gce-enable-oslogin.service"
Aug 13 01:09:48.073493 ignition[836]: INFO : files: op(1e): [started] processing unit "coreos-metadata-sshkeys@.service"
Aug 13 01:09:48.073493 ignition[836]: INFO : files: op(1e): [finished] processing unit "coreos-metadata-sshkeys@.service"
Aug 13 01:09:48.073493 ignition[836]: INFO : files: op(1f): [started] processing unit "prepare-helm.service"
Aug 13 01:09:48.073493 ignition[836]: INFO : files: op(1f): op(20): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 01:09:48.564597 kernel: kauditd_printk_skb: 26 callbacks suppressed
Aug 13 01:09:48.564635 kernel: audit: type=1130 audit(1755047388.072:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:48.564653 kernel: audit: type=1130 audit(1755047388.180:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:48.564669 kernel: audit: type=1130 audit(1755047388.222:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:48.564694 kernel: audit: type=1131 audit(1755047388.222:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:48.564718 kernel: audit: type=1130 audit(1755047388.339:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:48.564733 kernel: audit: type=1131 audit(1755047388.360:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:48.564747 kernel: audit: type=1130 audit(1755047388.459:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:48.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:48.180000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:48.222000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:48.222000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:48.339000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:48.360000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:48.459000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:48.071045 systemd[1]: Finished ignition-files.service.
Aug 13 01:09:48.613471 kernel: audit: type=1131 audit(1755047388.585:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:48.585000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:48.613592 ignition[836]: INFO : files: op(1f): op(20): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 01:09:48.613592 ignition[836]: INFO : files: op(1f): [finished] processing unit "prepare-helm.service"
Aug 13 01:09:48.613592 ignition[836]: INFO : files: op(21): [started] setting preset to enabled for "oem-gce.service"
Aug 13 01:09:48.613592 ignition[836]: INFO : files: op(21): [finished] setting preset to enabled for "oem-gce.service"
Aug 13 01:09:48.613592 ignition[836]: INFO : files: op(22): [started] setting preset to enabled for "oem-gce-enable-oslogin.service"
Aug 13 01:09:48.613592 ignition[836]: INFO : files: op(22): [finished] setting preset to enabled for "oem-gce-enable-oslogin.service"
Aug 13 01:09:48.613592 ignition[836]: INFO : files: op(23): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Aug 13 01:09:48.613592 ignition[836]: INFO : files: op(23): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Aug 13 01:09:48.613592 ignition[836]: INFO : files: op(24): [started] setting preset to enabled for "prepare-helm.service"
Aug 13 01:09:48.613592 ignition[836]: INFO : files: op(24): [finished] setting preset to enabled for "prepare-helm.service"
Aug 13 01:09:48.613592 ignition[836]: INFO : files: createResultFile: createFiles: op(25): [started] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 01:09:48.613592 ignition[836]: INFO : files: createResultFile: createFiles: op(25): [finished] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 01:09:48.613592 ignition[836]: INFO : files: files passed
Aug 13 01:09:48.613592 ignition[836]: INFO : Ignition finished successfully
Aug 13 01:09:48.907480 kernel: audit: type=1131 audit(1755047388.843:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:48.843000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:48.084126 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Aug 13 01:09:48.945515 kernel: audit: type=1131 audit(1755047388.914:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:48.914000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:48.128517 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Aug 13 01:09:48.954000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:48.972541 initrd-setup-root-after-ignition[859]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 01:09:48.979000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:48.129733 systemd[1]: Starting ignition-quench.service...
Aug 13 01:09:48.152821 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Aug 13 01:09:48.181905 systemd[1]: ignition-quench.service: Deactivated successfully.
Aug 13 01:09:48.182067 systemd[1]: Finished ignition-quench.service.
Aug 13 01:09:48.223862 systemd[1]: Reached target ignition-complete.target.
Aug 13 01:09:48.286650 systemd[1]: Starting initrd-parse-etc.service...
Aug 13 01:09:49.064000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:48.318321 systemd[1]: mnt-oem3236680851.mount: Deactivated successfully.
Aug 13 01:09:49.080000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:49.089630 ignition[874]: INFO : Ignition 2.14.0
Aug 13 01:09:49.089630 ignition[874]: INFO : Stage: umount
Aug 13 01:09:49.089630 ignition[874]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Aug 13 01:09:49.089630 ignition[874]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6
Aug 13 01:09:49.089630 ignition[874]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Aug 13 01:09:49.089630 ignition[874]: INFO : umount: umount passed
Aug 13 01:09:49.089630 ignition[874]: INFO : Ignition finished successfully
Aug 13 01:09:49.095000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:49.111000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:49.125000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:49.144000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:49.171000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:49.190000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:48.332080 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Aug 13 01:09:49.220000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:48.332203 systemd[1]: Finished initrd-parse-etc.service.
Aug 13 01:09:48.362139 systemd[1]: Reached target initrd-fs.target.
Aug 13 01:09:48.395661 systemd[1]: Reached target initrd.target.
Aug 13 01:09:48.417751 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Aug 13 01:09:48.419074 systemd[1]: Starting dracut-pre-pivot.service...
Aug 13 01:09:48.441793 systemd[1]: Finished dracut-pre-pivot.service.
Aug 13 01:09:48.461917 systemd[1]: Starting initrd-cleanup.service...
Aug 13 01:09:49.322000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:48.502745 systemd[1]: Stopped target nss-lookup.target.
Aug 13 01:09:49.337000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:48.510836 systemd[1]: Stopped target remote-cryptsetup.target.
Aug 13 01:09:48.528854 systemd[1]: Stopped target timers.target.
Aug 13 01:09:48.547802 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Aug 13 01:09:49.381000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:48.548022 systemd[1]: Stopped dracut-pre-pivot.service.
Aug 13 01:09:49.397000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:48.587047 systemd[1]: Stopped target initrd.target.
Aug 13 01:09:49.413000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:49.413000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:49.413000 audit: BPF prog-id=6 op=UNLOAD
Aug 13 01:09:48.620724 systemd[1]: Stopped target basic.target.
Aug 13 01:09:48.644624 systemd[1]: Stopped target ignition-complete.target.
Aug 13 01:09:48.663608 systemd[1]: Stopped target ignition-diskful.target.
Aug 13 01:09:49.448000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:48.683610 systemd[1]: Stopped target initrd-root-device.target.
Aug 13 01:09:49.470000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:48.703654 systemd[1]: Stopped target remote-fs.target.
Aug 13 01:09:49.486000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:48.725693 systemd[1]: Stopped target remote-fs-pre.target.
Aug 13 01:09:48.747714 systemd[1]: Stopped target sysinit.target.
Aug 13 01:09:49.516000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:48.761898 systemd[1]: Stopped target local-fs.target.
Aug 13 01:09:48.789728 systemd[1]: Stopped target local-fs-pre.target.
Aug 13 01:09:48.810691 systemd[1]: Stopped target swap.target.
Aug 13 01:09:49.562000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:48.829672 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Aug 13 01:09:49.577000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:48.829882 systemd[1]: Stopped dracut-pre-mount.service.
Aug 13 01:09:49.593000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:48.844956 systemd[1]: Stopped target cryptsetup.target.
Aug 13 01:09:48.899751 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Aug 13 01:09:48.899967 systemd[1]: Stopped dracut-initqueue.service.
Aug 13 01:09:49.622000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:48.915945 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Aug 13 01:09:49.644000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:48.916228 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Aug 13 01:09:49.662000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:48.955848 systemd[1]: ignition-files.service: Deactivated successfully.
Aug 13 01:09:49.678000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:48.956038 systemd[1]: Stopped ignition-files.service.
Aug 13 01:09:49.694000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:49.694000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:48.982254 systemd[1]: Stopping ignition-mount.service...
Aug 13 01:09:49.018964 systemd[1]: Stopping iscsiuio.service...
Aug 13 01:09:49.034995 systemd[1]: Stopping sysroot-boot.service...
Aug 13 01:09:49.056468 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Aug 13 01:09:49.056774 systemd[1]: Stopped systemd-udev-trigger.service.
Aug 13 01:09:49.065721 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Aug 13 01:09:49.774428 systemd-journald[190]: Received SIGTERM from PID 1 (n/a).
Aug 13 01:09:49.774496 iscsid[701]: iscsid shutting down.
Aug 13 01:09:49.065893 systemd[1]: Stopped dracut-pre-trigger.service.
Aug 13 01:09:49.085221 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Aug 13 01:09:49.086356 systemd[1]: iscsiuio.service: Deactivated successfully.
Aug 13 01:09:49.086473 systemd[1]: Stopped iscsiuio.service.
Aug 13 01:09:49.097083 systemd[1]: ignition-mount.service: Deactivated successfully.
Aug 13 01:09:49.097189 systemd[1]: Stopped ignition-mount.service.
Aug 13 01:09:49.112995 systemd[1]: sysroot-boot.service: Deactivated successfully.
Aug 13 01:09:49.113102 systemd[1]: Stopped sysroot-boot.service.
Aug 13 01:09:49.127128 systemd[1]: ignition-disks.service: Deactivated successfully.
Aug 13 01:09:49.127271 systemd[1]: Stopped ignition-disks.service.
Aug 13 01:09:49.145534 systemd[1]: ignition-kargs.service: Deactivated successfully.
Aug 13 01:09:49.145617 systemd[1]: Stopped ignition-kargs.service.
Aug 13 01:09:49.172550 systemd[1]: ignition-fetch.service: Deactivated successfully.
Aug 13 01:09:49.172625 systemd[1]: Stopped ignition-fetch.service.
Aug 13 01:09:49.191574 systemd[1]: Stopped target network.target.
Aug 13 01:09:49.205444 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Aug 13 01:09:49.205553 systemd[1]: Stopped ignition-fetch-offline.service.
Aug 13 01:09:49.221527 systemd[1]: Stopped target paths.target.
Aug 13 01:09:49.235410 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Aug 13 01:09:49.237393 systemd[1]: Stopped systemd-ask-password-console.path.
Aug 13 01:09:49.250416 systemd[1]: Stopped target slices.target.
Aug 13 01:09:49.262404 systemd[1]: Stopped target sockets.target.
Aug 13 01:09:49.275508 systemd[1]: iscsid.socket: Deactivated successfully.
Aug 13 01:09:49.275588 systemd[1]: Closed iscsid.socket.
Aug 13 01:09:49.293514 systemd[1]: iscsiuio.socket: Deactivated successfully.
Aug 13 01:09:49.293592 systemd[1]: Closed iscsiuio.socket.
Aug 13 01:09:49.308489 systemd[1]: ignition-setup.service: Deactivated successfully.
Aug 13 01:09:49.308578 systemd[1]: Stopped ignition-setup.service.
Aug 13 01:09:49.323525 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Aug 13 01:09:49.323611 systemd[1]: Stopped initrd-setup-root.service.
Aug 13 01:09:49.338729 systemd[1]: Stopping systemd-networkd.service...
Aug 13 01:09:49.342379 systemd-networkd[692]: eth0: DHCPv6 lease lost
Aug 13 01:09:49.353703 systemd[1]: Stopping systemd-resolved.service...
Aug 13 01:09:49.363114 systemd[1]: systemd-resolved.service: Deactivated successfully.
Aug 13 01:09:49.363233 systemd[1]: Stopped systemd-resolved.service.
Aug 13 01:09:49.383140 systemd[1]: systemd-networkd.service: Deactivated successfully.
Aug 13 01:09:49.383270 systemd[1]: Stopped systemd-networkd.service.
Aug 13 01:09:49.399176 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Aug 13 01:09:49.399283 systemd[1]: Finished initrd-cleanup.service.
Aug 13 01:09:49.415688 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Aug 13 01:09:49.415737 systemd[1]: Closed systemd-networkd.socket.
Aug 13 01:09:49.430538 systemd[1]: Stopping network-cleanup.service...
Aug 13 01:09:49.436583 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Aug 13 01:09:49.436660 systemd[1]: Stopped parse-ip-for-networkd.service.
Aug 13 01:09:49.449738 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Aug 13 01:09:49.449807 systemd[1]: Stopped systemd-sysctl.service.
Aug 13 01:09:49.471679 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Aug 13 01:09:49.471742 systemd[1]: Stopped systemd-modules-load.service.
Aug 13 01:09:49.487805 systemd[1]: Stopping systemd-udevd.service...
Aug 13 01:09:49.503118 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Aug 13 01:09:49.503818 systemd[1]: systemd-udevd.service: Deactivated successfully.
Aug 13 01:09:49.503973 systemd[1]: Stopped systemd-udevd.service.
Aug 13 01:09:49.518906 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Aug 13 01:09:49.518994 systemd[1]: Closed systemd-udevd-control.socket.
Aug 13 01:09:49.533497 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Aug 13 01:09:49.533566 systemd[1]: Closed systemd-udevd-kernel.socket.
Aug 13 01:09:49.548451 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Aug 13 01:09:49.548536 systemd[1]: Stopped dracut-pre-udev.service.
Aug 13 01:09:49.563560 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Aug 13 01:09:49.563630 systemd[1]: Stopped dracut-cmdline.service.
Aug 13 01:09:49.578532 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 13 01:09:49.776000 audit: BPF prog-id=9 op=UNLOAD
Aug 13 01:09:49.578605 systemd[1]: Stopped dracut-cmdline-ask.service.
Aug 13 01:09:49.595570 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Aug 13 01:09:49.608762 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Aug 13 01:09:49.608836 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service.
Aug 13 01:09:49.623769 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Aug 13 01:09:49.623830 systemd[1]: Stopped kmod-static-nodes.service.
Aug 13 01:09:49.645625 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 01:09:49.645694 systemd[1]: Stopped systemd-vconsole-setup.service.
Aug 13 01:09:49.664949 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Aug 13 01:09:49.665632 systemd[1]: network-cleanup.service: Deactivated successfully.
Aug 13 01:09:49.665740 systemd[1]: Stopped network-cleanup.service.
Aug 13 01:09:49.679807 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Aug 13 01:09:49.679928 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Aug 13 01:09:49.695746 systemd[1]: Reached target initrd-switch-root.target.
Aug 13 01:09:49.711533 systemd[1]: Starting initrd-switch-root.service...
Aug 13 01:09:49.732590 systemd[1]: Switching root.
Aug 13 01:09:49.778002 systemd-journald[190]: Journal stopped
Aug 13 01:09:54.293266 kernel: SELinux: Class mctp_socket not defined in policy.
Aug 13 01:09:54.293422 kernel: SELinux: Class anon_inode not defined in policy.
Aug 13 01:09:54.293455 kernel: SELinux: the above unknown classes and permissions will be allowed
Aug 13 01:09:54.293479 kernel: SELinux: policy capability network_peer_controls=1
Aug 13 01:09:54.293503 kernel: SELinux: policy capability open_perms=1
Aug 13 01:09:54.293527 kernel: SELinux: policy capability extended_socket_class=1
Aug 13 01:09:54.293557 kernel: SELinux: policy capability always_check_network=0
Aug 13 01:09:54.293583 kernel: SELinux: policy capability cgroup_seclabel=1
Aug 13 01:09:54.293606 kernel: SELinux: policy capability nnp_nosuid_transition=1
Aug 13 01:09:54.293629 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Aug 13 01:09:54.293652 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Aug 13 01:09:54.293680 systemd[1]: Successfully loaded SELinux policy in 109.338ms.
Aug 13 01:09:54.293724 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.584ms.
Aug 13 01:09:54.293750 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Aug 13 01:09:54.293776 systemd[1]: Detected virtualization kvm.
Aug 13 01:09:54.293804 systemd[1]: Detected architecture x86-64.
Aug 13 01:09:54.293829 systemd[1]: Detected first boot.
Aug 13 01:09:54.293852 systemd[1]: Initializing machine ID from VM UUID.
Aug 13 01:09:54.293877 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Aug 13 01:09:54.293907 systemd[1]: Populated /etc with preset unit settings.
Aug 13 01:09:54.293932 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Aug 13 01:09:54.293963 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Aug 13 01:09:54.293994 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 13 01:09:54.294025 kernel: kauditd_printk_skb: 48 callbacks suppressed
Aug 13 01:09:54.294047 kernel: audit: type=1334 audit(1755047393.403:88): prog-id=12 op=LOAD
Aug 13 01:09:54.294070 kernel: audit: type=1334 audit(1755047393.403:89): prog-id=3 op=UNLOAD
Aug 13 01:09:54.294092 kernel: audit: type=1334 audit(1755047393.415:90): prog-id=13 op=LOAD
Aug 13 01:09:54.294114 kernel: audit: type=1334 audit(1755047393.429:91): prog-id=14 op=LOAD
Aug 13 01:09:54.294137 kernel: audit: type=1334 audit(1755047393.429:92): prog-id=4 op=UNLOAD
Aug 13 01:09:54.294161 kernel: audit: type=1334 audit(1755047393.429:93): prog-id=5 op=UNLOAD
Aug 13 01:09:54.294188 kernel: audit: type=1334 audit(1755047393.436:94): prog-id=15 op=LOAD
Aug 13 01:09:54.294211 kernel: audit: type=1334 audit(1755047393.436:95): prog-id=12 op=UNLOAD
Aug 13 01:09:54.294236 kernel: audit: type=1334 audit(1755047393.443:96): prog-id=16 op=LOAD
Aug 13 01:09:54.294257 kernel: audit: type=1334 audit(1755047393.450:97): prog-id=17 op=LOAD
Aug 13 01:09:54.294280 systemd[1]: iscsid.service: Deactivated successfully.
Aug 13 01:09:54.298521 systemd[1]: Stopped iscsid.service.
Aug 13 01:09:54.298565 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Aug 13 01:09:54.298591 systemd[1]: Stopped initrd-switch-root.service.
Aug 13 01:09:54.298616 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Aug 13 01:09:54.298648 systemd[1]: Created slice system-addon\x2dconfig.slice.
Aug 13 01:09:54.298673 systemd[1]: Created slice system-addon\x2drun.slice.
Aug 13 01:09:54.298698 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
Aug 13 01:09:54.298722 systemd[1]: Created slice system-getty.slice.
Aug 13 01:09:54.298746 systemd[1]: Created slice system-modprobe.slice.
Aug 13 01:09:54.298771 systemd[1]: Created slice system-serial\x2dgetty.slice.
Aug 13 01:09:54.298797 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Aug 13 01:09:54.298821 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Aug 13 01:09:54.298849 systemd[1]: Created slice user.slice.
Aug 13 01:09:54.298873 systemd[1]: Started systemd-ask-password-console.path.
Aug 13 01:09:54.298897 systemd[1]: Started systemd-ask-password-wall.path.
Aug 13 01:09:54.298921 systemd[1]: Set up automount boot.automount.
Aug 13 01:09:54.298947 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Aug 13 01:09:54.298972 systemd[1]: Stopped target initrd-switch-root.target.
Aug 13 01:09:54.298996 systemd[1]: Stopped target initrd-fs.target.
Aug 13 01:09:54.299020 systemd[1]: Stopped target initrd-root-fs.target.
Aug 13 01:09:54.299044 systemd[1]: Reached target integritysetup.target.
Aug 13 01:09:54.299072 systemd[1]: Reached target remote-cryptsetup.target.
Aug 13 01:09:54.299106 systemd[1]: Reached target remote-fs.target.
Aug 13 01:09:54.299130 systemd[1]: Reached target slices.target.
Aug 13 01:09:54.299154 systemd[1]: Reached target swap.target.
Aug 13 01:09:54.299178 systemd[1]: Reached target torcx.target.
Aug 13 01:09:54.299202 systemd[1]: Reached target veritysetup.target.
Aug 13 01:09:54.299225 systemd[1]: Listening on systemd-coredump.socket.
Aug 13 01:09:54.299250 systemd[1]: Listening on systemd-initctl.socket.
Aug 13 01:09:54.299273 systemd[1]: Listening on systemd-networkd.socket.
Aug 13 01:09:54.299316 systemd[1]: Listening on systemd-udevd-control.socket.
Aug 13 01:09:54.299340 systemd[1]: Listening on systemd-udevd-kernel.socket.
Aug 13 01:09:54.299365 systemd[1]: Listening on systemd-userdbd.socket.
Aug 13 01:09:54.299389 systemd[1]: Mounting dev-hugepages.mount...
Aug 13 01:09:54.299416 systemd[1]: Mounting dev-mqueue.mount...
Aug 13 01:09:54.299440 systemd[1]: Mounting media.mount...
Aug 13 01:09:54.299465 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 01:09:54.299491 systemd[1]: Mounting sys-kernel-debug.mount...
Aug 13 01:09:54.299514 systemd[1]: Mounting sys-kernel-tracing.mount...
Aug 13 01:09:54.299538 systemd[1]: Mounting tmp.mount...
Aug 13 01:09:54.299566 systemd[1]: Starting flatcar-tmpfiles.service...
Aug 13 01:09:54.299590 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Aug 13 01:09:54.299614 systemd[1]: Starting kmod-static-nodes.service...
Aug 13 01:09:54.299638 systemd[1]: Starting modprobe@configfs.service...
Aug 13 01:09:54.299661 systemd[1]: Starting modprobe@dm_mod.service...
Aug 13 01:09:54.299685 systemd[1]: Starting modprobe@drm.service...
Aug 13 01:09:54.299708 systemd[1]: Starting modprobe@efi_pstore.service...
Aug 13 01:09:54.299732 systemd[1]: Starting modprobe@fuse.service...
Aug 13 01:09:54.299763 systemd[1]: Starting modprobe@loop.service...
Aug 13 01:09:54.299789 kernel: fuse: init (API version 7.34)
Aug 13 01:09:54.299814 kernel: loop: module loaded
Aug 13 01:09:54.299838 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Aug 13 01:09:54.299862 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Aug 13 01:09:54.299885 systemd[1]: Stopped systemd-fsck-root.service.
Aug 13 01:09:54.299908 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Aug 13 01:09:54.299932 systemd[1]: Stopped systemd-fsck-usr.service.
Aug 13 01:09:54.299957 systemd[1]: Stopped systemd-journald.service.
Aug 13 01:09:54.299981 systemd[1]: Starting systemd-journald.service...
Aug 13 01:09:54.300010 systemd[1]: Starting systemd-modules-load.service...
Aug 13 01:09:54.300034 systemd[1]: Starting systemd-network-generator.service...
Aug 13 01:09:54.300062 systemd-journald[998]: Journal started
Aug 13 01:09:54.300157 systemd-journald[998]: Runtime Journal (/run/log/journal/70efdf7cf8309d9d0712260f2c22b0ca) is 8.0M, max 148.8M, 140.8M free.
Aug 13 01:09:50.034000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Aug 13 01:09:50.172000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Aug 13 01:09:50.172000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Aug 13 01:09:50.172000 audit: BPF prog-id=10 op=LOAD
Aug 13 01:09:50.172000 audit: BPF prog-id=10 op=UNLOAD
Aug 13 01:09:50.172000 audit: BPF prog-id=11 op=LOAD
Aug 13 01:09:50.172000 audit: BPF prog-id=11 op=UNLOAD
Aug 13 01:09:50.322000 audit[907]: AVC avc: denied { associate } for pid=907 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Aug 13 01:09:50.322000 audit[907]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c00018f8e2 a1=c00002ae40 a2=c000029100 a3=32 items=0 ppid=890 pid=907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 01:09:50.322000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Aug 13 01:09:50.333000 audit[907]: AVC avc: denied { associate } for pid=907 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Aug 13 01:09:50.333000 audit[907]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c00018f9c9 a2=1ed a3=0 items=2 ppid=890 pid=907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 01:09:50.333000 audit: CWD cwd="/"
Aug 13 01:09:50.333000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:50.333000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:50.333000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Aug 13 01:09:53.403000 audit: BPF prog-id=12 op=LOAD
Aug 13 01:09:53.403000 audit: BPF prog-id=3 op=UNLOAD
Aug 13 01:09:53.415000 audit: BPF prog-id=13 op=LOAD
Aug 13 01:09:53.429000 audit: BPF prog-id=14 op=LOAD
Aug 13 01:09:53.429000 audit: BPF prog-id=4 op=UNLOAD
Aug 13 01:09:53.429000 audit: BPF prog-id=5 op=UNLOAD
Aug 13 01:09:53.436000 audit: BPF prog-id=15 op=LOAD
Aug 13 01:09:53.436000 audit: BPF prog-id=12 op=UNLOAD
Aug 13 01:09:53.443000 audit: BPF prog-id=16 op=LOAD
Aug 13 01:09:53.450000 audit: BPF prog-id=17 op=LOAD
Aug 13 01:09:53.450000 audit: BPF prog-id=13 op=UNLOAD
Aug 13 01:09:53.450000 audit: BPF prog-id=14 op=UNLOAD
Aug 13 01:09:53.457000 audit: BPF prog-id=18 op=LOAD
Aug 13 01:09:53.457000 audit: BPF prog-id=15 op=UNLOAD
Aug 13 01:09:53.478000 audit: BPF prog-id=19 op=LOAD
Aug 13 01:09:53.478000 audit: BPF prog-id=20 op=LOAD
Aug 13 01:09:53.478000 audit: BPF prog-id=16 op=UNLOAD
Aug 13 01:09:53.478000 audit: BPF prog-id=17 op=UNLOAD
Aug 13 01:09:53.479000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:53.493000 audit: BPF prog-id=18 op=UNLOAD
Aug 13 01:09:53.498000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:53.520000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:53.520000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:54.222000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:54.243000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:54.257000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:54.257000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:54.258000 audit: BPF prog-id=21 op=LOAD
Aug 13 01:09:54.258000 audit: BPF prog-id=22 op=LOAD
Aug 13 01:09:54.258000 audit: BPF prog-id=23 op=LOAD
Aug 13 01:09:54.258000 audit: BPF prog-id=19 op=UNLOAD
Aug 13 01:09:54.258000 audit: BPF prog-id=20 op=UNLOAD
Aug 13 01:09:54.289000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Aug 13 01:09:54.289000 audit[998]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7ffd72798470 a2=4000 a3=7ffd7279850c items=0 ppid=1 pid=998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 01:09:54.289000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Aug 13 01:09:53.402111 systemd[1]: Queued start job for default target multi-user.target.
Aug 13 01:09:50.317154 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2025-08-13T01:09:50Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Aug 13 01:09:53.402128 systemd[1]: Unnecessary job was removed for dev-sda6.device.
Aug 13 01:09:50.318520 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2025-08-13T01:09:50Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Aug 13 01:09:53.480456 systemd[1]: systemd-journald.service: Deactivated successfully.
Aug 13 01:09:50.318582 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2025-08-13T01:09:50Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Aug 13 01:09:50.318638 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2025-08-13T01:09:50Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Aug 13 01:09:50.318660 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2025-08-13T01:09:50Z" level=debug msg="skipped missing lower profile" missing profile=oem
Aug 13 01:09:50.318720 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2025-08-13T01:09:50Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Aug 13 01:09:50.318744 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2025-08-13T01:09:50Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Aug 13 01:09:50.319075 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2025-08-13T01:09:50Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Aug 13 01:09:50.319145 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2025-08-13T01:09:50Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Aug 13 01:09:50.319171 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2025-08-13T01:09:50Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Aug 13 01:09:50.322488 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2025-08-13T01:09:50Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Aug 13 01:09:50.322568 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2025-08-13T01:09:50Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Aug 13 01:09:50.322605 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2025-08-13T01:09:50Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.8: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.8
Aug 13 01:09:50.322635 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2025-08-13T01:09:50Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Aug 13 01:09:50.322668 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2025-08-13T01:09:50Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.8: no such file or directory" path=/var/lib/torcx/store/3510.3.8
Aug 13 01:09:50.322696 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2025-08-13T01:09:50Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Aug 13 01:09:52.815186 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2025-08-13T01:09:52Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Aug 13 01:09:52.815532 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2025-08-13T01:09:52Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Aug 13 01:09:52.815707 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2025-08-13T01:09:52Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Aug 13 01:09:52.816602 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2025-08-13T01:09:52Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Aug 13 01:09:54.318403 systemd[1]: Starting systemd-remount-fs.service...
Aug 13 01:09:52.816699 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2025-08-13T01:09:52Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Aug 13 01:09:52.816793 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2025-08-13T01:09:52Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Aug 13 01:09:54.334340 systemd[1]: Starting systemd-udev-trigger.service...
Aug 13 01:09:54.348331 systemd[1]: verity-setup.service: Deactivated successfully.
Aug 13 01:09:54.354344 systemd[1]: Stopped verity-setup.service.
Aug 13 01:09:54.360000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:54.374530 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 01:09:54.383342 systemd[1]: Started systemd-journald.service.
Aug 13 01:09:54.390000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:54.392767 systemd[1]: Mounted dev-hugepages.mount.
Aug 13 01:09:54.399612 systemd[1]: Mounted dev-mqueue.mount.
Aug 13 01:09:54.406590 systemd[1]: Mounted media.mount.
Aug 13 01:09:54.413573 systemd[1]: Mounted sys-kernel-debug.mount.
Aug 13 01:09:54.421602 systemd[1]: Mounted sys-kernel-tracing.mount.
Aug 13 01:09:54.430589 systemd[1]: Mounted tmp.mount.
Aug 13 01:09:54.437707 systemd[1]: Finished flatcar-tmpfiles.service.
Aug 13 01:09:54.445000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:54.446807 systemd[1]: Finished kmod-static-nodes.service.
Aug 13 01:09:54.454000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:54.455812 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Aug 13 01:09:54.456032 systemd[1]: Finished modprobe@configfs.service.
Aug 13 01:09:54.463000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:54.463000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:54.464860 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 01:09:54.465085 systemd[1]: Finished modprobe@dm_mod.service.
Aug 13 01:09:54.473000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:54.473000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:54.474829 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 13 01:09:54.475043 systemd[1]: Finished modprobe@drm.service.
Aug 13 01:09:54.482000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:54.482000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:54.483822 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 01:09:54.484033 systemd[1]: Finished modprobe@efi_pstore.service.
Aug 13 01:09:54.491000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:54.491000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:54.492806 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Aug 13 01:09:54.493009 systemd[1]: Finished modprobe@fuse.service.
Aug 13 01:09:54.500000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:54.500000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:54.501812 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 01:09:54.502019 systemd[1]: Finished modprobe@loop.service.
Aug 13 01:09:54.510000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:54.510000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:54.511854 systemd[1]: Finished systemd-modules-load.service.
Aug 13 01:09:54.520000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:54.521767 systemd[1]: Finished systemd-network-generator.service.
Aug 13 01:09:54.529000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:54.530760 systemd[1]: Finished systemd-remount-fs.service.
Aug 13 01:09:54.538000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:54.539787 systemd[1]: Finished systemd-udev-trigger.service.
Aug 13 01:09:54.547000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:54.549204 systemd[1]: Reached target network-pre.target.
Aug 13 01:09:54.558900 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Aug 13 01:09:54.568774 systemd[1]: Mounting sys-kernel-config.mount...
Aug 13 01:09:54.575468 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Aug 13 01:09:54.579964 systemd[1]: Starting systemd-hwdb-update.service...
Aug 13 01:09:54.588990 systemd[1]: Starting systemd-journal-flush.service...
Aug 13 01:09:54.597490 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 13 01:09:54.599220 systemd[1]: Starting systemd-random-seed.service...
Aug 13 01:09:54.606483 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Aug 13 01:09:54.608202 systemd[1]: Starting systemd-sysctl.service...
Aug 13 01:09:54.609949 systemd-journald[998]: Time spent on flushing to /var/log/journal/70efdf7cf8309d9d0712260f2c22b0ca is 75.244ms for 1169 entries.
Aug 13 01:09:54.609949 systemd-journald[998]: System Journal (/var/log/journal/70efdf7cf8309d9d0712260f2c22b0ca) is 8.0M, max 584.8M, 576.8M free.
Aug 13 01:09:54.724749 systemd-journald[998]: Received client request to flush runtime journal.
Aug 13 01:09:54.671000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:54.680000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:54.703000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:54.625385 systemd[1]: Starting systemd-sysusers.service...
Aug 13 01:09:54.633958 systemd[1]: Starting systemd-udev-settle.service...
Aug 13 01:09:54.644934 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Aug 13 01:09:54.726560 udevadm[1012]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Aug 13 01:09:54.654607 systemd[1]: Mounted sys-kernel-config.mount.
Aug 13 01:09:54.663914 systemd[1]: Finished systemd-random-seed.service.
Aug 13 01:09:54.672869 systemd[1]: Finished systemd-sysctl.service.
Aug 13 01:09:54.685064 systemd[1]: Reached target first-boot-complete.target.
Aug 13 01:09:54.696027 systemd[1]: Finished systemd-sysusers.service.
Aug 13 01:09:54.706944 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Aug 13 01:09:54.725899 systemd[1]: Finished systemd-journal-flush.service.
Aug 13 01:09:54.733000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:54.768914 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Aug 13 01:09:54.776000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:55.290904 systemd[1]: Finished systemd-hwdb-update.service.
Aug 13 01:09:55.298000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:55.298000 audit: BPF prog-id=24 op=LOAD
Aug 13 01:09:55.299000 audit: BPF prog-id=25 op=LOAD
Aug 13 01:09:55.299000 audit: BPF prog-id=7 op=UNLOAD
Aug 13 01:09:55.299000 audit: BPF prog-id=8 op=UNLOAD
Aug 13 01:09:55.301323 systemd[1]: Starting systemd-udevd.service...
Aug 13 01:09:55.324073 systemd-udevd[1017]: Using default interface naming scheme 'v252'.
Aug 13 01:09:55.366247 systemd[1]: Started systemd-udevd.service.
Aug 13 01:09:55.373000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:55.375000 audit: BPF prog-id=26 op=LOAD
Aug 13 01:09:55.377715 systemd[1]: Starting systemd-networkd.service...
Aug 13 01:09:55.388000 audit: BPF prog-id=27 op=LOAD
Aug 13 01:09:55.388000 audit: BPF prog-id=28 op=LOAD
Aug 13 01:09:55.388000 audit: BPF prog-id=29 op=LOAD
Aug 13 01:09:55.390727 systemd[1]: Starting systemd-userdbd.service...
Aug 13 01:09:55.445810 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped.
Aug 13 01:09:55.463952 systemd[1]: Started systemd-userdbd.service.
Aug 13 01:09:55.472000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:55.557333 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Aug 13 01:09:55.582145 kernel: ACPI: button: Power Button [PWRF]
Aug 13 01:09:55.625428 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3
Aug 13 01:09:55.653094 systemd-networkd[1030]: lo: Link UP
Aug 13 01:09:55.653539 systemd-networkd[1030]: lo: Gained carrier
Aug 13 01:09:55.607000 audit[1042]: AVC avc: denied { confidentiality } for pid=1042 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Aug 13 01:09:55.657470 systemd-networkd[1030]: Enumeration completed
Aug 13 01:09:55.657812 systemd[1]: Started systemd-networkd.service.
Aug 13 01:09:55.658257 systemd-networkd[1030]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 13 01:09:55.607000 audit[1042]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55b25e56d040 a1=338ac a2=7f0a5ee99bc5 a3=5 items=110 ppid=1017 pid=1042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 01:09:55.607000 audit: CWD cwd="/"
Aug 13 01:09:55.607000 audit: PATH item=0 name=(null) inode=40 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.607000 audit: PATH item=1 name=(null) inode=14475 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.607000 audit: PATH item=2 name=(null) inode=14475 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.607000 audit: PATH item=3 name=(null) inode=14476 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.607000 audit: PATH item=4 name=(null) inode=14475 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.607000 audit: PATH item=5 name=(null) inode=14477 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.607000 audit: PATH item=6 name=(null) inode=14475 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.607000 audit: PATH item=7 name=(null) inode=14478 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.660415 systemd-networkd[1030]: eth0: Link UP
Aug 13 01:09:55.660529 systemd-networkd[1030]: eth0: Gained carrier
Aug 13 01:09:55.607000 audit: PATH item=8 name=(null) inode=14478 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.607000 audit: PATH item=9 name=(null) inode=14479 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.607000 audit: PATH item=10 name=(null) inode=14478 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.607000 audit: PATH item=11 name=(null) inode=14480 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.607000 audit: PATH item=12 name=(null) inode=14478 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.607000 audit: PATH item=13 name=(null) inode=14481 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.607000 audit: PATH item=14 name=(null) inode=14478 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.607000 audit: PATH item=15 name=(null) inode=14482 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.607000 audit: PATH item=16 name=(null) inode=14478 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.607000 audit: PATH item=17 name=(null) inode=14483 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.607000 audit: PATH item=18 name=(null) inode=14475 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.607000 audit: PATH item=19 name=(null) inode=14484 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.607000 audit: PATH item=20 name=(null) inode=14484 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.607000 audit: PATH item=21 name=(null) inode=14485 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.607000 audit: PATH item=22 name=(null) inode=14484 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.607000 audit: PATH item=23 name=(null) inode=14486 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.607000 audit: PATH item=24 name=(null) inode=14484 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.607000 audit: PATH item=25 name=(null) inode=14487 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.665000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:55.607000 audit: PATH item=26 name=(null) inode=14484 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.607000 audit: PATH item=27 name=(null) inode=14488 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.607000 audit: PATH item=28 name=(null) inode=14484 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.607000 audit: PATH item=29 name=(null) inode=14489 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.607000 audit: PATH item=30 name=(null) inode=14475 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.607000 audit: PATH item=31 name=(null) inode=14490 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.607000 audit: PATH item=32 name=(null) inode=14490 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.607000 audit: PATH item=33 name=(null) inode=14491 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.607000 audit: PATH item=34 name=(null) inode=14490 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.607000 audit: PATH item=35 name=(null) inode=14492 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.607000 audit: PATH item=36 name=(null) inode=14490 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.668525 systemd-networkd[1030]: eth0: DHCPv4 address 10.128.0.44/32, gateway 10.128.0.1 acquired from 169.254.169.254
Aug 13 01:09:55.607000 audit: PATH item=37 name=(null) inode=14493 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.607000 audit: PATH item=38 name=(null) inode=14490 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.607000 audit: PATH item=39 name=(null) inode=14494 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.607000 audit: PATH item=40 name=(null) inode=14490 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.607000 audit: PATH item=41 name=(null) inode=14495 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.607000 audit: PATH item=42 name=(null) inode=14475 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.607000 audit: PATH item=43 name=(null) inode=14496 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.607000 audit: PATH item=44 name=(null) inode=14496 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.607000 audit: PATH item=45 name=(null) inode=14497 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.607000 audit: PATH item=46 name=(null) inode=14496 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.607000 audit: PATH item=47 name=(null) inode=14498 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.607000 audit: PATH item=48 name=(null) inode=14496 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.607000 audit: PATH item=49 name=(null) inode=14499 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.607000 audit: PATH item=50 name=(null) inode=14496 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.607000 audit: PATH item=51 name=(null) inode=14500 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.607000 audit: PATH item=52 name=(null) inode=14496 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.607000 audit: PATH item=53 name=(null) inode=14501 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.607000 audit: PATH item=54 name=(null) inode=40 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.607000 audit: PATH item=55 name=(null) inode=14502 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.607000 audit: PATH item=56 name=(null) inode=14502 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.607000 audit: PATH item=57 name=(null) inode=14503 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.607000 audit: PATH item=58 name=(null) inode=14502 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.607000 audit: PATH item=59 name=(null) inode=14504 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.607000 audit: PATH item=60 name=(null) inode=14502 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.607000 audit: PATH item=61 name=(null) inode=14505 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.607000 audit: PATH item=62 name=(null) inode=14505 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.607000 audit: PATH item=63 name=(null) inode=14506 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.607000 audit: PATH item=64 name=(null) inode=14505 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.607000 audit: PATH item=65 name=(null) inode=14507 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.607000 audit: PATH item=66 name=(null) inode=14505 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.607000 audit: PATH item=67 name=(null) inode=14508 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.607000 audit: PATH item=68 name=(null) inode=14505 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.607000 audit: PATH item=69 name=(null) inode=14509 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.607000 audit: PATH item=70 name=(null) inode=14505 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.607000 audit: PATH item=71 name=(null) inode=14510 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.607000 audit: PATH item=72 name=(null) inode=14502 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.607000 audit: PATH item=73 name=(null) inode=14511 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.607000 audit: PATH item=74 name=(null) inode=14511 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.607000 audit: PATH item=75 name=(null) inode=14512 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.607000 audit: PATH item=76 name=(null) inode=14511 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.607000 audit: PATH item=77 name=(null) inode=14513 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.607000 audit: PATH item=78 name=(null) inode=14511 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.607000 audit: PATH item=79 name=(null) inode=14514 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.607000 audit: PATH item=80 name=(null) inode=14511 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.607000 audit: PATH item=81 name=(null) inode=14515 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.607000 audit: PATH item=82 name=(null) inode=14511 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.607000 audit: PATH item=83 name=(null) inode=14516 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.607000 audit: PATH item=84 name=(null) inode=14502 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.607000 audit: PATH item=85 name=(null) inode=14517 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.607000 audit: PATH item=86 name=(null) inode=14517 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.607000 audit: PATH item=87 name=(null) inode=14518 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.607000 audit: PATH item=88 name=(null) inode=14517 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.607000 audit: PATH item=89 name=(null) inode=14519 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.607000 audit: PATH item=90 name=(null) inode=14517 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.607000 audit: PATH item=91 name=(null) inode=14520 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.607000 audit: PATH item=92 name=(null) inode=14517 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.607000 audit: PATH item=93 name=(null) inode=14521 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.607000 audit: PATH item=94 name=(null) inode=14517 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.607000 audit: PATH item=95 name=(null) inode=14522 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.607000 audit: PATH item=96 name=(null) inode=14502 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.607000 audit: PATH item=97 name=(null) inode=14523 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.607000 audit: PATH item=98 name=(null) inode=14523 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.607000 audit: PATH item=99 name=(null) inode=14524 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.607000 audit: PATH item=100 name=(null) inode=14523 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.607000 audit: PATH item=101 name=(null) inode=14525 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.607000 audit: PATH item=102 name=(null) inode=14523 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.607000 audit: PATH item=103 name=(null) inode=14526 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.607000 audit: PATH item=104 name=(null) inode=14523 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.607000 audit: PATH item=105 name=(null) inode=14527 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.607000 audit: PATH item=106 name=(null) inode=14523 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.607000 audit: PATH item=107 name=(null) inode=14528 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.607000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.607000 audit: PATH item=109 name=(null) inode=14529 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:09:55.607000 audit: PROCTITLE proctitle="(udev-worker)"
Aug 13 01:09:55.684329 kernel: EDAC MC: Ver: 3.0.0
Aug 13 01:09:55.691324 kernel: ACPI: button: Sleep Button [SLPF]
Aug 13 01:09:55.706329 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
Aug 13 01:09:55.725239 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Aug 13 01:09:55.764329 kernel: mousedev: PS/2 mouse device common for all mice
Aug 13 01:09:55.789150 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Aug 13 01:09:55.797847 systemd[1]: Finished systemd-udev-settle.service.
Aug 13 01:09:55.806000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:55.809185 systemd[1]: Starting lvm2-activation-early.service...
Aug 13 01:09:55.835705 lvm[1054]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Aug 13 01:09:55.865622 systemd[1]: Finished lvm2-activation-early.service.
Aug 13 01:09:55.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:55.874643 systemd[1]: Reached target cryptsetup.target.
Aug 13 01:09:55.884937 systemd[1]: Starting lvm2-activation.service...
Aug 13 01:09:55.891099 lvm[1055]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Aug 13 01:09:55.917650 systemd[1]: Finished lvm2-activation.service.
Aug 13 01:09:55.925000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:55.926695 systemd[1]: Reached target local-fs-pre.target.
Aug 13 01:09:55.935457 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Aug 13 01:09:55.935509 systemd[1]: Reached target local-fs.target.
Aug 13 01:09:55.943457 systemd[1]: Reached target machines.target.
Aug 13 01:09:55.953081 systemd[1]: Starting ldconfig.service...
Aug 13 01:09:55.958977 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Aug 13 01:09:55.959073 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Aug 13 01:09:55.960866 systemd[1]: Starting systemd-boot-update.service...
Aug 13 01:09:55.969176 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Aug 13 01:09:55.981145 systemd[1]: Starting systemd-machine-id-commit.service...
Aug 13 01:09:55.983708 systemd[1]: Starting systemd-sysext.service...
Aug 13 01:09:55.984493 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1057 (bootctl)
Aug 13 01:09:55.987419 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Aug 13 01:09:56.004185 systemd[1]: Unmounting usr-share-oem.mount...
Aug 13 01:09:56.014261 systemd[1]: usr-share-oem.mount: Deactivated successfully.
Aug 13 01:09:56.016549 systemd[1]: Unmounted usr-share-oem.mount.
Aug 13 01:09:56.037000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:56.038201 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Aug 13 01:09:56.047658 kernel: loop0: detected capacity change from 0 to 221472
Aug 13 01:09:56.133848 systemd-fsck[1068]: fsck.fat 4.2 (2021-01-31)
Aug 13 01:09:56.133848 systemd-fsck[1068]: /dev/sda1: 789 files, 119324/258078 clusters
Aug 13 01:09:56.138002 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Aug 13 01:09:56.146000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:56.149643 systemd[1]: Mounting boot.mount...
Aug 13 01:09:56.177755 systemd[1]: Mounted boot.mount.
Aug 13 01:09:56.198448 systemd[1]: Finished systemd-boot-update.service.
Aug 13 01:09:56.205000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:56.466843 ldconfig[1056]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Aug 13 01:09:56.573097 systemd[1]: Finished ldconfig.service.
Aug 13 01:09:56.579000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:56.581756 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Aug 13 01:09:56.582505 systemd[1]: Finished systemd-machine-id-commit.service.
Aug 13 01:09:56.590000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:56.601333 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Aug 13 01:09:56.627346 kernel: loop1: detected capacity change from 0 to 221472
Aug 13 01:09:56.645470 (sd-sysext)[1073]: Using extensions 'kubernetes'.
Aug 13 01:09:56.646150 (sd-sysext)[1073]: Merged extensions into '/usr'.
Aug 13 01:09:56.667865 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 01:09:56.669865 systemd[1]: Mounting usr-share-oem.mount...
Aug 13 01:09:56.677658 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Aug 13 01:09:56.679556 systemd[1]: Starting modprobe@dm_mod.service...
Aug 13 01:09:56.688058 systemd[1]: Starting modprobe@efi_pstore.service...
Aug 13 01:09:56.697010 systemd[1]: Starting modprobe@loop.service...
Aug 13 01:09:56.704494 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Aug 13 01:09:56.704721 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Aug 13 01:09:56.704931 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 01:09:56.709241 systemd[1]: Mounted usr-share-oem.mount.
Aug 13 01:09:56.716895 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 01:09:56.717132 systemd[1]: Finished modprobe@dm_mod.service.
Aug 13 01:09:56.724000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:56.724000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:56.726093 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 01:09:56.726331 systemd[1]: Finished modprobe@efi_pstore.service.
Aug 13 01:09:56.733000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:56.733000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:56.735097 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 01:09:56.735345 systemd[1]: Finished modprobe@loop.service.
Aug 13 01:09:56.742000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:56.742000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:56.744147 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 13 01:09:56.744366 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Aug 13 01:09:56.745936 systemd[1]: Finished systemd-sysext.service.
Aug 13 01:09:56.753000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:56.756052 systemd[1]: Starting ensure-sysext.service...
Aug 13 01:09:56.765791 systemd[1]: Starting systemd-tmpfiles-setup.service...
Aug 13 01:09:56.777779 systemd[1]: Reloading.
Aug 13 01:09:56.784853 systemd-tmpfiles[1080]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Aug 13 01:09:56.787098 systemd-tmpfiles[1080]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Aug 13 01:09:56.790983 systemd-tmpfiles[1080]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Aug 13 01:09:56.902565 /usr/lib/systemd/system-generators/torcx-generator[1100]: time="2025-08-13T01:09:56Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Aug 13 01:09:56.902637 /usr/lib/systemd/system-generators/torcx-generator[1100]: time="2025-08-13T01:09:56Z" level=info msg="torcx already run"
Aug 13 01:09:56.949550 systemd-networkd[1030]: eth0: Gained IPv6LL
Aug 13 01:09:57.059043 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Aug 13 01:09:57.059068 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Aug 13 01:09:57.082867 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 13 01:09:57.158000 audit: BPF prog-id=30 op=LOAD
Aug 13 01:09:57.159000 audit: BPF prog-id=31 op=LOAD
Aug 13 01:09:57.159000 audit: BPF prog-id=24 op=UNLOAD
Aug 13 01:09:57.159000 audit: BPF prog-id=25 op=UNLOAD
Aug 13 01:09:57.160000 audit: BPF prog-id=32 op=LOAD
Aug 13 01:09:57.160000 audit: BPF prog-id=21 op=UNLOAD
Aug 13 01:09:57.160000 audit: BPF prog-id=33 op=LOAD
Aug 13 01:09:57.160000 audit: BPF prog-id=34 op=LOAD
Aug 13 01:09:57.160000 audit: BPF prog-id=22 op=UNLOAD
Aug 13 01:09:57.160000 audit: BPF prog-id=23 op=UNLOAD
Aug 13 01:09:57.164000 audit: BPF prog-id=35 op=LOAD
Aug 13 01:09:57.164000 audit: BPF prog-id=26 op=UNLOAD
Aug 13 01:09:57.165000 audit: BPF prog-id=36 op=LOAD
Aug 13 01:09:57.165000 audit: BPF prog-id=27 op=UNLOAD
Aug 13 01:09:57.165000 audit: BPF prog-id=37 op=LOAD
Aug 13 01:09:57.165000 audit: BPF prog-id=38 op=LOAD
Aug 13 01:09:57.165000 audit: BPF prog-id=28 op=UNLOAD
Aug 13 01:09:57.166000 audit: BPF prog-id=29 op=UNLOAD
Aug 13 01:09:57.174553 systemd[1]: Finished systemd-tmpfiles-setup.service.
Aug 13 01:09:57.182000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:57.189588 systemd[1]: Starting audit-rules.service...
Aug 13 01:09:57.198460 systemd[1]: Starting clean-ca-certificates.service...
Aug 13 01:09:57.208570 systemd[1]: Starting oem-gce-enable-oslogin.service...
Aug 13 01:09:57.219567 systemd[1]: Starting systemd-journal-catalog-update.service...
Aug 13 01:09:57.228000 audit: BPF prog-id=39 op=LOAD
Aug 13 01:09:57.231594 systemd[1]: Starting systemd-resolved.service...
Aug 13 01:09:57.238000 audit: BPF prog-id=40 op=LOAD
Aug 13 01:09:57.241545 systemd[1]: Starting systemd-timesyncd.service...
Aug 13 01:09:57.250542 systemd[1]: Starting systemd-update-utmp.service...
Aug 13 01:09:57.260761 systemd[1]: Finished clean-ca-certificates.service.
Aug 13 01:09:57.258000 audit[1169]: SYSTEM_BOOT pid=1169 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:57.271142 systemd[1]: oem-gce-enable-oslogin.service: Deactivated successfully.
Aug 13 01:09:57.271407 systemd[1]: Finished oem-gce-enable-oslogin.service.
Aug 13 01:09:57.269000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:09:57.277000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Aug 13 01:09:57.277000 audit[1174]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fffb0a86fc0 a2=420 a3=0 items=0 ppid=1144 pid=1174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 01:09:57.277000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Aug 13 01:09:57.279280 augenrules[1174]: No rules
Aug 13 01:09:57.280000 systemd[1]: Finished systemd-journal-catalog-update.service.
Aug 13 01:09:57.290037 systemd[1]: Finished audit-rules.service.
Aug 13 01:09:57.302901 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 01:09:57.303632 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Aug 13 01:09:57.309086 systemd[1]: Starting modprobe@dm_mod.service...
Aug 13 01:09:57.318565 systemd[1]: Starting modprobe@efi_pstore.service...
Aug 13 01:09:57.327513 systemd[1]: Starting modprobe@loop.service...
Aug 13 01:09:57.336588 systemd[1]: Starting oem-gce-enable-oslogin.service...
Aug 13 01:09:57.345503 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Aug 13 01:09:57.348598 enable-oslogin[1182]: /etc/pam.d/sshd already exists. Not enabling OS Login
Aug 13 01:09:57.345762 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Aug 13 01:09:57.348187 systemd[1]: Starting systemd-update-done.service...
Aug 13 01:09:57.355417 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Aug 13 01:09:57.355622 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 01:09:57.358443 systemd[1]: Finished systemd-update-utmp.service.
Aug 13 01:09:57.367202 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 01:09:57.367430 systemd[1]: Finished modprobe@dm_mod.service.
Aug 13 01:09:57.376149 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 01:09:57.376374 systemd[1]: Finished modprobe@efi_pstore.service.
Aug 13 01:09:57.385132 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 01:09:57.385365 systemd[1]: Finished modprobe@loop.service.
Aug 13 01:09:57.394291 systemd[1]: oem-gce-enable-oslogin.service: Deactivated successfully.
Aug 13 01:09:57.394545 systemd[1]: Finished oem-gce-enable-oslogin.service.
Aug 13 01:09:57.404081 systemd[1]: Finished systemd-update-done.service.
Aug 13 01:09:57.417665 systemd[1]: Started systemd-timesyncd.service.
Aug 13 01:09:57.418176 systemd-timesyncd[1164]: Contacted time server 169.254.169.254:123 (169.254.169.254).
Aug 13 01:09:57.418257 systemd-timesyncd[1164]: Initial clock synchronization to Wed 2025-08-13 01:09:57.618294 UTC.
Aug 13 01:09:57.427614 systemd[1]: Reached target time-set.target.
Aug 13 01:09:57.432831 systemd-resolved[1161]: Positive Trust Anchors:
Aug 13 01:09:57.433185 systemd-resolved[1161]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 13 01:09:57.433426 systemd-resolved[1161]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Aug 13 01:09:57.436561 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 01:09:57.436933 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Aug 13 01:09:57.439553 systemd[1]: Starting modprobe@dm_mod.service...
Aug 13 01:09:57.448460 systemd[1]: Starting modprobe@efi_pstore.service...
Aug 13 01:09:57.457209 systemd[1]: Starting modprobe@loop.service...
Aug 13 01:09:57.467701 systemd[1]: Starting oem-gce-enable-oslogin.service...
Aug 13 01:09:57.472914 systemd-resolved[1161]: Defaulting to hostname 'linux'.
Aug 13 01:09:57.475171 enable-oslogin[1188]: /etc/pam.d/sshd already exists. Not enabling OS Login
Aug 13 01:09:57.476481 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Aug 13 01:09:57.476734 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Aug 13 01:09:57.476946 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Aug 13 01:09:57.477114 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 01:09:57.479025 systemd[1]: Started systemd-resolved.service.
Aug 13 01:09:57.487955 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 01:09:57.488184 systemd[1]: Finished modprobe@dm_mod.service.
Aug 13 01:09:57.496927 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 01:09:57.497144 systemd[1]: Finished modprobe@efi_pstore.service.
Aug 13 01:09:57.505951 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 01:09:57.506175 systemd[1]: Finished modprobe@loop.service.
Aug 13 01:09:57.514934 systemd[1]: oem-gce-enable-oslogin.service: Deactivated successfully.
Aug 13 01:09:57.515174 systemd[1]: Finished oem-gce-enable-oslogin.service.
Aug 13 01:09:57.527779 systemd[1]: Reached target network.target.
Aug 13 01:09:57.537548 systemd[1]: Reached target nss-lookup.target.
Aug 13 01:09:57.546542 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 01:09:57.546965 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Aug 13 01:09:57.548939 systemd[1]: Starting modprobe@dm_mod.service...
Aug 13 01:09:57.558139 systemd[1]: Starting modprobe@drm.service...
Aug 13 01:09:57.567046 systemd[1]: Starting modprobe@efi_pstore.service...
Aug 13 01:09:57.576016 systemd[1]: Starting modprobe@loop.service...
Aug 13 01:09:57.584960 systemd[1]: Starting oem-gce-enable-oslogin.service...
Aug 13 01:09:57.589194 enable-oslogin[1193]: /etc/pam.d/sshd already exists. Not enabling OS Login
Aug 13 01:09:57.593514 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Aug 13 01:09:57.593758 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Aug 13 01:09:57.595564 systemd[1]: Starting systemd-networkd-wait-online.service...
Aug 13 01:09:57.604464 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Aug 13 01:09:57.604672 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 01:09:57.606466 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 01:09:57.606711 systemd[1]: Finished modprobe@dm_mod.service.
Aug 13 01:09:57.615935 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 13 01:09:57.616155 systemd[1]: Finished modprobe@drm.service.
Aug 13 01:09:57.625035 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 01:09:57.625271 systemd[1]: Finished modprobe@efi_pstore.service.
Aug 13 01:09:57.634995 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 01:09:57.635217 systemd[1]: Finished modprobe@loop.service.
Aug 13 01:09:57.643956 systemd[1]: oem-gce-enable-oslogin.service: Deactivated successfully.
Aug 13 01:09:57.644210 systemd[1]: Finished oem-gce-enable-oslogin.service.
Aug 13 01:09:57.652984 systemd[1]: Finished systemd-networkd-wait-online.service.
Aug 13 01:09:57.664378 systemd[1]: Reached target network-online.target.
Aug 13 01:09:57.672499 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 13 01:09:57.672558 systemd[1]: Reached target sysinit.target.
Aug 13 01:09:57.680522 systemd[1]: Started motdgen.path.
Aug 13 01:09:57.687465 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Aug 13 01:09:57.697631 systemd[1]: Started logrotate.timer.
Aug 13 01:09:57.704565 systemd[1]: Started mdadm.timer.
Aug 13 01:09:57.711415 systemd[1]: Started systemd-tmpfiles-clean.timer.
Aug 13 01:09:57.719401 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Aug 13 01:09:57.719454 systemd[1]: Reached target paths.target.
Aug 13 01:09:57.726392 systemd[1]: Reached target timers.target.
Aug 13 01:09:57.733788 systemd[1]: Listening on dbus.socket.
Aug 13 01:09:57.741892 systemd[1]: Starting docker.socket...
Aug 13 01:09:57.753423 systemd[1]: Listening on sshd.socket.
Aug 13 01:09:57.761550 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Aug 13 01:09:57.761640 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Aug 13 01:09:57.762564 systemd[1]: Finished ensure-sysext.service.
Aug 13 01:09:57.770632 systemd[1]: Listening on docker.socket.
Aug 13 01:09:57.778523 systemd[1]: Reached target sockets.target.
Aug 13 01:09:57.786403 systemd[1]: Reached target basic.target.
Aug 13 01:09:57.793471 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Aug 13 01:09:57.793515 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Aug 13 01:09:57.795148 systemd[1]: Starting containerd.service...
Aug 13 01:09:57.803843 systemd[1]: Starting coreos-metadata-sshkeys@core.service...
Aug 13 01:09:57.814428 systemd[1]: Starting dbus.service...
Aug 13 01:09:57.822228 systemd[1]: Starting enable-oem-cloudinit.service...
Aug 13 01:09:57.830859 systemd[1]: Starting extend-filesystems.service...
Aug 13 01:09:57.838517 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Aug 13 01:09:57.840934 systemd[1]: Starting kubelet.service...
Aug 13 01:09:57.849419 jq[1200]: false
Aug 13 01:09:57.849169 systemd[1]: Starting motdgen.service...
Aug 13 01:09:57.858136 systemd[1]: Starting oem-gce.service...
Aug 13 01:09:57.867210 systemd[1]: Starting prepare-helm.service...
Aug 13 01:09:57.876269 systemd[1]: Starting ssh-key-proc-cmdline.service...
Aug 13 01:09:57.885151 systemd[1]: Starting sshd-keygen.service...
Aug 13 01:09:57.896679 systemd[1]: Starting systemd-logind.service...
Aug 13 01:09:57.903888 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Aug 13 01:09:57.904023 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionSecurity=!tpm2).
Aug 13 01:09:57.904805 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Aug 13 01:09:57.906143 systemd[1]: Starting update-engine.service...
Aug 13 01:09:57.915683 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Aug 13 01:09:57.922166 jq[1221]: true
Aug 13 01:09:57.927873 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Aug 13 01:09:57.928191 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Aug 13 01:09:57.934174 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Aug 13 01:09:57.935692 systemd[1]: Finished ssh-key-proc-cmdline.service.
Aug 13 01:09:57.970727 extend-filesystems[1201]: Found loop1
Aug 13 01:09:57.970727 extend-filesystems[1201]: Found sda
Aug 13 01:09:57.970727 extend-filesystems[1201]: Found sda1
Aug 13 01:09:57.970727 extend-filesystems[1201]: Found sda2
Aug 13 01:09:57.970727 extend-filesystems[1201]: Found sda3
Aug 13 01:09:57.970727 extend-filesystems[1201]: Found usr
Aug 13 01:09:57.970727 extend-filesystems[1201]: Found sda4
Aug 13 01:09:57.970727 extend-filesystems[1201]: Found sda6
Aug 13 01:09:57.970727 extend-filesystems[1201]: Found sda7
Aug 13 01:09:57.970727 extend-filesystems[1201]: Found sda9
Aug 13 01:09:57.970727 extend-filesystems[1201]: Checking size of /dev/sda9
Aug 13 01:09:58.163518 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks
Aug 13 01:09:58.163597 kernel: loop2: detected capacity change from 0 to 2097152
Aug 13 01:09:58.164213 update_engine[1220]: I0813 01:09:58.115710 1220 main.cc:92] Flatcar Update Engine starting
Aug 13 01:09:58.164213 update_engine[1220]: I0813 01:09:58.123396 1220 update_check_scheduler.cc:74] Next update check in 11m23s
Aug 13 01:09:58.164823 tar[1225]: linux-amd64/helm
Aug 13 01:09:58.029075 dbus-daemon[1199]: [system] SELinux support is enabled
Aug 13 01:09:58.012068 systemd[1]: motdgen.service: Deactivated successfully.
Aug 13 01:09:58.166011 extend-filesystems[1201]: Resized partition /dev/sda9
Aug 13 01:09:58.172873 jq[1227]: true
Aug 13 01:09:58.046552 dbus-daemon[1199]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1030 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Aug 13 01:09:58.012396 systemd[1]: Finished motdgen.service.
Aug 13 01:09:58.173506 extend-filesystems[1246]: resize2fs 1.46.5 (30-Dec-2021)
Aug 13 01:09:58.216777 kernel: EXT4-fs (sda9): resized filesystem to 2538491
Aug 13 01:09:58.216915 mkfs.ext4[1229]: mke2fs 1.46.5 (30-Dec-2021)
Aug 13 01:09:58.216915 mkfs.ext4[1229]: Discarding device blocks: done
Aug 13 01:09:58.216915 mkfs.ext4[1229]: Creating filesystem with 262144 4k blocks and 65536 inodes
Aug 13 01:09:58.216915 mkfs.ext4[1229]: Filesystem UUID: 8a816b33-8ad8-4ebc-8830-aa805c16b12a
Aug 13 01:09:58.216915 mkfs.ext4[1229]: Superblock backups stored on blocks:
Aug 13 01:09:58.216915 mkfs.ext4[1229]: 32768, 98304, 163840, 229376
Aug 13 01:09:58.216915 mkfs.ext4[1229]: Allocating group tables: done
Aug 13 01:09:58.216915 mkfs.ext4[1229]: Writing inode tables: done
Aug 13 01:09:58.216915 mkfs.ext4[1229]: Creating journal (8192 blocks): done
Aug 13 01:09:58.216915 mkfs.ext4[1229]: Writing superblocks and filesystem accounting information: done
Aug 13 01:09:58.066190 dbus-daemon[1199]: [system] Successfully activated service 'org.freedesktop.systemd1'
Aug 13 01:09:58.030183 systemd[1]: Started dbus.service.
Aug 13 01:09:58.044623 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Aug 13 01:09:58.044742 systemd[1]: Reached target system-config.target.
Aug 13 01:09:58.053545 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Aug 13 01:09:58.223642 umount[1245]: umount: /var/lib/flatcar-oem-gce.img: not mounted.
Aug 13 01:09:58.053580 systemd[1]: Reached target user-config.target.
Aug 13 01:09:58.088120 systemd[1]: Starting systemd-hostnamed.service...
Aug 13 01:09:58.123104 systemd[1]: Started update-engine.service.
Aug 13 01:09:58.147764 systemd[1]: Started locksmithd.service.
Aug 13 01:09:58.232342 kernel: EXT4-fs (loop2): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Aug 13 01:09:58.234806 extend-filesystems[1246]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Aug 13 01:09:58.234806 extend-filesystems[1246]: old_desc_blocks = 1, new_desc_blocks = 2
Aug 13 01:09:58.234806 extend-filesystems[1246]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long.
Aug 13 01:09:58.308223 extend-filesystems[1201]: Resized filesystem in /dev/sda9
Aug 13 01:09:58.236987 systemd[1]: extend-filesystems.service: Deactivated successfully.
Aug 13 01:09:58.342691 bash[1264]: Updated "/home/core/.ssh/authorized_keys"
Aug 13 01:09:58.342907 env[1228]: time="2025-08-13T01:09:58.316380192Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Aug 13 01:09:58.237264 systemd[1]: Finished extend-filesystems.service.
Aug 13 01:09:58.264943 systemd-logind[1219]: Watching system buttons on /dev/input/event1 (Power Button)
Aug 13 01:09:58.264979 systemd-logind[1219]: Watching system buttons on /dev/input/event2 (Sleep Button)
Aug 13 01:09:58.265014 systemd-logind[1219]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Aug 13 01:09:58.265121 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Aug 13 01:09:58.308221 systemd-logind[1219]: New seat seat0.
Aug 13 01:09:58.337664 systemd[1]: Started systemd-logind.service.
Aug 13 01:09:58.507932 dbus-daemon[1199]: [system] Successfully activated service 'org.freedesktop.hostname1'
Aug 13 01:09:58.509445 systemd[1]: Started systemd-hostnamed.service.
Aug 13 01:09:58.527563 dbus-daemon[1199]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1248 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Aug 13 01:09:58.533790 systemd[1]: Starting polkit.service... Aug 13 01:09:58.561847 coreos-metadata[1198]: Aug 13 01:09:58.554 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Aug 13 01:09:58.566990 coreos-metadata[1198]: Aug 13 01:09:58.566 INFO Fetch failed with 404: resource not found Aug 13 01:09:58.567287 coreos-metadata[1198]: Aug 13 01:09:58.567 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Aug 13 01:09:58.595377 coreos-metadata[1198]: Aug 13 01:09:58.595 INFO Fetch successful Aug 13 01:09:58.595663 coreos-metadata[1198]: Aug 13 01:09:58.595 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Aug 13 01:09:58.596521 coreos-metadata[1198]: Aug 13 01:09:58.596 INFO Fetch failed with 404: resource not found Aug 13 01:09:58.596810 coreos-metadata[1198]: Aug 13 01:09:58.596 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Aug 13 01:09:58.597041 coreos-metadata[1198]: Aug 13 01:09:58.596 INFO Fetch failed with 404: resource not found Aug 13 01:09:58.597614 coreos-metadata[1198]: Aug 13 01:09:58.597 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Aug 13 01:09:58.597855 coreos-metadata[1198]: Aug 13 01:09:58.597 INFO Fetch successful Aug 13 01:09:58.601975 env[1228]: time="2025-08-13T01:09:58.601923538Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Aug 13 01:09:58.609925 env[1228]: time="2025-08-13T01:09:58.609861669Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Aug 13 01:09:58.613421 env[1228]: time="2025-08-13T01:09:58.613360714Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.189-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Aug 13 01:09:58.614621 unknown[1198]: wrote ssh authorized keys file for user: core Aug 13 01:09:58.632124 env[1228]: time="2025-08-13T01:09:58.632066722Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Aug 13 01:09:58.633371 env[1228]: time="2025-08-13T01:09:58.633289886Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 01:09:58.634418 env[1228]: time="2025-08-13T01:09:58.634383322Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Aug 13 01:09:58.634616 env[1228]: time="2025-08-13T01:09:58.634588810Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Aug 13 01:09:58.634742 update-ssh-keys[1280]: Updated "/home/core/.ssh/authorized_keys" Aug 13 01:09:58.635813 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Aug 13 01:09:58.636594 env[1228]: time="2025-08-13T01:09:58.636560982Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Aug 13 01:09:58.636925 env[1228]: time="2025-08-13T01:09:58.636897046Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Aug 13 01:09:58.639604 env[1228]: time="2025-08-13T01:09:58.639572447Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Aug 13 01:09:58.643624 env[1228]: time="2025-08-13T01:09:58.643583539Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 01:09:58.644277 env[1228]: time="2025-08-13T01:09:58.644248058Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Aug 13 01:09:58.644547 env[1228]: time="2025-08-13T01:09:58.644519415Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Aug 13 01:09:58.646065 env[1228]: time="2025-08-13T01:09:58.646033424Z" level=info msg="metadata content store policy set" policy=shared Aug 13 01:09:58.651750 polkitd[1273]: Started polkitd version 121 Aug 13 01:09:58.653421 env[1228]: time="2025-08-13T01:09:58.653383658Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Aug 13 01:09:58.653642 env[1228]: time="2025-08-13T01:09:58.653593952Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Aug 13 01:09:58.654174 env[1228]: time="2025-08-13T01:09:58.654147870Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Aug 13 01:09:58.654463 env[1228]: time="2025-08-13T01:09:58.654310136Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Aug 13 01:09:58.655108 env[1228]: time="2025-08-13T01:09:58.655078019Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Aug 13 01:09:58.655223 env[1228]: time="2025-08-13T01:09:58.655202233Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Aug 13 01:09:58.655339 env[1228]: time="2025-08-13T01:09:58.655304023Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Aug 13 01:09:58.655452 env[1228]: time="2025-08-13T01:09:58.655430761Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Aug 13 01:09:58.655575 env[1228]: time="2025-08-13T01:09:58.655554187Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Aug 13 01:09:58.655678 env[1228]: time="2025-08-13T01:09:58.655658033Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Aug 13 01:09:58.655806 env[1228]: time="2025-08-13T01:09:58.655785617Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Aug 13 01:09:58.655908 env[1228]: time="2025-08-13T01:09:58.655887451Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Aug 13 01:09:58.656131 env[1228]: time="2025-08-13T01:09:58.656109547Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Aug 13 01:09:58.657824 env[1228]: time="2025-08-13T01:09:58.657794911Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Aug 13 01:09:58.661677 env[1228]: time="2025-08-13T01:09:58.661645252Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Aug 13 01:09:58.664310 env[1228]: time="2025-08-13T01:09:58.664277601Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." 
type=io.containerd.grpc.v1 Aug 13 01:09:58.664580 env[1228]: time="2025-08-13T01:09:58.664552308Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Aug 13 01:09:58.664896 env[1228]: time="2025-08-13T01:09:58.664870058Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Aug 13 01:09:58.665349 env[1228]: time="2025-08-13T01:09:58.665295754Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Aug 13 01:09:58.666422 env[1228]: time="2025-08-13T01:09:58.666370011Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Aug 13 01:09:58.666575 env[1228]: time="2025-08-13T01:09:58.666547874Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Aug 13 01:09:58.667275 env[1228]: time="2025-08-13T01:09:58.667244562Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Aug 13 01:09:58.668441 env[1228]: time="2025-08-13T01:09:58.667811488Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Aug 13 01:09:58.668613 env[1228]: time="2025-08-13T01:09:58.668583228Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Aug 13 01:09:58.668888 env[1228]: time="2025-08-13T01:09:58.668858186Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Aug 13 01:09:58.669188 env[1228]: time="2025-08-13T01:09:58.669158525Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Aug 13 01:09:58.671532 env[1228]: time="2025-08-13T01:09:58.671501192Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Aug 13 01:09:58.671973 env[1228]: time="2025-08-13T01:09:58.671942706Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Aug 13 01:09:58.672111 env[1228]: time="2025-08-13T01:09:58.672078460Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Aug 13 01:09:58.672218 env[1228]: time="2025-08-13T01:09:58.672197973Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Aug 13 01:09:58.672395 env[1228]: time="2025-08-13T01:09:58.672370188Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Aug 13 01:09:58.674389 env[1228]: time="2025-08-13T01:09:58.674358639Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Aug 13 01:09:58.674672 env[1228]: time="2025-08-13T01:09:58.674643139Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Aug 13 01:09:58.675116 env[1228]: time="2025-08-13T01:09:58.675088571Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
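The plugin-loading lines above show containerd's pattern of skipping, rather than failing on, plugins whose preconditions aren't met (the zfs snapshotter needs a zfs filesystem; the otlp tracing processor needs an OpenTelemetry endpoint). An illustrative sketch of that pattern — not containerd's actual code — which reproduces the `skip loading plugin ... skip plugin` wording seen in the log:

```python
# Illustrative sketch of containerd's skip-on-precondition plugin loading.
# Plugin names are taken from the log; the loader itself is hypothetical.

def load_plugins(plugins):
    """Load each plugin whose precondition holds; skip (don't abort on) the rest."""
    loaded, skipped = [], []
    for name, precondition in plugins:
        ok, reason = precondition()
        if ok:
            loaded.append(name)
        else:
            skipped.append((name, reason))
            print(f'skip loading plugin "{name}"...: {reason}: skip plugin')
    return loaded, skipped

plugins = [
    ("io.containerd.snapshotter.v1.overlayfs", lambda: (True, "")),
    ("io.containerd.snapshotter.v1.zfs",
     lambda: (False, "path must be a zfs filesystem")),
    ("io.containerd.tracing.processor.v1.otlp",
     lambda: (False, "no OpenTelemetry endpoint")),
]
loaded, skipped = load_plugins(plugins)
```

This is why the daemon still boots successfully a few lines later despite several plugins being unavailable on this host.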
type=io.containerd.grpc.v1 Aug 13 01:09:58.677353 env[1228]: time="2025-08-13T01:09:58.677219195Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd 
ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Aug 13 01:09:58.681043 env[1228]: time="2025-08-13T01:09:58.679177079Z" level=info msg="Connect containerd service" Aug 13 01:09:58.681043 env[1228]: time="2025-08-13T01:09:58.679329515Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Aug 13 01:09:58.681746 env[1228]: time="2025-08-13T01:09:58.681710768Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 01:09:58.689519 env[1228]: time="2025-08-13T01:09:58.689456798Z" level=info msg="Start subscribing containerd event" Aug 13 01:09:58.689976 env[1228]: time="2025-08-13T01:09:58.689933440Z" level=info msg="Start recovering state" Aug 13 01:09:58.690608 env[1228]: time="2025-08-13T01:09:58.690582571Z" level=info msg="Start event monitor" Aug 13 01:09:58.690737 env[1228]: time="2025-08-13T01:09:58.690716454Z" level=info msg="Start snapshots syncer" Aug 13 01:09:58.690858 env[1228]: time="2025-08-13T01:09:58.690838018Z" level=info msg="Start cni network conf syncer for default" Aug 13 01:09:58.690990 env[1228]: time="2025-08-13T01:09:58.690970585Z" level=info msg="Start streaming server" Aug 13 01:09:58.691744 env[1228]: time="2025-08-13T01:09:58.691719190Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 13 01:09:58.693042 env[1228]: time="2025-08-13T01:09:58.693015176Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 13 01:09:58.707455 polkitd[1273]: Loading rules from directory /etc/polkit-1/rules.d Aug 13 01:09:58.707699 polkitd[1273]: Loading rules from directory /usr/share/polkit-1/rules.d Aug 13 01:09:58.709498 systemd[1]: Started containerd.service. 
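The `failed to load cni during init` error above is expected on first boot: the CRI plugin's config points `NetworkPluginConfDir` at `/etc/cni/net.d`, which is still empty until a CNI provider installs a config. A minimal sketch of the kind of `.conflist` file whose absence triggers that message — the network name, plugin type, and subnet here are illustrative placeholders, not values from this host:

```python
# Sketch of a minimal CNI .conflist; containerd's cni conf syncer
# (started below as "cni network conf syncer for default") re-scans
# the conf dir and clears the error once such a file appears.
import json

conf = {
    "cniVersion": "0.4.0",
    "name": "examplenet",          # hypothetical network name
    "plugins": [{
        "type": "bridge",          # assumes the bridge binary exists in /opt/cni/bin
        "bridge": "cni0",
        "ipam": {"type": "host-local", "subnet": "10.85.0.0/16"},
    }],
}

def render_cni_conflist(conf):
    """Serialize a CNI network config list for /etc/cni/net.d."""
    return json.dumps(conf, indent=2)

print(render_cni_conflist(conf))
```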
Aug 13 01:09:58.709887 env[1228]: time="2025-08-13T01:09:58.709840058Z" level=info msg="containerd successfully booted in 0.394557s" Aug 13 01:09:58.711439 polkitd[1273]: Finished loading, compiling and executing 2 rules Aug 13 01:09:58.712050 dbus-daemon[1199]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Aug 13 01:09:58.713463 polkitd[1273]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Aug 13 01:09:58.717129 systemd[1]: Started polkit.service. Aug 13 01:09:58.750216 systemd-hostnamed[1248]: Hostname set to (transient) Aug 13 01:09:58.752984 systemd-resolved[1161]: System hostname changed to 'ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal'. Aug 13 01:09:59.802956 tar[1225]: linux-amd64/LICENSE Aug 13 01:09:59.803498 tar[1225]: linux-amd64/README.md Aug 13 01:09:59.820120 systemd[1]: Finished prepare-helm.service. Aug 13 01:10:00.422093 systemd[1]: Started kubelet.service. Aug 13 01:10:01.094633 sshd_keygen[1235]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 13 01:10:01.150228 systemd[1]: Finished sshd-keygen.service. Aug 13 01:10:01.160423 systemd[1]: Starting issuegen.service... Aug 13 01:10:01.172576 systemd[1]: issuegen.service: Deactivated successfully. Aug 13 01:10:01.172854 systemd[1]: Finished issuegen.service. Aug 13 01:10:01.183935 systemd[1]: Starting systemd-user-sessions.service... Aug 13 01:10:01.205142 systemd[1]: Finished systemd-user-sessions.service. Aug 13 01:10:01.215757 systemd[1]: Started getty@tty1.service. Aug 13 01:10:01.227461 systemd[1]: Started serial-getty@ttyS0.service. Aug 13 01:10:01.236926 systemd[1]: Reached target getty.target. 
Aug 13 01:10:01.255700 locksmithd[1257]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 13 01:10:01.628001 kubelet[1292]: E0813 01:10:01.627942 1292 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 01:10:01.630895 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 01:10:01.631174 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 01:10:01.631672 systemd[1]: kubelet.service: Consumed 1.548s CPU time. Aug 13 01:10:04.059641 systemd[1]: var-lib-flatcar\x2doem\x2dgce.mount: Deactivated successfully. Aug 13 01:10:05.872372 kernel: loop2: detected capacity change from 0 to 2097152 Aug 13 01:10:05.886612 systemd-nspawn[1316]: Spawning container oem-gce on /var/lib/flatcar-oem-gce.img. Aug 13 01:10:05.886612 systemd-nspawn[1316]: Press ^] three times within 1s to kill container. Aug 13 01:10:05.899350 kernel: EXT4-fs (loop2): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Aug 13 01:10:05.979406 systemd[1]: Started oem-gce.service. Aug 13 01:10:05.986932 systemd[1]: Reached target multi-user.target. Aug 13 01:10:05.997747 systemd[1]: Starting systemd-update-utmp-runlevel.service... Aug 13 01:10:06.010790 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Aug 13 01:10:06.010987 systemd[1]: Finished systemd-update-utmp-runlevel.service. Aug 13 01:10:06.020660 systemd[1]: Startup finished in 1.026s (kernel) + 10.122s (initrd) + 16.108s (userspace) = 27.257s. 
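The kubelet crash above (and its repeats later in the log) all stem from one missing file: `/var/lib/kubelet/config.yaml`, which is normally written by `kubeadm init`/`kubeadm join`; until then systemd just restarts the unit on a timer. A minimal preflight sketch that reports the same condition — the path is taken from the log, everything else is illustrative:

```python
# Sketch: reproduce the kubelet's config-file preflight check.
# The path /var/lib/kubelet/config.yaml comes from the log's error;
# the function itself is a hypothetical stand-in, not kubelet code.
from pathlib import Path

def kubelet_preflight(config_path):
    """Return (ok, message) mirroring the 'no such file' failure in the log."""
    if not Path(config_path).is_file():
        return (False,
                f"failed to load Kubelet config file {config_path}: "
                "no such file or directory")
    return (True, "ok")

ok, msg = kubelet_preflight("/nonexistent/config.yaml")
```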
Aug 13 01:10:06.027398 systemd-nspawn[1316]: + '[' -e /etc/default/instance_configs.cfg.template ']' Aug 13 01:10:06.027398 systemd-nspawn[1316]: + echo -e '[InstanceSetup]\nset_host_keys = false' Aug 13 01:10:06.028356 systemd-nspawn[1316]: + /usr/bin/google_instance_setup Aug 13 01:10:06.588895 instance-setup[1322]: INFO Running google_set_multiqueue. Aug 13 01:10:06.602189 instance-setup[1322]: INFO Set channels for eth0 to 2. Aug 13 01:10:06.606158 instance-setup[1322]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1. Aug 13 01:10:06.607557 instance-setup[1322]: INFO /proc/irq/31/smp_affinity_list: real affinity 0 Aug 13 01:10:06.608037 instance-setup[1322]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1. Aug 13 01:10:06.609510 instance-setup[1322]: INFO /proc/irq/32/smp_affinity_list: real affinity 0 Aug 13 01:10:06.609876 instance-setup[1322]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1. Aug 13 01:10:06.611251 instance-setup[1322]: INFO /proc/irq/33/smp_affinity_list: real affinity 1 Aug 13 01:10:06.611733 instance-setup[1322]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1. Aug 13 01:10:06.613210 instance-setup[1322]: INFO /proc/irq/34/smp_affinity_list: real affinity 1 Aug 13 01:10:06.624645 instance-setup[1322]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Aug 13 01:10:06.624997 instance-setup[1322]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Aug 13 01:10:06.653653 systemd[1]: Created slice system-sshd.slice. Aug 13 01:10:06.655630 systemd[1]: Started sshd@0-10.128.0.44:22-139.178.68.195:48970.service. 
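The `google_set_multiqueue` output above distributes virtio IRQs two-per-CPU (IRQs 31,32 → CPU 0; 33,34 → CPU 1) and writes a one-hot CPU bitmask per TX queue into `xps_cpus` (queue 0 → mask 1, queue 1 → mask 2). A sketch of the same arithmetic for the 2-CPU, 2-queue case shown in the log:

```python
# Sketch of the affinity arithmetic in the instance-setup log lines;
# IRQ numbers and CPU/queue counts are taken from the log itself.

def irq_affinities(irqs, num_cpus, irqs_per_cpu=2):
    """Map each IRQ to a CPU, filling each CPU before moving to the next."""
    return {irq: (i // irqs_per_cpu) % num_cpus for i, irq in enumerate(irqs)}

def xps_mask(queue, num_cpus):
    """One-hot CPU bitmask for a TX queue, as written to queues/tx-N/xps_cpus."""
    return 1 << (queue % num_cpus)

affinity = irq_affinities([31, 32, 33, 34], num_cpus=2)
masks = [xps_mask(q, num_cpus=2) for q in range(2)]
```

This matches the log exactly: affinity `{31: 0, 32: 0, 33: 1, 34: 1}` and XPS masks `1` and `2`.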
Aug 13 01:10:06.674347 systemd-nspawn[1316]: + /usr/bin/google_metadata_script_runner --script-type startup Aug 13 01:10:06.987410 sshd[1354]: Accepted publickey for core from 139.178.68.195 port 48970 ssh2: RSA SHA256:js7QLiUsN7/S4hY8YN2wBIB3ZHNNF040gi6scZJjeR4 Aug 13 01:10:06.991104 sshd[1354]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:10:07.012424 systemd[1]: Created slice user-500.slice. Aug 13 01:10:07.017127 systemd[1]: Starting user-runtime-dir@500.service... Aug 13 01:10:07.033958 systemd-logind[1219]: New session 1 of user core. Aug 13 01:10:07.042635 systemd[1]: Finished user-runtime-dir@500.service. Aug 13 01:10:07.045711 systemd[1]: Starting user@500.service... Aug 13 01:10:07.048951 startup-script[1355]: INFO Starting startup scripts. Aug 13 01:10:07.065505 (systemd)[1360]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:10:07.069475 startup-script[1355]: INFO No startup scripts found in metadata. Aug 13 01:10:07.069869 startup-script[1355]: INFO Finished running startup scripts. Aug 13 01:10:07.115694 systemd-nspawn[1316]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Aug 13 01:10:07.116417 systemd-nspawn[1316]: + daemon_pids=() Aug 13 01:10:07.116683 systemd-nspawn[1316]: + for d in accounts clock_skew network Aug 13 01:10:07.117152 systemd-nspawn[1316]: + daemon_pids+=($!) Aug 13 01:10:07.117415 systemd-nspawn[1316]: + for d in accounts clock_skew network Aug 13 01:10:07.117841 systemd-nspawn[1316]: + daemon_pids+=($!) Aug 13 01:10:07.118077 systemd-nspawn[1316]: + for d in accounts clock_skew network Aug 13 01:10:07.118554 systemd-nspawn[1316]: + daemon_pids+=($!) 
Aug 13 01:10:07.118846 systemd-nspawn[1316]: + NOTIFY_SOCKET=/run/systemd/notify Aug 13 01:10:07.119043 systemd-nspawn[1316]: + /usr/bin/systemd-notify --ready Aug 13 01:10:07.120254 systemd-nspawn[1316]: + /usr/bin/google_network_daemon Aug 13 01:10:07.121601 systemd-nspawn[1316]: + /usr/bin/google_clock_skew_daemon Aug 13 01:10:07.123760 systemd-nspawn[1316]: + /usr/bin/google_accounts_daemon Aug 13 01:10:07.168377 systemd-nspawn[1316]: + wait -n 36 37 38 Aug 13 01:10:07.257585 systemd[1360]: Queued start job for default target default.target. Aug 13 01:10:07.258486 systemd[1360]: Reached target paths.target. Aug 13 01:10:07.258532 systemd[1360]: Reached target sockets.target. Aug 13 01:10:07.258567 systemd[1360]: Reached target timers.target. Aug 13 01:10:07.258588 systemd[1360]: Reached target basic.target. Aug 13 01:10:07.258757 systemd[1]: Started user@500.service. Aug 13 01:10:07.260339 systemd[1]: Started session-1.scope. Aug 13 01:10:07.261055 systemd[1360]: Reached target default.target. Aug 13 01:10:07.261150 systemd[1360]: Startup finished in 184ms. Aug 13 01:10:07.491837 systemd[1]: Started sshd@1-10.128.0.44:22-139.178.68.195:48978.service. Aug 13 01:10:07.811238 sshd[1373]: Accepted publickey for core from 139.178.68.195 port 48978 ssh2: RSA SHA256:js7QLiUsN7/S4hY8YN2wBIB3ZHNNF040gi6scZJjeR4 Aug 13 01:10:07.812302 sshd[1373]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:10:07.820711 systemd[1]: Started session-2.scope. Aug 13 01:10:07.821363 systemd-logind[1219]: New session 2 of user core. Aug 13 01:10:07.897693 groupadd[1380]: group added to /etc/group: name=google-sudoers, GID=1000 Aug 13 01:10:07.901138 groupadd[1380]: group added to /etc/gshadow: name=google-sudoers Aug 13 01:10:07.925848 groupadd[1380]: new group: name=google-sudoers, GID=1000 Aug 13 01:10:07.963025 google-accounts[1366]: INFO Starting Google Accounts daemon. Aug 13 01:10:08.025158 google-accounts[1366]: WARNING OS Login not installed. 
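The traced shell fragment above is a small supervisor: it starts the accounts, clock_skew, and network daemons, remembers their PIDs, signals readiness via `systemd-notify --ready`, then blocks in `wait -n` until the first one exits (the SIGTERM trap kills the rest). A sketch of the same pattern with `subprocess` — the daemon names are from the log, but `sleep` stands in for them so the sketch runs anywhere:

```python
# Sketch of the nspawn script's start/notify/wait-n/kill pattern.
# Real units would exec google_accounts_daemon etc.; `sleep` is a stand-in.
import subprocess
import time

def supervise(commands):
    """Start all commands, return the first exit code, terminate the rest."""
    procs = [subprocess.Popen(cmd) for cmd in commands]   # daemon_pids+=($!)
    # the real script calls /usr/bin/systemd-notify --ready at this point
    done = None
    while done is None:                                   # wait -n equivalent
        for p in procs:
            if p.poll() is not None:
                done = p
                break
        time.sleep(0.05)
    for p in procs:                                       # trap ... kill "${daemon_pids[@]}"
        if p.poll() is None:
            p.terminate()
            p.wait()
    return done.returncode

rc = supervise([["sleep", "0.1"], ["sleep", "5"], ["sleep", "5"]])
```

The polling loop is a simplification; `wait -n` in bash (and `os.wait()` for direct children) block without polling.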
Aug 13 01:10:08.026193 google-accounts[1366]: INFO Creating a new user account for 0. Aug 13 01:10:08.033657 sshd[1373]: pam_unix(sshd:session): session closed for user core Aug 13 01:10:08.038120 systemd[1]: sshd@1-10.128.0.44:22-139.178.68.195:48978.service: Deactivated successfully. Aug 13 01:10:08.039337 systemd[1]: session-2.scope: Deactivated successfully. Aug 13 01:10:08.041788 systemd-logind[1219]: Session 2 logged out. Waiting for processes to exit. Aug 13 01:10:08.043266 systemd-nspawn[1316]: useradd: invalid user name '0': use --badname to ignore Aug 13 01:10:08.043396 systemd-logind[1219]: Removed session 2. Aug 13 01:10:08.044859 google-accounts[1366]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. Aug 13 01:10:08.049271 google-clock-skew[1367]: INFO Starting Google Clock Skew daemon. Aug 13 01:10:08.067447 google-clock-skew[1367]: INFO Clock drift token has changed: 0. Aug 13 01:10:08.072871 systemd-nspawn[1316]: hwclock: Cannot access the Hardware Clock via any known method. Aug 13 01:10:08.073198 systemd-nspawn[1316]: hwclock: Use the --verbose option to see the details of our search for an access method. Aug 13 01:10:08.075656 google-clock-skew[1367]: WARNING Failed to sync system time with hardware clock. Aug 13 01:10:08.081565 systemd[1]: Started sshd@2-10.128.0.44:22-139.178.68.195:48994.service. Aug 13 01:10:08.083067 google-networking[1368]: INFO Starting Google Networking daemon. Aug 13 01:10:08.385023 sshd[1396]: Accepted publickey for core from 139.178.68.195 port 48994 ssh2: RSA SHA256:js7QLiUsN7/S4hY8YN2wBIB3ZHNNF040gi6scZJjeR4 Aug 13 01:10:08.387249 sshd[1396]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:10:08.393358 systemd-logind[1219]: New session 3 of user core. Aug 13 01:10:08.394132 systemd[1]: Started session-3.scope. 
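The `useradd: invalid user name '0'` failure above happens because the accounts daemon tried to create a user literally named `0`, which starts with a digit. A sketch of the conventional portable-username rule (shadow-utils' exact check varies by distro and build options; this regex is the commonly cited default, an assumption rather than the code the GCE agent runs):

```python
# Sketch of the conventional useradd name rule: must start with a
# lowercase letter or underscore, then letters/digits/underscore/hyphen,
# optional trailing '$' (Samba machine accounts), max 32 chars.
import re

USERNAME_RE = re.compile(r"^[a-z_][a-z0-9_-]*\$?$")

def valid_username(name):
    return bool(USERNAME_RE.fullmatch(name)) and 0 < len(name) <= 32

checks = {n: valid_username(n) for n in ["core", "0", "google-sudoers", "_svc"]}
```

Under this rule `"0"` fails for the same reason useradd rejected it, while the names that do appear in the log (`core`, `google-sudoers`) pass.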
Aug 13 01:10:08.599084 sshd[1396]: pam_unix(sshd:session): session closed for user core Aug 13 01:10:08.603609 systemd[1]: sshd@2-10.128.0.44:22-139.178.68.195:48994.service: Deactivated successfully. Aug 13 01:10:08.604687 systemd[1]: session-3.scope: Deactivated successfully. Aug 13 01:10:08.605603 systemd-logind[1219]: Session 3 logged out. Waiting for processes to exit. Aug 13 01:10:08.606915 systemd-logind[1219]: Removed session 3. Aug 13 01:10:08.645899 systemd[1]: Started sshd@3-10.128.0.44:22-139.178.68.195:49010.service. Aug 13 01:10:08.941987 sshd[1403]: Accepted publickey for core from 139.178.68.195 port 49010 ssh2: RSA SHA256:js7QLiUsN7/S4hY8YN2wBIB3ZHNNF040gi6scZJjeR4 Aug 13 01:10:08.943770 sshd[1403]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:10:08.950622 systemd[1]: Started session-4.scope. Aug 13 01:10:08.951497 systemd-logind[1219]: New session 4 of user core. Aug 13 01:10:09.160435 sshd[1403]: pam_unix(sshd:session): session closed for user core Aug 13 01:10:09.164587 systemd[1]: sshd@3-10.128.0.44:22-139.178.68.195:49010.service: Deactivated successfully. Aug 13 01:10:09.165663 systemd[1]: session-4.scope: Deactivated successfully. Aug 13 01:10:09.166568 systemd-logind[1219]: Session 4 logged out. Waiting for processes to exit. Aug 13 01:10:09.167797 systemd-logind[1219]: Removed session 4. Aug 13 01:10:09.207710 systemd[1]: Started sshd@4-10.128.0.44:22-139.178.68.195:49016.service. Aug 13 01:10:09.501771 sshd[1409]: Accepted publickey for core from 139.178.68.195 port 49016 ssh2: RSA SHA256:js7QLiUsN7/S4hY8YN2wBIB3ZHNNF040gi6scZJjeR4 Aug 13 01:10:09.503744 sshd[1409]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:10:09.510400 systemd[1]: Started session-5.scope. Aug 13 01:10:09.511193 systemd-logind[1219]: New session 5 of user core. 
Aug 13 01:10:09.696256 sudo[1412]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 13 01:10:09.696718 sudo[1412]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 13 01:10:09.728033 systemd[1]: Starting docker.service... Aug 13 01:10:09.781229 env[1422]: time="2025-08-13T01:10:09.780414351Z" level=info msg="Starting up" Aug 13 01:10:09.782132 env[1422]: time="2025-08-13T01:10:09.782074673Z" level=info msg="parsed scheme: \"unix\"" module=grpc Aug 13 01:10:09.782132 env[1422]: time="2025-08-13T01:10:09.782103057Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Aug 13 01:10:09.782977 env[1422]: time="2025-08-13T01:10:09.782140189Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Aug 13 01:10:09.782977 env[1422]: time="2025-08-13T01:10:09.782160377Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Aug 13 01:10:09.784879 env[1422]: time="2025-08-13T01:10:09.784854177Z" level=info msg="parsed scheme: \"unix\"" module=grpc Aug 13 01:10:09.785005 env[1422]: time="2025-08-13T01:10:09.784987282Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Aug 13 01:10:09.785086 env[1422]: time="2025-08-13T01:10:09.785066878Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Aug 13 01:10:09.785151 env[1422]: time="2025-08-13T01:10:09.785136774Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Aug 13 01:10:09.793760 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport910309497-merged.mount: Deactivated successfully. Aug 13 01:10:09.820538 env[1422]: time="2025-08-13T01:10:09.820476077Z" level=info msg="Loading containers: start." 
Aug 13 01:10:09.988335 kernel: Initializing XFRM netlink socket Aug 13 01:10:10.036080 env[1422]: time="2025-08-13T01:10:10.033061459Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Aug 13 01:10:10.116615 systemd-networkd[1030]: docker0: Link UP Aug 13 01:10:10.134187 env[1422]: time="2025-08-13T01:10:10.134124992Z" level=info msg="Loading containers: done." Aug 13 01:10:10.151773 env[1422]: time="2025-08-13T01:10:10.147732395Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 13 01:10:10.151773 env[1422]: time="2025-08-13T01:10:10.148012379Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Aug 13 01:10:10.151773 env[1422]: time="2025-08-13T01:10:10.148937146Z" level=info msg="Daemon has completed initialization" Aug 13 01:10:10.167717 systemd[1]: Started docker.service. Aug 13 01:10:10.179218 env[1422]: time="2025-08-13T01:10:10.179126613Z" level=info msg="API listen on /run/docker.sock" Aug 13 01:10:11.119621 env[1228]: time="2025-08-13T01:10:11.119519031Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\"" Aug 13 01:10:11.655563 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2517960764.mount: Deactivated successfully. Aug 13 01:10:11.657269 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Aug 13 01:10:11.657545 systemd[1]: Stopped kubelet.service. Aug 13 01:10:11.657622 systemd[1]: kubelet.service: Consumed 1.548s CPU time. Aug 13 01:10:11.661147 systemd[1]: Starting kubelet.service... Aug 13 01:10:11.986969 systemd[1]: Started kubelet.service. 
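The dockerd message above notes that `docker0` defaults to `172.17.0.0/16` and that `--bip` sets a preferred address. A `--bip` value is an interface CIDR: the host part becomes the bridge's gateway IP and the network part becomes the container subnet. A small sketch of that split (the example value is illustrative, not this host's):

```python
# Sketch: split a --bip style CIDR into the bridge gateway IP and the
# container subnet, as dockerd does when configuring docker0.
import ipaddress

def parse_bip(bip):
    """Return (gateway_ip, subnet) for a value like 172.18.0.1/16."""
    iface = ipaddress.ip_interface(bip)
    return str(iface.ip), str(iface.network)

gw, subnet = parse_bip("172.18.0.1/16")   # hypothetical override value
```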
Aug 13 01:10:12.079415 kubelet[1548]: E0813 01:10:12.078728 1548 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 01:10:12.083778 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 01:10:12.083942 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 01:10:13.578117 env[1228]: time="2025-08-13T01:10:13.578037769Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:10:13.580773 env[1228]: time="2025-08-13T01:10:13.580706089Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:10:13.583482 env[1228]: time="2025-08-13T01:10:13.583439018Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:10:13.586231 env[1228]: time="2025-08-13T01:10:13.586188451Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:10:13.587676 env[1228]: time="2025-08-13T01:10:13.587628141Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\" returns image reference \"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\"" Aug 13 01:10:13.588667 env[1228]: time="2025-08-13T01:10:13.588624670Z" level=info msg="PullImage 
\"registry.k8s.io/kube-controller-manager:v1.31.11\"" Aug 13 01:10:15.225913 env[1228]: time="2025-08-13T01:10:15.225849614Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:10:15.228601 env[1228]: time="2025-08-13T01:10:15.228555536Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:10:15.231839 env[1228]: time="2025-08-13T01:10:15.231791196Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:10:15.233717 env[1228]: time="2025-08-13T01:10:15.233677536Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:10:15.235263 env[1228]: time="2025-08-13T01:10:15.235218950Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\" returns image reference \"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\"" Aug 13 01:10:15.236234 env[1228]: time="2025-08-13T01:10:15.236200752Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\"" Aug 13 01:10:16.539049 env[1228]: time="2025-08-13T01:10:16.538977315Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:10:16.541584 env[1228]: time="2025-08-13T01:10:16.541538165Z" level=info msg="ImageCreate event 
&ImageCreate{Name:sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:10:16.543770 env[1228]: time="2025-08-13T01:10:16.543727265Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:10:16.546006 env[1228]: time="2025-08-13T01:10:16.545966689Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:10:16.546986 env[1228]: time="2025-08-13T01:10:16.546926953Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\" returns image reference \"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\"" Aug 13 01:10:16.547703 env[1228]: time="2025-08-13T01:10:16.547669574Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\"" Aug 13 01:10:17.677575 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount589383298.mount: Deactivated successfully. 
Aug 13 01:10:18.444928 env[1228]: time="2025-08-13T01:10:18.444849829Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:10:18.447483 env[1228]: time="2025-08-13T01:10:18.447417390Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:10:18.449337 env[1228]: time="2025-08-13T01:10:18.449250739Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:10:18.451155 env[1228]: time="2025-08-13T01:10:18.451112330Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:10:18.452039 env[1228]: time="2025-08-13T01:10:18.451986867Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\" returns image reference \"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\"" Aug 13 01:10:18.452974 env[1228]: time="2025-08-13T01:10:18.452939940Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Aug 13 01:10:18.917722 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2486745405.mount: Deactivated successfully. 
Aug 13 01:10:20.241136 env[1228]: time="2025-08-13T01:10:20.241059223Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:10:20.250870 env[1228]: time="2025-08-13T01:10:20.250805081Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:10:20.253385 env[1228]: time="2025-08-13T01:10:20.253333437Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:10:20.255718 env[1228]: time="2025-08-13T01:10:20.255669851Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:10:20.256942 env[1228]: time="2025-08-13T01:10:20.256884410Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Aug 13 01:10:20.257958 env[1228]: time="2025-08-13T01:10:20.257923204Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Aug 13 01:10:20.700518 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount996956649.mount: Deactivated successfully. 
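Each pull sequence above identifies the same image three ways: a tag reference (`registry.k8s.io/coredns/coredns:v1.11.3`), a config-blob ID (`sha256:c69f...`), and a repo digest (`...@sha256:9caa...`). A small sketch of splitting such a reference into repository, tag, and digest parts — a simplification of real distribution-reference parsing, not the library containerd uses:

```python
# Sketch: split an OCI image reference into (repository, tag, digest).
# Handles tag refs, digest refs, and registry host:port prefixes.

def split_image_ref(ref):
    digest = None
    if "@" in ref:
        ref, digest = ref.split("@", 1)
    repo, _, tag = ref.rpartition(":")
    if not repo or "/" in tag:   # no tag: the ":" belonged to host:port, or absent
        repo, tag = ref, None
    return repo, tag, digest

parts = split_image_ref("registry.k8s.io/coredns/coredns:v1.11.3")
```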
Aug 13 01:10:20.705557 env[1228]: time="2025-08-13T01:10:20.705500464Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:10:20.707587 env[1228]: time="2025-08-13T01:10:20.707540556Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:10:20.709590 env[1228]: time="2025-08-13T01:10:20.709547370Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:10:20.711585 env[1228]: time="2025-08-13T01:10:20.711541100Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:10:20.712325 env[1228]: time="2025-08-13T01:10:20.712251554Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Aug 13 01:10:20.713156 env[1228]: time="2025-08-13T01:10:20.713120503Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Aug 13 01:10:21.074196 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1049977123.mount: Deactivated successfully. Aug 13 01:10:22.228366 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Aug 13 01:10:22.228726 systemd[1]: Stopped kubelet.service. Aug 13 01:10:22.231897 systemd[1]: Starting kubelet.service... Aug 13 01:10:22.837043 systemd[1]: Started kubelet.service. 
Aug 13 01:10:22.932451 kubelet[1559]: E0813 01:10:22.932398 1559 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 01:10:22.935292 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 01:10:22.935536 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 01:10:23.769329 env[1228]: time="2025-08-13T01:10:23.769240715Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:10:23.772126 env[1228]: time="2025-08-13T01:10:23.772057657Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:10:23.775412 env[1228]: time="2025-08-13T01:10:23.775359549Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:10:23.778731 env[1228]: time="2025-08-13T01:10:23.778689904Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:10:23.779823 env[1228]: time="2025-08-13T01:10:23.779764959Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Aug 13 01:10:27.649360 systemd[1]: Stopped kubelet.service. Aug 13 01:10:27.653315 systemd[1]: Starting kubelet.service... 
Aug 13 01:10:27.692810 systemd[1]: Reloading. Aug 13 01:10:27.825188 /usr/lib/systemd/system-generators/torcx-generator[1608]: time="2025-08-13T01:10:27Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Aug 13 01:10:27.825241 /usr/lib/systemd/system-generators/torcx-generator[1608]: time="2025-08-13T01:10:27Z" level=info msg="torcx already run" Aug 13 01:10:27.969419 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Aug 13 01:10:27.969448 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Aug 13 01:10:27.994471 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 01:10:28.155656 systemd[1]: Started kubelet.service. Aug 13 01:10:28.168395 systemd[1]: Stopping kubelet.service... Aug 13 01:10:28.169572 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 01:10:28.169757 systemd[1]: Stopped kubelet.service. Aug 13 01:10:28.172075 systemd[1]: Starting kubelet.service... Aug 13 01:10:28.476642 systemd[1]: Started kubelet.service. Aug 13 01:10:28.548839 kubelet[1664]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 01:10:28.548839 kubelet[1664]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. Aug 13 01:10:28.548839 kubelet[1664]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 01:10:28.549566 kubelet[1664]: I0813 01:10:28.548942 1664 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 01:10:28.757842 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Aug 13 01:10:29.039013 kubelet[1664]: I0813 01:10:29.038589 1664 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 13 01:10:29.039013 kubelet[1664]: I0813 01:10:29.038633 1664 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 01:10:29.039247 kubelet[1664]: I0813 01:10:29.039016 1664 server.go:934] "Client rotation is on, will bootstrap in background" Aug 13 01:10:29.103165 kubelet[1664]: I0813 01:10:29.103073 1664 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 01:10:29.103594 kubelet[1664]: E0813 01:10:29.103556 1664 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.128.0.44:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.128.0.44:6443: connect: connection refused" logger="UnhandledError" Aug 13 01:10:29.111390 kubelet[1664]: E0813 01:10:29.111330 1664 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 01:10:29.111390 kubelet[1664]: I0813 01:10:29.111371 1664 server.go:1408] "CRI implementation should be 
updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 01:10:29.116844 kubelet[1664]: I0813 01:10:29.116794 1664 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Aug 13 01:10:29.117000 kubelet[1664]: I0813 01:10:29.116975 1664 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 13 01:10:29.117230 kubelet[1664]: I0813 01:10:29.117174 1664 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 01:10:29.117509 kubelet[1664]: I0813 01:10:29.117228 1664 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{}
,"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 13 01:10:29.117721 kubelet[1664]: I0813 01:10:29.117514 1664 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 01:10:29.117721 kubelet[1664]: I0813 01:10:29.117535 1664 container_manager_linux.go:300] "Creating device plugin manager" Aug 13 01:10:29.118254 kubelet[1664]: I0813 01:10:29.118216 1664 state_mem.go:36] "Initialized new in-memory state store" Aug 13 01:10:29.122507 kubelet[1664]: I0813 01:10:29.122456 1664 kubelet.go:408] "Attempting to sync node with API server" Aug 13 01:10:29.122507 kubelet[1664]: I0813 01:10:29.122505 1664 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 01:10:29.122717 kubelet[1664]: I0813 01:10:29.122552 1664 kubelet.go:314] "Adding apiserver pod source" Aug 13 01:10:29.122717 kubelet[1664]: I0813 01:10:29.122581 1664 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 01:10:29.135251 kubelet[1664]: W0813 01:10:29.135164 1664 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.44:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.44:6443: connect: connection refused Aug 13 01:10:29.135602 kubelet[1664]: E0813 01:10:29.135563 1664 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.128.0.44:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal&limit=500&resourceVersion=0\": dial tcp 10.128.0.44:6443: connect: connection refused" logger="UnhandledError" Aug 13 01:10:29.136152 kubelet[1664]: I0813 01:10:29.136124 1664 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Aug 13 01:10:29.137603 kubelet[1664]: I0813 01:10:29.137575 1664 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 01:10:29.137947 kubelet[1664]: W0813 01:10:29.137926 1664 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Aug 13 01:10:29.141355 kubelet[1664]: W0813 01:10:29.140731 1664 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.44:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.128.0.44:6443: connect: connection refused Aug 13 01:10:29.141355 kubelet[1664]: E0813 01:10:29.140830 1664 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.128.0.44:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.44:6443: connect: connection refused" logger="UnhandledError" Aug 13 01:10:29.149714 kubelet[1664]: I0813 01:10:29.149663 1664 server.go:1274] "Started kubelet" Aug 13 01:10:29.164272 kubelet[1664]: I0813 01:10:29.164219 1664 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 01:10:29.164924 kubelet[1664]: I0813 01:10:29.164898 1664 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 01:10:29.176553 kernel: SELinux: 
Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Aug 13 01:10:29.176879 kubelet[1664]: I0813 01:10:29.176855 1664 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 01:10:29.177080 kubelet[1664]: E0813 01:10:29.175164 1664 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.44:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.44:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal.185b2e56926d006a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal,UID:ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal,},FirstTimestamp:2025-08-13 01:10:29.149622378 +0000 UTC m=+0.664586050,LastTimestamp:2025-08-13 01:10:29.149622378 +0000 UTC m=+0.664586050,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal,}" Aug 13 01:10:29.180410 kubelet[1664]: I0813 01:10:29.180137 1664 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 01:10:29.182204 kubelet[1664]: I0813 01:10:29.182170 1664 server.go:449] "Adding debug handlers to kubelet server" Aug 13 01:10:29.185477 kubelet[1664]: I0813 01:10:29.185454 1664 volume_manager.go:289] "Starting Kubelet Volume Manager" Aug 13 01:10:29.185735 kubelet[1664]: I0813 01:10:29.185705 1664 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 01:10:29.186029 kubelet[1664]: E0813 01:10:29.186002 1664 
kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal\" not found" Aug 13 01:10:29.186577 kubelet[1664]: I0813 01:10:29.186545 1664 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 13 01:10:29.186781 kubelet[1664]: I0813 01:10:29.186765 1664 reconciler.go:26] "Reconciler: start to sync state" Aug 13 01:10:29.189778 kubelet[1664]: W0813 01:10:29.189714 1664 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.44:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.44:6443: connect: connection refused Aug 13 01:10:29.189888 kubelet[1664]: E0813 01:10:29.189797 1664 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.128.0.44:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.44:6443: connect: connection refused" logger="UnhandledError" Aug 13 01:10:29.189967 kubelet[1664]: E0813 01:10:29.189906 1664 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.44:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.44:6443: connect: connection refused" interval="200ms" Aug 13 01:10:29.190324 kubelet[1664]: I0813 01:10:29.190271 1664 factory.go:221] Registration of the systemd container factory successfully Aug 13 01:10:29.190447 kubelet[1664]: I0813 01:10:29.190418 1664 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 01:10:29.192841 kubelet[1664]: E0813 01:10:29.192802 1664 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 01:10:29.193152 kubelet[1664]: I0813 01:10:29.193125 1664 factory.go:221] Registration of the containerd container factory successfully Aug 13 01:10:29.207251 kubelet[1664]: I0813 01:10:29.207207 1664 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 01:10:29.209256 kubelet[1664]: I0813 01:10:29.209226 1664 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Aug 13 01:10:29.209448 kubelet[1664]: I0813 01:10:29.209429 1664 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 13 01:10:29.209609 kubelet[1664]: I0813 01:10:29.209590 1664 kubelet.go:2321] "Starting kubelet main sync loop" Aug 13 01:10:29.209791 kubelet[1664]: E0813 01:10:29.209765 1664 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 01:10:29.218771 kubelet[1664]: W0813 01:10:29.218711 1664 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.44:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.44:6443: connect: connection refused Aug 13 01:10:29.218988 kubelet[1664]: E0813 01:10:29.218955 1664 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.128.0.44:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.44:6443: connect: connection refused" logger="UnhandledError" Aug 13 01:10:29.232195 kubelet[1664]: I0813 01:10:29.232121 1664 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 13 01:10:29.232195 kubelet[1664]: I0813 01:10:29.232175 1664 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 13 01:10:29.232195 kubelet[1664]: I0813 
01:10:29.232201 1664 state_mem.go:36] "Initialized new in-memory state store" Aug 13 01:10:29.234703 kubelet[1664]: I0813 01:10:29.234667 1664 policy_none.go:49] "None policy: Start" Aug 13 01:10:29.235745 kubelet[1664]: I0813 01:10:29.235723 1664 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 13 01:10:29.235902 kubelet[1664]: I0813 01:10:29.235887 1664 state_mem.go:35] "Initializing new in-memory state store" Aug 13 01:10:29.244076 systemd[1]: Created slice kubepods.slice. Aug 13 01:10:29.251945 systemd[1]: Created slice kubepods-burstable.slice. Aug 13 01:10:29.256393 systemd[1]: Created slice kubepods-besteffort.slice. Aug 13 01:10:29.264415 kubelet[1664]: I0813 01:10:29.264369 1664 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 01:10:29.264614 kubelet[1664]: I0813 01:10:29.264591 1664 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 01:10:29.264726 kubelet[1664]: I0813 01:10:29.264618 1664 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 01:10:29.266561 kubelet[1664]: I0813 01:10:29.265890 1664 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 01:10:29.268906 kubelet[1664]: E0813 01:10:29.268877 1664 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal\" not found" Aug 13 01:10:29.325463 systemd[1]: Created slice kubepods-burstable-pod7bb1cc80d64a74137925a43eb9e7465c.slice. Aug 13 01:10:29.343726 systemd[1]: Created slice kubepods-burstable-podd208177b24a8010e8399bde7140fd51a.slice. Aug 13 01:10:29.354045 systemd[1]: Created slice kubepods-burstable-pod2e4e914c0fe28b809d4dc9c5ec792172.slice. 
Aug 13 01:10:29.369745 kubelet[1664]: I0813 01:10:29.369705 1664 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal" Aug 13 01:10:29.370254 kubelet[1664]: E0813 01:10:29.370200 1664 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.128.0.44:6443/api/v1/nodes\": dial tcp 10.128.0.44:6443: connect: connection refused" node="ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal" Aug 13 01:10:29.387708 kubelet[1664]: I0813 01:10:29.387636 1664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d208177b24a8010e8399bde7140fd51a-kubeconfig\") pod \"kube-controller-manager-ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal\" (UID: \"d208177b24a8010e8399bde7140fd51a\") " pod="kube-system/kube-controller-manager-ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal" Aug 13 01:10:29.387708 kubelet[1664]: I0813 01:10:29.387693 1664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2e4e914c0fe28b809d4dc9c5ec792172-kubeconfig\") pod \"kube-scheduler-ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal\" (UID: \"2e4e914c0fe28b809d4dc9c5ec792172\") " pod="kube-system/kube-scheduler-ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal" Aug 13 01:10:29.387935 kubelet[1664]: I0813 01:10:29.387726 1664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7bb1cc80d64a74137925a43eb9e7465c-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal\" (UID: \"7bb1cc80d64a74137925a43eb9e7465c\") " pod="kube-system/kube-apiserver-ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal" Aug 13 
01:10:29.387935 kubelet[1664]: I0813 01:10:29.387754 1664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d208177b24a8010e8399bde7140fd51a-flexvolume-dir\") pod \"kube-controller-manager-ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal\" (UID: \"d208177b24a8010e8399bde7140fd51a\") " pod="kube-system/kube-controller-manager-ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal" Aug 13 01:10:29.387935 kubelet[1664]: I0813 01:10:29.387778 1664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d208177b24a8010e8399bde7140fd51a-ca-certs\") pod \"kube-controller-manager-ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal\" (UID: \"d208177b24a8010e8399bde7140fd51a\") " pod="kube-system/kube-controller-manager-ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal" Aug 13 01:10:29.387935 kubelet[1664]: I0813 01:10:29.387803 1664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d208177b24a8010e8399bde7140fd51a-k8s-certs\") pod \"kube-controller-manager-ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal\" (UID: \"d208177b24a8010e8399bde7140fd51a\") " pod="kube-system/kube-controller-manager-ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal" Aug 13 01:10:29.388115 kubelet[1664]: I0813 01:10:29.387829 1664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d208177b24a8010e8399bde7140fd51a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal\" (UID: \"d208177b24a8010e8399bde7140fd51a\") " 
pod="kube-system/kube-controller-manager-ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal" Aug 13 01:10:29.388115 kubelet[1664]: I0813 01:10:29.387856 1664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7bb1cc80d64a74137925a43eb9e7465c-ca-certs\") pod \"kube-apiserver-ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal\" (UID: \"7bb1cc80d64a74137925a43eb9e7465c\") " pod="kube-system/kube-apiserver-ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal" Aug 13 01:10:29.388115 kubelet[1664]: I0813 01:10:29.387884 1664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7bb1cc80d64a74137925a43eb9e7465c-k8s-certs\") pod \"kube-apiserver-ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal\" (UID: \"7bb1cc80d64a74137925a43eb9e7465c\") " pod="kube-system/kube-apiserver-ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal" Aug 13 01:10:29.390992 kubelet[1664]: E0813 01:10:29.390941 1664 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.44:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.44:6443: connect: connection refused" interval="400ms" Aug 13 01:10:29.575608 kubelet[1664]: I0813 01:10:29.575567 1664 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal" Aug 13 01:10:29.576202 kubelet[1664]: E0813 01:10:29.576053 1664 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.128.0.44:6443/api/v1/nodes\": dial tcp 10.128.0.44:6443: connect: connection refused" node="ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal" Aug 13 01:10:29.640352 env[1228]: time="2025-08-13T01:10:29.640163684Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal,Uid:7bb1cc80d64a74137925a43eb9e7465c,Namespace:kube-system,Attempt:0,}" Aug 13 01:10:29.652774 env[1228]: time="2025-08-13T01:10:29.652693230Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal,Uid:d208177b24a8010e8399bde7140fd51a,Namespace:kube-system,Attempt:0,}" Aug 13 01:10:29.658546 env[1228]: time="2025-08-13T01:10:29.658493103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal,Uid:2e4e914c0fe28b809d4dc9c5ec792172,Namespace:kube-system,Attempt:0,}" Aug 13 01:10:29.792327 kubelet[1664]: E0813 01:10:29.792242 1664 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.44:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.44:6443: connect: connection refused" interval="800ms" Aug 13 01:10:29.981260 kubelet[1664]: I0813 01:10:29.981181 1664 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal" Aug 13 01:10:29.981796 kubelet[1664]: E0813 01:10:29.981738 1664 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.128.0.44:6443/api/v1/nodes\": dial tcp 10.128.0.44:6443: connect: connection refused" node="ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal" Aug 13 01:10:29.987346 kubelet[1664]: W0813 01:10:29.987288 1664 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.44:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.128.0.44:6443: connect: connection refused Aug 13 01:10:29.987481 kubelet[1664]: 
E0813 01:10:29.987363 1664 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.128.0.44:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.44:6443: connect: connection refused" logger="UnhandledError" Aug 13 01:10:29.990086 kubelet[1664]: W0813 01:10:29.990016 1664 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.44:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.44:6443: connect: connection refused Aug 13 01:10:29.990226 kubelet[1664]: E0813 01:10:29.990093 1664 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.128.0.44:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal&limit=500&resourceVersion=0\": dial tcp 10.128.0.44:6443: connect: connection refused" logger="UnhandledError" Aug 13 01:10:30.045415 kubelet[1664]: W0813 01:10:30.045357 1664 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.44:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.44:6443: connect: connection refused Aug 13 01:10:30.045608 kubelet[1664]: E0813 01:10:30.045427 1664 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.128.0.44:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.44:6443: connect: connection refused" logger="UnhandledError" Aug 13 01:10:30.177192 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2434675628.mount: Deactivated successfully. 
Aug 13 01:10:30.182586 env[1228]: time="2025-08-13T01:10:30.182531580Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:10:30.183825 env[1228]: time="2025-08-13T01:10:30.183768812Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:10:30.187150 env[1228]: time="2025-08-13T01:10:30.187095484Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:10:30.188662 env[1228]: time="2025-08-13T01:10:30.188608233Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:10:30.189659 env[1228]: time="2025-08-13T01:10:30.189606310Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:10:30.190659 env[1228]: time="2025-08-13T01:10:30.190609740Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:10:30.191692 env[1228]: time="2025-08-13T01:10:30.191655627Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:10:30.195737 env[1228]: time="2025-08-13T01:10:30.195698739Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Aug 13 01:10:30.199920 env[1228]: time="2025-08-13T01:10:30.199863109Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:10:30.201746 env[1228]: time="2025-08-13T01:10:30.201689085Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:10:30.202847 env[1228]: time="2025-08-13T01:10:30.202797362Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:10:30.206644 env[1228]: time="2025-08-13T01:10:30.206594621Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:10:30.254215 env[1228]: time="2025-08-13T01:10:30.252948931Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 01:10:30.254215 env[1228]: time="2025-08-13T01:10:30.253014096Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 01:10:30.254215 env[1228]: time="2025-08-13T01:10:30.253032881Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:10:30.254550 env[1228]: time="2025-08-13T01:10:30.253237107Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2064adf6bbeadc63bb5ef116c42aa1ee72ff65672af2d8b392782abd5db7ad26 pid=1704 runtime=io.containerd.runc.v2 Aug 13 01:10:30.266448 env[1228]: time="2025-08-13T01:10:30.266348433Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 01:10:30.266637 env[1228]: time="2025-08-13T01:10:30.266474852Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 01:10:30.266637 env[1228]: time="2025-08-13T01:10:30.266519274Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:10:30.266876 env[1228]: time="2025-08-13T01:10:30.266778621Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e1b9167e1f4ae66086be4b3d72fab1b58502a97cebcf2bac0235ad8343a0dcaf pid=1722 runtime=io.containerd.runc.v2 Aug 13 01:10:30.267325 env[1228]: time="2025-08-13T01:10:30.267224385Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 01:10:30.267476 env[1228]: time="2025-08-13T01:10:30.267272631Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 01:10:30.267476 env[1228]: time="2025-08-13T01:10:30.267340447Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:10:30.267711 env[1228]: time="2025-08-13T01:10:30.267653619Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0bd866ddfb3f0e918083cf9d77df3609509d3a094bb44b2b74dae2a74cf62fce pid=1731 runtime=io.containerd.runc.v2 Aug 13 01:10:30.287265 systemd[1]: Started cri-containerd-2064adf6bbeadc63bb5ef116c42aa1ee72ff65672af2d8b392782abd5db7ad26.scope. Aug 13 01:10:30.304430 systemd[1]: Started cri-containerd-e1b9167e1f4ae66086be4b3d72fab1b58502a97cebcf2bac0235ad8343a0dcaf.scope. Aug 13 01:10:30.332376 systemd[1]: Started cri-containerd-0bd866ddfb3f0e918083cf9d77df3609509d3a094bb44b2b74dae2a74cf62fce.scope. Aug 13 01:10:30.426858 env[1228]: time="2025-08-13T01:10:30.424609838Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal,Uid:2e4e914c0fe28b809d4dc9c5ec792172,Namespace:kube-system,Attempt:0,} returns sandbox id \"e1b9167e1f4ae66086be4b3d72fab1b58502a97cebcf2bac0235ad8343a0dcaf\"" Aug 13 01:10:30.431204 kubelet[1664]: W0813 01:10:30.431126 1664 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.44:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.44:6443: connect: connection refused Aug 13 01:10:30.431435 kubelet[1664]: E0813 01:10:30.431227 1664 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.128.0.44:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.44:6443: connect: connection refused" logger="UnhandledError" Aug 13 01:10:30.443140 env[1228]: time="2025-08-13T01:10:30.442193926Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal,Uid:7bb1cc80d64a74137925a43eb9e7465c,Namespace:kube-system,Attempt:0,} returns sandbox id \"0bd866ddfb3f0e918083cf9d77df3609509d3a094bb44b2b74dae2a74cf62fce\"" Aug 13 01:10:30.443350 kubelet[1664]: E0813 01:10:30.443076 1664 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-21291" Aug 13 01:10:30.447480 env[1228]: time="2025-08-13T01:10:30.447425568Z" level=info msg="CreateContainer within sandbox \"e1b9167e1f4ae66086be4b3d72fab1b58502a97cebcf2bac0235ad8343a0dcaf\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 13 01:10:30.449144 kubelet[1664]: E0813 01:10:30.449106 1664 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-21291" Aug 13 01:10:30.451892 env[1228]: time="2025-08-13T01:10:30.451844358Z" level=info msg="CreateContainer within sandbox \"0bd866ddfb3f0e918083cf9d77df3609509d3a094bb44b2b74dae2a74cf62fce\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 13 01:10:30.473836 env[1228]: time="2025-08-13T01:10:30.473767554Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal,Uid:d208177b24a8010e8399bde7140fd51a,Namespace:kube-system,Attempt:0,} returns sandbox id \"2064adf6bbeadc63bb5ef116c42aa1ee72ff65672af2d8b392782abd5db7ad26\"" Aug 13 01:10:30.476119 kubelet[1664]: E0813 01:10:30.476076 1664 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" 
podName="kube-controller-manager-ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-3510-3-8-8fd0904e93211774eb5d.c.flat" Aug 13 01:10:30.478331 env[1228]: time="2025-08-13T01:10:30.478252658Z" level=info msg="CreateContainer within sandbox \"2064adf6bbeadc63bb5ef116c42aa1ee72ff65672af2d8b392782abd5db7ad26\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 13 01:10:30.483183 env[1228]: time="2025-08-13T01:10:30.483116375Z" level=info msg="CreateContainer within sandbox \"e1b9167e1f4ae66086be4b3d72fab1b58502a97cebcf2bac0235ad8343a0dcaf\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7b63295fba3cdb11e8becaa01bd84987c8e0a44a6848ca09f0a0d2d8c3569b18\"" Aug 13 01:10:30.483916 env[1228]: time="2025-08-13T01:10:30.483872436Z" level=info msg="CreateContainer within sandbox \"0bd866ddfb3f0e918083cf9d77df3609509d3a094bb44b2b74dae2a74cf62fce\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d572cd95d9b091df47b5c5c0c9b7f10dd7599d93939b2c38e8647bef7747bb11\"" Aug 13 01:10:30.484316 env[1228]: time="2025-08-13T01:10:30.484000781Z" level=info msg="StartContainer for \"7b63295fba3cdb11e8becaa01bd84987c8e0a44a6848ca09f0a0d2d8c3569b18\"" Aug 13 01:10:30.484799 env[1228]: time="2025-08-13T01:10:30.484761730Z" level=info msg="StartContainer for \"d572cd95d9b091df47b5c5c0c9b7f10dd7599d93939b2c38e8647bef7747bb11\"" Aug 13 01:10:30.497859 env[1228]: time="2025-08-13T01:10:30.497797955Z" level=info msg="CreateContainer within sandbox \"2064adf6bbeadc63bb5ef116c42aa1ee72ff65672af2d8b392782abd5db7ad26\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"fac08011a8346d6693e81bee0317bba34793676c7ce116db15b1d668df103d2f\"" Aug 13 01:10:30.498751 env[1228]: time="2025-08-13T01:10:30.498712328Z" level=info msg="StartContainer for \"fac08011a8346d6693e81bee0317bba34793676c7ce116db15b1d668df103d2f\"" 
Aug 13 01:10:30.528574 systemd[1]: Started cri-containerd-fac08011a8346d6693e81bee0317bba34793676c7ce116db15b1d668df103d2f.scope. Aug 13 01:10:30.559128 systemd[1]: Started cri-containerd-7b63295fba3cdb11e8becaa01bd84987c8e0a44a6848ca09f0a0d2d8c3569b18.scope. Aug 13 01:10:30.568072 systemd[1]: Started cri-containerd-d572cd95d9b091df47b5c5c0c9b7f10dd7599d93939b2c38e8647bef7747bb11.scope. Aug 13 01:10:30.593608 kubelet[1664]: E0813 01:10:30.593547 1664 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.44:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.44:6443: connect: connection refused" interval="1.6s" Aug 13 01:10:30.650577 env[1228]: time="2025-08-13T01:10:30.650493099Z" level=info msg="StartContainer for \"fac08011a8346d6693e81bee0317bba34793676c7ce116db15b1d668df103d2f\" returns successfully" Aug 13 01:10:30.697797 env[1228]: time="2025-08-13T01:10:30.697672546Z" level=info msg="StartContainer for \"d572cd95d9b091df47b5c5c0c9b7f10dd7599d93939b2c38e8647bef7747bb11\" returns successfully" Aug 13 01:10:30.753564 env[1228]: time="2025-08-13T01:10:30.753500873Z" level=info msg="StartContainer for \"7b63295fba3cdb11e8becaa01bd84987c8e0a44a6848ca09f0a0d2d8c3569b18\" returns successfully" Aug 13 01:10:30.786886 kubelet[1664]: I0813 01:10:30.786749 1664 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal" Aug 13 01:10:30.787229 kubelet[1664]: E0813 01:10:30.787185 1664 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.128.0.44:6443/api/v1/nodes\": dial tcp 10.128.0.44:6443: connect: connection refused" node="ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal" Aug 13 01:10:32.391917 kubelet[1664]: I0813 01:10:32.391874 1664 kubelet_node_status.go:72] "Attempting to register node" 
node="ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal" Aug 13 01:10:34.138607 kubelet[1664]: I0813 01:10:34.138546 1664 apiserver.go:52] "Watching apiserver" Aug 13 01:10:34.169473 kubelet[1664]: E0813 01:10:34.169429 1664 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal\" not found" node="ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal" Aug 13 01:10:34.187824 kubelet[1664]: I0813 01:10:34.187766 1664 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 13 01:10:34.268317 kubelet[1664]: I0813 01:10:34.268254 1664 kubelet_node_status.go:75] "Successfully registered node" node="ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal" Aug 13 01:10:34.268528 kubelet[1664]: E0813 01:10:34.268335 1664 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal\": node \"ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal\" not found" Aug 13 01:10:34.308750 kubelet[1664]: E0813 01:10:34.308556 1664 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal.185b2e56926d006a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal,UID:ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal,},FirstTimestamp:2025-08-13 01:10:29.149622378 +0000 UTC m=+0.664586050,LastTimestamp:2025-08-13 01:10:29.149622378 +0000 UTC m=+0.664586050,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 
+0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal,}" Aug 13 01:10:35.632874 kubelet[1664]: W0813 01:10:35.632828 1664 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Aug 13 01:10:36.456682 systemd[1]: Reloading. Aug 13 01:10:36.585005 /usr/lib/systemd/system-generators/torcx-generator[1956]: time="2025-08-13T01:10:36Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Aug 13 01:10:36.585650 /usr/lib/systemd/system-generators/torcx-generator[1956]: time="2025-08-13T01:10:36Z" level=info msg="torcx already run" Aug 13 01:10:36.709061 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Aug 13 01:10:36.709092 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Aug 13 01:10:36.749204 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 01:10:36.943036 systemd[1]: Stopping kubelet.service... Aug 13 01:10:36.958251 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 01:10:36.958581 systemd[1]: Stopped kubelet.service. Aug 13 01:10:36.958660 systemd[1]: kubelet.service: Consumed 1.163s CPU time. Aug 13 01:10:36.962670 systemd[1]: Starting kubelet.service... Aug 13 01:10:37.202539 systemd[1]: Started kubelet.service. 
Aug 13 01:10:37.279259 kubelet[2005]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 01:10:37.279797 kubelet[2005]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 13 01:10:37.279797 kubelet[2005]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 01:10:37.279797 kubelet[2005]: I0813 01:10:37.279413 2005 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 01:10:37.290183 kubelet[2005]: I0813 01:10:37.290124 2005 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 13 01:10:37.290183 kubelet[2005]: I0813 01:10:37.290156 2005 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 01:10:37.290539 kubelet[2005]: I0813 01:10:37.290502 2005 server.go:934] "Client rotation is on, will bootstrap in background" Aug 13 01:10:37.294336 kubelet[2005]: I0813 01:10:37.293674 2005 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Aug 13 01:10:37.302347 kubelet[2005]: I0813 01:10:37.302310 2005 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 01:10:37.308083 kubelet[2005]: E0813 01:10:37.308027 2005 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 01:10:37.308083 kubelet[2005]: I0813 01:10:37.308062 2005 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 01:10:37.312400 kubelet[2005]: I0813 01:10:37.312357 2005 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Aug 13 01:10:37.312528 kubelet[2005]: I0813 01:10:37.312516 2005 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 13 01:10:37.312741 kubelet[2005]: I0813 01:10:37.312704 2005 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 01:10:37.313029 kubelet[2005]: I0813 01:10:37.312742 2005 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 13 01:10:37.313245 kubelet[2005]: I0813 01:10:37.313048 2005 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 01:10:37.313245 kubelet[2005]: I0813 01:10:37.313067 2005 container_manager_linux.go:300] "Creating device plugin manager" Aug 13 01:10:37.313245 kubelet[2005]: I0813 01:10:37.313107 2005 state_mem.go:36] "Initialized new in-memory state store" Aug 13 01:10:37.313245 kubelet[2005]: I0813 
01:10:37.313245 2005 kubelet.go:408] "Attempting to sync node with API server" Aug 13 01:10:37.313500 kubelet[2005]: I0813 01:10:37.313262 2005 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 01:10:37.313500 kubelet[2005]: I0813 01:10:37.313371 2005 kubelet.go:314] "Adding apiserver pod source" Aug 13 01:10:37.313500 kubelet[2005]: I0813 01:10:37.313393 2005 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 01:10:37.323076 kubelet[2005]: I0813 01:10:37.323029 2005 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Aug 13 01:10:37.323824 kubelet[2005]: I0813 01:10:37.323685 2005 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 01:10:37.324426 kubelet[2005]: I0813 01:10:37.324283 2005 server.go:1274] "Started kubelet" Aug 13 01:10:37.331624 kubelet[2005]: I0813 01:10:37.331602 2005 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 01:10:37.344515 kubelet[2005]: I0813 01:10:37.344463 2005 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 01:10:37.346744 kubelet[2005]: I0813 01:10:37.346715 2005 server.go:449] "Adding debug handlers to kubelet server" Aug 13 01:10:37.348880 kubelet[2005]: I0813 01:10:37.348830 2005 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 01:10:37.349323 kubelet[2005]: I0813 01:10:37.349285 2005 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 01:10:37.350197 kubelet[2005]: I0813 01:10:37.350174 2005 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 01:10:37.352873 kubelet[2005]: I0813 01:10:37.352846 2005 volume_manager.go:289] "Starting Kubelet Volume Manager" Aug 13 
01:10:37.353181 kubelet[2005]: E0813 01:10:37.353139 2005 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal\" not found" Aug 13 01:10:37.354152 kubelet[2005]: I0813 01:10:37.354117 2005 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 13 01:10:37.357976 kubelet[2005]: I0813 01:10:37.357942 2005 reconciler.go:26] "Reconciler: start to sync state" Aug 13 01:10:37.371413 kubelet[2005]: I0813 01:10:37.371382 2005 factory.go:221] Registration of the containerd container factory successfully Aug 13 01:10:37.371630 kubelet[2005]: I0813 01:10:37.371610 2005 factory.go:221] Registration of the systemd container factory successfully Aug 13 01:10:37.371910 kubelet[2005]: I0813 01:10:37.371879 2005 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 01:10:37.376582 kubelet[2005]: E0813 01:10:37.376551 2005 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 01:10:37.401533 kubelet[2005]: I0813 01:10:37.401479 2005 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 01:10:37.405125 kubelet[2005]: I0813 01:10:37.405086 2005 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 13 01:10:37.405125 kubelet[2005]: I0813 01:10:37.405125 2005 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 13 01:10:37.405394 kubelet[2005]: I0813 01:10:37.405152 2005 kubelet.go:2321] "Starting kubelet main sync loop" Aug 13 01:10:37.405394 kubelet[2005]: E0813 01:10:37.405216 2005 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 01:10:37.446554 kubelet[2005]: I0813 01:10:37.446090 2005 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 13 01:10:37.446554 kubelet[2005]: I0813 01:10:37.446114 2005 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 13 01:10:37.446554 kubelet[2005]: I0813 01:10:37.446140 2005 state_mem.go:36] "Initialized new in-memory state store" Aug 13 01:10:37.447142 kubelet[2005]: I0813 01:10:37.447114 2005 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 13 01:10:37.447435 kubelet[2005]: I0813 01:10:37.447391 2005 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 13 01:10:37.447576 kubelet[2005]: I0813 01:10:37.447558 2005 policy_none.go:49] "None policy: Start" Aug 13 01:10:37.448740 kubelet[2005]: I0813 01:10:37.448717 2005 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 13 01:10:37.448903 kubelet[2005]: I0813 01:10:37.448889 2005 state_mem.go:35] "Initializing new in-memory state store" Aug 13 01:10:37.449281 kubelet[2005]: I0813 01:10:37.449260 2005 state_mem.go:75] "Updated machine memory state" Aug 13 01:10:37.456457 kubelet[2005]: I0813 01:10:37.456430 2005 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 01:10:37.456811 kubelet[2005]: I0813 01:10:37.456788 2005 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 01:10:37.457009 kubelet[2005]: I0813 01:10:37.456952 2005 container_log_manager.go:189] 
"Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 01:10:37.459991 kubelet[2005]: I0813 01:10:37.459969 2005 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 01:10:37.476061 sudo[2036]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Aug 13 01:10:37.476685 sudo[2036]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Aug 13 01:10:37.521891 kubelet[2005]: W0813 01:10:37.521851 2005 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Aug 13 01:10:37.522240 kubelet[2005]: E0813 01:10:37.522201 2005 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-scheduler-ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal" Aug 13 01:10:37.522802 kubelet[2005]: W0813 01:10:37.522776 2005 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Aug 13 01:10:37.523493 kubelet[2005]: W0813 01:10:37.523459 2005 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Aug 13 01:10:37.567700 kubelet[2005]: I0813 01:10:37.567551 2005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2e4e914c0fe28b809d4dc9c5ec792172-kubeconfig\") pod \"kube-scheduler-ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal\" (UID: \"2e4e914c0fe28b809d4dc9c5ec792172\") " 
pod="kube-system/kube-scheduler-ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal" Aug 13 01:10:37.568021 kubelet[2005]: I0813 01:10:37.567966 2005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7bb1cc80d64a74137925a43eb9e7465c-ca-certs\") pod \"kube-apiserver-ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal\" (UID: \"7bb1cc80d64a74137925a43eb9e7465c\") " pod="kube-system/kube-apiserver-ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal" Aug 13 01:10:37.568217 kubelet[2005]: I0813 01:10:37.568192 2005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7bb1cc80d64a74137925a43eb9e7465c-k8s-certs\") pod \"kube-apiserver-ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal\" (UID: \"7bb1cc80d64a74137925a43eb9e7465c\") " pod="kube-system/kube-apiserver-ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal" Aug 13 01:10:37.568522 kubelet[2005]: I0813 01:10:37.568485 2005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d208177b24a8010e8399bde7140fd51a-ca-certs\") pod \"kube-controller-manager-ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal\" (UID: \"d208177b24a8010e8399bde7140fd51a\") " pod="kube-system/kube-controller-manager-ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal" Aug 13 01:10:37.568767 kubelet[2005]: I0813 01:10:37.568714 2005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d208177b24a8010e8399bde7140fd51a-flexvolume-dir\") pod \"kube-controller-manager-ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal\" (UID: \"d208177b24a8010e8399bde7140fd51a\") " 
pod="kube-system/kube-controller-manager-ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal" Aug 13 01:10:37.569011 kubelet[2005]: I0813 01:10:37.568964 2005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d208177b24a8010e8399bde7140fd51a-k8s-certs\") pod \"kube-controller-manager-ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal\" (UID: \"d208177b24a8010e8399bde7140fd51a\") " pod="kube-system/kube-controller-manager-ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal" Aug 13 01:10:37.569200 kubelet[2005]: I0813 01:10:37.569170 2005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d208177b24a8010e8399bde7140fd51a-kubeconfig\") pod \"kube-controller-manager-ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal\" (UID: \"d208177b24a8010e8399bde7140fd51a\") " pod="kube-system/kube-controller-manager-ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal" Aug 13 01:10:37.569405 kubelet[2005]: I0813 01:10:37.569356 2005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d208177b24a8010e8399bde7140fd51a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal\" (UID: \"d208177b24a8010e8399bde7140fd51a\") " pod="kube-system/kube-controller-manager-ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal" Aug 13 01:10:37.569620 kubelet[2005]: I0813 01:10:37.569588 2005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7bb1cc80d64a74137925a43eb9e7465c-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal\" 
(UID: \"7bb1cc80d64a74137925a43eb9e7465c\") " pod="kube-system/kube-apiserver-ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal"
Aug 13 01:10:37.579168 kubelet[2005]: I0813 01:10:37.579117 2005 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal"
Aug 13 01:10:37.591319 kubelet[2005]: I0813 01:10:37.591256 2005 kubelet_node_status.go:111] "Node was previously registered" node="ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal"
Aug 13 01:10:37.591650 kubelet[2005]: I0813 01:10:37.591630 2005 kubelet_node_status.go:75] "Successfully registered node" node="ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal"
Aug 13 01:10:38.297450 sudo[2036]: pam_unix(sudo:session): session closed for user root
Aug 13 01:10:38.325775 kubelet[2005]: I0813 01:10:38.325724 2005 apiserver.go:52] "Watching apiserver"
Aug 13 01:10:38.354996 kubelet[2005]: I0813 01:10:38.354725 2005 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Aug 13 01:10:38.442009 kubelet[2005]: W0813 01:10:38.441972 2005 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots]
Aug 13 01:10:38.442367 kubelet[2005]: E0813 01:10:38.442337 2005 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-apiserver-ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal"
Aug 13 01:10:38.477903 kubelet[2005]: I0813 01:10:38.477799 2005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal" podStartSLOduration=1.47777603 podStartE2EDuration="1.47777603s" podCreationTimestamp="2025-08-13 01:10:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 01:10:38.477351543 +0000 UTC m=+1.268143837" watchObservedRunningTime="2025-08-13 01:10:38.47777603 +0000 UTC m=+1.268568325"
Aug 13 01:10:38.478414 kubelet[2005]: I0813 01:10:38.478350 2005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal" podStartSLOduration=3.478333574 podStartE2EDuration="3.478333574s" podCreationTimestamp="2025-08-13 01:10:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 01:10:38.461952127 +0000 UTC m=+1.252744417" watchObservedRunningTime="2025-08-13 01:10:38.478333574 +0000 UTC m=+1.269125863"
Aug 13 01:10:38.490284 kubelet[2005]: I0813 01:10:38.490219 2005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal" podStartSLOduration=1.490198302 podStartE2EDuration="1.490198302s" podCreationTimestamp="2025-08-13 01:10:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 01:10:38.48997024 +0000 UTC m=+1.280762530" watchObservedRunningTime="2025-08-13 01:10:38.490198302 +0000 UTC m=+1.280990583"
Aug 13 01:10:40.695634 sudo[1412]: pam_unix(sudo:session): session closed for user root
Aug 13 01:10:40.739011 sshd[1409]: pam_unix(sshd:session): session closed for user core
Aug 13 01:10:40.743459 systemd[1]: sshd@4-10.128.0.44:22-139.178.68.195:49016.service: Deactivated successfully.
Aug 13 01:10:40.744640 systemd[1]: session-5.scope: Deactivated successfully.
Aug 13 01:10:40.744879 systemd[1]: session-5.scope: Consumed 6.923s CPU time.
Aug 13 01:10:40.745717 systemd-logind[1219]: Session 5 logged out. Waiting for processes to exit.
Aug 13 01:10:40.747077 systemd-logind[1219]: Removed session 5.
Aug 13 01:10:41.314011 kubelet[2005]: I0813 01:10:41.313968 2005 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Aug 13 01:10:41.315183 env[1228]: time="2025-08-13T01:10:41.315134750Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Aug 13 01:10:41.316033 kubelet[2005]: I0813 01:10:41.316004 2005 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Aug 13 01:10:41.999789 systemd[1]: Created slice kubepods-besteffort-podf68f7396_203a_40b3_89c1_fb81fefe325b.slice.
Aug 13 01:10:42.030943 systemd[1]: Created slice kubepods-burstable-pod69a141d8_12b6_4109_8050_e139aaaebbec.slice.
Aug 13 01:10:42.040922 kubelet[2005]: W0813 01:10:42.040863 2005 reflector.go:561] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal' and this object
Aug 13 01:10:42.041096 kubelet[2005]: E0813 01:10:42.040930 2005 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal' and this object" logger="UnhandledError"
Aug 13 01:10:42.041258 kubelet[2005]: W0813 01:10:42.041136 2005 reflector.go:561] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal' and this object
Aug 13 01:10:42.041258 kubelet[2005]: E0813 01:10:42.041176 2005 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal' and this object" logger="UnhandledError"
Aug 13 01:10:42.041496 kubelet[2005]: W0813 01:10:42.041403 2005 reflector.go:561] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal' and this object
Aug 13 01:10:42.041496 kubelet[2005]: E0813 01:10:42.041434 2005 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal' and this object" logger="UnhandledError"
Aug 13 01:10:42.107002 kubelet[2005]: I0813 01:10:42.106933 2005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/69a141d8-12b6-4109-8050-e139aaaebbec-hostproc\") pod \"cilium-9zmgm\" (UID: \"69a141d8-12b6-4109-8050-e139aaaebbec\") " pod="kube-system/cilium-9zmgm"
Aug 13 01:10:42.107002 kubelet[2005]: I0813 01:10:42.106995 2005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/69a141d8-12b6-4109-8050-e139aaaebbec-cilium-config-path\") pod \"cilium-9zmgm\" (UID: \"69a141d8-12b6-4109-8050-e139aaaebbec\") " pod="kube-system/cilium-9zmgm"
Aug 13 01:10:42.107291 kubelet[2005]: I0813 01:10:42.107022 2005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4lvz\" (UniqueName: \"kubernetes.io/projected/69a141d8-12b6-4109-8050-e139aaaebbec-kube-api-access-r4lvz\") pod \"cilium-9zmgm\" (UID: \"69a141d8-12b6-4109-8050-e139aaaebbec\") " pod="kube-system/cilium-9zmgm"
Aug 13 01:10:42.107291 kubelet[2005]: I0813 01:10:42.107048 2005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/69a141d8-12b6-4109-8050-e139aaaebbec-bpf-maps\") pod \"cilium-9zmgm\" (UID: \"69a141d8-12b6-4109-8050-e139aaaebbec\") " pod="kube-system/cilium-9zmgm"
Aug 13 01:10:42.107291 kubelet[2005]: I0813 01:10:42.107070 2005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/69a141d8-12b6-4109-8050-e139aaaebbec-cni-path\") pod \"cilium-9zmgm\" (UID: \"69a141d8-12b6-4109-8050-e139aaaebbec\") " pod="kube-system/cilium-9zmgm"
Aug 13 01:10:42.107291 kubelet[2005]: I0813 01:10:42.107092 2005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/69a141d8-12b6-4109-8050-e139aaaebbec-xtables-lock\") pod \"cilium-9zmgm\" (UID: \"69a141d8-12b6-4109-8050-e139aaaebbec\") " pod="kube-system/cilium-9zmgm"
Aug 13 01:10:42.107291 kubelet[2005]: I0813 01:10:42.107117 2005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/69a141d8-12b6-4109-8050-e139aaaebbec-host-proc-sys-net\") pod \"cilium-9zmgm\" (UID: \"69a141d8-12b6-4109-8050-e139aaaebbec\") " pod="kube-system/cilium-9zmgm"
Aug 13 01:10:42.107291 kubelet[2005]: I0813 01:10:42.107141 2005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f68f7396-203a-40b3-89c1-fb81fefe325b-lib-modules\") pod \"kube-proxy-wrpjn\" (UID: \"f68f7396-203a-40b3-89c1-fb81fefe325b\") " pod="kube-system/kube-proxy-wrpjn"
Aug 13 01:10:42.107665 kubelet[2005]: I0813 01:10:42.107170 2005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/69a141d8-12b6-4109-8050-e139aaaebbec-hubble-tls\") pod \"cilium-9zmgm\" (UID: \"69a141d8-12b6-4109-8050-e139aaaebbec\") " pod="kube-system/cilium-9zmgm"
Aug 13 01:10:42.107665 kubelet[2005]: I0813 01:10:42.107195 2005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/69a141d8-12b6-4109-8050-e139aaaebbec-cilium-run\") pod \"cilium-9zmgm\" (UID: \"69a141d8-12b6-4109-8050-e139aaaebbec\") " pod="kube-system/cilium-9zmgm"
Aug 13 01:10:42.107665 kubelet[2005]: I0813 01:10:42.107221 2005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/69a141d8-12b6-4109-8050-e139aaaebbec-lib-modules\") pod \"cilium-9zmgm\" (UID: \"69a141d8-12b6-4109-8050-e139aaaebbec\") " pod="kube-system/cilium-9zmgm"
Aug 13 01:10:42.107665 kubelet[2005]: I0813 01:10:42.107263 2005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f68f7396-203a-40b3-89c1-fb81fefe325b-kube-proxy\") pod \"kube-proxy-wrpjn\" (UID: \"f68f7396-203a-40b3-89c1-fb81fefe325b\") " pod="kube-system/kube-proxy-wrpjn"
Aug 13 01:10:42.107665 kubelet[2005]: I0813 01:10:42.107288 2005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/69a141d8-12b6-4109-8050-e139aaaebbec-host-proc-sys-kernel\") pod \"cilium-9zmgm\" (UID: \"69a141d8-12b6-4109-8050-e139aaaebbec\") " pod="kube-system/cilium-9zmgm"
Aug 13 01:10:42.107665 kubelet[2005]: I0813 01:10:42.107348 2005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/69a141d8-12b6-4109-8050-e139aaaebbec-clustermesh-secrets\") pod \"cilium-9zmgm\" (UID: \"69a141d8-12b6-4109-8050-e139aaaebbec\") " pod="kube-system/cilium-9zmgm"
Aug 13 01:10:42.107909 kubelet[2005]: I0813 01:10:42.107378 2005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/69a141d8-12b6-4109-8050-e139aaaebbec-etc-cni-netd\") pod \"cilium-9zmgm\" (UID: \"69a141d8-12b6-4109-8050-e139aaaebbec\") " pod="kube-system/cilium-9zmgm"
Aug 13 01:10:42.107909 kubelet[2005]: I0813 01:10:42.107412 2005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/69a141d8-12b6-4109-8050-e139aaaebbec-cilium-cgroup\") pod \"cilium-9zmgm\" (UID: \"69a141d8-12b6-4109-8050-e139aaaebbec\") " pod="kube-system/cilium-9zmgm"
Aug 13 01:10:42.107909 kubelet[2005]: I0813 01:10:42.107451 2005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsjgm\" (UniqueName: \"kubernetes.io/projected/f68f7396-203a-40b3-89c1-fb81fefe325b-kube-api-access-qsjgm\") pod \"kube-proxy-wrpjn\" (UID: \"f68f7396-203a-40b3-89c1-fb81fefe325b\") " pod="kube-system/kube-proxy-wrpjn"
Aug 13 01:10:42.107909 kubelet[2005]: I0813 01:10:42.107486 2005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f68f7396-203a-40b3-89c1-fb81fefe325b-xtables-lock\") pod \"kube-proxy-wrpjn\" (UID: \"f68f7396-203a-40b3-89c1-fb81fefe325b\") " pod="kube-system/kube-proxy-wrpjn"
Aug 13 01:10:42.223315 kubelet[2005]: I0813 01:10:42.223262 2005 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Aug 13 01:10:42.309610 env[1228]: time="2025-08-13T01:10:42.309424613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wrpjn,Uid:f68f7396-203a-40b3-89c1-fb81fefe325b,Namespace:kube-system,Attempt:0,}"
Aug 13 01:10:42.337107 env[1228]: time="2025-08-13T01:10:42.336997733Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 01:10:42.337738 env[1228]: time="2025-08-13T01:10:42.337078926Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 01:10:42.337738 env[1228]: time="2025-08-13T01:10:42.337103555Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 01:10:42.338477 env[1228]: time="2025-08-13T01:10:42.337828214Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5a9812bb5dccb9bb6547c03799882733622bebdf64493d5af4d125032580e070 pid=2084 runtime=io.containerd.runc.v2
Aug 13 01:10:42.383409 systemd[1]: Started cri-containerd-5a9812bb5dccb9bb6547c03799882733622bebdf64493d5af4d125032580e070.scope.
Aug 13 01:10:42.416917 systemd[1]: Created slice kubepods-besteffort-pod1a7ad985_1d2a_44f0_b481_8cf72c789c5f.slice.
Aug 13 01:10:42.463228 env[1228]: time="2025-08-13T01:10:42.463174501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wrpjn,Uid:f68f7396-203a-40b3-89c1-fb81fefe325b,Namespace:kube-system,Attempt:0,} returns sandbox id \"5a9812bb5dccb9bb6547c03799882733622bebdf64493d5af4d125032580e070\""
Aug 13 01:10:42.469049 env[1228]: time="2025-08-13T01:10:42.468998415Z" level=info msg="CreateContainer within sandbox \"5a9812bb5dccb9bb6547c03799882733622bebdf64493d5af4d125032580e070\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Aug 13 01:10:42.501231 env[1228]: time="2025-08-13T01:10:42.501169652Z" level=info msg="CreateContainer within sandbox \"5a9812bb5dccb9bb6547c03799882733622bebdf64493d5af4d125032580e070\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9c6556d2d9bf4c1a779401c3550a7912a32dfde9c9c5ef48bdba84f698407acc\""
Aug 13 01:10:42.502825 env[1228]: time="2025-08-13T01:10:42.502782816Z" level=info msg="StartContainer for \"9c6556d2d9bf4c1a779401c3550a7912a32dfde9c9c5ef48bdba84f698407acc\""
Aug 13 01:10:42.511688 kubelet[2005]: I0813 01:10:42.511511 2005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1a7ad985-1d2a-44f0-b481-8cf72c789c5f-cilium-config-path\") pod \"cilium-operator-5d85765b45-r9swq\" (UID: \"1a7ad985-1d2a-44f0-b481-8cf72c789c5f\") " pod="kube-system/cilium-operator-5d85765b45-r9swq"
Aug 13 01:10:42.511688 kubelet[2005]: I0813 01:10:42.511602 2005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59csf\" (UniqueName: \"kubernetes.io/projected/1a7ad985-1d2a-44f0-b481-8cf72c789c5f-kube-api-access-59csf\") pod \"cilium-operator-5d85765b45-r9swq\" (UID: \"1a7ad985-1d2a-44f0-b481-8cf72c789c5f\") " pod="kube-system/cilium-operator-5d85765b45-r9swq"
Aug 13 01:10:42.537702 systemd[1]: Started cri-containerd-9c6556d2d9bf4c1a779401c3550a7912a32dfde9c9c5ef48bdba84f698407acc.scope.
Aug 13 01:10:42.611995 env[1228]: time="2025-08-13T01:10:42.611819185Z" level=info msg="StartContainer for \"9c6556d2d9bf4c1a779401c3550a7912a32dfde9c9c5ef48bdba84f698407acc\" returns successfully"
Aug 13 01:10:43.208191 kubelet[2005]: E0813 01:10:43.208133 2005 projected.go:263] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition
Aug 13 01:10:43.208191 kubelet[2005]: E0813 01:10:43.208172 2005 projected.go:194] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-9zmgm: failed to sync secret cache: timed out waiting for the condition
Aug 13 01:10:43.208575 kubelet[2005]: E0813 01:10:43.208547 2005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/69a141d8-12b6-4109-8050-e139aaaebbec-hubble-tls podName:69a141d8-12b6-4109-8050-e139aaaebbec nodeName:}" failed. No retries permitted until 2025-08-13 01:10:43.708251501 +0000 UTC m=+6.499043790 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/69a141d8-12b6-4109-8050-e139aaaebbec-hubble-tls") pod "cilium-9zmgm" (UID: "69a141d8-12b6-4109-8050-e139aaaebbec") : failed to sync secret cache: timed out waiting for the condition
Aug 13 01:10:43.322001 env[1228]: time="2025-08-13T01:10:43.321934622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-r9swq,Uid:1a7ad985-1d2a-44f0-b481-8cf72c789c5f,Namespace:kube-system,Attempt:0,}"
Aug 13 01:10:43.350965 env[1228]: time="2025-08-13T01:10:43.350864755Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 01:10:43.351465 env[1228]: time="2025-08-13T01:10:43.350918726Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 01:10:43.351465 env[1228]: time="2025-08-13T01:10:43.350936983Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 01:10:43.351825 env[1228]: time="2025-08-13T01:10:43.351734771Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/dd2f7d18b570e06710fd7a39e793068b0bb568beae7b4ae8631f41dae6cea083 pid=2293 runtime=io.containerd.runc.v2
Aug 13 01:10:43.377811 systemd[1]: Started cri-containerd-dd2f7d18b570e06710fd7a39e793068b0bb568beae7b4ae8631f41dae6cea083.scope.
Aug 13 01:10:43.445462 env[1228]: time="2025-08-13T01:10:43.445214202Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-r9swq,Uid:1a7ad985-1d2a-44f0-b481-8cf72c789c5f,Namespace:kube-system,Attempt:0,} returns sandbox id \"dd2f7d18b570e06710fd7a39e793068b0bb568beae7b4ae8631f41dae6cea083\""
Aug 13 01:10:43.451010 env[1228]: time="2025-08-13T01:10:43.450648302Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Aug 13 01:10:43.584090 kubelet[2005]: I0813 01:10:43.583251 2005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-wrpjn" podStartSLOduration=2.58322613 podStartE2EDuration="2.58322613s" podCreationTimestamp="2025-08-13 01:10:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 01:10:43.457704697 +0000 UTC m=+6.248496989" watchObservedRunningTime="2025-08-13 01:10:43.58322613 +0000 UTC m=+6.374018421"
Aug 13 01:10:43.764697 update_engine[1220]: I0813 01:10:43.764631 1220 update_attempter.cc:509] Updating boot flags...
Aug 13 01:10:43.838452 env[1228]: time="2025-08-13T01:10:43.835854901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9zmgm,Uid:69a141d8-12b6-4109-8050-e139aaaebbec,Namespace:kube-system,Attempt:0,}"
Aug 13 01:10:43.864432 env[1228]: time="2025-08-13T01:10:43.864340399Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 01:10:43.866477 env[1228]: time="2025-08-13T01:10:43.866400713Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 01:10:43.866811 env[1228]: time="2025-08-13T01:10:43.866763227Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 01:10:43.867261 env[1228]: time="2025-08-13T01:10:43.867196009Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e8542dd9ea92151d014a9c8a589e22be5873752be045af570d8d43fc414559b6 pid=2347 runtime=io.containerd.runc.v2
Aug 13 01:10:43.907742 systemd[1]: Started cri-containerd-e8542dd9ea92151d014a9c8a589e22be5873752be045af570d8d43fc414559b6.scope.
Aug 13 01:10:44.020739 env[1228]: time="2025-08-13T01:10:44.020681897Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9zmgm,Uid:69a141d8-12b6-4109-8050-e139aaaebbec,Namespace:kube-system,Attempt:0,} returns sandbox id \"e8542dd9ea92151d014a9c8a589e22be5873752be045af570d8d43fc414559b6\""
Aug 13 01:10:44.615616 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount479493722.mount: Deactivated successfully.
Aug 13 01:10:45.685465 env[1228]: time="2025-08-13T01:10:45.685391792Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 01:10:45.688208 env[1228]: time="2025-08-13T01:10:45.688143679Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 01:10:45.690510 env[1228]: time="2025-08-13T01:10:45.690454897Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 01:10:45.691267 env[1228]: time="2025-08-13T01:10:45.691209902Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Aug 13 01:10:45.694419 env[1228]: time="2025-08-13T01:10:45.693634056Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Aug 13 01:10:45.696185 env[1228]: time="2025-08-13T01:10:45.696096166Z" level=info msg="CreateContainer within sandbox \"dd2f7d18b570e06710fd7a39e793068b0bb568beae7b4ae8631f41dae6cea083\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Aug 13 01:10:45.720853 env[1228]: time="2025-08-13T01:10:45.720782124Z" level=info msg="CreateContainer within sandbox \"dd2f7d18b570e06710fd7a39e793068b0bb568beae7b4ae8631f41dae6cea083\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"a84a7519a1668062b6a7fae47b387860fd2586252cbbe9e097e9ed0b9218aad9\""
Aug 13 01:10:45.723670 env[1228]: time="2025-08-13T01:10:45.723625855Z" level=info msg="StartContainer for \"a84a7519a1668062b6a7fae47b387860fd2586252cbbe9e097e9ed0b9218aad9\""
Aug 13 01:10:45.777102 systemd[1]: run-containerd-runc-k8s.io-a84a7519a1668062b6a7fae47b387860fd2586252cbbe9e097e9ed0b9218aad9-runc.Zcaxaj.mount: Deactivated successfully.
Aug 13 01:10:45.781824 systemd[1]: Started cri-containerd-a84a7519a1668062b6a7fae47b387860fd2586252cbbe9e097e9ed0b9218aad9.scope.
Aug 13 01:10:45.822105 env[1228]: time="2025-08-13T01:10:45.822032170Z" level=info msg="StartContainer for \"a84a7519a1668062b6a7fae47b387860fd2586252cbbe9e097e9ed0b9218aad9\" returns successfully"
Aug 13 01:10:53.108678 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2482599062.mount: Deactivated successfully.
Aug 13 01:10:56.481643 env[1228]: time="2025-08-13T01:10:56.481570195Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 01:10:56.484873 env[1228]: time="2025-08-13T01:10:56.484824544Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 01:10:56.487751 env[1228]: time="2025-08-13T01:10:56.487684991Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 01:10:56.488939 env[1228]: time="2025-08-13T01:10:56.488890032Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Aug 13 01:10:56.493005 env[1228]: time="2025-08-13T01:10:56.492959802Z" level=info msg="CreateContainer within sandbox \"e8542dd9ea92151d014a9c8a589e22be5873752be045af570d8d43fc414559b6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Aug 13 01:10:56.510805 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2876815302.mount: Deactivated successfully.
Aug 13 01:10:56.511606 env[1228]: time="2025-08-13T01:10:56.511520861Z" level=info msg="CreateContainer within sandbox \"e8542dd9ea92151d014a9c8a589e22be5873752be045af570d8d43fc414559b6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"686e987aaccb5bb3cd914b423d257465555651d8c026c5a46a2d655f8aa94f70\""
Aug 13 01:10:56.513589 env[1228]: time="2025-08-13T01:10:56.513550760Z" level=info msg="StartContainer for \"686e987aaccb5bb3cd914b423d257465555651d8c026c5a46a2d655f8aa94f70\""
Aug 13 01:10:56.549931 systemd[1]: run-containerd-runc-k8s.io-686e987aaccb5bb3cd914b423d257465555651d8c026c5a46a2d655f8aa94f70-runc.065vMQ.mount: Deactivated successfully.
Aug 13 01:10:56.553928 systemd[1]: Started cri-containerd-686e987aaccb5bb3cd914b423d257465555651d8c026c5a46a2d655f8aa94f70.scope.
Aug 13 01:10:56.596341 env[1228]: time="2025-08-13T01:10:56.596050042Z" level=info msg="StartContainer for \"686e987aaccb5bb3cd914b423d257465555651d8c026c5a46a2d655f8aa94f70\" returns successfully"
Aug 13 01:10:56.609217 systemd[1]: cri-containerd-686e987aaccb5bb3cd914b423d257465555651d8c026c5a46a2d655f8aa94f70.scope: Deactivated successfully.
Aug 13 01:10:57.504596 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-686e987aaccb5bb3cd914b423d257465555651d8c026c5a46a2d655f8aa94f70-rootfs.mount: Deactivated successfully.
Aug 13 01:10:57.525898 kubelet[2005]: I0813 01:10:57.525797 2005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-r9swq" podStartSLOduration=13.282263622 podStartE2EDuration="15.525771814s" podCreationTimestamp="2025-08-13 01:10:42 +0000 UTC" firstStartedPulling="2025-08-13 01:10:43.449186613 +0000 UTC m=+6.239978876" lastFinishedPulling="2025-08-13 01:10:45.692694777 +0000 UTC m=+8.483487068" observedRunningTime="2025-08-13 01:10:46.508589533 +0000 UTC m=+9.299381823" watchObservedRunningTime="2025-08-13 01:10:57.525771814 +0000 UTC m=+20.316564109"
Aug 13 01:10:58.692043 env[1228]: time="2025-08-13T01:10:58.691928393Z" level=info msg="shim disconnected" id=686e987aaccb5bb3cd914b423d257465555651d8c026c5a46a2d655f8aa94f70
Aug 13 01:10:58.692735 env[1228]: time="2025-08-13T01:10:58.692060116Z" level=warning msg="cleaning up after shim disconnected" id=686e987aaccb5bb3cd914b423d257465555651d8c026c5a46a2d655f8aa94f70 namespace=k8s.io
Aug 13 01:10:58.692735 env[1228]: time="2025-08-13T01:10:58.692094480Z" level=info msg="cleaning up dead shim"
Aug 13 01:10:58.706015 env[1228]: time="2025-08-13T01:10:58.705957509Z" level=warning msg="cleanup warnings time=\"2025-08-13T01:10:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2470 runtime=io.containerd.runc.v2\n"
Aug 13 01:10:59.485996 env[1228]: time="2025-08-13T01:10:59.485207737Z" level=info msg="CreateContainer within sandbox \"e8542dd9ea92151d014a9c8a589e22be5873752be045af570d8d43fc414559b6\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Aug 13 01:10:59.506673 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3206318627.mount: Deactivated successfully.
Aug 13 01:10:59.520820 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount160322503.mount: Deactivated successfully.
Aug 13 01:10:59.525014 env[1228]: time="2025-08-13T01:10:59.524933336Z" level=info msg="CreateContainer within sandbox \"e8542dd9ea92151d014a9c8a589e22be5873752be045af570d8d43fc414559b6\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f97b26d58d6c9a6c56ef7c56f04396da02bff6b0e4e8ca297dace9900ef9bfed\""
Aug 13 01:10:59.526264 env[1228]: time="2025-08-13T01:10:59.526208553Z" level=info msg="StartContainer for \"f97b26d58d6c9a6c56ef7c56f04396da02bff6b0e4e8ca297dace9900ef9bfed\""
Aug 13 01:10:59.573031 systemd[1]: Started cri-containerd-f97b26d58d6c9a6c56ef7c56f04396da02bff6b0e4e8ca297dace9900ef9bfed.scope.
Aug 13 01:10:59.617411 env[1228]: time="2025-08-13T01:10:59.616121335Z" level=info msg="StartContainer for \"f97b26d58d6c9a6c56ef7c56f04396da02bff6b0e4e8ca297dace9900ef9bfed\" returns successfully"
Aug 13 01:10:59.633514 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Aug 13 01:10:59.633914 systemd[1]: Stopped systemd-sysctl.service.
Aug 13 01:10:59.635480 systemd[1]: Stopping systemd-sysctl.service...
Aug 13 01:10:59.639119 systemd[1]: Starting systemd-sysctl.service...
Aug 13 01:10:59.639700 systemd[1]: cri-containerd-f97b26d58d6c9a6c56ef7c56f04396da02bff6b0e4e8ca297dace9900ef9bfed.scope: Deactivated successfully.
Aug 13 01:10:59.657405 systemd[1]: Finished systemd-sysctl.service.
Aug 13 01:10:59.677843 env[1228]: time="2025-08-13T01:10:59.677780976Z" level=info msg="shim disconnected" id=f97b26d58d6c9a6c56ef7c56f04396da02bff6b0e4e8ca297dace9900ef9bfed
Aug 13 01:10:59.678113 env[1228]: time="2025-08-13T01:10:59.677843990Z" level=warning msg="cleaning up after shim disconnected" id=f97b26d58d6c9a6c56ef7c56f04396da02bff6b0e4e8ca297dace9900ef9bfed namespace=k8s.io
Aug 13 01:10:59.678113 env[1228]: time="2025-08-13T01:10:59.677861004Z" level=info msg="cleaning up dead shim"
Aug 13 01:10:59.689748 env[1228]: time="2025-08-13T01:10:59.689682740Z" level=warning msg="cleanup warnings time=\"2025-08-13T01:10:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2533 runtime=io.containerd.runc.v2\n"
Aug 13 01:11:00.489267 env[1228]: time="2025-08-13T01:11:00.489209260Z" level=info msg="CreateContainer within sandbox \"e8542dd9ea92151d014a9c8a589e22be5873752be045af570d8d43fc414559b6\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Aug 13 01:11:00.503918 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f97b26d58d6c9a6c56ef7c56f04396da02bff6b0e4e8ca297dace9900ef9bfed-rootfs.mount: Deactivated successfully.
Aug 13 01:11:00.521266 env[1228]: time="2025-08-13T01:11:00.521207203Z" level=info msg="CreateContainer within sandbox \"e8542dd9ea92151d014a9c8a589e22be5873752be045af570d8d43fc414559b6\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6fb6b4cb0cf713dea4ea0666723ff0f79b428c2e829475b8036539f96dcf08f7\""
Aug 13 01:11:00.522371 env[1228]: time="2025-08-13T01:11:00.522329571Z" level=info msg="StartContainer for \"6fb6b4cb0cf713dea4ea0666723ff0f79b428c2e829475b8036539f96dcf08f7\""
Aug 13 01:11:00.557291 systemd[1]: Started cri-containerd-6fb6b4cb0cf713dea4ea0666723ff0f79b428c2e829475b8036539f96dcf08f7.scope.
Aug 13 01:11:00.613061 env[1228]: time="2025-08-13T01:11:00.613001950Z" level=info msg="StartContainer for \"6fb6b4cb0cf713dea4ea0666723ff0f79b428c2e829475b8036539f96dcf08f7\" returns successfully"
Aug 13 01:11:00.618936 systemd[1]: cri-containerd-6fb6b4cb0cf713dea4ea0666723ff0f79b428c2e829475b8036539f96dcf08f7.scope: Deactivated successfully.
Aug 13 01:11:00.651406 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6fb6b4cb0cf713dea4ea0666723ff0f79b428c2e829475b8036539f96dcf08f7-rootfs.mount: Deactivated successfully.
Aug 13 01:11:00.658842 env[1228]: time="2025-08-13T01:11:00.658778464Z" level=info msg="shim disconnected" id=6fb6b4cb0cf713dea4ea0666723ff0f79b428c2e829475b8036539f96dcf08f7
Aug 13 01:11:00.659132 env[1228]: time="2025-08-13T01:11:00.658844071Z" level=warning msg="cleaning up after shim disconnected" id=6fb6b4cb0cf713dea4ea0666723ff0f79b428c2e829475b8036539f96dcf08f7 namespace=k8s.io
Aug 13 01:11:00.659132 env[1228]: time="2025-08-13T01:11:00.658859936Z" level=info msg="cleaning up dead shim"
Aug 13 01:11:00.671356 env[1228]: time="2025-08-13T01:11:00.671281115Z" level=warning msg="cleanup warnings time=\"2025-08-13T01:11:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2592 runtime=io.containerd.runc.v2\n"
Aug 13 01:11:01.494758 env[1228]: time="2025-08-13T01:11:01.494682463Z" level=info msg="CreateContainer within sandbox \"e8542dd9ea92151d014a9c8a589e22be5873752be045af570d8d43fc414559b6\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Aug 13 01:11:01.530334 env[1228]: time="2025-08-13T01:11:01.526352789Z" level=info msg="CreateContainer within sandbox \"e8542dd9ea92151d014a9c8a589e22be5873752be045af570d8d43fc414559b6\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c2c222db93a69104be5eab831fa9f295b0fa48735aff0880fa26e263133b592e\""
Aug 13 01:11:01.531258 env[1228]: time="2025-08-13T01:11:01.531146522Z" level=info msg="StartContainer for \"c2c222db93a69104be5eab831fa9f295b0fa48735aff0880fa26e263133b592e\""
Aug 13 01:11:01.564225 systemd[1]: Started cri-containerd-c2c222db93a69104be5eab831fa9f295b0fa48735aff0880fa26e263133b592e.scope.
Aug 13 01:11:01.612419 systemd[1]: cri-containerd-c2c222db93a69104be5eab831fa9f295b0fa48735aff0880fa26e263133b592e.scope: Deactivated successfully.
Aug 13 01:11:01.614798 env[1228]: time="2025-08-13T01:11:01.614750500Z" level=info msg="StartContainer for \"c2c222db93a69104be5eab831fa9f295b0fa48735aff0880fa26e263133b592e\" returns successfully"
Aug 13 01:11:01.643205 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c2c222db93a69104be5eab831fa9f295b0fa48735aff0880fa26e263133b592e-rootfs.mount: Deactivated successfully.
Aug 13 01:11:01.651059 env[1228]: time="2025-08-13T01:11:01.650984681Z" level=info msg="shim disconnected" id=c2c222db93a69104be5eab831fa9f295b0fa48735aff0880fa26e263133b592e
Aug 13 01:11:01.651059 env[1228]: time="2025-08-13T01:11:01.651049950Z" level=warning msg="cleaning up after shim disconnected" id=c2c222db93a69104be5eab831fa9f295b0fa48735aff0880fa26e263133b592e namespace=k8s.io
Aug 13 01:11:01.651059 env[1228]: time="2025-08-13T01:11:01.651070705Z" level=info msg="cleaning up dead shim"
Aug 13 01:11:01.662747 env[1228]: time="2025-08-13T01:11:01.662687283Z" level=warning msg="cleanup warnings time=\"2025-08-13T01:11:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2648 runtime=io.containerd.runc.v2\n"
Aug 13 01:11:02.502721 env[1228]: time="2025-08-13T01:11:02.502656478Z" level=info msg="CreateContainer within sandbox \"e8542dd9ea92151d014a9c8a589e22be5873752be045af570d8d43fc414559b6\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Aug 13 01:11:02.528572 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4170774103.mount: Deactivated successfully.
Aug 13 01:11:02.540043 env[1228]: time="2025-08-13T01:11:02.539972038Z" level=info msg="CreateContainer within sandbox \"e8542dd9ea92151d014a9c8a589e22be5873752be045af570d8d43fc414559b6\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a7cf43451748725100433519fb578392e9435e55277fea8ec1153d4bf305fbec\""
Aug 13 01:11:02.542763 env[1228]: time="2025-08-13T01:11:02.542708151Z" level=info msg="StartContainer for \"a7cf43451748725100433519fb578392e9435e55277fea8ec1153d4bf305fbec\""
Aug 13 01:11:02.589522 systemd[1]: Started cri-containerd-a7cf43451748725100433519fb578392e9435e55277fea8ec1153d4bf305fbec.scope.
Aug 13 01:11:02.633441 env[1228]: time="2025-08-13T01:11:02.633378727Z" level=info msg="StartContainer for \"a7cf43451748725100433519fb578392e9435e55277fea8ec1153d4bf305fbec\" returns successfully"
Aug 13 01:11:02.783394 kubelet[2005]: I0813 01:11:02.783255 2005 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Aug 13 01:11:02.836210 systemd[1]: Created slice kubepods-burstable-pod7e49871e_fb05_433f_9b7a_197c6ea54443.slice.
Aug 13 01:11:02.852478 systemd[1]: Created slice kubepods-burstable-pod3fe9e204_8e9e_4580_9e5a_24334b29b0ae.slice.
Aug 13 01:11:02.882417 kubelet[2005]: I0813 01:11:02.882373 2005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrmmg\" (UniqueName: \"kubernetes.io/projected/7e49871e-fb05-433f-9b7a-197c6ea54443-kube-api-access-nrmmg\") pod \"coredns-7c65d6cfc9-hn6bw\" (UID: \"7e49871e-fb05-433f-9b7a-197c6ea54443\") " pod="kube-system/coredns-7c65d6cfc9-hn6bw" Aug 13 01:11:02.883049 kubelet[2005]: I0813 01:11:02.883014 2005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7e49871e-fb05-433f-9b7a-197c6ea54443-config-volume\") pod \"coredns-7c65d6cfc9-hn6bw\" (UID: \"7e49871e-fb05-433f-9b7a-197c6ea54443\") " pod="kube-system/coredns-7c65d6cfc9-hn6bw" Aug 13 01:11:02.984152 kubelet[2005]: I0813 01:11:02.984096 2005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3fe9e204-8e9e-4580-9e5a-24334b29b0ae-config-volume\") pod \"coredns-7c65d6cfc9-99qc4\" (UID: \"3fe9e204-8e9e-4580-9e5a-24334b29b0ae\") " pod="kube-system/coredns-7c65d6cfc9-99qc4" Aug 13 01:11:02.984421 kubelet[2005]: I0813 01:11:02.984164 2005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zztd6\" (UniqueName: \"kubernetes.io/projected/3fe9e204-8e9e-4580-9e5a-24334b29b0ae-kube-api-access-zztd6\") pod \"coredns-7c65d6cfc9-99qc4\" (UID: \"3fe9e204-8e9e-4580-9e5a-24334b29b0ae\") " pod="kube-system/coredns-7c65d6cfc9-99qc4" Aug 13 01:11:03.146902 env[1228]: time="2025-08-13T01:11:03.146238386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hn6bw,Uid:7e49871e-fb05-433f-9b7a-197c6ea54443,Namespace:kube-system,Attempt:0,}" Aug 13 01:11:03.158970 env[1228]: time="2025-08-13T01:11:03.158513125Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7c65d6cfc9-99qc4,Uid:3fe9e204-8e9e-4580-9e5a-24334b29b0ae,Namespace:kube-system,Attempt:0,}" Aug 13 01:11:03.528203 systemd[1]: run-containerd-runc-k8s.io-a7cf43451748725100433519fb578392e9435e55277fea8ec1153d4bf305fbec-runc.kOfS2W.mount: Deactivated successfully. Aug 13 01:11:03.535109 kubelet[2005]: I0813 01:11:03.535012 2005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-9zmgm" podStartSLOduration=10.066770861 podStartE2EDuration="22.53498666s" podCreationTimestamp="2025-08-13 01:10:41 +0000 UTC" firstStartedPulling="2025-08-13 01:10:44.022751227 +0000 UTC m=+6.813543504" lastFinishedPulling="2025-08-13 01:10:56.490967023 +0000 UTC m=+19.281759303" observedRunningTime="2025-08-13 01:11:03.532715245 +0000 UTC m=+26.323507534" watchObservedRunningTime="2025-08-13 01:11:03.53498666 +0000 UTC m=+26.325778949" Aug 13 01:11:04.962802 systemd-networkd[1030]: cilium_host: Link UP Aug 13 01:11:04.966398 systemd-networkd[1030]: cilium_net: Link UP Aug 13 01:11:04.970344 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Aug 13 01:11:04.978456 systemd-networkd[1030]: cilium_net: Gained carrier Aug 13 01:11:04.983318 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Aug 13 01:11:04.984100 systemd-networkd[1030]: cilium_host: Gained carrier Aug 13 01:11:04.989188 systemd-networkd[1030]: cilium_net: Gained IPv6LL Aug 13 01:11:05.131026 systemd-networkd[1030]: cilium_vxlan: Link UP Aug 13 01:11:05.131047 systemd-networkd[1030]: cilium_vxlan: Gained carrier Aug 13 01:11:05.253577 systemd-networkd[1030]: cilium_host: Gained IPv6LL Aug 13 01:11:05.408332 kernel: NET: Registered PF_ALG protocol family Aug 13 01:11:06.282625 systemd-networkd[1030]: lxc_health: Link UP Aug 13 01:11:06.311342 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Aug 13 01:11:06.312408 systemd-networkd[1030]: lxc_health: Gained carrier Aug 13 01:11:06.389526 
systemd-networkd[1030]: cilium_vxlan: Gained IPv6LL Aug 13 01:11:06.708618 systemd-networkd[1030]: lxc6e37685d2c8e: Link UP Aug 13 01:11:06.723330 kernel: eth0: renamed from tmp4d17b Aug 13 01:11:06.740337 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc6e37685d2c8e: link becomes ready Aug 13 01:11:06.745946 systemd-networkd[1030]: lxc6e37685d2c8e: Gained carrier Aug 13 01:11:06.755679 systemd-networkd[1030]: lxc1baa2049044b: Link UP Aug 13 01:11:06.769330 kernel: eth0: renamed from tmpb0294 Aug 13 01:11:06.782327 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc1baa2049044b: link becomes ready Aug 13 01:11:06.782681 systemd-networkd[1030]: lxc1baa2049044b: Gained carrier Aug 13 01:11:07.478072 systemd-networkd[1030]: lxc_health: Gained IPv6LL Aug 13 01:11:07.861518 systemd-networkd[1030]: lxc1baa2049044b: Gained IPv6LL Aug 13 01:11:08.054191 systemd-networkd[1030]: lxc6e37685d2c8e: Gained IPv6LL Aug 13 01:11:11.699388 env[1228]: time="2025-08-13T01:11:11.699270343Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 01:11:11.700225 env[1228]: time="2025-08-13T01:11:11.700176175Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 01:11:11.700454 env[1228]: time="2025-08-13T01:11:11.700412616Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:11:11.700846 env[1228]: time="2025-08-13T01:11:11.700796180Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b02943ab935c1728a903217ac5e1425a8aeb19bef9e8e6869579ad043bc20905 pid=3191 runtime=io.containerd.runc.v2 Aug 13 01:11:11.707651 env[1228]: time="2025-08-13T01:11:11.707564308Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 01:11:11.707912 env[1228]: time="2025-08-13T01:11:11.707865978Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 01:11:11.708120 env[1228]: time="2025-08-13T01:11:11.708075714Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:11:11.708544 env[1228]: time="2025-08-13T01:11:11.708497189Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4d17b518bfc19f44e5d223ab030cac6a65bba8e99f2f87845dfe6e4659f74d97 pid=3208 runtime=io.containerd.runc.v2 Aug 13 01:11:11.746109 systemd[1]: Started cri-containerd-4d17b518bfc19f44e5d223ab030cac6a65bba8e99f2f87845dfe6e4659f74d97.scope. Aug 13 01:11:11.763440 systemd[1]: run-containerd-runc-k8s.io-4d17b518bfc19f44e5d223ab030cac6a65bba8e99f2f87845dfe6e4659f74d97-runc.9wS8eg.mount: Deactivated successfully. Aug 13 01:11:11.799546 systemd[1]: Started cri-containerd-b02943ab935c1728a903217ac5e1425a8aeb19bef9e8e6869579ad043bc20905.scope. 
Aug 13 01:11:11.886950 env[1228]: time="2025-08-13T01:11:11.886889917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hn6bw,Uid:7e49871e-fb05-433f-9b7a-197c6ea54443,Namespace:kube-system,Attempt:0,} returns sandbox id \"4d17b518bfc19f44e5d223ab030cac6a65bba8e99f2f87845dfe6e4659f74d97\"" Aug 13 01:11:11.893677 env[1228]: time="2025-08-13T01:11:11.893627205Z" level=info msg="CreateContainer within sandbox \"4d17b518bfc19f44e5d223ab030cac6a65bba8e99f2f87845dfe6e4659f74d97\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 01:11:11.926334 env[1228]: time="2025-08-13T01:11:11.924244045Z" level=info msg="CreateContainer within sandbox \"4d17b518bfc19f44e5d223ab030cac6a65bba8e99f2f87845dfe6e4659f74d97\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0e10f7489ac721dd6b2138d5f86384f27e342a1f8c92e84eabd59c55a8ef798e\"" Aug 13 01:11:11.926334 env[1228]: time="2025-08-13T01:11:11.925160683Z" level=info msg="StartContainer for \"0e10f7489ac721dd6b2138d5f86384f27e342a1f8c92e84eabd59c55a8ef798e\"" Aug 13 01:11:11.950928 env[1228]: time="2025-08-13T01:11:11.949880856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-99qc4,Uid:3fe9e204-8e9e-4580-9e5a-24334b29b0ae,Namespace:kube-system,Attempt:0,} returns sandbox id \"b02943ab935c1728a903217ac5e1425a8aeb19bef9e8e6869579ad043bc20905\"" Aug 13 01:11:11.958184 env[1228]: time="2025-08-13T01:11:11.958133132Z" level=info msg="CreateContainer within sandbox \"b02943ab935c1728a903217ac5e1425a8aeb19bef9e8e6869579ad043bc20905\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 01:11:11.980533 systemd[1]: Started cri-containerd-0e10f7489ac721dd6b2138d5f86384f27e342a1f8c92e84eabd59c55a8ef798e.scope. 
Aug 13 01:11:11.986795 env[1228]: time="2025-08-13T01:11:11.986735065Z" level=info msg="CreateContainer within sandbox \"b02943ab935c1728a903217ac5e1425a8aeb19bef9e8e6869579ad043bc20905\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0fc23b36f4a8467ad43d930d7cb81faf111394066bf090fec1a6f1b1b203eb9b\"" Aug 13 01:11:11.987856 env[1228]: time="2025-08-13T01:11:11.987770748Z" level=info msg="StartContainer for \"0fc23b36f4a8467ad43d930d7cb81faf111394066bf090fec1a6f1b1b203eb9b\"" Aug 13 01:11:12.037077 systemd[1]: Started cri-containerd-0fc23b36f4a8467ad43d930d7cb81faf111394066bf090fec1a6f1b1b203eb9b.scope. Aug 13 01:11:12.050365 env[1228]: time="2025-08-13T01:11:12.049993614Z" level=info msg="StartContainer for \"0e10f7489ac721dd6b2138d5f86384f27e342a1f8c92e84eabd59c55a8ef798e\" returns successfully" Aug 13 01:11:12.156340 env[1228]: time="2025-08-13T01:11:12.155780105Z" level=info msg="StartContainer for \"0fc23b36f4a8467ad43d930d7cb81faf111394066bf090fec1a6f1b1b203eb9b\" returns successfully" Aug 13 01:11:12.545399 kubelet[2005]: I0813 01:11:12.545292 2005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-99qc4" podStartSLOduration=30.545269938 podStartE2EDuration="30.545269938s" podCreationTimestamp="2025-08-13 01:10:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 01:11:12.542465091 +0000 UTC m=+35.333257381" watchObservedRunningTime="2025-08-13 01:11:12.545269938 +0000 UTC m=+35.336062229" Aug 13 01:11:12.568340 kubelet[2005]: I0813 01:11:12.568242 2005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-hn6bw" podStartSLOduration=30.568215986 podStartE2EDuration="30.568215986s" podCreationTimestamp="2025-08-13 01:10:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-08-13 01:11:12.562934353 +0000 UTC m=+35.353726644" watchObservedRunningTime="2025-08-13 01:11:12.568215986 +0000 UTC m=+35.359008275" Aug 13 01:11:12.717124 systemd[1]: run-containerd-runc-k8s.io-b02943ab935c1728a903217ac5e1425a8aeb19bef9e8e6869579ad043bc20905-runc.OOUSpH.mount: Deactivated successfully. Aug 13 01:11:36.179374 systemd[1]: Started sshd@5-10.128.0.44:22-78.128.112.74:52722.service. Aug 13 01:11:36.320450 systemd[1]: Started sshd@6-10.128.0.44:22-139.178.68.195:34336.service. Aug 13 01:11:36.617235 sshd[3362]: Accepted publickey for core from 139.178.68.195 port 34336 ssh2: RSA SHA256:js7QLiUsN7/S4hY8YN2wBIB3ZHNNF040gi6scZJjeR4 Aug 13 01:11:36.619275 sshd[3362]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:11:36.626425 systemd-logind[1219]: New session 6 of user core. Aug 13 01:11:36.626429 systemd[1]: Started session-6.scope. Aug 13 01:11:36.791642 sshd[3359]: Invalid user user from 78.128.112.74 port 52722 Aug 13 01:11:36.937552 sshd[3362]: pam_unix(sshd:session): session closed for user core Aug 13 01:11:36.939789 sshd[3359]: Failed password for invalid user user from 78.128.112.74 port 52722 ssh2 Aug 13 01:11:36.942573 systemd[1]: sshd@6-10.128.0.44:22-139.178.68.195:34336.service: Deactivated successfully. Aug 13 01:11:36.943879 systemd[1]: session-6.scope: Deactivated successfully. Aug 13 01:11:36.944815 systemd-logind[1219]: Session 6 logged out. Waiting for processes to exit. Aug 13 01:11:36.946189 systemd-logind[1219]: Removed session 6. Aug 13 01:11:37.087074 sshd[3359]: Connection closed by invalid user user 78.128.112.74 port 52722 [preauth] Aug 13 01:11:37.089273 systemd[1]: sshd@5-10.128.0.44:22-78.128.112.74:52722.service: Deactivated successfully. Aug 13 01:11:41.984656 systemd[1]: Started sshd@7-10.128.0.44:22-139.178.68.195:42852.service. 
Aug 13 01:11:42.279941 sshd[3378]: Accepted publickey for core from 139.178.68.195 port 42852 ssh2: RSA SHA256:js7QLiUsN7/S4hY8YN2wBIB3ZHNNF040gi6scZJjeR4 Aug 13 01:11:42.281821 sshd[3378]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:11:42.288384 systemd-logind[1219]: New session 7 of user core. Aug 13 01:11:42.289695 systemd[1]: Started session-7.scope. Aug 13 01:11:42.578609 sshd[3378]: pam_unix(sshd:session): session closed for user core Aug 13 01:11:42.583440 systemd-logind[1219]: Session 7 logged out. Waiting for processes to exit. Aug 13 01:11:42.583932 systemd[1]: sshd@7-10.128.0.44:22-139.178.68.195:42852.service: Deactivated successfully. Aug 13 01:11:42.585136 systemd[1]: session-7.scope: Deactivated successfully. Aug 13 01:11:42.586912 systemd-logind[1219]: Removed session 7. Aug 13 01:11:47.626702 systemd[1]: Started sshd@8-10.128.0.44:22-139.178.68.195:42856.service. Aug 13 01:11:47.922825 sshd[3393]: Accepted publickey for core from 139.178.68.195 port 42856 ssh2: RSA SHA256:js7QLiUsN7/S4hY8YN2wBIB3ZHNNF040gi6scZJjeR4 Aug 13 01:11:47.924991 sshd[3393]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:11:47.932066 systemd[1]: Started session-8.scope. Aug 13 01:11:47.932813 systemd-logind[1219]: New session 8 of user core. Aug 13 01:11:48.213890 sshd[3393]: pam_unix(sshd:session): session closed for user core Aug 13 01:11:48.219115 systemd[1]: sshd@8-10.128.0.44:22-139.178.68.195:42856.service: Deactivated successfully. Aug 13 01:11:48.220444 systemd[1]: session-8.scope: Deactivated successfully. Aug 13 01:11:48.221520 systemd-logind[1219]: Session 8 logged out. Waiting for processes to exit. Aug 13 01:11:48.222825 systemd-logind[1219]: Removed session 8. Aug 13 01:11:53.262163 systemd[1]: Started sshd@9-10.128.0.44:22-139.178.68.195:56828.service. 
Aug 13 01:11:53.558689 sshd[3405]: Accepted publickey for core from 139.178.68.195 port 56828 ssh2: RSA SHA256:js7QLiUsN7/S4hY8YN2wBIB3ZHNNF040gi6scZJjeR4 Aug 13 01:11:53.560882 sshd[3405]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:11:53.567968 systemd[1]: Started session-9.scope. Aug 13 01:11:53.568885 systemd-logind[1219]: New session 9 of user core. Aug 13 01:11:53.853142 sshd[3405]: pam_unix(sshd:session): session closed for user core Aug 13 01:11:53.858043 systemd[1]: sshd@9-10.128.0.44:22-139.178.68.195:56828.service: Deactivated successfully. Aug 13 01:11:53.859163 systemd[1]: session-9.scope: Deactivated successfully. Aug 13 01:11:53.860382 systemd-logind[1219]: Session 9 logged out. Waiting for processes to exit. Aug 13 01:11:53.861712 systemd-logind[1219]: Removed session 9. Aug 13 01:11:58.899955 systemd[1]: Started sshd@10-10.128.0.44:22-139.178.68.195:56830.service. Aug 13 01:11:59.192676 sshd[3418]: Accepted publickey for core from 139.178.68.195 port 56830 ssh2: RSA SHA256:js7QLiUsN7/S4hY8YN2wBIB3ZHNNF040gi6scZJjeR4 Aug 13 01:11:59.194692 sshd[3418]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:11:59.201657 systemd[1]: Started session-10.scope. Aug 13 01:11:59.202703 systemd-logind[1219]: New session 10 of user core. Aug 13 01:11:59.493482 sshd[3418]: pam_unix(sshd:session): session closed for user core Aug 13 01:11:59.498361 systemd-logind[1219]: Session 10 logged out. Waiting for processes to exit. Aug 13 01:11:59.498624 systemd[1]: sshd@10-10.128.0.44:22-139.178.68.195:56830.service: Deactivated successfully. Aug 13 01:11:59.499823 systemd[1]: session-10.scope: Deactivated successfully. Aug 13 01:11:59.501161 systemd-logind[1219]: Removed session 10. Aug 13 01:11:59.541062 systemd[1]: Started sshd@11-10.128.0.44:22-139.178.68.195:56836.service. 
Aug 13 01:11:59.836827 sshd[3431]: Accepted publickey for core from 139.178.68.195 port 56836 ssh2: RSA SHA256:js7QLiUsN7/S4hY8YN2wBIB3ZHNNF040gi6scZJjeR4 Aug 13 01:11:59.838639 sshd[3431]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:11:59.846075 systemd[1]: Started session-11.scope. Aug 13 01:11:59.846834 systemd-logind[1219]: New session 11 of user core. Aug 13 01:12:00.169394 sshd[3431]: pam_unix(sshd:session): session closed for user core Aug 13 01:12:00.176835 systemd-logind[1219]: Session 11 logged out. Waiting for processes to exit. Aug 13 01:12:00.179381 systemd[1]: sshd@11-10.128.0.44:22-139.178.68.195:56836.service: Deactivated successfully. Aug 13 01:12:00.180562 systemd[1]: session-11.scope: Deactivated successfully. Aug 13 01:12:00.183241 systemd-logind[1219]: Removed session 11. Aug 13 01:12:00.217178 systemd[1]: Started sshd@12-10.128.0.44:22-139.178.68.195:42608.service. Aug 13 01:12:00.518907 sshd[3441]: Accepted publickey for core from 139.178.68.195 port 42608 ssh2: RSA SHA256:js7QLiUsN7/S4hY8YN2wBIB3ZHNNF040gi6scZJjeR4 Aug 13 01:12:00.520952 sshd[3441]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:12:00.527617 systemd[1]: Started session-12.scope. Aug 13 01:12:00.528326 systemd-logind[1219]: New session 12 of user core. Aug 13 01:12:00.808999 sshd[3441]: pam_unix(sshd:session): session closed for user core Aug 13 01:12:00.813636 systemd[1]: sshd@12-10.128.0.44:22-139.178.68.195:42608.service: Deactivated successfully. Aug 13 01:12:00.814828 systemd[1]: session-12.scope: Deactivated successfully. Aug 13 01:12:00.815813 systemd-logind[1219]: Session 12 logged out. Waiting for processes to exit. Aug 13 01:12:00.817211 systemd-logind[1219]: Removed session 12. Aug 13 01:12:05.857151 systemd[1]: Started sshd@13-10.128.0.44:22-139.178.68.195:42622.service. 
Aug 13 01:12:06.154240 sshd[3453]: Accepted publickey for core from 139.178.68.195 port 42622 ssh2: RSA SHA256:js7QLiUsN7/S4hY8YN2wBIB3ZHNNF040gi6scZJjeR4 Aug 13 01:12:06.156103 sshd[3453]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:12:06.162920 systemd-logind[1219]: New session 13 of user core. Aug 13 01:12:06.163751 systemd[1]: Started session-13.scope. Aug 13 01:12:06.446211 sshd[3453]: pam_unix(sshd:session): session closed for user core Aug 13 01:12:06.451204 systemd[1]: sshd@13-10.128.0.44:22-139.178.68.195:42622.service: Deactivated successfully. Aug 13 01:12:06.452606 systemd[1]: session-13.scope: Deactivated successfully. Aug 13 01:12:06.453706 systemd-logind[1219]: Session 13 logged out. Waiting for processes to exit. Aug 13 01:12:06.454943 systemd-logind[1219]: Removed session 13. Aug 13 01:12:11.494214 systemd[1]: Started sshd@14-10.128.0.44:22-139.178.68.195:40214.service. Aug 13 01:12:11.787927 sshd[3466]: Accepted publickey for core from 139.178.68.195 port 40214 ssh2: RSA SHA256:js7QLiUsN7/S4hY8YN2wBIB3ZHNNF040gi6scZJjeR4 Aug 13 01:12:11.790443 sshd[3466]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:12:11.797636 systemd[1]: Started session-14.scope. Aug 13 01:12:11.798272 systemd-logind[1219]: New session 14 of user core. Aug 13 01:12:12.077283 sshd[3466]: pam_unix(sshd:session): session closed for user core Aug 13 01:12:12.082463 systemd[1]: sshd@14-10.128.0.44:22-139.178.68.195:40214.service: Deactivated successfully. Aug 13 01:12:12.083698 systemd[1]: session-14.scope: Deactivated successfully. Aug 13 01:12:12.084745 systemd-logind[1219]: Session 14 logged out. Waiting for processes to exit. Aug 13 01:12:12.086156 systemd-logind[1219]: Removed session 14. Aug 13 01:12:12.124410 systemd[1]: Started sshd@15-10.128.0.44:22-139.178.68.195:40216.service. 
Aug 13 01:12:12.418412 sshd[3478]: Accepted publickey for core from 139.178.68.195 port 40216 ssh2: RSA SHA256:js7QLiUsN7/S4hY8YN2wBIB3ZHNNF040gi6scZJjeR4 Aug 13 01:12:12.420662 sshd[3478]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:12:12.427469 systemd[1]: Started session-15.scope. Aug 13 01:12:12.428173 systemd-logind[1219]: New session 15 of user core. Aug 13 01:12:12.790734 sshd[3478]: pam_unix(sshd:session): session closed for user core Aug 13 01:12:12.795405 systemd[1]: sshd@15-10.128.0.44:22-139.178.68.195:40216.service: Deactivated successfully. Aug 13 01:12:12.796666 systemd[1]: session-15.scope: Deactivated successfully. Aug 13 01:12:12.798729 systemd-logind[1219]: Session 15 logged out. Waiting for processes to exit. Aug 13 01:12:12.801489 systemd-logind[1219]: Removed session 15. Aug 13 01:12:12.838148 systemd[1]: Started sshd@16-10.128.0.44:22-139.178.68.195:40230.service. Aug 13 01:12:13.129933 sshd[3490]: Accepted publickey for core from 139.178.68.195 port 40230 ssh2: RSA SHA256:js7QLiUsN7/S4hY8YN2wBIB3ZHNNF040gi6scZJjeR4 Aug 13 01:12:13.131674 sshd[3490]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:12:13.138340 systemd[1]: Started session-16.scope. Aug 13 01:12:13.138975 systemd-logind[1219]: New session 16 of user core. Aug 13 01:12:14.898862 sshd[3490]: pam_unix(sshd:session): session closed for user core Aug 13 01:12:14.906293 systemd-logind[1219]: Session 16 logged out. Waiting for processes to exit. Aug 13 01:12:14.906645 systemd[1]: sshd@16-10.128.0.44:22-139.178.68.195:40230.service: Deactivated successfully. Aug 13 01:12:14.907848 systemd[1]: session-16.scope: Deactivated successfully. Aug 13 01:12:14.909898 systemd-logind[1219]: Removed session 16. Aug 13 01:12:14.948109 systemd[1]: Started sshd@17-10.128.0.44:22-139.178.68.195:40238.service. 
Aug 13 01:12:15.243533 sshd[3507]: Accepted publickey for core from 139.178.68.195 port 40238 ssh2: RSA SHA256:js7QLiUsN7/S4hY8YN2wBIB3ZHNNF040gi6scZJjeR4 Aug 13 01:12:15.245733 sshd[3507]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:12:15.252742 systemd-logind[1219]: New session 17 of user core. Aug 13 01:12:15.253575 systemd[1]: Started session-17.scope. Aug 13 01:12:15.672751 sshd[3507]: pam_unix(sshd:session): session closed for user core Aug 13 01:12:15.678516 systemd[1]: sshd@17-10.128.0.44:22-139.178.68.195:40238.service: Deactivated successfully. Aug 13 01:12:15.679746 systemd[1]: session-17.scope: Deactivated successfully. Aug 13 01:12:15.680704 systemd-logind[1219]: Session 17 logged out. Waiting for processes to exit. Aug 13 01:12:15.682014 systemd-logind[1219]: Removed session 17. Aug 13 01:12:15.720036 systemd[1]: Started sshd@18-10.128.0.44:22-139.178.68.195:40240.service. Aug 13 01:12:16.013773 sshd[3517]: Accepted publickey for core from 139.178.68.195 port 40240 ssh2: RSA SHA256:js7QLiUsN7/S4hY8YN2wBIB3ZHNNF040gi6scZJjeR4 Aug 13 01:12:16.016081 sshd[3517]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:12:16.023395 systemd-logind[1219]: New session 18 of user core. Aug 13 01:12:16.024462 systemd[1]: Started session-18.scope. Aug 13 01:12:16.307161 sshd[3517]: pam_unix(sshd:session): session closed for user core Aug 13 01:12:16.312727 systemd[1]: sshd@18-10.128.0.44:22-139.178.68.195:40240.service: Deactivated successfully. Aug 13 01:12:16.313894 systemd[1]: session-18.scope: Deactivated successfully. Aug 13 01:12:16.314401 systemd-logind[1219]: Session 18 logged out. Waiting for processes to exit. Aug 13 01:12:16.315862 systemd-logind[1219]: Removed session 18. Aug 13 01:12:21.356785 systemd[1]: Started sshd@19-10.128.0.44:22-139.178.68.195:43772.service. 
Aug 13 01:12:21.654810 sshd[3533]: Accepted publickey for core from 139.178.68.195 port 43772 ssh2: RSA SHA256:js7QLiUsN7/S4hY8YN2wBIB3ZHNNF040gi6scZJjeR4 Aug 13 01:12:21.657019 sshd[3533]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:12:21.663855 systemd[1]: Started session-19.scope. Aug 13 01:12:21.664811 systemd-logind[1219]: New session 19 of user core. Aug 13 01:12:21.942522 sshd[3533]: pam_unix(sshd:session): session closed for user core Aug 13 01:12:21.947056 systemd[1]: sshd@19-10.128.0.44:22-139.178.68.195:43772.service: Deactivated successfully. Aug 13 01:12:21.948327 systemd[1]: session-19.scope: Deactivated successfully. Aug 13 01:12:21.949398 systemd-logind[1219]: Session 19 logged out. Waiting for processes to exit. Aug 13 01:12:21.951330 systemd-logind[1219]: Removed session 19. Aug 13 01:12:26.989695 systemd[1]: Started sshd@20-10.128.0.44:22-139.178.68.195:43784.service. Aug 13 01:12:27.284640 sshd[3546]: Accepted publickey for core from 139.178.68.195 port 43784 ssh2: RSA SHA256:js7QLiUsN7/S4hY8YN2wBIB3ZHNNF040gi6scZJjeR4 Aug 13 01:12:27.287026 sshd[3546]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:12:27.294440 systemd[1]: Started session-20.scope. Aug 13 01:12:27.295407 systemd-logind[1219]: New session 20 of user core. Aug 13 01:12:27.577553 sshd[3546]: pam_unix(sshd:session): session closed for user core Aug 13 01:12:27.582346 systemd[1]: sshd@20-10.128.0.44:22-139.178.68.195:43784.service: Deactivated successfully. Aug 13 01:12:27.583565 systemd[1]: session-20.scope: Deactivated successfully. Aug 13 01:12:27.584756 systemd-logind[1219]: Session 20 logged out. Waiting for processes to exit. Aug 13 01:12:27.586038 systemd-logind[1219]: Removed session 20. Aug 13 01:12:32.626226 systemd[1]: Started sshd@21-10.128.0.44:22-139.178.68.195:57554.service. 
Aug 13 01:12:32.921994 sshd[3558]: Accepted publickey for core from 139.178.68.195 port 57554 ssh2: RSA SHA256:js7QLiUsN7/S4hY8YN2wBIB3ZHNNF040gi6scZJjeR4 Aug 13 01:12:32.924338 sshd[3558]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:12:32.931350 systemd[1]: Started session-21.scope. Aug 13 01:12:32.931973 systemd-logind[1219]: New session 21 of user core. Aug 13 01:12:33.212929 sshd[3558]: pam_unix(sshd:session): session closed for user core Aug 13 01:12:33.217679 systemd-logind[1219]: Session 21 logged out. Waiting for processes to exit. Aug 13 01:12:33.217989 systemd[1]: sshd@21-10.128.0.44:22-139.178.68.195:57554.service: Deactivated successfully. Aug 13 01:12:33.219202 systemd[1]: session-21.scope: Deactivated successfully. Aug 13 01:12:33.220398 systemd-logind[1219]: Removed session 21. Aug 13 01:12:38.261924 systemd[1]: Started sshd@22-10.128.0.44:22-139.178.68.195:57558.service. Aug 13 01:12:38.561075 sshd[3572]: Accepted publickey for core from 139.178.68.195 port 57558 ssh2: RSA SHA256:js7QLiUsN7/S4hY8YN2wBIB3ZHNNF040gi6scZJjeR4 Aug 13 01:12:38.563377 sshd[3572]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:12:38.569643 systemd-logind[1219]: New session 22 of user core. Aug 13 01:12:38.570683 systemd[1]: Started session-22.scope. Aug 13 01:12:38.852890 sshd[3572]: pam_unix(sshd:session): session closed for user core Aug 13 01:12:38.857668 systemd[1]: sshd@22-10.128.0.44:22-139.178.68.195:57558.service: Deactivated successfully. Aug 13 01:12:38.858877 systemd[1]: session-22.scope: Deactivated successfully. Aug 13 01:12:38.859810 systemd-logind[1219]: Session 22 logged out. Waiting for processes to exit. Aug 13 01:12:38.861094 systemd-logind[1219]: Removed session 22. Aug 13 01:12:38.900565 systemd[1]: Started sshd@23-10.128.0.44:22-139.178.68.195:57574.service. 
Aug 13 01:12:39.195448 sshd[3584]: Accepted publickey for core from 139.178.68.195 port 57574 ssh2: RSA SHA256:js7QLiUsN7/S4hY8YN2wBIB3ZHNNF040gi6scZJjeR4 Aug 13 01:12:39.197865 sshd[3584]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:12:39.205214 systemd[1]: Started session-23.scope. Aug 13 01:12:39.205899 systemd-logind[1219]: New session 23 of user core. Aug 13 01:12:40.958901 env[1228]: time="2025-08-13T01:12:40.956107986Z" level=info msg="StopContainer for \"a84a7519a1668062b6a7fae47b387860fd2586252cbbe9e097e9ed0b9218aad9\" with timeout 30 (s)" Aug 13 01:12:40.958901 env[1228]: time="2025-08-13T01:12:40.956760709Z" level=info msg="Stop container \"a84a7519a1668062b6a7fae47b387860fd2586252cbbe9e097e9ed0b9218aad9\" with signal terminated" Aug 13 01:12:40.979279 systemd[1]: cri-containerd-a84a7519a1668062b6a7fae47b387860fd2586252cbbe9e097e9ed0b9218aad9.scope: Deactivated successfully. Aug 13 01:12:40.994784 env[1228]: time="2025-08-13T01:12:40.994690219Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 01:12:41.006986 env[1228]: time="2025-08-13T01:12:41.006901811Z" level=info msg="StopContainer for \"a7cf43451748725100433519fb578392e9435e55277fea8ec1153d4bf305fbec\" with timeout 2 (s)" Aug 13 01:12:41.007261 env[1228]: time="2025-08-13T01:12:41.007225326Z" level=info msg="Stop container \"a7cf43451748725100433519fb578392e9435e55277fea8ec1153d4bf305fbec\" with signal terminated" Aug 13 01:12:41.024660 systemd-networkd[1030]: lxc_health: Link DOWN Aug 13 01:12:41.024708 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a84a7519a1668062b6a7fae47b387860fd2586252cbbe9e097e9ed0b9218aad9-rootfs.mount: Deactivated successfully. 
Aug 13 01:12:41.025364 systemd-networkd[1030]: lxc_health: Lost carrier
Aug 13 01:12:41.047838 systemd[1]: cri-containerd-a7cf43451748725100433519fb578392e9435e55277fea8ec1153d4bf305fbec.scope: Deactivated successfully.
Aug 13 01:12:41.048212 systemd[1]: cri-containerd-a7cf43451748725100433519fb578392e9435e55277fea8ec1153d4bf305fbec.scope: Consumed 9.232s CPU time.
Aug 13 01:12:41.062639 env[1228]: time="2025-08-13T01:12:41.062577665Z" level=info msg="shim disconnected" id=a84a7519a1668062b6a7fae47b387860fd2586252cbbe9e097e9ed0b9218aad9
Aug 13 01:12:41.063254 env[1228]: time="2025-08-13T01:12:41.063213462Z" level=warning msg="cleaning up after shim disconnected" id=a84a7519a1668062b6a7fae47b387860fd2586252cbbe9e097e9ed0b9218aad9 namespace=k8s.io
Aug 13 01:12:41.063448 env[1228]: time="2025-08-13T01:12:41.063423429Z" level=info msg="cleaning up dead shim"
Aug 13 01:12:41.093095 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a7cf43451748725100433519fb578392e9435e55277fea8ec1153d4bf305fbec-rootfs.mount: Deactivated successfully.
Aug 13 01:12:41.096709 env[1228]: time="2025-08-13T01:12:41.096636120Z" level=warning msg="cleanup warnings time=\"2025-08-13T01:12:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3643 runtime=io.containerd.runc.v2\n"
Aug 13 01:12:41.100546 env[1228]: time="2025-08-13T01:12:41.100508972Z" level=info msg="StopContainer for \"a84a7519a1668062b6a7fae47b387860fd2586252cbbe9e097e9ed0b9218aad9\" returns successfully"
Aug 13 01:12:41.101820 env[1228]: time="2025-08-13T01:12:41.101777113Z" level=info msg="StopPodSandbox for \"dd2f7d18b570e06710fd7a39e793068b0bb568beae7b4ae8631f41dae6cea083\""
Aug 13 01:12:41.102071 env[1228]: time="2025-08-13T01:12:41.102030840Z" level=info msg="Container to stop \"a84a7519a1668062b6a7fae47b387860fd2586252cbbe9e097e9ed0b9218aad9\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 01:12:41.107870 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dd2f7d18b570e06710fd7a39e793068b0bb568beae7b4ae8631f41dae6cea083-shm.mount: Deactivated successfully.
Aug 13 01:12:41.110964 env[1228]: time="2025-08-13T01:12:41.110903931Z" level=info msg="shim disconnected" id=a7cf43451748725100433519fb578392e9435e55277fea8ec1153d4bf305fbec
Aug 13 01:12:41.111228 env[1228]: time="2025-08-13T01:12:41.111180003Z" level=warning msg="cleaning up after shim disconnected" id=a7cf43451748725100433519fb578392e9435e55277fea8ec1153d4bf305fbec namespace=k8s.io
Aug 13 01:12:41.111415 env[1228]: time="2025-08-13T01:12:41.111387267Z" level=info msg="cleaning up dead shim"
Aug 13 01:12:41.124163 systemd[1]: cri-containerd-dd2f7d18b570e06710fd7a39e793068b0bb568beae7b4ae8631f41dae6cea083.scope: Deactivated successfully.
Aug 13 01:12:41.133790 env[1228]: time="2025-08-13T01:12:41.133727032Z" level=warning msg="cleanup warnings time=\"2025-08-13T01:12:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3663 runtime=io.containerd.runc.v2\n"
Aug 13 01:12:41.136427 env[1228]: time="2025-08-13T01:12:41.136373080Z" level=info msg="StopContainer for \"a7cf43451748725100433519fb578392e9435e55277fea8ec1153d4bf305fbec\" returns successfully"
Aug 13 01:12:41.137275 env[1228]: time="2025-08-13T01:12:41.137233269Z" level=info msg="StopPodSandbox for \"e8542dd9ea92151d014a9c8a589e22be5873752be045af570d8d43fc414559b6\""
Aug 13 01:12:41.137415 env[1228]: time="2025-08-13T01:12:41.137377257Z" level=info msg="Container to stop \"c2c222db93a69104be5eab831fa9f295b0fa48735aff0880fa26e263133b592e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 01:12:41.137415 env[1228]: time="2025-08-13T01:12:41.137405086Z" level=info msg="Container to stop \"a7cf43451748725100433519fb578392e9435e55277fea8ec1153d4bf305fbec\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 01:12:41.137535 env[1228]: time="2025-08-13T01:12:41.137423488Z" level=info msg="Container to stop \"6fb6b4cb0cf713dea4ea0666723ff0f79b428c2e829475b8036539f96dcf08f7\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 01:12:41.137535 env[1228]: time="2025-08-13T01:12:41.137443352Z" level=info msg="Container to stop \"686e987aaccb5bb3cd914b423d257465555651d8c026c5a46a2d655f8aa94f70\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 01:12:41.137535 env[1228]: time="2025-08-13T01:12:41.137463524Z" level=info msg="Container to stop \"f97b26d58d6c9a6c56ef7c56f04396da02bff6b0e4e8ca297dace9900ef9bfed\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 01:12:41.141410 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e8542dd9ea92151d014a9c8a589e22be5873752be045af570d8d43fc414559b6-shm.mount: Deactivated successfully.
Aug 13 01:12:41.154246 systemd[1]: cri-containerd-e8542dd9ea92151d014a9c8a589e22be5873752be045af570d8d43fc414559b6.scope: Deactivated successfully.
Aug 13 01:12:41.176082 env[1228]: time="2025-08-13T01:12:41.176018629Z" level=info msg="shim disconnected" id=dd2f7d18b570e06710fd7a39e793068b0bb568beae7b4ae8631f41dae6cea083
Aug 13 01:12:41.176533 env[1228]: time="2025-08-13T01:12:41.176486145Z" level=warning msg="cleaning up after shim disconnected" id=dd2f7d18b570e06710fd7a39e793068b0bb568beae7b4ae8631f41dae6cea083 namespace=k8s.io
Aug 13 01:12:41.177269 env[1228]: time="2025-08-13T01:12:41.177238340Z" level=info msg="cleaning up dead shim"
Aug 13 01:12:41.197544 env[1228]: time="2025-08-13T01:12:41.197490675Z" level=warning msg="cleanup warnings time=\"2025-08-13T01:12:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3710 runtime=io.containerd.runc.v2\n"
Aug 13 01:12:41.198160 env[1228]: time="2025-08-13T01:12:41.198112412Z" level=info msg="TearDown network for sandbox \"dd2f7d18b570e06710fd7a39e793068b0bb568beae7b4ae8631f41dae6cea083\" successfully"
Aug 13 01:12:41.198284 env[1228]: time="2025-08-13T01:12:41.198158872Z" level=info msg="StopPodSandbox for \"dd2f7d18b570e06710fd7a39e793068b0bb568beae7b4ae8631f41dae6cea083\" returns successfully"
Aug 13 01:12:41.200846 env[1228]: time="2025-08-13T01:12:41.200795605Z" level=info msg="shim disconnected" id=e8542dd9ea92151d014a9c8a589e22be5873752be045af570d8d43fc414559b6
Aug 13 01:12:41.201075 env[1228]: time="2025-08-13T01:12:41.201006421Z" level=warning msg="cleaning up after shim disconnected" id=e8542dd9ea92151d014a9c8a589e22be5873752be045af570d8d43fc414559b6 namespace=k8s.io
Aug 13 01:12:41.201204 env[1228]: time="2025-08-13T01:12:41.201178554Z" level=info msg="cleaning up dead shim"
Aug 13 01:12:41.221524 env[1228]: time="2025-08-13T01:12:41.221341932Z" level=warning msg="cleanup warnings time=\"2025-08-13T01:12:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3729 runtime=io.containerd.runc.v2\n"
Aug 13 01:12:41.224892 env[1228]: time="2025-08-13T01:12:41.223991262Z" level=info msg="TearDown network for sandbox \"e8542dd9ea92151d014a9c8a589e22be5873752be045af570d8d43fc414559b6\" successfully"
Aug 13 01:12:41.224892 env[1228]: time="2025-08-13T01:12:41.224040030Z" level=info msg="StopPodSandbox for \"e8542dd9ea92151d014a9c8a589e22be5873752be045af570d8d43fc414559b6\" returns successfully"
Aug 13 01:12:41.327687 kubelet[2005]: I0813 01:12:41.327622 2005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/69a141d8-12b6-4109-8050-e139aaaebbec-host-proc-sys-net\") pod \"69a141d8-12b6-4109-8050-e139aaaebbec\" (UID: \"69a141d8-12b6-4109-8050-e139aaaebbec\") "
Aug 13 01:12:41.327687 kubelet[2005]: I0813 01:12:41.327685 2005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/69a141d8-12b6-4109-8050-e139aaaebbec-cilium-run\") pod \"69a141d8-12b6-4109-8050-e139aaaebbec\" (UID: \"69a141d8-12b6-4109-8050-e139aaaebbec\") "
Aug 13 01:12:41.328436 kubelet[2005]: I0813 01:12:41.327729 2005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/69a141d8-12b6-4109-8050-e139aaaebbec-lib-modules\") pod \"69a141d8-12b6-4109-8050-e139aaaebbec\" (UID: \"69a141d8-12b6-4109-8050-e139aaaebbec\") "
Aug 13 01:12:41.328436 kubelet[2005]: I0813 01:12:41.327752 2005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/69a141d8-12b6-4109-8050-e139aaaebbec-cilium-cgroup\") pod \"69a141d8-12b6-4109-8050-e139aaaebbec\" (UID: \"69a141d8-12b6-4109-8050-e139aaaebbec\") "
Aug 13 01:12:41.328436 kubelet[2005]: I0813 01:12:41.327788 2005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1a7ad985-1d2a-44f0-b481-8cf72c789c5f-cilium-config-path\") pod \"1a7ad985-1d2a-44f0-b481-8cf72c789c5f\" (UID: \"1a7ad985-1d2a-44f0-b481-8cf72c789c5f\") "
Aug 13 01:12:41.328436 kubelet[2005]: I0813 01:12:41.327819 2005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/69a141d8-12b6-4109-8050-e139aaaebbec-clustermesh-secrets\") pod \"69a141d8-12b6-4109-8050-e139aaaebbec\" (UID: \"69a141d8-12b6-4109-8050-e139aaaebbec\") "
Aug 13 01:12:41.328436 kubelet[2005]: I0813 01:12:41.327848 2005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r4lvz\" (UniqueName: \"kubernetes.io/projected/69a141d8-12b6-4109-8050-e139aaaebbec-kube-api-access-r4lvz\") pod \"69a141d8-12b6-4109-8050-e139aaaebbec\" (UID: \"69a141d8-12b6-4109-8050-e139aaaebbec\") "
Aug 13 01:12:41.328436 kubelet[2005]: I0813 01:12:41.327875 2005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/69a141d8-12b6-4109-8050-e139aaaebbec-hostproc\") pod \"69a141d8-12b6-4109-8050-e139aaaebbec\" (UID: \"69a141d8-12b6-4109-8050-e139aaaebbec\") "
Aug 13 01:12:41.328790 kubelet[2005]: I0813 01:12:41.327905 2005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/69a141d8-12b6-4109-8050-e139aaaebbec-bpf-maps\") pod \"69a141d8-12b6-4109-8050-e139aaaebbec\" (UID: \"69a141d8-12b6-4109-8050-e139aaaebbec\") "
Aug 13 01:12:41.328790 kubelet[2005]: I0813 01:12:41.327928 2005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/69a141d8-12b6-4109-8050-e139aaaebbec-cni-path\") pod \"69a141d8-12b6-4109-8050-e139aaaebbec\" (UID: \"69a141d8-12b6-4109-8050-e139aaaebbec\") "
Aug 13 01:12:41.328790 kubelet[2005]: I0813 01:12:41.327954 2005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/69a141d8-12b6-4109-8050-e139aaaebbec-hubble-tls\") pod \"69a141d8-12b6-4109-8050-e139aaaebbec\" (UID: \"69a141d8-12b6-4109-8050-e139aaaebbec\") "
Aug 13 01:12:41.328790 kubelet[2005]: I0813 01:12:41.327986 2005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/69a141d8-12b6-4109-8050-e139aaaebbec-cilium-config-path\") pod \"69a141d8-12b6-4109-8050-e139aaaebbec\" (UID: \"69a141d8-12b6-4109-8050-e139aaaebbec\") "
Aug 13 01:12:41.328790 kubelet[2005]: I0813 01:12:41.328019 2005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-59csf\" (UniqueName: \"kubernetes.io/projected/1a7ad985-1d2a-44f0-b481-8cf72c789c5f-kube-api-access-59csf\") pod \"1a7ad985-1d2a-44f0-b481-8cf72c789c5f\" (UID: \"1a7ad985-1d2a-44f0-b481-8cf72c789c5f\") "
Aug 13 01:12:41.328790 kubelet[2005]: I0813 01:12:41.328049 2005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/69a141d8-12b6-4109-8050-e139aaaebbec-host-proc-sys-kernel\") pod \"69a141d8-12b6-4109-8050-e139aaaebbec\" (UID: \"69a141d8-12b6-4109-8050-e139aaaebbec\") "
Aug 13 01:12:41.329206 kubelet[2005]: I0813 01:12:41.328075 2005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/69a141d8-12b6-4109-8050-e139aaaebbec-etc-cni-netd\") pod \"69a141d8-12b6-4109-8050-e139aaaebbec\" (UID: \"69a141d8-12b6-4109-8050-e139aaaebbec\") "
Aug 13 01:12:41.329206 kubelet[2005]: I0813 01:12:41.328103 2005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/69a141d8-12b6-4109-8050-e139aaaebbec-xtables-lock\") pod \"69a141d8-12b6-4109-8050-e139aaaebbec\" (UID: \"69a141d8-12b6-4109-8050-e139aaaebbec\") "
Aug 13 01:12:41.329206 kubelet[2005]: I0813 01:12:41.328206 2005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69a141d8-12b6-4109-8050-e139aaaebbec-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "69a141d8-12b6-4109-8050-e139aaaebbec" (UID: "69a141d8-12b6-4109-8050-e139aaaebbec"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 01:12:41.329206 kubelet[2005]: I0813 01:12:41.328265 2005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69a141d8-12b6-4109-8050-e139aaaebbec-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "69a141d8-12b6-4109-8050-e139aaaebbec" (UID: "69a141d8-12b6-4109-8050-e139aaaebbec"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 01:12:41.329206 kubelet[2005]: I0813 01:12:41.328291 2005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69a141d8-12b6-4109-8050-e139aaaebbec-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "69a141d8-12b6-4109-8050-e139aaaebbec" (UID: "69a141d8-12b6-4109-8050-e139aaaebbec"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 01:12:41.329510 kubelet[2005]: I0813 01:12:41.328357 2005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69a141d8-12b6-4109-8050-e139aaaebbec-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "69a141d8-12b6-4109-8050-e139aaaebbec" (UID: "69a141d8-12b6-4109-8050-e139aaaebbec"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 01:12:41.329510 kubelet[2005]: I0813 01:12:41.328380 2005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69a141d8-12b6-4109-8050-e139aaaebbec-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "69a141d8-12b6-4109-8050-e139aaaebbec" (UID: "69a141d8-12b6-4109-8050-e139aaaebbec"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 01:12:41.331882 kubelet[2005]: I0813 01:12:41.331353 2005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69a141d8-12b6-4109-8050-e139aaaebbec-cni-path" (OuterVolumeSpecName: "cni-path") pod "69a141d8-12b6-4109-8050-e139aaaebbec" (UID: "69a141d8-12b6-4109-8050-e139aaaebbec"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 01:12:41.331882 kubelet[2005]: I0813 01:12:41.331784 2005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a7ad985-1d2a-44f0-b481-8cf72c789c5f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1a7ad985-1d2a-44f0-b481-8cf72c789c5f" (UID: "1a7ad985-1d2a-44f0-b481-8cf72c789c5f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Aug 13 01:12:41.336172 kubelet[2005]: I0813 01:12:41.336115 2005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69a141d8-12b6-4109-8050-e139aaaebbec-hostproc" (OuterVolumeSpecName: "hostproc") pod "69a141d8-12b6-4109-8050-e139aaaebbec" (UID: "69a141d8-12b6-4109-8050-e139aaaebbec"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 01:12:41.336379 kubelet[2005]: I0813 01:12:41.336176 2005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69a141d8-12b6-4109-8050-e139aaaebbec-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "69a141d8-12b6-4109-8050-e139aaaebbec" (UID: "69a141d8-12b6-4109-8050-e139aaaebbec"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 01:12:41.337934 kubelet[2005]: I0813 01:12:41.337882 2005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69a141d8-12b6-4109-8050-e139aaaebbec-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "69a141d8-12b6-4109-8050-e139aaaebbec" (UID: "69a141d8-12b6-4109-8050-e139aaaebbec"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Aug 13 01:12:41.338278 kubelet[2005]: I0813 01:12:41.338235 2005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69a141d8-12b6-4109-8050-e139aaaebbec-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "69a141d8-12b6-4109-8050-e139aaaebbec" (UID: "69a141d8-12b6-4109-8050-e139aaaebbec"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 01:12:41.338507 kubelet[2005]: I0813 01:12:41.338467 2005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69a141d8-12b6-4109-8050-e139aaaebbec-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "69a141d8-12b6-4109-8050-e139aaaebbec" (UID: "69a141d8-12b6-4109-8050-e139aaaebbec"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 01:12:41.344646 kubelet[2005]: I0813 01:12:41.344606 2005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69a141d8-12b6-4109-8050-e139aaaebbec-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "69a141d8-12b6-4109-8050-e139aaaebbec" (UID: "69a141d8-12b6-4109-8050-e139aaaebbec"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 01:12:41.345126 kubelet[2005]: I0813 01:12:41.344477 2005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69a141d8-12b6-4109-8050-e139aaaebbec-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "69a141d8-12b6-4109-8050-e139aaaebbec" (UID: "69a141d8-12b6-4109-8050-e139aaaebbec"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Aug 13 01:12:41.345286 kubelet[2005]: I0813 01:12:41.345110 2005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69a141d8-12b6-4109-8050-e139aaaebbec-kube-api-access-r4lvz" (OuterVolumeSpecName: "kube-api-access-r4lvz") pod "69a141d8-12b6-4109-8050-e139aaaebbec" (UID: "69a141d8-12b6-4109-8050-e139aaaebbec"). InnerVolumeSpecName "kube-api-access-r4lvz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 01:12:41.345832 kubelet[2005]: I0813 01:12:41.345800 2005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a7ad985-1d2a-44f0-b481-8cf72c789c5f-kube-api-access-59csf" (OuterVolumeSpecName: "kube-api-access-59csf") pod "1a7ad985-1d2a-44f0-b481-8cf72c789c5f" (UID: "1a7ad985-1d2a-44f0-b481-8cf72c789c5f"). InnerVolumeSpecName "kube-api-access-59csf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 01:12:41.414771 systemd[1]: Removed slice kubepods-besteffort-pod1a7ad985_1d2a_44f0_b481_8cf72c789c5f.slice.
Aug 13 01:12:41.417445 systemd[1]: Removed slice kubepods-burstable-pod69a141d8_12b6_4109_8050_e139aaaebbec.slice.
Aug 13 01:12:41.417603 systemd[1]: kubepods-burstable-pod69a141d8_12b6_4109_8050_e139aaaebbec.slice: Consumed 9.396s CPU time.
Aug 13 01:12:41.429288 kubelet[2005]: I0813 01:12:41.429242 2005 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/69a141d8-12b6-4109-8050-e139aaaebbec-hostproc\") on node \"ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal\" DevicePath \"\""
Aug 13 01:12:41.429288 kubelet[2005]: I0813 01:12:41.429286 2005 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/69a141d8-12b6-4109-8050-e139aaaebbec-bpf-maps\") on node \"ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal\" DevicePath \"\""
Aug 13 01:12:41.429574 kubelet[2005]: I0813 01:12:41.429352 2005 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/69a141d8-12b6-4109-8050-e139aaaebbec-cni-path\") on node \"ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal\" DevicePath \"\""
Aug 13 01:12:41.429574 kubelet[2005]: I0813 01:12:41.429371 2005 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/69a141d8-12b6-4109-8050-e139aaaebbec-hubble-tls\") on node \"ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal\" DevicePath \"\""
Aug 13 01:12:41.429574 kubelet[2005]: I0813 01:12:41.429388 2005 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/69a141d8-12b6-4109-8050-e139aaaebbec-cilium-config-path\") on node \"ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal\" DevicePath \"\""
Aug 13 01:12:41.429574 kubelet[2005]: I0813 01:12:41.429406 2005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-59csf\" (UniqueName: \"kubernetes.io/projected/1a7ad985-1d2a-44f0-b481-8cf72c789c5f-kube-api-access-59csf\") on node \"ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal\" DevicePath \"\""
Aug 13 01:12:41.429574 kubelet[2005]: I0813 01:12:41.429426 2005 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/69a141d8-12b6-4109-8050-e139aaaebbec-xtables-lock\") on node \"ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal\" DevicePath \"\""
Aug 13 01:12:41.429574 kubelet[2005]: I0813 01:12:41.429448 2005 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/69a141d8-12b6-4109-8050-e139aaaebbec-host-proc-sys-kernel\") on node \"ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal\" DevicePath \"\""
Aug 13 01:12:41.429574 kubelet[2005]: I0813 01:12:41.429464 2005 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/69a141d8-12b6-4109-8050-e139aaaebbec-etc-cni-netd\") on node \"ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal\" DevicePath \"\""
Aug 13 01:12:41.429855 kubelet[2005]: I0813 01:12:41.429480 2005 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1a7ad985-1d2a-44f0-b481-8cf72c789c5f-cilium-config-path\") on node \"ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal\" DevicePath \"\""
Aug 13 01:12:41.429855 kubelet[2005]: I0813 01:12:41.429495 2005 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/69a141d8-12b6-4109-8050-e139aaaebbec-host-proc-sys-net\") on node \"ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal\" DevicePath \"\""
Aug 13 01:12:41.429855 kubelet[2005]: I0813 01:12:41.429513 2005 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/69a141d8-12b6-4109-8050-e139aaaebbec-cilium-run\") on node \"ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal\" DevicePath \"\""
Aug 13 01:12:41.429855 kubelet[2005]: I0813 01:12:41.429531 2005 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/69a141d8-12b6-4109-8050-e139aaaebbec-lib-modules\") on node \"ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal\" DevicePath \"\""
Aug 13 01:12:41.429855 kubelet[2005]: I0813 01:12:41.429559 2005 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/69a141d8-12b6-4109-8050-e139aaaebbec-cilium-cgroup\") on node \"ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal\" DevicePath \"\""
Aug 13 01:12:41.429855 kubelet[2005]: I0813 01:12:41.429584 2005 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/69a141d8-12b6-4109-8050-e139aaaebbec-clustermesh-secrets\") on node \"ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal\" DevicePath \"\""
Aug 13 01:12:41.429855 kubelet[2005]: I0813 01:12:41.429602 2005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r4lvz\" (UniqueName: \"kubernetes.io/projected/69a141d8-12b6-4109-8050-e139aaaebbec-kube-api-access-r4lvz\") on node \"ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal\" DevicePath \"\""
Aug 13 01:12:41.728172 kubelet[2005]: I0813 01:12:41.728131 2005 scope.go:117] "RemoveContainer" containerID="a7cf43451748725100433519fb578392e9435e55277fea8ec1153d4bf305fbec"
Aug 13 01:12:41.735939 env[1228]: time="2025-08-13T01:12:41.735877206Z" level=info msg="RemoveContainer for \"a7cf43451748725100433519fb578392e9435e55277fea8ec1153d4bf305fbec\""
Aug 13 01:12:41.746396 env[1228]: time="2025-08-13T01:12:41.746324187Z" level=info msg="RemoveContainer for \"a7cf43451748725100433519fb578392e9435e55277fea8ec1153d4bf305fbec\" returns successfully"
Aug 13 01:12:41.748935 kubelet[2005]: I0813 01:12:41.748795 2005 scope.go:117] "RemoveContainer" containerID="c2c222db93a69104be5eab831fa9f295b0fa48735aff0880fa26e263133b592e"
Aug 13 01:12:41.751557 env[1228]: time="2025-08-13T01:12:41.751501005Z" level=info msg="RemoveContainer for \"c2c222db93a69104be5eab831fa9f295b0fa48735aff0880fa26e263133b592e\""
Aug 13 01:12:41.765092 env[1228]: time="2025-08-13T01:12:41.765033371Z" level=info msg="RemoveContainer for \"c2c222db93a69104be5eab831fa9f295b0fa48735aff0880fa26e263133b592e\" returns successfully"
Aug 13 01:12:41.766345 kubelet[2005]: I0813 01:12:41.765717 2005 scope.go:117] "RemoveContainer" containerID="6fb6b4cb0cf713dea4ea0666723ff0f79b428c2e829475b8036539f96dcf08f7"
Aug 13 01:12:41.768658 env[1228]: time="2025-08-13T01:12:41.768611983Z" level=info msg="RemoveContainer for \"6fb6b4cb0cf713dea4ea0666723ff0f79b428c2e829475b8036539f96dcf08f7\""
Aug 13 01:12:41.773228 env[1228]: time="2025-08-13T01:12:41.773163111Z" level=info msg="RemoveContainer for \"6fb6b4cb0cf713dea4ea0666723ff0f79b428c2e829475b8036539f96dcf08f7\" returns successfully"
Aug 13 01:12:41.775042 kubelet[2005]: I0813 01:12:41.773510 2005 scope.go:117] "RemoveContainer" containerID="f97b26d58d6c9a6c56ef7c56f04396da02bff6b0e4e8ca297dace9900ef9bfed"
Aug 13 01:12:41.777589 env[1228]: time="2025-08-13T01:12:41.777486261Z" level=info msg="RemoveContainer for \"f97b26d58d6c9a6c56ef7c56f04396da02bff6b0e4e8ca297dace9900ef9bfed\""
Aug 13 01:12:41.783998 env[1228]: time="2025-08-13T01:12:41.783946728Z" level=info msg="RemoveContainer for \"f97b26d58d6c9a6c56ef7c56f04396da02bff6b0e4e8ca297dace9900ef9bfed\" returns successfully"
Aug 13 01:12:41.784193 kubelet[2005]: I0813 01:12:41.784163 2005 scope.go:117] "RemoveContainer" containerID="686e987aaccb5bb3cd914b423d257465555651d8c026c5a46a2d655f8aa94f70"
Aug 13 01:12:41.785628 env[1228]: time="2025-08-13T01:12:41.785587134Z" level=info msg="RemoveContainer for \"686e987aaccb5bb3cd914b423d257465555651d8c026c5a46a2d655f8aa94f70\""
Aug 13 01:12:41.789765 env[1228]: time="2025-08-13T01:12:41.789719381Z" level=info msg="RemoveContainer for \"686e987aaccb5bb3cd914b423d257465555651d8c026c5a46a2d655f8aa94f70\" returns successfully"
Aug 13 01:12:41.790169 kubelet[2005]: I0813 01:12:41.790123 2005 scope.go:117] "RemoveContainer" containerID="a7cf43451748725100433519fb578392e9435e55277fea8ec1153d4bf305fbec"
Aug 13 01:12:41.790645 env[1228]: time="2025-08-13T01:12:41.790545241Z" level=error msg="ContainerStatus for \"a7cf43451748725100433519fb578392e9435e55277fea8ec1153d4bf305fbec\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a7cf43451748725100433519fb578392e9435e55277fea8ec1153d4bf305fbec\": not found"
Aug 13 01:12:41.790922 kubelet[2005]: E0813 01:12:41.790890 2005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a7cf43451748725100433519fb578392e9435e55277fea8ec1153d4bf305fbec\": not found" containerID="a7cf43451748725100433519fb578392e9435e55277fea8ec1153d4bf305fbec"
Aug 13 01:12:41.791066 kubelet[2005]: I0813 01:12:41.790938 2005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a7cf43451748725100433519fb578392e9435e55277fea8ec1153d4bf305fbec"} err="failed to get container status \"a7cf43451748725100433519fb578392e9435e55277fea8ec1153d4bf305fbec\": rpc error: code = NotFound desc = an error occurred when try to find container \"a7cf43451748725100433519fb578392e9435e55277fea8ec1153d4bf305fbec\": not found"
Aug 13 01:12:41.791169 kubelet[2005]: I0813 01:12:41.791069 2005 scope.go:117] "RemoveContainer" containerID="c2c222db93a69104be5eab831fa9f295b0fa48735aff0880fa26e263133b592e"
Aug 13 01:12:41.791464 env[1228]: time="2025-08-13T01:12:41.791388921Z" level=error msg="ContainerStatus for \"c2c222db93a69104be5eab831fa9f295b0fa48735aff0880fa26e263133b592e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c2c222db93a69104be5eab831fa9f295b0fa48735aff0880fa26e263133b592e\": not found"
Aug 13 01:12:41.791607 kubelet[2005]: E0813 01:12:41.791580 2005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c2c222db93a69104be5eab831fa9f295b0fa48735aff0880fa26e263133b592e\": not found" containerID="c2c222db93a69104be5eab831fa9f295b0fa48735aff0880fa26e263133b592e"
Aug 13 01:12:41.791729 kubelet[2005]: I0813 01:12:41.791618 2005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c2c222db93a69104be5eab831fa9f295b0fa48735aff0880fa26e263133b592e"} err="failed to get container status \"c2c222db93a69104be5eab831fa9f295b0fa48735aff0880fa26e263133b592e\": rpc error: code = NotFound desc = an error occurred when try to find container \"c2c222db93a69104be5eab831fa9f295b0fa48735aff0880fa26e263133b592e\": not found"
Aug 13 01:12:41.791729 kubelet[2005]: I0813 01:12:41.791657 2005 scope.go:117] "RemoveContainer" containerID="6fb6b4cb0cf713dea4ea0666723ff0f79b428c2e829475b8036539f96dcf08f7"
Aug 13 01:12:41.792126 env[1228]: time="2025-08-13T01:12:41.791910883Z" level=error msg="ContainerStatus for \"6fb6b4cb0cf713dea4ea0666723ff0f79b428c2e829475b8036539f96dcf08f7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6fb6b4cb0cf713dea4ea0666723ff0f79b428c2e829475b8036539f96dcf08f7\": not found"
Aug 13 01:12:41.792292 kubelet[2005]: E0813 01:12:41.792254 2005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6fb6b4cb0cf713dea4ea0666723ff0f79b428c2e829475b8036539f96dcf08f7\": not found" containerID="6fb6b4cb0cf713dea4ea0666723ff0f79b428c2e829475b8036539f96dcf08f7"
Aug 13 01:12:41.792413 kubelet[2005]: I0813 01:12:41.792332 2005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6fb6b4cb0cf713dea4ea0666723ff0f79b428c2e829475b8036539f96dcf08f7"} err="failed to get container status \"6fb6b4cb0cf713dea4ea0666723ff0f79b428c2e829475b8036539f96dcf08f7\": rpc error: code = NotFound desc = an error occurred when try to find container \"6fb6b4cb0cf713dea4ea0666723ff0f79b428c2e829475b8036539f96dcf08f7\": not found"
Aug 13 01:12:41.792413 kubelet[2005]: I0813 01:12:41.792360 2005 scope.go:117] "RemoveContainer" containerID="f97b26d58d6c9a6c56ef7c56f04396da02bff6b0e4e8ca297dace9900ef9bfed"
Aug 13 01:12:41.792717 env[1228]: time="2025-08-13T01:12:41.792613427Z" level=error msg="ContainerStatus for \"f97b26d58d6c9a6c56ef7c56f04396da02bff6b0e4e8ca297dace9900ef9bfed\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f97b26d58d6c9a6c56ef7c56f04396da02bff6b0e4e8ca297dace9900ef9bfed\": not found"
Aug 13 01:12:41.792886 kubelet[2005]: E0813 01:12:41.792844 2005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f97b26d58d6c9a6c56ef7c56f04396da02bff6b0e4e8ca297dace9900ef9bfed\": not found" containerID="f97b26d58d6c9a6c56ef7c56f04396da02bff6b0e4e8ca297dace9900ef9bfed"
Aug 13 01:12:41.792997 kubelet[2005]: I0813 01:12:41.792876 2005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f97b26d58d6c9a6c56ef7c56f04396da02bff6b0e4e8ca297dace9900ef9bfed"} err="failed to get container status \"f97b26d58d6c9a6c56ef7c56f04396da02bff6b0e4e8ca297dace9900ef9bfed\": rpc error: code = NotFound desc = an error occurred when try to find container \"f97b26d58d6c9a6c56ef7c56f04396da02bff6b0e4e8ca297dace9900ef9bfed\": not found"
Aug 13 01:12:41.792997 kubelet[2005]: I0813 01:12:41.792903 2005 scope.go:117] "RemoveContainer" containerID="686e987aaccb5bb3cd914b423d257465555651d8c026c5a46a2d655f8aa94f70"
Aug 13 01:12:41.793231 env[1228]: time="2025-08-13T01:12:41.793150644Z" level=error msg="ContainerStatus for \"686e987aaccb5bb3cd914b423d257465555651d8c026c5a46a2d655f8aa94f70\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"686e987aaccb5bb3cd914b423d257465555651d8c026c5a46a2d655f8aa94f70\": not found"
Aug 13 01:12:41.793423 kubelet[2005]: E0813 01:12:41.793369 2005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"686e987aaccb5bb3cd914b423d257465555651d8c026c5a46a2d655f8aa94f70\": not found" containerID="686e987aaccb5bb3cd914b423d257465555651d8c026c5a46a2d655f8aa94f70"
Aug 13 01:12:41.793423 kubelet[2005]: I0813 01:12:41.793414 2005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"686e987aaccb5bb3cd914b423d257465555651d8c026c5a46a2d655f8aa94f70"} err="failed to get container status \"686e987aaccb5bb3cd914b423d257465555651d8c026c5a46a2d655f8aa94f70\": rpc error: code = NotFound desc = an error occurred when try to find container \"686e987aaccb5bb3cd914b423d257465555651d8c026c5a46a2d655f8aa94f70\": not found"
Aug 13 01:12:41.793647 kubelet[2005]: I0813 01:12:41.793441 2005 scope.go:117] "RemoveContainer" containerID="a84a7519a1668062b6a7fae47b387860fd2586252cbbe9e097e9ed0b9218aad9"
Aug 13 01:12:41.794935 env[1228]: time="2025-08-13T01:12:41.794900740Z" level=info msg="RemoveContainer for \"a84a7519a1668062b6a7fae47b387860fd2586252cbbe9e097e9ed0b9218aad9\""
Aug 13 01:12:41.799001 env[1228]: time="2025-08-13T01:12:41.798925475Z" level=info msg="RemoveContainer for \"a84a7519a1668062b6a7fae47b387860fd2586252cbbe9e097e9ed0b9218aad9\" returns successfully"
Aug 13 01:12:41.799363 kubelet[2005]: I0813 01:12:41.799340 2005 scope.go:117] "RemoveContainer" containerID="a84a7519a1668062b6a7fae47b387860fd2586252cbbe9e097e9ed0b9218aad9"
Aug 13 01:12:41.799689 env[1228]: time="2025-08-13T01:12:41.799615367Z" level=error msg="ContainerStatus for \"a84a7519a1668062b6a7fae47b387860fd2586252cbbe9e097e9ed0b9218aad9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a84a7519a1668062b6a7fae47b387860fd2586252cbbe9e097e9ed0b9218aad9\": not found"
Aug 13 01:12:41.799828 kubelet[2005]: E0813 01:12:41.799795 2005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a84a7519a1668062b6a7fae47b387860fd2586252cbbe9e097e9ed0b9218aad9\": not found" containerID="a84a7519a1668062b6a7fae47b387860fd2586252cbbe9e097e9ed0b9218aad9"
Aug 13 01:12:41.799937 kubelet[2005]: I0813 01:12:41.799839 2005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a84a7519a1668062b6a7fae47b387860fd2586252cbbe9e097e9ed0b9218aad9"} err="failed to get container status \"a84a7519a1668062b6a7fae47b387860fd2586252cbbe9e097e9ed0b9218aad9\": rpc error: code = NotFound desc = an error occurred when try to find container \"a84a7519a1668062b6a7fae47b387860fd2586252cbbe9e097e9ed0b9218aad9\": not found"
Aug 13 01:12:41.945337 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e8542dd9ea92151d014a9c8a589e22be5873752be045af570d8d43fc414559b6-rootfs.mount: Deactivated successfully.
Aug 13 01:12:41.945495 systemd[1]: var-lib-kubelet-pods-69a141d8\x2d12b6\x2d4109\x2d8050\x2de139aaaebbec-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Aug 13 01:12:41.945603 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dd2f7d18b570e06710fd7a39e793068b0bb568beae7b4ae8631f41dae6cea083-rootfs.mount: Deactivated successfully.
Aug 13 01:12:41.945723 systemd[1]: var-lib-kubelet-pods-69a141d8\x2d12b6\x2d4109\x2d8050\x2de139aaaebbec-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Aug 13 01:12:41.945834 systemd[1]: var-lib-kubelet-pods-1a7ad985\x2d1d2a\x2d44f0\x2db481\x2d8cf72c789c5f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d59csf.mount: Deactivated successfully. Aug 13 01:12:41.945951 systemd[1]: var-lib-kubelet-pods-69a141d8\x2d12b6\x2d4109\x2d8050\x2de139aaaebbec-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dr4lvz.mount: Deactivated successfully. Aug 13 01:12:42.486094 kubelet[2005]: E0813 01:12:42.485998 2005 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Aug 13 01:12:42.899634 sshd[3584]: pam_unix(sshd:session): session closed for user core Aug 13 01:12:42.904629 systemd[1]: sshd@23-10.128.0.44:22-139.178.68.195:57574.service: Deactivated successfully. Aug 13 01:12:42.905790 systemd[1]: session-23.scope: Deactivated successfully. Aug 13 01:12:42.906927 systemd-logind[1219]: Session 23 logged out. Waiting for processes to exit. Aug 13 01:12:42.908255 systemd-logind[1219]: Removed session 23. Aug 13 01:12:42.946547 systemd[1]: Started sshd@24-10.128.0.44:22-139.178.68.195:45614.service. Aug 13 01:12:43.241354 sshd[3752]: Accepted publickey for core from 139.178.68.195 port 45614 ssh2: RSA SHA256:js7QLiUsN7/S4hY8YN2wBIB3ZHNNF040gi6scZJjeR4 Aug 13 01:12:43.243630 sshd[3752]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:12:43.250671 systemd[1]: Started session-24.scope. Aug 13 01:12:43.251789 systemd-logind[1219]: New session 24 of user core. 
Aug 13 01:12:43.409765 kubelet[2005]: I0813 01:12:43.409708 2005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a7ad985-1d2a-44f0-b481-8cf72c789c5f" path="/var/lib/kubelet/pods/1a7ad985-1d2a-44f0-b481-8cf72c789c5f/volumes" Aug 13 01:12:43.410581 kubelet[2005]: I0813 01:12:43.410544 2005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69a141d8-12b6-4109-8050-e139aaaebbec" path="/var/lib/kubelet/pods/69a141d8-12b6-4109-8050-e139aaaebbec/volumes" Aug 13 01:12:44.394041 kubelet[2005]: E0813 01:12:44.393998 2005 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1a7ad985-1d2a-44f0-b481-8cf72c789c5f" containerName="cilium-operator" Aug 13 01:12:44.394732 kubelet[2005]: E0813 01:12:44.394701 2005 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="69a141d8-12b6-4109-8050-e139aaaebbec" containerName="clean-cilium-state" Aug 13 01:12:44.394973 kubelet[2005]: E0813 01:12:44.394940 2005 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="69a141d8-12b6-4109-8050-e139aaaebbec" containerName="mount-cgroup" Aug 13 01:12:44.395151 kubelet[2005]: E0813 01:12:44.395129 2005 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="69a141d8-12b6-4109-8050-e139aaaebbec" containerName="apply-sysctl-overwrites" Aug 13 01:12:44.395330 kubelet[2005]: E0813 01:12:44.395289 2005 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="69a141d8-12b6-4109-8050-e139aaaebbec" containerName="mount-bpf-fs" Aug 13 01:12:44.395502 kubelet[2005]: E0813 01:12:44.395480 2005 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="69a141d8-12b6-4109-8050-e139aaaebbec" containerName="cilium-agent" Aug 13 01:12:44.395730 kubelet[2005]: I0813 01:12:44.395690 2005 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a7ad985-1d2a-44f0-b481-8cf72c789c5f" containerName="cilium-operator" Aug 13 01:12:44.395882 kubelet[2005]: I0813 01:12:44.395863 2005 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="69a141d8-12b6-4109-8050-e139aaaebbec" containerName="cilium-agent" Aug 13 01:12:44.404637 systemd[1]: Created slice kubepods-burstable-podc6fa1dba_0674_49ac_abff_acc0c540c0c3.slice. Aug 13 01:12:44.408436 sshd[3752]: pam_unix(sshd:session): session closed for user core Aug 13 01:12:44.412945 systemd[1]: sshd@24-10.128.0.44:22-139.178.68.195:45614.service: Deactivated successfully. Aug 13 01:12:44.414137 systemd[1]: session-24.scope: Deactivated successfully. Aug 13 01:12:44.416839 systemd-logind[1219]: Session 24 logged out. Waiting for processes to exit. Aug 13 01:12:44.419413 systemd-logind[1219]: Removed session 24. Aug 13 01:12:44.421374 kubelet[2005]: W0813 01:12:44.421283 2005 reflector.go:561] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal' and this object Aug 13 01:12:44.421374 kubelet[2005]: E0813 01:12:44.421352 2005 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal' and this object" logger="UnhandledError" Aug 13 01:12:44.421674 kubelet[2005]: W0813 01:12:44.421437 2005 reflector.go:561] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal" cannot list resource "secrets" in 
API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal' and this object Aug 13 01:12:44.421674 kubelet[2005]: E0813 01:12:44.421464 2005 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal' and this object" logger="UnhandledError" Aug 13 01:12:44.421674 kubelet[2005]: W0813 01:12:44.421528 2005 reflector.go:561] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal' and this object Aug 13 01:12:44.421674 kubelet[2005]: E0813 01:12:44.421549 2005 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-ipsec-keys\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-ipsec-keys\" is forbidden: User \"system:node:ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal' and this object" logger="UnhandledError" Aug 13 01:12:44.422133 kubelet[2005]: W0813 01:12:44.421283 2005 reflector.go:561] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal" 
cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal' and this object Aug 13 01:12:44.422620 kubelet[2005]: E0813 01:12:44.422584 2005 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal' and this object" logger="UnhandledError" Aug 13 01:12:44.456263 kubelet[2005]: I0813 01:12:44.456221 2005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c6fa1dba-0674-49ac-abff-acc0c540c0c3-cni-path\") pod \"cilium-bmjjx\" (UID: \"c6fa1dba-0674-49ac-abff-acc0c540c0c3\") " pod="kube-system/cilium-bmjjx" Aug 13 01:12:44.456632 kubelet[2005]: I0813 01:12:44.456584 2005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c6fa1dba-0674-49ac-abff-acc0c540c0c3-lib-modules\") pod \"cilium-bmjjx\" (UID: \"c6fa1dba-0674-49ac-abff-acc0c540c0c3\") " pod="kube-system/cilium-bmjjx" Aug 13 01:12:44.456833 kubelet[2005]: I0813 01:12:44.456810 2005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c6fa1dba-0674-49ac-abff-acc0c540c0c3-cilium-config-path\") pod \"cilium-bmjjx\" (UID: \"c6fa1dba-0674-49ac-abff-acc0c540c0c3\") " pod="kube-system/cilium-bmjjx" Aug 13 01:12:44.457028 kubelet[2005]: I0813 01:12:44.456995 2005 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c6fa1dba-0674-49ac-abff-acc0c540c0c3-host-proc-sys-kernel\") pod \"cilium-bmjjx\" (UID: \"c6fa1dba-0674-49ac-abff-acc0c540c0c3\") " pod="kube-system/cilium-bmjjx" Aug 13 01:12:44.457203 kubelet[2005]: I0813 01:12:44.457171 2005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c6fa1dba-0674-49ac-abff-acc0c540c0c3-etc-cni-netd\") pod \"cilium-bmjjx\" (UID: \"c6fa1dba-0674-49ac-abff-acc0c540c0c3\") " pod="kube-system/cilium-bmjjx" Aug 13 01:12:44.458620 kubelet[2005]: I0813 01:12:44.458575 2005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c6fa1dba-0674-49ac-abff-acc0c540c0c3-cilium-ipsec-secrets\") pod \"cilium-bmjjx\" (UID: \"c6fa1dba-0674-49ac-abff-acc0c540c0c3\") " pod="kube-system/cilium-bmjjx" Aug 13 01:12:44.458802 kubelet[2005]: I0813 01:12:44.458777 2005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c6fa1dba-0674-49ac-abff-acc0c540c0c3-host-proc-sys-net\") pod \"cilium-bmjjx\" (UID: \"c6fa1dba-0674-49ac-abff-acc0c540c0c3\") " pod="kube-system/cilium-bmjjx" Aug 13 01:12:44.459021 kubelet[2005]: I0813 01:12:44.458971 2005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c6fa1dba-0674-49ac-abff-acc0c540c0c3-hubble-tls\") pod \"cilium-bmjjx\" (UID: \"c6fa1dba-0674-49ac-abff-acc0c540c0c3\") " pod="kube-system/cilium-bmjjx" Aug 13 01:12:44.459169 kubelet[2005]: I0813 01:12:44.459148 2005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/c6fa1dba-0674-49ac-abff-acc0c540c0c3-hostproc\") pod \"cilium-bmjjx\" (UID: \"c6fa1dba-0674-49ac-abff-acc0c540c0c3\") " pod="kube-system/cilium-bmjjx" Aug 13 01:12:44.460061 systemd[1]: Started sshd@25-10.128.0.44:22-139.178.68.195:45630.service. Aug 13 01:12:44.461926 kubelet[2005]: I0813 01:12:44.459289 2005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c6fa1dba-0674-49ac-abff-acc0c540c0c3-xtables-lock\") pod \"cilium-bmjjx\" (UID: \"c6fa1dba-0674-49ac-abff-acc0c540c0c3\") " pod="kube-system/cilium-bmjjx" Aug 13 01:12:44.462155 kubelet[2005]: I0813 01:12:44.462118 2005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c6fa1dba-0674-49ac-abff-acc0c540c0c3-bpf-maps\") pod \"cilium-bmjjx\" (UID: \"c6fa1dba-0674-49ac-abff-acc0c540c0c3\") " pod="kube-system/cilium-bmjjx" Aug 13 01:12:44.462350 kubelet[2005]: I0813 01:12:44.462325 2005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c6fa1dba-0674-49ac-abff-acc0c540c0c3-cilium-cgroup\") pod \"cilium-bmjjx\" (UID: \"c6fa1dba-0674-49ac-abff-acc0c540c0c3\") " pod="kube-system/cilium-bmjjx" Aug 13 01:12:44.462510 kubelet[2005]: I0813 01:12:44.462488 2005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c6fa1dba-0674-49ac-abff-acc0c540c0c3-clustermesh-secrets\") pod \"cilium-bmjjx\" (UID: \"c6fa1dba-0674-49ac-abff-acc0c540c0c3\") " pod="kube-system/cilium-bmjjx" Aug 13 01:12:44.462691 kubelet[2005]: I0813 01:12:44.462667 2005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tx7ww\" (UniqueName: 
\"kubernetes.io/projected/c6fa1dba-0674-49ac-abff-acc0c540c0c3-kube-api-access-tx7ww\") pod \"cilium-bmjjx\" (UID: \"c6fa1dba-0674-49ac-abff-acc0c540c0c3\") " pod="kube-system/cilium-bmjjx" Aug 13 01:12:44.462858 kubelet[2005]: I0813 01:12:44.462837 2005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c6fa1dba-0674-49ac-abff-acc0c540c0c3-cilium-run\") pod \"cilium-bmjjx\" (UID: \"c6fa1dba-0674-49ac-abff-acc0c540c0c3\") " pod="kube-system/cilium-bmjjx" Aug 13 01:12:44.767492 sshd[3762]: Accepted publickey for core from 139.178.68.195 port 45630 ssh2: RSA SHA256:js7QLiUsN7/S4hY8YN2wBIB3ZHNNF040gi6scZJjeR4 Aug 13 01:12:44.770020 sshd[3762]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:12:44.777245 systemd[1]: Started session-25.scope. Aug 13 01:12:44.777905 systemd-logind[1219]: New session 25 of user core. Aug 13 01:12:45.058428 kubelet[2005]: E0813 01:12:45.058248 2005 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[cilium-config-path cilium-ipsec-secrets clustermesh-secrets hubble-tls], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-bmjjx" podUID="c6fa1dba-0674-49ac-abff-acc0c540c0c3" Aug 13 01:12:45.077627 sshd[3762]: pam_unix(sshd:session): session closed for user core Aug 13 01:12:45.082436 systemd[1]: sshd@25-10.128.0.44:22-139.178.68.195:45630.service: Deactivated successfully. Aug 13 01:12:45.083651 systemd[1]: session-25.scope: Deactivated successfully. Aug 13 01:12:45.084654 systemd-logind[1219]: Session 25 logged out. Waiting for processes to exit. Aug 13 01:12:45.086141 systemd-logind[1219]: Removed session 25. Aug 13 01:12:45.123131 systemd[1]: Started sshd@26-10.128.0.44:22-139.178.68.195:45634.service. 
Aug 13 01:12:45.427060 sshd[3775]: Accepted publickey for core from 139.178.68.195 port 45634 ssh2: RSA SHA256:js7QLiUsN7/S4hY8YN2wBIB3ZHNNF040gi6scZJjeR4 Aug 13 01:12:45.427550 sshd[3775]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:12:45.436950 systemd[1]: Started session-26.scope. Aug 13 01:12:45.437926 systemd-logind[1219]: New session 26 of user core. Aug 13 01:12:45.565739 kubelet[2005]: E0813 01:12:45.565692 2005 secret.go:189] Couldn't get secret kube-system/cilium-ipsec-keys: failed to sync secret cache: timed out waiting for the condition Aug 13 01:12:45.566229 kubelet[2005]: E0813 01:12:45.565809 2005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c6fa1dba-0674-49ac-abff-acc0c540c0c3-cilium-ipsec-secrets podName:c6fa1dba-0674-49ac-abff-acc0c540c0c3 nodeName:}" failed. No retries permitted until 2025-08-13 01:12:46.065783449 +0000 UTC m=+128.856575736 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-ipsec-secrets" (UniqueName: "kubernetes.io/secret/c6fa1dba-0674-49ac-abff-acc0c540c0c3-cilium-ipsec-secrets") pod "cilium-bmjjx" (UID: "c6fa1dba-0674-49ac-abff-acc0c540c0c3") : failed to sync secret cache: timed out waiting for the condition Aug 13 01:12:45.566229 kubelet[2005]: E0813 01:12:45.566148 2005 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Aug 13 01:12:45.566229 kubelet[2005]: E0813 01:12:45.566212 2005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c6fa1dba-0674-49ac-abff-acc0c540c0c3-cilium-config-path podName:c6fa1dba-0674-49ac-abff-acc0c540c0c3 nodeName:}" failed. No retries permitted until 2025-08-13 01:12:46.066194396 +0000 UTC m=+128.856986660 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/c6fa1dba-0674-49ac-abff-acc0c540c0c3-cilium-config-path") pod "cilium-bmjjx" (UID: "c6fa1dba-0674-49ac-abff-acc0c540c0c3") : failed to sync configmap cache: timed out waiting for the condition Aug 13 01:12:45.873321 kubelet[2005]: I0813 01:12:45.873227 2005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c6fa1dba-0674-49ac-abff-acc0c540c0c3-lib-modules\") pod \"c6fa1dba-0674-49ac-abff-acc0c540c0c3\" (UID: \"c6fa1dba-0674-49ac-abff-acc0c540c0c3\") " Aug 13 01:12:45.873555 kubelet[2005]: I0813 01:12:45.873350 2005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c6fa1dba-0674-49ac-abff-acc0c540c0c3-etc-cni-netd\") pod \"c6fa1dba-0674-49ac-abff-acc0c540c0c3\" (UID: \"c6fa1dba-0674-49ac-abff-acc0c540c0c3\") " Aug 13 01:12:45.873555 kubelet[2005]: I0813 01:12:45.873389 2005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c6fa1dba-0674-49ac-abff-acc0c540c0c3-hostproc\") pod \"c6fa1dba-0674-49ac-abff-acc0c540c0c3\" (UID: \"c6fa1dba-0674-49ac-abff-acc0c540c0c3\") " Aug 13 01:12:45.873555 kubelet[2005]: I0813 01:12:45.873415 2005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c6fa1dba-0674-49ac-abff-acc0c540c0c3-cni-path\") pod \"c6fa1dba-0674-49ac-abff-acc0c540c0c3\" (UID: \"c6fa1dba-0674-49ac-abff-acc0c540c0c3\") " Aug 13 01:12:45.873555 kubelet[2005]: I0813 01:12:45.873447 2005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c6fa1dba-0674-49ac-abff-acc0c540c0c3-clustermesh-secrets\") pod \"c6fa1dba-0674-49ac-abff-acc0c540c0c3\" (UID: 
\"c6fa1dba-0674-49ac-abff-acc0c540c0c3\") " Aug 13 01:12:45.873555 kubelet[2005]: I0813 01:12:45.873481 2005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tx7ww\" (UniqueName: \"kubernetes.io/projected/c6fa1dba-0674-49ac-abff-acc0c540c0c3-kube-api-access-tx7ww\") pod \"c6fa1dba-0674-49ac-abff-acc0c540c0c3\" (UID: \"c6fa1dba-0674-49ac-abff-acc0c540c0c3\") " Aug 13 01:12:45.873555 kubelet[2005]: I0813 01:12:45.873508 2005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c6fa1dba-0674-49ac-abff-acc0c540c0c3-cilium-cgroup\") pod \"c6fa1dba-0674-49ac-abff-acc0c540c0c3\" (UID: \"c6fa1dba-0674-49ac-abff-acc0c540c0c3\") " Aug 13 01:12:45.873931 kubelet[2005]: I0813 01:12:45.873530 2005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c6fa1dba-0674-49ac-abff-acc0c540c0c3-xtables-lock\") pod \"c6fa1dba-0674-49ac-abff-acc0c540c0c3\" (UID: \"c6fa1dba-0674-49ac-abff-acc0c540c0c3\") " Aug 13 01:12:45.873931 kubelet[2005]: I0813 01:12:45.873560 2005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c6fa1dba-0674-49ac-abff-acc0c540c0c3-hubble-tls\") pod \"c6fa1dba-0674-49ac-abff-acc0c540c0c3\" (UID: \"c6fa1dba-0674-49ac-abff-acc0c540c0c3\") " Aug 13 01:12:45.873931 kubelet[2005]: I0813 01:12:45.873584 2005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c6fa1dba-0674-49ac-abff-acc0c540c0c3-host-proc-sys-kernel\") pod \"c6fa1dba-0674-49ac-abff-acc0c540c0c3\" (UID: \"c6fa1dba-0674-49ac-abff-acc0c540c0c3\") " Aug 13 01:12:45.873931 kubelet[2005]: I0813 01:12:45.873612 2005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/c6fa1dba-0674-49ac-abff-acc0c540c0c3-host-proc-sys-net\") pod \"c6fa1dba-0674-49ac-abff-acc0c540c0c3\" (UID: \"c6fa1dba-0674-49ac-abff-acc0c540c0c3\") " Aug 13 01:12:45.873931 kubelet[2005]: I0813 01:12:45.873637 2005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c6fa1dba-0674-49ac-abff-acc0c540c0c3-cilium-run\") pod \"c6fa1dba-0674-49ac-abff-acc0c540c0c3\" (UID: \"c6fa1dba-0674-49ac-abff-acc0c540c0c3\") " Aug 13 01:12:45.873931 kubelet[2005]: I0813 01:12:45.873661 2005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c6fa1dba-0674-49ac-abff-acc0c540c0c3-bpf-maps\") pod \"c6fa1dba-0674-49ac-abff-acc0c540c0c3\" (UID: \"c6fa1dba-0674-49ac-abff-acc0c540c0c3\") " Aug 13 01:12:45.874274 kubelet[2005]: I0813 01:12:45.873227 2005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6fa1dba-0674-49ac-abff-acc0c540c0c3-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c6fa1dba-0674-49ac-abff-acc0c540c0c3" (UID: "c6fa1dba-0674-49ac-abff-acc0c540c0c3"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 01:12:45.874274 kubelet[2005]: I0813 01:12:45.873773 2005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6fa1dba-0674-49ac-abff-acc0c540c0c3-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "c6fa1dba-0674-49ac-abff-acc0c540c0c3" (UID: "c6fa1dba-0674-49ac-abff-acc0c540c0c3"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 01:12:45.874274 kubelet[2005]: I0813 01:12:45.873826 2005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6fa1dba-0674-49ac-abff-acc0c540c0c3-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "c6fa1dba-0674-49ac-abff-acc0c540c0c3" (UID: "c6fa1dba-0674-49ac-abff-acc0c540c0c3"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 01:12:45.874274 kubelet[2005]: I0813 01:12:45.873853 2005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6fa1dba-0674-49ac-abff-acc0c540c0c3-hostproc" (OuterVolumeSpecName: "hostproc") pod "c6fa1dba-0674-49ac-abff-acc0c540c0c3" (UID: "c6fa1dba-0674-49ac-abff-acc0c540c0c3"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 01:12:45.874274 kubelet[2005]: I0813 01:12:45.873891 2005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6fa1dba-0674-49ac-abff-acc0c540c0c3-cni-path" (OuterVolumeSpecName: "cni-path") pod "c6fa1dba-0674-49ac-abff-acc0c540c0c3" (UID: "c6fa1dba-0674-49ac-abff-acc0c540c0c3"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 01:12:45.875485 kubelet[2005]: I0813 01:12:45.875433 2005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6fa1dba-0674-49ac-abff-acc0c540c0c3-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "c6fa1dba-0674-49ac-abff-acc0c540c0c3" (UID: "c6fa1dba-0674-49ac-abff-acc0c540c0c3"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 01:12:45.875727 kubelet[2005]: I0813 01:12:45.875688 2005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6fa1dba-0674-49ac-abff-acc0c540c0c3-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "c6fa1dba-0674-49ac-abff-acc0c540c0c3" (UID: "c6fa1dba-0674-49ac-abff-acc0c540c0c3"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 01:12:45.875909 kubelet[2005]: I0813 01:12:45.875883 2005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6fa1dba-0674-49ac-abff-acc0c540c0c3-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "c6fa1dba-0674-49ac-abff-acc0c540c0c3" (UID: "c6fa1dba-0674-49ac-abff-acc0c540c0c3"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 01:12:45.876067 kubelet[2005]: I0813 01:12:45.876044 2005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6fa1dba-0674-49ac-abff-acc0c540c0c3-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "c6fa1dba-0674-49ac-abff-acc0c540c0c3" (UID: "c6fa1dba-0674-49ac-abff-acc0c540c0c3"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 01:12:45.876240 kubelet[2005]: I0813 01:12:45.876217 2005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6fa1dba-0674-49ac-abff-acc0c540c0c3-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "c6fa1dba-0674-49ac-abff-acc0c540c0c3" (UID: "c6fa1dba-0674-49ac-abff-acc0c540c0c3"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 01:12:45.884665 kubelet[2005]: I0813 01:12:45.884609 2005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6fa1dba-0674-49ac-abff-acc0c540c0c3-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "c6fa1dba-0674-49ac-abff-acc0c540c0c3" (UID: "c6fa1dba-0674-49ac-abff-acc0c540c0c3"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 01:12:45.885294 systemd[1]: var-lib-kubelet-pods-c6fa1dba\x2d0674\x2d49ac\x2dabff\x2dacc0c540c0c3-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Aug 13 01:12:45.886641 kubelet[2005]: I0813 01:12:45.886587 2005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6fa1dba-0674-49ac-abff-acc0c540c0c3-kube-api-access-tx7ww" (OuterVolumeSpecName: "kube-api-access-tx7ww") pod "c6fa1dba-0674-49ac-abff-acc0c540c0c3" (UID: "c6fa1dba-0674-49ac-abff-acc0c540c0c3"). InnerVolumeSpecName "kube-api-access-tx7ww". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 01:12:45.891319 systemd[1]: var-lib-kubelet-pods-c6fa1dba\x2d0674\x2d49ac\x2dabff\x2dacc0c540c0c3-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Aug 13 01:12:45.891470 systemd[1]: var-lib-kubelet-pods-c6fa1dba\x2d0674\x2d49ac\x2dabff\x2dacc0c540c0c3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtx7ww.mount: Deactivated successfully. Aug 13 01:12:45.895619 kubelet[2005]: I0813 01:12:45.895495 2005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6fa1dba-0674-49ac-abff-acc0c540c0c3-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "c6fa1dba-0674-49ac-abff-acc0c540c0c3" (UID: "c6fa1dba-0674-49ac-abff-acc0c540c0c3"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 01:12:45.974764 kubelet[2005]: I0813 01:12:45.974696 2005 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c6fa1dba-0674-49ac-abff-acc0c540c0c3-xtables-lock\") on node \"ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal\" DevicePath \"\""
Aug 13 01:12:45.974764 kubelet[2005]: I0813 01:12:45.974746 2005 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c6fa1dba-0674-49ac-abff-acc0c540c0c3-host-proc-sys-kernel\") on node \"ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal\" DevicePath \"\""
Aug 13 01:12:45.974764 kubelet[2005]: I0813 01:12:45.974771 2005 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c6fa1dba-0674-49ac-abff-acc0c540c0c3-host-proc-sys-net\") on node \"ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal\" DevicePath \"\""
Aug 13 01:12:45.975099 kubelet[2005]: I0813 01:12:45.974786 2005 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c6fa1dba-0674-49ac-abff-acc0c540c0c3-hubble-tls\") on node \"ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal\" DevicePath \"\""
Aug 13 01:12:45.975099 kubelet[2005]: I0813 01:12:45.974801 2005 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c6fa1dba-0674-49ac-abff-acc0c540c0c3-cilium-run\") on node \"ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal\" DevicePath \"\""
Aug 13 01:12:45.975099 kubelet[2005]: I0813 01:12:45.974817 2005 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c6fa1dba-0674-49ac-abff-acc0c540c0c3-bpf-maps\") on node \"ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal\" DevicePath \"\""
Aug 13 01:12:45.975099 kubelet[2005]: I0813 01:12:45.974833 2005 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c6fa1dba-0674-49ac-abff-acc0c540c0c3-etc-cni-netd\") on node \"ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal\" DevicePath \"\""
Aug 13 01:12:45.975099 kubelet[2005]: I0813 01:12:45.974849 2005 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c6fa1dba-0674-49ac-abff-acc0c540c0c3-hostproc\") on node \"ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal\" DevicePath \"\""
Aug 13 01:12:45.975099 kubelet[2005]: I0813 01:12:45.974862 2005 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c6fa1dba-0674-49ac-abff-acc0c540c0c3-cni-path\") on node \"ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal\" DevicePath \"\""
Aug 13 01:12:45.975099 kubelet[2005]: I0813 01:12:45.974890 2005 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c6fa1dba-0674-49ac-abff-acc0c540c0c3-lib-modules\") on node \"ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal\" DevicePath \"\""
Aug 13 01:12:45.975371 kubelet[2005]: I0813 01:12:45.974907 2005 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c6fa1dba-0674-49ac-abff-acc0c540c0c3-clustermesh-secrets\") on node \"ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal\" DevicePath \"\""
Aug 13 01:12:45.975371 kubelet[2005]: I0813 01:12:45.974929 2005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tx7ww\" (UniqueName: \"kubernetes.io/projected/c6fa1dba-0674-49ac-abff-acc0c540c0c3-kube-api-access-tx7ww\") on node \"ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal\" DevicePath \"\""
Aug 13 01:12:45.975371 kubelet[2005]: I0813 01:12:45.974946 2005 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c6fa1dba-0674-49ac-abff-acc0c540c0c3-cilium-cgroup\") on node \"ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal\" DevicePath \"\""
Aug 13 01:12:46.178957 kubelet[2005]: I0813 01:12:46.176737 2005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c6fa1dba-0674-49ac-abff-acc0c540c0c3-cilium-ipsec-secrets\") pod \"c6fa1dba-0674-49ac-abff-acc0c540c0c3\" (UID: \"c6fa1dba-0674-49ac-abff-acc0c540c0c3\") "
Aug 13 01:12:46.178957 kubelet[2005]: I0813 01:12:46.176820 2005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c6fa1dba-0674-49ac-abff-acc0c540c0c3-cilium-config-path\") pod \"c6fa1dba-0674-49ac-abff-acc0c540c0c3\" (UID: \"c6fa1dba-0674-49ac-abff-acc0c540c0c3\") "
Aug 13 01:12:46.180648 kubelet[2005]: I0813 01:12:46.180581 2005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6fa1dba-0674-49ac-abff-acc0c540c0c3-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c6fa1dba-0674-49ac-abff-acc0c540c0c3" (UID: "c6fa1dba-0674-49ac-abff-acc0c540c0c3"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Aug 13 01:12:46.186242 systemd[1]: var-lib-kubelet-pods-c6fa1dba\x2d0674\x2d49ac\x2dabff\x2dacc0c540c0c3-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Aug 13 01:12:46.187073 kubelet[2005]: I0813 01:12:46.187031 2005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6fa1dba-0674-49ac-abff-acc0c540c0c3-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "c6fa1dba-0674-49ac-abff-acc0c540c0c3" (UID: "c6fa1dba-0674-49ac-abff-acc0c540c0c3"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Aug 13 01:12:46.277852 kubelet[2005]: I0813 01:12:46.277785 2005 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c6fa1dba-0674-49ac-abff-acc0c540c0c3-cilium-config-path\") on node \"ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal\" DevicePath \"\""
Aug 13 01:12:46.277852 kubelet[2005]: I0813 01:12:46.277832 2005 reconciler_common.go:293] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c6fa1dba-0674-49ac-abff-acc0c540c0c3-cilium-ipsec-secrets\") on node \"ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal\" DevicePath \"\""
Aug 13 01:12:46.757916 systemd[1]: Removed slice kubepods-burstable-podc6fa1dba_0674_49ac_abff_acc0c540c0c3.slice.
Aug 13 01:12:46.849191 systemd[1]: Created slice kubepods-burstable-podaf2c17dc_889e_4493_b7d5_b2864a084ff7.slice.
Aug 13 01:12:46.983247 kubelet[2005]: I0813 01:12:46.983186 2005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/af2c17dc-889e-4493-b7d5-b2864a084ff7-hostproc\") pod \"cilium-2zvwq\" (UID: \"af2c17dc-889e-4493-b7d5-b2864a084ff7\") " pod="kube-system/cilium-2zvwq"
Aug 13 01:12:46.983888 kubelet[2005]: I0813 01:12:46.983412 2005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/af2c17dc-889e-4493-b7d5-b2864a084ff7-clustermesh-secrets\") pod \"cilium-2zvwq\" (UID: \"af2c17dc-889e-4493-b7d5-b2864a084ff7\") " pod="kube-system/cilium-2zvwq"
Aug 13 01:12:46.983888 kubelet[2005]: I0813 01:12:46.983464 2005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/af2c17dc-889e-4493-b7d5-b2864a084ff7-lib-modules\") pod \"cilium-2zvwq\" (UID:
\"af2c17dc-889e-4493-b7d5-b2864a084ff7\") " pod="kube-system/cilium-2zvwq"
Aug 13 01:12:46.983888 kubelet[2005]: I0813 01:12:46.983511 2005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/af2c17dc-889e-4493-b7d5-b2864a084ff7-bpf-maps\") pod \"cilium-2zvwq\" (UID: \"af2c17dc-889e-4493-b7d5-b2864a084ff7\") " pod="kube-system/cilium-2zvwq"
Aug 13 01:12:46.983888 kubelet[2005]: I0813 01:12:46.983542 2005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/af2c17dc-889e-4493-b7d5-b2864a084ff7-xtables-lock\") pod \"cilium-2zvwq\" (UID: \"af2c17dc-889e-4493-b7d5-b2864a084ff7\") " pod="kube-system/cilium-2zvwq"
Aug 13 01:12:46.983888 kubelet[2005]: I0813 01:12:46.983572 2005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/af2c17dc-889e-4493-b7d5-b2864a084ff7-hubble-tls\") pod \"cilium-2zvwq\" (UID: \"af2c17dc-889e-4493-b7d5-b2864a084ff7\") " pod="kube-system/cilium-2zvwq"
Aug 13 01:12:46.983888 kubelet[2005]: I0813 01:12:46.983597 2005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/af2c17dc-889e-4493-b7d5-b2864a084ff7-cilium-cgroup\") pod \"cilium-2zvwq\" (UID: \"af2c17dc-889e-4493-b7d5-b2864a084ff7\") " pod="kube-system/cilium-2zvwq"
Aug 13 01:12:46.984194 kubelet[2005]: I0813 01:12:46.983636 2005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/af2c17dc-889e-4493-b7d5-b2864a084ff7-cni-path\") pod \"cilium-2zvwq\" (UID: \"af2c17dc-889e-4493-b7d5-b2864a084ff7\") " pod="kube-system/cilium-2zvwq"
Aug 13 01:12:46.984194 kubelet[2005]: I0813 01:12:46.983669 2005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/af2c17dc-889e-4493-b7d5-b2864a084ff7-cilium-ipsec-secrets\") pod \"cilium-2zvwq\" (UID: \"af2c17dc-889e-4493-b7d5-b2864a084ff7\") " pod="kube-system/cilium-2zvwq"
Aug 13 01:12:46.984194 kubelet[2005]: I0813 01:12:46.983723 2005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/af2c17dc-889e-4493-b7d5-b2864a084ff7-host-proc-sys-kernel\") pod \"cilium-2zvwq\" (UID: \"af2c17dc-889e-4493-b7d5-b2864a084ff7\") " pod="kube-system/cilium-2zvwq"
Aug 13 01:12:46.984194 kubelet[2005]: I0813 01:12:46.983752 2005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/af2c17dc-889e-4493-b7d5-b2864a084ff7-cilium-run\") pod \"cilium-2zvwq\" (UID: \"af2c17dc-889e-4493-b7d5-b2864a084ff7\") " pod="kube-system/cilium-2zvwq"
Aug 13 01:12:46.984194 kubelet[2005]: I0813 01:12:46.983776 2005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/af2c17dc-889e-4493-b7d5-b2864a084ff7-cilium-config-path\") pod \"cilium-2zvwq\" (UID: \"af2c17dc-889e-4493-b7d5-b2864a084ff7\") " pod="kube-system/cilium-2zvwq"
Aug 13 01:12:46.984427 kubelet[2005]: I0813 01:12:46.983816 2005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bp5cr\" (UniqueName: \"kubernetes.io/projected/af2c17dc-889e-4493-b7d5-b2864a084ff7-kube-api-access-bp5cr\") pod \"cilium-2zvwq\" (UID: \"af2c17dc-889e-4493-b7d5-b2864a084ff7\") " pod="kube-system/cilium-2zvwq"
Aug 13 01:12:46.984427 kubelet[2005]: I0813 01:12:46.983842 2005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/af2c17dc-889e-4493-b7d5-b2864a084ff7-etc-cni-netd\") pod \"cilium-2zvwq\" (UID: \"af2c17dc-889e-4493-b7d5-b2864a084ff7\") " pod="kube-system/cilium-2zvwq"
Aug 13 01:12:46.984427 kubelet[2005]: I0813 01:12:46.983866 2005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/af2c17dc-889e-4493-b7d5-b2864a084ff7-host-proc-sys-net\") pod \"cilium-2zvwq\" (UID: \"af2c17dc-889e-4493-b7d5-b2864a084ff7\") " pod="kube-system/cilium-2zvwq"
Aug 13 01:12:47.158130 env[1228]: time="2025-08-13T01:12:47.157962450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2zvwq,Uid:af2c17dc-889e-4493-b7d5-b2864a084ff7,Namespace:kube-system,Attempt:0,}"
Aug 13 01:12:47.185387 env[1228]: time="2025-08-13T01:12:47.184166762Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 01:12:47.185387 env[1228]: time="2025-08-13T01:12:47.184265813Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 01:12:47.185387 env[1228]: time="2025-08-13T01:12:47.184343086Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 01:12:47.185387 env[1228]: time="2025-08-13T01:12:47.184585924Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/18013d545ef42a30a794cfb0fdc7f6c6c754f25b6c23d8a1f5e5259e61a8d4e5 pid=3803 runtime=io.containerd.runc.v2
Aug 13 01:12:47.206086 systemd[1]: Started cri-containerd-18013d545ef42a30a794cfb0fdc7f6c6c754f25b6c23d8a1f5e5259e61a8d4e5.scope.
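The kubelet reconciler entries above embed each volume's UniqueName as a `\"`-escaped quoted string of the form `kubernetes.io/<plugin>/<qualifier>`. A minimal parsing sketch that tallies volumes per plugin; the regex is an assumption inferred from the quoting seen in this particular log, not a format kubelet guarantees:

```python
import re
from collections import defaultdict

# Two sample entries copied from the log above (klog escapes the inner
# quotes of structured-log values as \").
LOG = r'''
kubelet[2005]: I0813 01:12:46.983186 2005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/af2c17dc-889e-4493-b7d5-b2864a084ff7-hostproc\") pod \"cilium-2zvwq\" (UID: \"af2c17dc-889e-4493-b7d5-b2864a084ff7\") " pod="kube-system/cilium-2zvwq"
kubelet[2005]: I0813 01:12:46.983412 2005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/af2c17dc-889e-4493-b7d5-b2864a084ff7-clustermesh-secrets\") pod \"cilium-2zvwq\" (UID: \"af2c17dc-889e-4493-b7d5-b2864a084ff7\") " pod="kube-system/cilium-2zvwq"
'''

# volume \"<name>\" (UniqueName: \"kubernetes.io/<plugin>/<qualifier>\")
entry = re.compile(
    r'volume \\"(?P<volume>[^"\\]+)\\" '
    r'\(UniqueName: \\"kubernetes\.io/(?P<plugin>[^/]+)/[^"\\]+\\"\)'
)

volumes_by_plugin = defaultdict(list)
for m in entry.finditer(LOG):
    volumes_by_plugin[m.group("plugin")].append(m.group("volume"))

print(dict(volumes_by_plugin))
# -> {'host-path': ['hostproc'], 'secret': ['clustermesh-secrets']}
```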
Aug 13 01:12:47.273126 env[1228]: time="2025-08-13T01:12:47.273070535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2zvwq,Uid:af2c17dc-889e-4493-b7d5-b2864a084ff7,Namespace:kube-system,Attempt:0,} returns sandbox id \"18013d545ef42a30a794cfb0fdc7f6c6c754f25b6c23d8a1f5e5259e61a8d4e5\""
Aug 13 01:12:47.278737 env[1228]: time="2025-08-13T01:12:47.278544428Z" level=info msg="CreateContainer within sandbox \"18013d545ef42a30a794cfb0fdc7f6c6c754f25b6c23d8a1f5e5259e61a8d4e5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Aug 13 01:12:47.292052 env[1228]: time="2025-08-13T01:12:47.292010697Z" level=info msg="CreateContainer within sandbox \"18013d545ef42a30a794cfb0fdc7f6c6c754f25b6c23d8a1f5e5259e61a8d4e5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8e391a1bf3d2513af9a22ea633edfe5cee97943aa176a49942f0d1aaa9d07b7a\""
Aug 13 01:12:47.294776 env[1228]: time="2025-08-13T01:12:47.294733655Z" level=info msg="StartContainer for \"8e391a1bf3d2513af9a22ea633edfe5cee97943aa176a49942f0d1aaa9d07b7a\""
Aug 13 01:12:47.319016 systemd[1]: Started cri-containerd-8e391a1bf3d2513af9a22ea633edfe5cee97943aa176a49942f0d1aaa9d07b7a.scope.
Aug 13 01:12:47.366396 env[1228]: time="2025-08-13T01:12:47.363221551Z" level=info msg="StartContainer for \"8e391a1bf3d2513af9a22ea633edfe5cee97943aa176a49942f0d1aaa9d07b7a\" returns successfully"
Aug 13 01:12:47.378947 systemd[1]: cri-containerd-8e391a1bf3d2513af9a22ea633edfe5cee97943aa176a49942f0d1aaa9d07b7a.scope: Deactivated successfully.
Aug 13 01:12:47.413426 kubelet[2005]: I0813 01:12:47.412938 2005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6fa1dba-0674-49ac-abff-acc0c540c0c3" path="/var/lib/kubelet/pods/c6fa1dba-0674-49ac-abff-acc0c540c0c3/volumes"
Aug 13 01:12:47.435515 env[1228]: time="2025-08-13T01:12:47.435439034Z" level=info msg="shim disconnected" id=8e391a1bf3d2513af9a22ea633edfe5cee97943aa176a49942f0d1aaa9d07b7a
Aug 13 01:12:47.435966 env[1228]: time="2025-08-13T01:12:47.435923193Z" level=warning msg="cleaning up after shim disconnected" id=8e391a1bf3d2513af9a22ea633edfe5cee97943aa176a49942f0d1aaa9d07b7a namespace=k8s.io
Aug 13 01:12:47.436169 env[1228]: time="2025-08-13T01:12:47.436137493Z" level=info msg="cleaning up dead shim"
Aug 13 01:12:47.450145 env[1228]: time="2025-08-13T01:12:47.450099995Z" level=warning msg="cleanup warnings time=\"2025-08-13T01:12:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3888 runtime=io.containerd.runc.v2\n"
Aug 13 01:12:47.487482 kubelet[2005]: E0813 01:12:47.487433 2005 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Aug 13 01:12:47.761385 env[1228]: time="2025-08-13T01:12:47.761326877Z" level=info msg="CreateContainer within sandbox \"18013d545ef42a30a794cfb0fdc7f6c6c754f25b6c23d8a1f5e5259e61a8d4e5\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Aug 13 01:12:47.778438 env[1228]: time="2025-08-13T01:12:47.778381823Z" level=info msg="CreateContainer within sandbox \"18013d545ef42a30a794cfb0fdc7f6c6c754f25b6c23d8a1f5e5259e61a8d4e5\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"86e7d7b1b472797a818e3f5750ab2d9270e4febd4fbc35711d5e7122ffc57a4b\""
Aug 13 01:12:47.779645 env[1228]: time="2025-08-13T01:12:47.779605257Z" level=info msg="StartContainer for \"86e7d7b1b472797a818e3f5750ab2d9270e4febd4fbc35711d5e7122ffc57a4b\""
Aug 13 01:12:47.810562 systemd[1]: Started cri-containerd-86e7d7b1b472797a818e3f5750ab2d9270e4febd4fbc35711d5e7122ffc57a4b.scope.
Aug 13 01:12:47.874570 systemd[1]: cri-containerd-86e7d7b1b472797a818e3f5750ab2d9270e4febd4fbc35711d5e7122ffc57a4b.scope: Deactivated successfully.
Aug 13 01:12:47.879005 env[1228]: time="2025-08-13T01:12:47.877251163Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaf2c17dc_889e_4493_b7d5_b2864a084ff7.slice/cri-containerd-86e7d7b1b472797a818e3f5750ab2d9270e4febd4fbc35711d5e7122ffc57a4b.scope/memory.events\": no such file or directory"
Aug 13 01:12:47.881019 env[1228]: time="2025-08-13T01:12:47.880951931Z" level=info msg="StartContainer for \"86e7d7b1b472797a818e3f5750ab2d9270e4febd4fbc35711d5e7122ffc57a4b\" returns successfully"
Aug 13 01:12:47.911490 env[1228]: time="2025-08-13T01:12:47.911427463Z" level=info msg="shim disconnected" id=86e7d7b1b472797a818e3f5750ab2d9270e4febd4fbc35711d5e7122ffc57a4b
Aug 13 01:12:47.911822 env[1228]: time="2025-08-13T01:12:47.911790594Z" level=warning msg="cleaning up after shim disconnected" id=86e7d7b1b472797a818e3f5750ab2d9270e4febd4fbc35711d5e7122ffc57a4b namespace=k8s.io
Aug 13 01:12:47.911960 env[1228]: time="2025-08-13T01:12:47.911928865Z" level=info msg="cleaning up dead shim"
Aug 13 01:12:47.929393 env[1228]: time="2025-08-13T01:12:47.929340827Z" level=warning msg="cleanup warnings time=\"2025-08-13T01:12:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3949 runtime=io.containerd.runc.v2\n"
Aug 13 01:12:48.774294 env[1228]: time="2025-08-13T01:12:48.773729591Z" level=info msg="CreateContainer within sandbox \"18013d545ef42a30a794cfb0fdc7f6c6c754f25b6c23d8a1f5e5259e61a8d4e5\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Aug 13 01:12:48.818569 env[1228]: time="2025-08-13T01:12:48.818501621Z" level=info msg="CreateContainer within sandbox \"18013d545ef42a30a794cfb0fdc7f6c6c754f25b6c23d8a1f5e5259e61a8d4e5\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9fef3cd8a83a5205ee05df253005a26ca7340ddbb090b36726619ce69d7cf34c\""
Aug 13 01:12:48.819291 env[1228]: time="2025-08-13T01:12:48.819248956Z" level=info msg="StartContainer for \"9fef3cd8a83a5205ee05df253005a26ca7340ddbb090b36726619ce69d7cf34c\""
Aug 13 01:12:48.854608 systemd[1]: Started cri-containerd-9fef3cd8a83a5205ee05df253005a26ca7340ddbb090b36726619ce69d7cf34c.scope.
Aug 13 01:12:48.911460 env[1228]: time="2025-08-13T01:12:48.905628974Z" level=info msg="StartContainer for \"9fef3cd8a83a5205ee05df253005a26ca7340ddbb090b36726619ce69d7cf34c\" returns successfully"
Aug 13 01:12:48.913844 systemd[1]: cri-containerd-9fef3cd8a83a5205ee05df253005a26ca7340ddbb090b36726619ce69d7cf34c.scope: Deactivated successfully.
Aug 13 01:12:48.947855 env[1228]: time="2025-08-13T01:12:48.947781227Z" level=info msg="shim disconnected" id=9fef3cd8a83a5205ee05df253005a26ca7340ddbb090b36726619ce69d7cf34c
Aug 13 01:12:48.947855 env[1228]: time="2025-08-13T01:12:48.947845854Z" level=warning msg="cleaning up after shim disconnected" id=9fef3cd8a83a5205ee05df253005a26ca7340ddbb090b36726619ce69d7cf34c namespace=k8s.io
Aug 13 01:12:48.947855 env[1228]: time="2025-08-13T01:12:48.947877373Z" level=info msg="cleaning up dead shim"
Aug 13 01:12:48.960787 env[1228]: time="2025-08-13T01:12:48.960723375Z" level=warning msg="cleanup warnings time=\"2025-08-13T01:12:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4010 runtime=io.containerd.runc.v2\n"
Aug 13 01:12:49.097486 systemd[1]: run-containerd-runc-k8s.io-9fef3cd8a83a5205ee05df253005a26ca7340ddbb090b36726619ce69d7cf34c-runc.zXCpRY.mount: Deactivated successfully.
Aug 13 01:12:49.097805 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9fef3cd8a83a5205ee05df253005a26ca7340ddbb090b36726619ce69d7cf34c-rootfs.mount: Deactivated successfully.
Aug 13 01:12:49.777709 env[1228]: time="2025-08-13T01:12:49.777632603Z" level=info msg="CreateContainer within sandbox \"18013d545ef42a30a794cfb0fdc7f6c6c754f25b6c23d8a1f5e5259e61a8d4e5\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Aug 13 01:12:49.801360 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2770093307.mount: Deactivated successfully.
Aug 13 01:12:49.809104 env[1228]: time="2025-08-13T01:12:49.809029051Z" level=info msg="CreateContainer within sandbox \"18013d545ef42a30a794cfb0fdc7f6c6c754f25b6c23d8a1f5e5259e61a8d4e5\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d96a7b218512f805f1b4d2789490d01c48cf91007ece2881f9de152336f30627\""
Aug 13 01:12:49.811264 env[1228]: time="2025-08-13T01:12:49.809954369Z" level=info msg="StartContainer for \"d96a7b218512f805f1b4d2789490d01c48cf91007ece2881f9de152336f30627\""
Aug 13 01:12:49.846704 systemd[1]: Started cri-containerd-d96a7b218512f805f1b4d2789490d01c48cf91007ece2881f9de152336f30627.scope.
Aug 13 01:12:49.891496 systemd[1]: cri-containerd-d96a7b218512f805f1b4d2789490d01c48cf91007ece2881f9de152336f30627.scope: Deactivated successfully.
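The mount-unit names in these `Deactivated successfully.` messages use systemd's unit-name escaping: `-` separates path components (i.e. stands for `/`), and literal characters are encoded as `\xHH` (`\x2d` for `-`, `\x7e` for `~`). A small sketch of the reverse mapping, assuming only `\xHH` escapes occur in the name:

```python
import re

def systemd_unescape(unit: str) -> str:
    """Convert a systemd mount unit name back to the path it represents."""
    name = unit.removesuffix(".mount")
    # Split on '-' first: escaped literals appear as \x2d, which contains
    # no real hyphen, so only path separators are split points.
    parts = name.split("-")
    def decode(m: re.Match) -> str:
        return chr(int(m.group(1), 16))  # \xHH -> literal character
    return "/" + "/".join(re.sub(r"\\x([0-9a-fA-F]{2})", decode, p) for p in parts)

unit = (r"var-lib-kubelet-pods-c6fa1dba\x2d0674\x2d49ac\x2dabff\x2dacc0c540c0c3"
        r"-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount")
print(systemd_unescape(unit))
# -> /var/lib/kubelet/pods/c6fa1dba-0674-49ac-abff-acc0c540c0c3/volumes/kubernetes.io~projected/hubble-tls
```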
Aug 13 01:12:49.896615 env[1228]: time="2025-08-13T01:12:49.895356992Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaf2c17dc_889e_4493_b7d5_b2864a084ff7.slice/cri-containerd-d96a7b218512f805f1b4d2789490d01c48cf91007ece2881f9de152336f30627.scope/memory.events\": no such file or directory"
Aug 13 01:12:49.900119 env[1228]: time="2025-08-13T01:12:49.900063777Z" level=info msg="StartContainer for \"d96a7b218512f805f1b4d2789490d01c48cf91007ece2881f9de152336f30627\" returns successfully"
Aug 13 01:12:49.930996 env[1228]: time="2025-08-13T01:12:49.930923618Z" level=info msg="shim disconnected" id=d96a7b218512f805f1b4d2789490d01c48cf91007ece2881f9de152336f30627
Aug 13 01:12:49.930996 env[1228]: time="2025-08-13T01:12:49.930985263Z" level=warning msg="cleaning up after shim disconnected" id=d96a7b218512f805f1b4d2789490d01c48cf91007ece2881f9de152336f30627 namespace=k8s.io
Aug 13 01:12:49.930996 env[1228]: time="2025-08-13T01:12:49.931001944Z" level=info msg="cleaning up dead shim"
Aug 13 01:12:49.942435 env[1228]: time="2025-08-13T01:12:49.942357215Z" level=warning msg="cleanup warnings time=\"2025-08-13T01:12:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4066 runtime=io.containerd.runc.v2\n"
Aug 13 01:12:50.095967 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d96a7b218512f805f1b4d2789490d01c48cf91007ece2881f9de152336f30627-rootfs.mount: Deactivated successfully.
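Each init container in this sequence follows the same arc: `CreateContainer` returns an id, `StartContainer` succeeds, the scope deactivates, and the shim disconnects. A sketch that groups the containerd logfmt messages by container id; the `msg="..." id=...` field layout is assumed from the lines shown here:

```python
import re
from collections import defaultdict

# Two sample containerd entries copied from the log above.
LOG = '''
env[1228]: time="2025-08-13T01:12:49.930923618Z" level=info msg="shim disconnected" id=d96a7b218512f805f1b4d2789490d01c48cf91007ece2881f9de152336f30627
env[1228]: time="2025-08-13T01:12:49.930985263Z" level=warning msg="cleaning up after shim disconnected" id=d96a7b218512f805f1b4d2789490d01c48cf91007ece2881f9de152336f30627 namespace=k8s.io
'''

# A quoted msg followed by a 64-hex containerd container/sandbox id.
event = re.compile(r'msg="([^"]+)" id=([0-9a-f]{64})')

timeline = defaultdict(list)
for msg, cid in event.findall(LOG):
    timeline[cid].append(msg)

for cid, msgs in timeline.items():
    print(cid[:12], msgs)
```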
Aug 13 01:12:50.668623 kubelet[2005]: I0813 01:12:50.668507 2005 setters.go:600] "Node became not ready" node="ci-3510-3-8-8fd0904e93211774eb5d.c.flatcar-212911.internal" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T01:12:50Z","lastTransitionTime":"2025-08-13T01:12:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Aug 13 01:12:50.784517 env[1228]: time="2025-08-13T01:12:50.784462514Z" level=info msg="CreateContainer within sandbox \"18013d545ef42a30a794cfb0fdc7f6c6c754f25b6c23d8a1f5e5259e61a8d4e5\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Aug 13 01:12:50.817188 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3727657437.mount: Deactivated successfully.
Aug 13 01:12:50.818865 env[1228]: time="2025-08-13T01:12:50.818793623Z" level=info msg="CreateContainer within sandbox \"18013d545ef42a30a794cfb0fdc7f6c6c754f25b6c23d8a1f5e5259e61a8d4e5\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"cd73ed5beccd0449a064423999f98aeca549430c6386f89eb858536d4fe87b4b\""
Aug 13 01:12:50.825127 env[1228]: time="2025-08-13T01:12:50.825078724Z" level=info msg="StartContainer for \"cd73ed5beccd0449a064423999f98aeca549430c6386f89eb858536d4fe87b4b\""
Aug 13 01:12:50.858198 systemd[1]: Started cri-containerd-cd73ed5beccd0449a064423999f98aeca549430c6386f89eb858536d4fe87b4b.scope.
Aug 13 01:12:50.915389 env[1228]: time="2025-08-13T01:12:50.915286215Z" level=info msg="StartContainer for \"cd73ed5beccd0449a064423999f98aeca549430c6386f89eb858536d4fe87b4b\" returns successfully"
Aug 13 01:12:51.389352 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Aug 13 01:12:54.061441 systemd[1]: run-containerd-runc-k8s.io-cd73ed5beccd0449a064423999f98aeca549430c6386f89eb858536d4fe87b4b-runc.ILVzzC.mount: Deactivated successfully.
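The `setters.go:600` entry above carries the new node condition as inline JSON, so it can be lifted straight out of the log and inspected; the payload below is copied verbatim from that line:

```python
import json

payload = ('{"type":"Ready","status":"False",'
           '"lastHeartbeatTime":"2025-08-13T01:12:50Z",'
           '"lastTransitionTime":"2025-08-13T01:12:50Z",'
           '"reason":"KubeletNotReady",'
           '"message":"container runtime network not ready: '
           'NetworkReady=false reason:NetworkPluginNotReady '
           'message:Network plugin returns error: cni plugin not initialized"}')

condition = json.loads(payload)
print(condition["type"], condition["status"], condition["reason"])
# -> Ready False KubeletNotReady
```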
Aug 13 01:12:54.545092 systemd-networkd[1030]: lxc_health: Link UP
Aug 13 01:12:54.589352 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Aug 13 01:12:54.589757 systemd-networkd[1030]: lxc_health: Gained carrier
Aug 13 01:12:55.204532 kubelet[2005]: I0813 01:12:55.204455 2005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-2zvwq" podStartSLOduration=9.204429377 podStartE2EDuration="9.204429377s" podCreationTimestamp="2025-08-13 01:12:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 01:12:51.821061667 +0000 UTC m=+134.611853969" watchObservedRunningTime="2025-08-13 01:12:55.204429377 +0000 UTC m=+137.995221668"
Aug 13 01:12:56.306042 systemd[1]: run-containerd-runc-k8s.io-cd73ed5beccd0449a064423999f98aeca549430c6386f89eb858536d4fe87b4b-runc.Ym75Au.mount: Deactivated successfully.
Aug 13 01:12:56.469616 systemd-networkd[1030]: lxc_health: Gained IPv6LL
Aug 13 01:12:58.551003 systemd[1]: run-containerd-runc-k8s.io-cd73ed5beccd0449a064423999f98aeca549430c6386f89eb858536d4fe87b4b-runc.x3K5N9.mount: Deactivated successfully.
Aug 13 01:13:00.767161 systemd[1]: run-containerd-runc-k8s.io-cd73ed5beccd0449a064423999f98aeca549430c6386f89eb858536d4fe87b4b-runc.75S6hp.mount: Deactivated successfully.
Aug 13 01:13:00.918364 sshd[3775]: pam_unix(sshd:session): session closed for user core
Aug 13 01:13:00.922709 systemd[1]: sshd@26-10.128.0.44:22-139.178.68.195:45634.service: Deactivated successfully.
Aug 13 01:13:00.923901 systemd[1]: session-26.scope: Deactivated successfully.
Aug 13 01:13:00.925081 systemd-logind[1219]: Session 26 logged out. Waiting for processes to exit.
Aug 13 01:13:00.927237 systemd-logind[1219]: Removed session 26.