May 10 00:50:24.129878 kernel: Linux version 5.15.181-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri May 9 23:12:23 -00 2025
May 10 00:50:24.129974 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=39569409b30be1967efab22b453b92a780dcf0fe8e1448a18bf235b5cf33e54a
May 10 00:50:24.129993 kernel: BIOS-provided physical RAM map:
May 10 00:50:24.130007 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved
May 10 00:50:24.130020 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable
May 10 00:50:24.130032 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved
May 10 00:50:24.130052 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable
May 10 00:50:24.130066 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved
May 10 00:50:24.130080 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bd277fff] usable
May 10 00:50:24.130095 kernel: BIOS-e820: [mem 0x00000000bd278000-0x00000000bd281fff] ACPI data
May 10 00:50:24.130109 kernel: BIOS-e820: [mem 0x00000000bd282000-0x00000000bf8ecfff] usable
May 10 00:50:24.130123 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bfb6cfff] reserved
May 10 00:50:24.130137 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data
May 10 00:50:24.130152 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS
May 10 00:50:24.130173 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable
May 10 00:50:24.130189 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved
May 10 00:50:24.130204 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable
May 10 00:50:24.130219 kernel: NX (Execute Disable) protection: active
May 10 00:50:24.130234 kernel: efi: EFI v2.70 by EDK II
May 10 00:50:24.130250 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000 RNG=0xbfb73018 TPMEventLog=0xbd278018
May 10 00:50:24.130265 kernel: random: crng init done
May 10 00:50:24.130291 kernel: SMBIOS 2.4 present.
May 10 00:50:24.130310 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/12/2025
May 10 00:50:24.130324 kernel: Hypervisor detected: KVM
May 10 00:50:24.130340 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 10 00:50:24.130354 kernel: kvm-clock: cpu 0, msr 1a1196001, primary cpu clock
May 10 00:50:24.130369 kernel: kvm-clock: using sched offset of 13687816131 cycles
May 10 00:50:24.130385 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 10 00:50:24.130400 kernel: tsc: Detected 2299.998 MHz processor
May 10 00:50:24.130414 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 10 00:50:24.130430 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 10 00:50:24.130446 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000
May 10 00:50:24.130467 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 10 00:50:24.130482 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000
May 10 00:50:24.130498 kernel: Using GB pages for direct mapping
May 10 00:50:24.130513 kernel: Secure boot disabled
May 10 00:50:24.130529 kernel: ACPI: Early table checksum verification disabled
May 10 00:50:24.130543 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google)
May 10 00:50:24.130559 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013)
May 10 00:50:24.130576 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001)
May 10 00:50:24.130601 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001)
May 10 00:50:24.130618 kernel: ACPI: FACS 0x00000000BFBF2000 000040
May 10 00:50:24.130635 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20240322)
May 10 00:50:24.130652 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001)
May 10 00:50:24.130669 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001)
May 10 00:50:24.130687 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001)
May 10 00:50:24.130707 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001)
May 10 00:50:24.130722 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001)
May 10 00:50:24.130739 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3]
May 10 00:50:24.130755 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63]
May 10 00:50:24.130771 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f]
May 10 00:50:24.130788 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315]
May 10 00:50:24.130804 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033]
May 10 00:50:24.130821 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7]
May 10 00:50:24.130837 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075]
May 10 00:50:24.130859 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f]
May 10 00:50:24.130876 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027]
May 10 00:50:24.130893 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
May 10 00:50:24.130924 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
May 10 00:50:24.130941 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
May 10 00:50:24.130957 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff]
May 10 00:50:24.130973 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff]
May 10 00:50:24.130991 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff]
May 10 00:50:24.131008 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff]
May 10 00:50:24.131029 kernel: NODE_DATA(0) allocated [mem 0x21fffa000-0x21fffffff]
May 10 00:50:24.131046 kernel: Zone ranges:
May 10 00:50:24.131063 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 10 00:50:24.131080 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
May 10 00:50:24.131097 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff]
May 10 00:50:24.131114 kernel: Movable zone start for each node
May 10 00:50:24.131131 kernel: Early memory node ranges
May 10 00:50:24.131147 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff]
May 10 00:50:24.131164 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff]
May 10 00:50:24.131185 kernel: node 0: [mem 0x0000000000100000-0x00000000bd277fff]
May 10 00:50:24.131201 kernel: node 0: [mem 0x00000000bd282000-0x00000000bf8ecfff]
May 10 00:50:24.131218 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff]
May 10 00:50:24.131235 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff]
May 10 00:50:24.131252 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff]
May 10 00:50:24.131269 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 10 00:50:24.131294 kernel: On node 0, zone DMA: 11 pages in unavailable ranges
May 10 00:50:24.131311 kernel: On node 0, zone DMA: 104 pages in unavailable ranges
May 10 00:50:24.131327 kernel: On node 0, zone DMA32: 10 pages in unavailable ranges
May 10 00:50:24.131348 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
May 10 00:50:24.131364 kernel: On node 0, zone Normal: 32 pages in unavailable ranges
May 10 00:50:24.131380 kernel: ACPI: PM-Timer IO Port: 0xb008
May 10 00:50:24.131396 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 10 00:50:24.131412 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 10 00:50:24.131429 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 10 00:50:24.131446 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 10 00:50:24.131462 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 10 00:50:24.131479 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 10 00:50:24.131500 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 10 00:50:24.131516 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
May 10 00:50:24.131533 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
May 10 00:50:24.131550 kernel: Booting paravirtualized kernel on KVM
May 10 00:50:24.131566 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 10 00:50:24.131584 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
May 10 00:50:24.131601 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576
May 10 00:50:24.131618 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152
May 10 00:50:24.131633 kernel: pcpu-alloc: [0] 0 1
May 10 00:50:24.131653 kernel: kvm-guest: PV spinlocks enabled
May 10 00:50:24.131668 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 10 00:50:24.131684 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932270
May 10 00:50:24.131700 kernel: Policy zone: Normal
May 10 00:50:24.131719 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=39569409b30be1967efab22b453b92a780dcf0fe8e1448a18bf235b5cf33e54a
May 10 00:50:24.131736 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 10 00:50:24.131752 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
May 10 00:50:24.131769 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 10 00:50:24.131786 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 10 00:50:24.131807 kernel: Memory: 7515412K/7860544K available (12294K kernel code, 2276K rwdata, 13724K rodata, 47456K init, 4124K bss, 344872K reserved, 0K cma-reserved)
May 10 00:50:24.131824 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
May 10 00:50:24.131841 kernel: Kernel/User page tables isolation: enabled
May 10 00:50:24.131858 kernel: ftrace: allocating 34584 entries in 136 pages
May 10 00:50:24.131880 kernel: ftrace: allocated 136 pages with 2 groups
May 10 00:50:24.131897 kernel: rcu: Hierarchical RCU implementation.
May 10 00:50:24.131937 kernel: rcu: RCU event tracing is enabled.
May 10 00:50:24.131951 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
May 10 00:50:24.131971 kernel: Rude variant of Tasks RCU enabled.
May 10 00:50:24.131998 kernel: Tracing variant of Tasks RCU enabled.
May 10 00:50:24.132013 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 10 00:50:24.132033 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
May 10 00:50:24.132049 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
May 10 00:50:24.132064 kernel: Console: colour dummy device 80x25
May 10 00:50:24.132079 kernel: printk: console [ttyS0] enabled
May 10 00:50:24.132095 kernel: ACPI: Core revision 20210730
May 10 00:50:24.132111 kernel: APIC: Switch to symmetric I/O mode setup
May 10 00:50:24.132128 kernel: x2apic enabled
May 10 00:50:24.132147 kernel: Switched APIC routing to physical x2apic.
May 10 00:50:24.132164 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1
May 10 00:50:24.132181 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
May 10 00:50:24.132197 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998)
May 10 00:50:24.132213 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
May 10 00:50:24.132229 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
May 10 00:50:24.132246 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 10 00:50:24.132266 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
May 10 00:50:24.132291 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
May 10 00:50:24.132306 kernel: Spectre V2 : Mitigation: IBRS
May 10 00:50:24.132323 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 10 00:50:24.132339 kernel: RETBleed: Mitigation: IBRS
May 10 00:50:24.132357 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 10 00:50:24.132374 kernel: Spectre V2 : User space: Mitigation: STIBP via seccomp and prctl
May 10 00:50:24.132390 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
May 10 00:50:24.132406 kernel: MDS: Mitigation: Clear CPU buffers
May 10 00:50:24.132427 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
May 10 00:50:24.132443 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 10 00:50:24.132460 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 10 00:50:24.132477 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 10 00:50:24.132494 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 10 00:50:24.132511 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
May 10 00:50:24.132528 kernel: Freeing SMP alternatives memory: 32K
May 10 00:50:24.132545 kernel: pid_max: default: 32768 minimum: 301
May 10 00:50:24.132562 kernel: LSM: Security Framework initializing
May 10 00:50:24.132583 kernel: SELinux: Initializing.
May 10 00:50:24.132599 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
May 10 00:50:24.132617 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
May 10 00:50:24.132634 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0)
May 10 00:50:24.132651 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only.
May 10 00:50:24.132669 kernel: signal: max sigframe size: 1776
May 10 00:50:24.132685 kernel: rcu: Hierarchical SRCU implementation.
May 10 00:50:24.132703 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
May 10 00:50:24.132721 kernel: smp: Bringing up secondary CPUs ...
May 10 00:50:24.132740 kernel: x86: Booting SMP configuration:
May 10 00:50:24.132757 kernel: .... node #0, CPUs: #1
May 10 00:50:24.132774 kernel: kvm-clock: cpu 1, msr 1a1196041, secondary cpu clock
May 10 00:50:24.132792 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
May 10 00:50:24.132811 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
May 10 00:50:24.132828 kernel: smp: Brought up 1 node, 2 CPUs
May 10 00:50:24.132844 kernel: smpboot: Max logical packages: 1
May 10 00:50:24.132862 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
May 10 00:50:24.132881 kernel: devtmpfs: initialized
May 10 00:50:24.132898 kernel: x86/mm: Memory block size: 128MB
May 10 00:50:24.132972 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes)
May 10 00:50:24.132990 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 10 00:50:24.133007 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
May 10 00:50:24.133025 kernel: pinctrl core: initialized pinctrl subsystem
May 10 00:50:24.133043 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 10 00:50:24.133060 kernel: audit: initializing netlink subsys (disabled)
May 10 00:50:24.133083 kernel: audit: type=2000 audit(1746838222.500:1): state=initialized audit_enabled=0 res=1
May 10 00:50:24.133104 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 10 00:50:24.133122 kernel: thermal_sys: Registered thermal governor 'user_space'
May 10 00:50:24.133138 kernel: cpuidle: using governor menu
May 10 00:50:24.133155 kernel: ACPI: bus type PCI registered
May 10 00:50:24.133171 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 10 00:50:24.133188 kernel: dca service started, version 1.12.1
May 10 00:50:24.133204 kernel: PCI: Using configuration type 1 for base access
May 10 00:50:24.133222 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 10 00:50:24.133239 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
May 10 00:50:24.133260 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
May 10 00:50:24.133285 kernel: ACPI: Added _OSI(Module Device)
May 10 00:50:24.133303 kernel: ACPI: Added _OSI(Processor Device)
May 10 00:50:24.133320 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 10 00:50:24.133337 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 10 00:50:24.133354 kernel: ACPI: Added _OSI(Linux-Dell-Video)
May 10 00:50:24.133370 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
May 10 00:50:24.133387 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
May 10 00:50:24.133404 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
May 10 00:50:24.133424 kernel: ACPI: Interpreter enabled
May 10 00:50:24.133440 kernel: ACPI: PM: (supports S0 S3 S5)
May 10 00:50:24.133456 kernel: ACPI: Using IOAPIC for interrupt routing
May 10 00:50:24.133473 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 10 00:50:24.133490 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
May 10 00:50:24.133507 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 10 00:50:24.133747 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
May 10 00:50:24.133931 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
May 10 00:50:24.134203 kernel: PCI host bridge to bus 0000:00
May 10 00:50:24.134720 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 10 00:50:24.135154 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 10 00:50:24.135324 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 10 00:50:24.135478 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window]
May 10 00:50:24.135629 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 10 00:50:24.135810 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
May 10 00:50:24.136025 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100
May 10 00:50:24.136198 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
May 10 00:50:24.136373 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
May 10 00:50:24.136547 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000
May 10 00:50:24.136724 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
May 10 00:50:24.136890 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f]
May 10 00:50:24.143167 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
May 10 00:50:24.143354 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f]
May 10 00:50:24.143519 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f]
May 10 00:50:24.143687 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00
May 10 00:50:24.143846 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f]
May 10 00:50:24.145886 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f]
May 10 00:50:24.145936 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 10 00:50:24.145962 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 10 00:50:24.145979 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 10 00:50:24.145996 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 10 00:50:24.146014 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
May 10 00:50:24.146032 kernel: iommu: Default domain type: Translated
May 10 00:50:24.146050 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 10 00:50:24.146068 kernel: vgaarb: loaded
May 10 00:50:24.146086 kernel: pps_core: LinuxPPS API ver. 1 registered
May 10 00:50:24.146105 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
May 10 00:50:24.146126 kernel: PTP clock support registered
May 10 00:50:24.146144 kernel: Registered efivars operations
May 10 00:50:24.146162 kernel: PCI: Using ACPI for IRQ routing
May 10 00:50:24.146180 kernel: PCI: pci_cache_line_size set to 64 bytes
May 10 00:50:24.146197 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff]
May 10 00:50:24.146214 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff]
May 10 00:50:24.146231 kernel: e820: reserve RAM buffer [mem 0xbd278000-0xbfffffff]
May 10 00:50:24.146248 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff]
May 10 00:50:24.146266 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff]
May 10 00:50:24.146298 kernel: clocksource: Switched to clocksource kvm-clock
May 10 00:50:24.146316 kernel: VFS: Disk quotas dquot_6.6.0
May 10 00:50:24.146334 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 10 00:50:24.146352 kernel: pnp: PnP ACPI init
May 10 00:50:24.146370 kernel: pnp: PnP ACPI: found 7 devices
May 10 00:50:24.146388 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 10 00:50:24.146406 kernel: NET: Registered PF_INET protocol family
May 10 00:50:24.146424 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
May 10 00:50:24.146442 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
May 10 00:50:24.146464 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 10 00:50:24.146481 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 10 00:50:24.146497 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
May 10 00:50:24.146515 kernel: TCP: Hash tables configured (established 65536 bind 65536)
May 10 00:50:24.146532 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
May 10 00:50:24.146550 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
May 10 00:50:24.146567 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 10 00:50:24.146585 kernel: NET: Registered PF_XDP protocol family
May 10 00:50:24.146753 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 10 00:50:24.156897 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 10 00:50:24.157395 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 10 00:50:24.157678 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window]
May 10 00:50:24.162051 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
May 10 00:50:24.162093 kernel: PCI: CLS 0 bytes, default 64
May 10 00:50:24.162111 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
May 10 00:50:24.162136 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB)
May 10 00:50:24.162153 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
May 10 00:50:24.162170 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
May 10 00:50:24.162187 kernel: clocksource: Switched to clocksource tsc
May 10 00:50:24.162203 kernel: Initialise system trusted keyrings
May 10 00:50:24.162219 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
May 10 00:50:24.162235 kernel: Key type asymmetric registered
May 10 00:50:24.162252 kernel: Asymmetric key parser 'x509' registered
May 10 00:50:24.162268 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
May 10 00:50:24.162296 kernel: io scheduler mq-deadline registered
May 10 00:50:24.162313 kernel: io scheduler kyber registered
May 10 00:50:24.162330 kernel: io scheduler bfq registered
May 10 00:50:24.162351 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 10 00:50:24.162389 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
May 10 00:50:24.162580 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
May 10 00:50:24.162612 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
May 10 00:50:24.162791 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
May 10 00:50:24.162816 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
May 10 00:50:24.163003 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
May 10 00:50:24.163027 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 10 00:50:24.163045 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 10 00:50:24.163064 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
May 10 00:50:24.163082 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A
May 10 00:50:24.163100 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A
May 10 00:50:24.163282 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0)
May 10 00:50:24.163309 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 10 00:50:24.163332 kernel: i8042: Warning: Keylock active
May 10 00:50:24.163350 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 10 00:50:24.163368 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 10 00:50:24.163538 kernel: rtc_cmos 00:00: RTC can wake from S4
May 10 00:50:24.163686 kernel: rtc_cmos 00:00: registered as rtc0
May 10 00:50:24.163837 kernel: rtc_cmos 00:00: setting system clock to 2025-05-10T00:50:23 UTC (1746838223)
May 10 00:50:24.166266 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
May 10 00:50:24.166313 kernel: intel_pstate: CPU model not supported
May 10 00:50:24.166339 kernel: pstore: Registered efi as persistent store backend
May 10 00:50:24.166480 kernel: NET: Registered PF_INET6 protocol family
May 10 00:50:24.166499 kernel: Segment Routing with IPv6
May 10 00:50:24.166517 kernel: In-situ OAM (IOAM) with IPv6
May 10 00:50:24.166534 kernel: NET: Registered PF_PACKET protocol family
May 10 00:50:24.166552 kernel: Key type dns_resolver registered
May 10 00:50:24.166692 kernel: IPI shorthand broadcast: enabled
May 10 00:50:24.166711 kernel: sched_clock: Marking stable (752737417, 166792576)->(980016623, -60486630)
May 10 00:50:24.166729 kernel: registered taskstats version 1
May 10 00:50:24.166751 kernel: Loading compiled-in X.509 certificates
May 10 00:50:24.166769 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 10 00:50:24.166787 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.181-flatcar: 0c62a22cd9157131d2e97d5a2e1bd9023e187117'
May 10 00:50:24.166805 kernel: Key type .fscrypt registered
May 10 00:50:24.166822 kernel: Key type fscrypt-provisioning registered
May 10 00:50:24.166841 kernel: pstore: Using crash dump compression: deflate
May 10 00:50:24.166858 kernel: ima: Allocated hash algorithm: sha1
May 10 00:50:24.166875 kernel: ima: No architecture policies found
May 10 00:50:24.166893 kernel: clk: Disabling unused clocks
May 10 00:50:24.166927 kernel: Freeing unused kernel image (initmem) memory: 47456K
May 10 00:50:24.166944 kernel: Write protecting the kernel read-only data: 28672k
May 10 00:50:24.166962 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
May 10 00:50:24.166980 kernel: Freeing unused kernel image (rodata/data gap) memory: 612K
May 10 00:50:24.166998 kernel: Run /init as init process
May 10 00:50:24.167016 kernel: with arguments:
May 10 00:50:24.167034 kernel: /init
May 10 00:50:24.167051 kernel: with environment:
May 10 00:50:24.167068 kernel: HOME=/
May 10 00:50:24.167090 kernel: TERM=linux
May 10 00:50:24.167108 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 10 00:50:24.167131 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
May 10 00:50:24.167153 systemd[1]: Detected virtualization kvm.
May 10 00:50:24.167172 systemd[1]: Detected architecture x86-64.
May 10 00:50:24.167190 systemd[1]: Running in initrd.
May 10 00:50:24.167209 systemd[1]: No hostname configured, using default hostname.
May 10 00:50:24.167230 systemd[1]: Hostname set to .
May 10 00:50:24.167250 systemd[1]: Initializing machine ID from VM UUID.
May 10 00:50:24.167268 systemd[1]: Queued start job for default target initrd.target.
May 10 00:50:24.167295 systemd[1]: Started systemd-ask-password-console.path.
May 10 00:50:24.167313 systemd[1]: Reached target cryptsetup.target.
May 10 00:50:24.167332 systemd[1]: Reached target paths.target.
May 10 00:50:24.167350 systemd[1]: Reached target slices.target.
May 10 00:50:24.167369 systemd[1]: Reached target swap.target.
May 10 00:50:24.167392 systemd[1]: Reached target timers.target.
May 10 00:50:24.167412 systemd[1]: Listening on iscsid.socket.
May 10 00:50:24.167430 systemd[1]: Listening on iscsiuio.socket.
May 10 00:50:24.167449 systemd[1]: Listening on systemd-journald-audit.socket.
May 10 00:50:24.167468 systemd[1]: Listening on systemd-journald-dev-log.socket.
May 10 00:50:24.167487 systemd[1]: Listening on systemd-journald.socket.
May 10 00:50:24.167506 systemd[1]: Listening on systemd-networkd.socket.
May 10 00:50:24.167524 systemd[1]: Listening on systemd-udevd-control.socket.
May 10 00:50:24.167546 systemd[1]: Listening on systemd-udevd-kernel.socket.
May 10 00:50:24.167565 systemd[1]: Reached target sockets.target.
May 10 00:50:24.167603 systemd[1]: Starting kmod-static-nodes.service...
May 10 00:50:24.167625 systemd[1]: Finished network-cleanup.service.
May 10 00:50:24.167644 systemd[1]: Starting systemd-fsck-usr.service...
May 10 00:50:24.167663 systemd[1]: Starting systemd-journald.service...
May 10 00:50:24.167683 systemd[1]: Starting systemd-modules-load.service...
May 10 00:50:24.167706 systemd[1]: Starting systemd-resolved.service...
May 10 00:50:24.167726 systemd[1]: Starting systemd-vconsole-setup.service...
May 10 00:50:24.167745 systemd[1]: Finished kmod-static-nodes.service.
May 10 00:50:24.167766 kernel: audit: type=1130 audit(1746838224.123:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:24.167785 systemd[1]: Finished systemd-fsck-usr.service.
May 10 00:50:24.167805 kernel: audit: type=1130 audit(1746838224.135:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:24.167824 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
May 10 00:50:24.167844 systemd[1]: Finished systemd-vconsole-setup.service.
May 10 00:50:24.167867 kernel: audit: type=1130 audit(1746838224.156:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:24.167885 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
May 10 00:50:24.173885 kernel: audit: type=1130 audit(1746838224.166:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:24.173946 systemd[1]: Starting dracut-cmdline-ask.service...
May 10 00:50:24.173973 systemd-journald[190]: Journal started
May 10 00:50:24.174076 systemd-journald[190]: Runtime Journal (/run/log/journal/8b5d6874158a57611deafa7941266321) is 8.0M, max 148.8M, 140.8M free.
May 10 00:50:24.123000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:24.135000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:24.156000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:24.166000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:24.154335 systemd-modules-load[191]: Inserted module 'overlay'
May 10 00:50:24.185960 systemd[1]: Started systemd-journald.service.
May 10 00:50:24.191000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:24.197138 kernel: audit: type=1130 audit(1746838224.191:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:24.222079 systemd[1]: Finished dracut-cmdline-ask.service.
May 10 00:50:24.232946 kernel: audit: type=1130 audit(1746838224.224:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:24.224000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:24.224293 systemd-resolved[192]: Positive Trust Anchors:
May 10 00:50:24.237048 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 10 00:50:24.224311 systemd-resolved[192]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 10 00:50:24.224377 systemd-resolved[192]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
May 10 00:50:24.227663 systemd[1]: Starting dracut-cmdline.service...
May 10 00:50:24.254161 dracut-cmdline[205]: dracut-dracut-053
May 10 00:50:24.254161 dracut-cmdline[205]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=39569409b30be1967efab22b453b92a780dcf0fe8e1448a18bf235b5cf33e54a
May 10 00:50:24.267048 kernel: Bridge firewalling registered
May 10 00:50:24.239655 systemd-resolved[192]: Defaulting to hostname 'linux'.
May 10 00:50:24.254851 systemd-modules-load[191]: Inserted module 'br_netfilter'
May 10 00:50:24.278287 systemd[1]: Started systemd-resolved.service.
May 10 00:50:24.297065 kernel: SCSI subsystem initialized
May 10 00:50:24.297105 kernel: audit: type=1130 audit(1746838224.283:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:24.283000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:24.285168 systemd[1]: Reached target nss-lookup.target.
May 10 00:50:24.312163 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 10 00:50:24.312247 kernel: device-mapper: uevent: version 1.0.3
May 10 00:50:24.313917 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
May 10 00:50:24.318810 systemd-modules-load[191]: Inserted module 'dm_multipath'
May 10 00:50:24.319898 systemd[1]: Finished systemd-modules-load.service.
May 10 00:50:24.330000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:24.333138 systemd[1]: Starting systemd-sysctl.service...
May 10 00:50:24.346075 kernel: audit: type=1130 audit(1746838224.330:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:24.352360 systemd[1]: Finished systemd-sysctl.service.
May 10 00:50:24.363077 kernel: audit: type=1130 audit(1746838224.354:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:24.354000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:24.364934 kernel: Loading iSCSI transport class v2.0-870.
May 10 00:50:24.385933 kernel: iscsi: registered transport (tcp)
May 10 00:50:24.413461 kernel: iscsi: registered transport (qla4xxx)
May 10 00:50:24.413570 kernel: QLogic iSCSI HBA Driver
May 10 00:50:24.460659 systemd[1]: Finished dracut-cmdline.service.
May 10 00:50:24.458000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:24.462037 systemd[1]: Starting dracut-pre-udev.service...
May 10 00:50:24.524991 kernel: raid6: avx2x4 gen() 17677 MB/s
May 10 00:50:24.545980 kernel: raid6: avx2x4 xor() 6602 MB/s
May 10 00:50:24.566971 kernel: raid6: avx2x2 gen() 17928 MB/s
May 10 00:50:24.587992 kernel: raid6: avx2x2 xor() 18128 MB/s
May 10 00:50:24.608960 kernel: raid6: avx2x1 gen() 13636 MB/s
May 10 00:50:24.629983 kernel: raid6: avx2x1 xor() 15834 MB/s
May 10 00:50:24.650985 kernel: raid6: sse2x4 gen() 10778 MB/s
May 10 00:50:24.671949 kernel: raid6: sse2x4 xor() 6679 MB/s
May 10 00:50:24.692980 kernel: raid6: sse2x2 gen() 11605 MB/s
May 10 00:50:24.713978 kernel: raid6: sse2x2 xor() 7340 MB/s
May 10 00:50:24.734947 kernel: raid6: sse2x1 gen() 10516 MB/s
May 10 00:50:24.761021 kernel: raid6: sse2x1 xor() 5196 MB/s
May 10 00:50:24.761075 kernel: raid6: using algorithm avx2x2 gen() 17928 MB/s
May 10 00:50:24.761098 kernel: raid6: .... xor() 18128 MB/s, rmw enabled
May 10 00:50:24.766195 kernel: raid6: using avx2x2 recovery algorithm
May 10 00:50:24.791948 kernel: xor: automatically using best checksumming function avx
May 10 00:50:24.902948 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
May 10 00:50:24.915109 systemd[1]: Finished dracut-pre-udev.service.
May 10 00:50:24.913000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:24.913000 audit: BPF prog-id=7 op=LOAD
May 10 00:50:24.914000 audit: BPF prog-id=8 op=LOAD
May 10 00:50:24.916508 systemd[1]: Starting systemd-udevd.service...
May 10 00:50:24.933966 systemd-udevd[387]: Using default interface naming scheme 'v252'.
May 10 00:50:24.954273 systemd[1]: Started systemd-udevd.service.
May 10 00:50:24.961000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:24.964429 systemd[1]: Starting dracut-pre-trigger.service...
May 10 00:50:24.979241 dracut-pre-trigger[403]: rd.md=0: removing MD RAID activation
May 10 00:50:25.020170 systemd[1]: Finished dracut-pre-trigger.service.
May 10 00:50:25.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:25.021417 systemd[1]: Starting systemd-udev-trigger.service...
May 10 00:50:25.091193 systemd[1]: Finished systemd-udev-trigger.service.
May 10 00:50:25.098000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:25.177933 kernel: scsi host0: Virtio SCSI HBA
May 10 00:50:25.188931 kernel: cryptd: max_cpu_qlen set to 1000
May 10 00:50:25.202936 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6
May 10 00:50:25.312120 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB)
May 10 00:50:25.390713 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks
May 10 00:50:25.390978 kernel: sd 0:0:1:0: [sda] Write Protect is off
May 10 00:50:25.391207 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08
May 10 00:50:25.391411 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
May 10 00:50:25.391620 kernel: AVX2 version of gcm_enc/dec engaged.
May 10 00:50:25.391646 kernel: AES CTR mode by8 optimization enabled
May 10 00:50:25.391669 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 10 00:50:25.391692 kernel: GPT:17805311 != 25165823
May 10 00:50:25.391714 kernel: GPT:Alternate GPT header not at the end of the disk.
May 10 00:50:25.391735 kernel: GPT:17805311 != 25165823
May 10 00:50:25.391756 kernel: GPT: Use GNU Parted to correct GPT errors.
May 10 00:50:25.391781 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 10 00:50:25.391803 kernel: sd 0:0:1:0: [sda] Attached SCSI disk
May 10 00:50:25.460485 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
May 10 00:50:25.472202 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (451)
May 10 00:50:25.486027 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
May 10 00:50:25.496082 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
May 10 00:50:25.501291 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
May 10 00:50:25.536208 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
May 10 00:50:25.550336 systemd[1]: Starting disk-uuid.service...
May 10 00:50:25.575211 disk-uuid[516]: Primary Header is updated.
May 10 00:50:25.575211 disk-uuid[516]: Secondary Entries is updated.
May 10 00:50:25.575211 disk-uuid[516]: Secondary Header is updated.
May 10 00:50:25.616075 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 10 00:50:25.616124 kernel: GPT:disk_guids don't match.
May 10 00:50:25.616149 kernel: GPT: Use GNU Parted to correct GPT errors.
May 10 00:50:25.616181 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 10 00:50:25.635948 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 10 00:50:26.624528 disk-uuid[517]: The operation has completed successfully.
May 10 00:50:26.633086 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 10 00:50:26.701407 systemd[1]: disk-uuid.service: Deactivated successfully.
May 10 00:50:26.707000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:26.707000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:26.701545 systemd[1]: Finished disk-uuid.service.
May 10 00:50:26.719553 systemd[1]: Starting verity-setup.service...
May 10 00:50:26.745959 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
May 10 00:50:26.833590 systemd[1]: Found device dev-mapper-usr.device.
May 10 00:50:26.844600 systemd[1]: Mounting sysusr-usr.mount...
May 10 00:50:26.856562 systemd[1]: Finished verity-setup.service.
May 10 00:50:26.877000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:26.950951 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
May 10 00:50:26.951200 systemd[1]: Mounted sysusr-usr.mount.
May 10 00:50:26.951596 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
May 10 00:50:27.000108 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
May 10 00:50:27.000153 kernel: BTRFS info (device sda6): using free space tree
May 10 00:50:27.000176 kernel: BTRFS info (device sda6): has skinny extents
May 10 00:50:26.952555 systemd[1]: Starting ignition-setup.service...
May 10 00:50:27.013082 kernel: BTRFS info (device sda6): enabling ssd optimizations
May 10 00:50:26.965427 systemd[1]: Starting parse-ip-for-networkd.service...
May 10 00:50:27.051413 systemd[1]: Finished ignition-setup.service.
May 10 00:50:27.049000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:27.052814 systemd[1]: Starting ignition-fetch-offline.service...
May 10 00:50:27.100534 systemd[1]: Finished parse-ip-for-networkd.service.
May 10 00:50:27.107000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:27.108000 audit: BPF prog-id=9 op=LOAD
May 10 00:50:27.111189 systemd[1]: Starting systemd-networkd.service...
May 10 00:50:27.145546 systemd-networkd[691]: lo: Link UP
May 10 00:50:27.145560 systemd-networkd[691]: lo: Gained carrier
May 10 00:50:27.159000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:27.147049 systemd-networkd[691]: Enumeration completed
May 10 00:50:27.147229 systemd[1]: Started systemd-networkd.service.
May 10 00:50:27.147468 systemd-networkd[691]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 10 00:50:27.149497 systemd-networkd[691]: eth0: Link UP
May 10 00:50:27.149504 systemd-networkd[691]: eth0: Gained carrier
May 10 00:50:27.161298 systemd[1]: Reached target network.target.
May 10 00:50:27.161336 systemd-networkd[691]: eth0: Overlong DHCP hostname received, shortened from 'ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388.c.flatcar-212911.internal' to 'ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388'
May 10 00:50:27.161355 systemd-networkd[691]: eth0: DHCPv4 address 10.128.0.57/32, gateway 10.128.0.1 acquired from 169.254.169.254
May 10 00:50:27.184349 systemd[1]: Starting iscsiuio.service...
May 10 00:50:27.272286 systemd[1]: Started iscsiuio.service.
May 10 00:50:27.277000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:27.280407 systemd[1]: Starting iscsid.service...
May 10 00:50:27.302087 iscsid[701]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
May 10 00:50:27.302087 iscsid[701]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log
May 10 00:50:27.302087 iscsid[701]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
May 10 00:50:27.302087 iscsid[701]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
May 10 00:50:27.302087 iscsid[701]: If using hardware iscsi like qla4xxx this message can be ignored.
May 10 00:50:27.302087 iscsid[701]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
May 10 00:50:27.302087 iscsid[701]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
May 10 00:50:27.307000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:27.362000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:27.294273 systemd[1]: Started iscsid.service.
May 10 00:50:27.345118 ignition[651]: Ignition 2.14.0
May 10 00:50:27.310468 systemd[1]: Starting dracut-initqueue.service...
May 10 00:50:27.473000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:27.345134 ignition[651]: Stage: fetch-offline
May 10 00:50:27.489000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:27.331078 systemd[1]: Finished dracut-initqueue.service.
May 10 00:50:27.345221 ignition[651]: reading system config file "/usr/lib/ignition/base.d/base.ign"
May 10 00:50:27.364317 systemd[1]: Reached target remote-fs-pre.target.
May 10 00:50:27.345264 ignition[651]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6
May 10 00:50:27.400120 systemd[1]: Reached target remote-cryptsetup.target.
May 10 00:50:27.365372 ignition[651]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
May 10 00:50:27.417112 systemd[1]: Reached target remote-fs.target.
May 10 00:50:27.555000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:27.365695 ignition[651]: parsed url from cmdline: ""
May 10 00:50:27.436944 systemd[1]: Starting dracut-pre-mount.service...
May 10 00:50:27.365702 ignition[651]: no config URL provided
May 10 00:50:27.460458 systemd[1]: Finished ignition-fetch-offline.service.
May 10 00:50:27.365709 ignition[651]: reading system config file "/usr/lib/ignition/user.ign"
May 10 00:50:27.475480 systemd[1]: Finished dracut-pre-mount.service.
May 10 00:50:27.612000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:27.365722 ignition[651]: no config at "/usr/lib/ignition/user.ign"
May 10 00:50:27.492382 systemd[1]: Starting ignition-fetch.service...
May 10 00:50:27.365731 ignition[651]: failed to fetch config: resource requires networking
May 10 00:50:27.526998 unknown[716]: fetched base config from "system"
May 10 00:50:27.660000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:27.366190 ignition[651]: Ignition finished successfully
May 10 00:50:27.527063 unknown[716]: fetched base config from "system"
May 10 00:50:27.504086 ignition[716]: Ignition 2.14.0
May 10 00:50:27.527114 unknown[716]: fetched user config from "gcp"
May 10 00:50:27.504101 ignition[716]: Stage: fetch
May 10 00:50:27.533675 systemd[1]: Finished ignition-fetch.service.
May 10 00:50:27.504252 ignition[716]: reading system config file "/usr/lib/ignition/base.d/base.ign"
May 10 00:50:27.558673 systemd[1]: Starting ignition-kargs.service...
May 10 00:50:27.504291 ignition[716]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6
May 10 00:50:27.596520 systemd[1]: Finished ignition-kargs.service.
May 10 00:50:27.512855 ignition[716]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
May 10 00:50:27.615442 systemd[1]: Starting ignition-disks.service...
May 10 00:50:27.513092 ignition[716]: parsed url from cmdline: ""
May 10 00:50:27.644418 systemd[1]: Finished ignition-disks.service.
May 10 00:50:27.513101 ignition[716]: no config URL provided
May 10 00:50:27.662306 systemd[1]: Reached target initrd-root-device.target.
May 10 00:50:27.513112 ignition[716]: reading system config file "/usr/lib/ignition/user.ign"
May 10 00:50:27.677168 systemd[1]: Reached target local-fs-pre.target.
May 10 00:50:27.513129 ignition[716]: no config at "/usr/lib/ignition/user.ign"
May 10 00:50:27.677296 systemd[1]: Reached target local-fs.target.
May 10 00:50:27.513175 ignition[716]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1
May 10 00:50:27.699130 systemd[1]: Reached target sysinit.target.
May 10 00:50:27.521805 ignition[716]: GET result: OK
May 10 00:50:27.712126 systemd[1]: Reached target basic.target.
May 10 00:50:27.521872 ignition[716]: parsing config with SHA512: 3d3ae60915281e6dc7f5c2da1a7b46c4bcfacd4496587ea02335eaec1cdaff32b1685f56723fdb84c443bb2064daaca9a6be8a3a32b6e4870e8d4bcfb5ce5a90
May 10 00:50:27.724387 systemd[1]: Starting systemd-fsck-root.service...
May 10 00:50:27.529412 ignition[716]: fetch: fetch complete
May 10 00:50:27.529443 ignition[716]: fetch: fetch passed
May 10 00:50:27.529545 ignition[716]: Ignition finished successfully
May 10 00:50:27.571695 ignition[722]: Ignition 2.14.0
May 10 00:50:27.571706 ignition[722]: Stage: kargs
May 10 00:50:27.571841 ignition[722]: reading system config file "/usr/lib/ignition/base.d/base.ign"
May 10 00:50:27.571871 ignition[722]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6
May 10 00:50:27.579586 ignition[722]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
May 10 00:50:27.581414 ignition[722]: kargs: kargs passed
May 10 00:50:27.581464 ignition[722]: Ignition finished successfully
May 10 00:50:27.628309 ignition[728]: Ignition 2.14.0
May 10 00:50:27.628320 ignition[728]: Stage: disks
May 10 00:50:27.628475 ignition[728]: reading system config file "/usr/lib/ignition/base.d/base.ign"
May 10 00:50:27.628515 ignition[728]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6
May 10 00:50:27.636886 ignition[728]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
May 10 00:50:27.638830 ignition[728]: disks: disks passed
May 10 00:50:27.638893 ignition[728]: Ignition finished successfully
May 10 00:50:27.769472 systemd-fsck[736]: ROOT: clean, 623/1628000 files, 124060/1617920 blocks
May 10 00:50:27.944881 systemd[1]: Finished systemd-fsck-root.service.
May 10 00:50:27.943000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:27.946163 systemd[1]: Mounting sysroot.mount...
May 10 00:50:27.976104 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
May 10 00:50:27.984327 systemd[1]: Mounted sysroot.mount.
May 10 00:50:27.991212 systemd[1]: Reached target initrd-root-fs.target.
May 10 00:50:28.009671 systemd[1]: Mounting sysroot-usr.mount...
May 10 00:50:28.022712 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
May 10 00:50:28.022772 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 10 00:50:28.022810 systemd[1]: Reached target ignition-diskful.target.
May 10 00:50:28.094613 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (742)
May 10 00:50:28.030631 systemd[1]: Mounted sysroot-usr.mount.
May 10 00:50:28.121217 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
May 10 00:50:28.121254 kernel: BTRFS info (device sda6): using free space tree
May 10 00:50:28.121269 kernel: BTRFS info (device sda6): has skinny extents
May 10 00:50:28.065653 systemd[1]: Mounting sysroot-usr-share-oem.mount...
May 10 00:50:28.141620 initrd-setup-root[747]: cut: /sysroot/etc/passwd: No such file or directory
May 10 00:50:28.152745 kernel: BTRFS info (device sda6): enabling ssd optimizations
May 10 00:50:28.080439 systemd[1]: Starting initrd-setup-root.service...
May 10 00:50:28.170200 initrd-setup-root[755]: cut: /sysroot/etc/group: No such file or directory
May 10 00:50:28.147186 systemd[1]: Mounted sysroot-usr-share-oem.mount.
May 10 00:50:28.189111 initrd-setup-root[763]: cut: /sysroot/etc/shadow: No such file or directory
May 10 00:50:28.199071 initrd-setup-root[787]: cut: /sysroot/etc/gshadow: No such file or directory
May 10 00:50:28.213287 systemd[1]: Finished initrd-setup-root.service.
May 10 00:50:28.256244 kernel: kauditd_printk_skb: 23 callbacks suppressed
May 10 00:50:28.256293 kernel: audit: type=1130 audit(1746838228.220:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:28.220000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:28.223514 systemd[1]: Starting ignition-mount.service...
May 10 00:50:28.264337 systemd[1]: Starting sysroot-boot.service...
May 10 00:50:28.278356 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
May 10 00:50:28.278519 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
May 10 00:50:28.305058 ignition[808]: INFO : Ignition 2.14.0
May 10 00:50:28.305058 ignition[808]: INFO : Stage: mount
May 10 00:50:28.305058 ignition[808]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
May 10 00:50:28.305058 ignition[808]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6
May 10 00:50:28.439108 kernel: audit: type=1130 audit(1746838228.320:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:28.439166 kernel: audit: type=1130 audit(1746838228.348:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:28.439191 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (817)
May 10 00:50:28.439213 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
May 10 00:50:28.439236 kernel: BTRFS info (device sda6): using free space tree
May 10 00:50:28.439251 kernel: BTRFS info (device sda6): has skinny extents
May 10 00:50:28.320000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:28.348000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:28.311376 systemd[1]: Finished sysroot-boot.service.
May 10 00:50:28.460102 kernel: BTRFS info (device sda6): enabling ssd optimizations
May 10 00:50:28.460207 ignition[808]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
May 10 00:50:28.460207 ignition[808]: INFO : mount: mount passed
May 10 00:50:28.460207 ignition[808]: INFO : Ignition finished successfully
May 10 00:50:28.322510 systemd[1]: Finished ignition-mount.service.
May 10 00:50:28.346115 systemd-networkd[691]: eth0: Gained IPv6LL
May 10 00:50:28.509050 ignition[836]: INFO : Ignition 2.14.0
May 10 00:50:28.509050 ignition[836]: INFO : Stage: files
May 10 00:50:28.509050 ignition[836]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
May 10 00:50:28.509050 ignition[836]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6
May 10 00:50:28.509050 ignition[836]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
May 10 00:50:28.509050 ignition[836]: DEBUG : files: compiled without relabeling support, skipping
May 10 00:50:28.509050 ignition[836]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 10 00:50:28.509050 ignition[836]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 10 00:50:28.351719 systemd[1]: Starting ignition-files.service...
May 10 00:50:28.612101 ignition[836]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 10 00:50:28.612101 ignition[836]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 10 00:50:28.612101 ignition[836]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 10 00:50:28.612101 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/hosts"
May 10 00:50:28.612101 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(3): oem config not found in "/usr/share/oem", looking on oem partition
May 10 00:50:28.612101 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1244499524"
May 10 00:50:28.612101 ignition[836]: CRITICAL : files: createFilesystemsFiles: createFiles: op(3): op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1244499524": device or resource busy
May 10 00:50:28.612101 ignition[836]: ERROR : files: createFilesystemsFiles: createFiles: op(3): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1244499524", trying btrfs: device or resource busy
May 10 00:50:28.612101 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1244499524"
May 10 00:50:28.612101 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1244499524"
May 10 00:50:28.612101 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(6): [started] unmounting "/mnt/oem1244499524"
May 10 00:50:28.612101 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(6): [finished] unmounting "/mnt/oem1244499524"
May 10 00:50:28.612101 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/hosts"
May 10 00:50:28.612101 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 10 00:50:28.612101 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
May 10 00:50:28.391550 systemd[1]: Mounting sysroot-usr-share-oem.mount...
May 10 00:50:28.853129 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK
May 10 00:50:28.455071 systemd[1]: Mounted sysroot-usr-share-oem.mount.
May 10 00:50:28.515163 unknown[836]: wrote ssh authorized keys file for user: core
May 10 00:50:29.045013 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 10 00:50:29.062126 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 10 00:50:29.062126 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
May 10 00:50:29.356039 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK
May 10 00:50:29.508290 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 10 00:50:29.524059 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/profile.d/google-cloud-sdk.sh"
May 10 00:50:29.524059 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(9): oem config not found in "/usr/share/oem", looking on oem partition
May 10 00:50:29.524059 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(9): op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem792182076"
May 10 00:50:29.524059 ignition[836]: CRITICAL : files: createFilesystemsFiles: createFiles: op(9): op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem792182076": device or resource busy
May 10 00:50:29.524059 ignition[836]: ERROR : files: createFilesystemsFiles: createFiles: op(9): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem792182076", trying btrfs: device or resource busy
May 10 00:50:29.524059 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(9): op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem792182076"
May 10 00:50:29.524059 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(9): op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem792182076"
May 10 00:50:29.524059 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(9): op(c): [started] unmounting "/mnt/oem792182076"
May 10 00:50:29.524059 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(9): op(c): [finished] unmounting "/mnt/oem792182076"
May 10 00:50:29.524059 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/profile.d/google-cloud-sdk.sh"
May 10 00:50:29.524059 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/install.sh"
May 10 00:50:29.524059 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/install.sh"
May 10 00:50:29.524059 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nginx.yaml"
May 10 00:50:29.524059 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 10 00:50:29.524059 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 10 00:50:29.771103 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 10 00:50:29.771103 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 10 00:50:29.771103 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 10 00:50:29.771103 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(11): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 10 00:50:29.771103 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(11): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 10 00:50:29.771103 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(12): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 10 00:50:29.771103 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(12): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 10 00:50:29.771103 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(13): [started] writing file "/sysroot/etc/systemd/system/oem-gce-enable-oslogin.service"
May 10 00:50:29.771103 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(13): oem config not found in "/usr/share/oem", looking on oem partition
May 10 00:50:29.771103 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(14): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2163141126"
May 10 00:50:29.771103 ignition[836]: CRITICAL : files: createFilesystemsFiles: createFiles: op(13): op(14): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2163141126": device or resource busy
May 10 00:50:29.771103 ignition[836]: ERROR : files: createFilesystemsFiles: createFiles: op(13): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2163141126", trying btrfs: device or resource busy
May 10 00:50:29.771103 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(15): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2163141126"
May 10 00:50:29.771103 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(15): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2163141126"
May 10 00:50:29.526776 systemd[1]: mnt-oem792182076.mount: Deactivated successfully.
May 10 00:50:30.030141 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(16): [started] unmounting "/mnt/oem2163141126"
May 10 00:50:30.030141 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(16): [finished] unmounting "/mnt/oem2163141126"
May 10 00:50:30.030141 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(13): [finished] writing file "/sysroot/etc/systemd/system/oem-gce-enable-oslogin.service"
May 10 00:50:30.030141 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(17): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 10 00:50:30.030141 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(17): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
May 10 00:50:30.030141 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(17): GET result: OK
May 10 00:50:29.551408 systemd[1]: mnt-oem2163141126.mount: Deactivated successfully.
May 10 00:50:30.138440 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(17): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 10 00:50:30.157084 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(18): [started] writing file "/sysroot/etc/systemd/system/oem-gce.service"
May 10 00:50:30.157084 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(18): oem config not found in "/usr/share/oem", looking on oem partition
May 10 00:50:30.157084 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(18): op(19): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1529320025"
May 10 00:50:30.157084 ignition[836]: CRITICAL : files: createFilesystemsFiles: createFiles: op(18): op(19): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1529320025": device or resource busy
May 10 00:50:30.157084 ignition[836]: ERROR : files: createFilesystemsFiles: createFiles: op(18): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1529320025", trying btrfs: device or resource busy
May 10 00:50:30.157084 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(18): op(1a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1529320025"
May 10 00:50:30.157084 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(18): op(1a): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1529320025"
May 10 00:50:30.157084 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(18): op(1b): [started] unmounting "/mnt/oem1529320025"
May 10 00:50:30.157084 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(18): op(1b): [finished] unmounting "/mnt/oem1529320025"
May 10 00:50:30.157084 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(18): [finished] writing file "/sysroot/etc/systemd/system/oem-gce.service"
May 10 00:50:30.157084 ignition[836]: INFO : files: op(1c): [started] processing unit "coreos-metadata-sshkeys@.service"
May 10 00:50:30.157084 ignition[836]: INFO : files: op(1c): [finished] processing unit "coreos-metadata-sshkeys@.service"
May 10 00:50:30.157084 ignition[836]: INFO : files: op(1d): [started] processing unit "oem-gce.service"
May 10 00:50:30.157084 ignition[836]: INFO : files: op(1d): [finished] processing unit "oem-gce.service"
May 10 00:50:30.157084 ignition[836]: INFO : files: op(1e): [started] processing unit "oem-gce-enable-oslogin.service"
May 10 00:50:30.157084 ignition[836]: INFO : files: op(1e): [finished] processing unit "oem-gce-enable-oslogin.service"
May 10 00:50:30.157084 ignition[836]: INFO : files: op(1f): [started] processing unit "prepare-helm.service"
May 10 00:50:30.663225 kernel: audit: type=1130 audit(1746838230.172:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:30.663283 kernel: audit: type=1130 audit(1746838230.276:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:30.663302 kernel: audit: type=1130 audit(1746838230.342:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:30.663318 kernel: audit: type=1131 audit(1746838230.342:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:30.663343 kernel: audit: type=1130 audit(1746838230.442:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:30.663358 kernel: audit: type=1131 audit(1746838230.442:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:30.663373 kernel: audit: type=1130 audit(1746838230.568:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:30.172000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:30.276000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:30.342000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:30.342000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:30.442000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:30.442000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:30.568000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:30.156920 systemd[1]: mnt-oem1529320025.mount: Deactivated successfully.
May 10 00:50:30.679236 ignition[836]: INFO : files: op(1f): op(20): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 10 00:50:30.679236 ignition[836]: INFO : files: op(1f): op(20): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 10 00:50:30.679236 ignition[836]: INFO : files: op(1f): [finished] processing unit "prepare-helm.service"
May 10 00:50:30.679236 ignition[836]: INFO : files: op(21): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service "
May 10 00:50:30.679236 ignition[836]: INFO : files: op(21): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service "
May 10 00:50:30.679236 ignition[836]: INFO : files: op(22): [started] setting preset to enabled for "oem-gce.service"
May 10 00:50:30.679236 ignition[836]: INFO : files: op(22): [finished] setting preset to enabled for "oem-gce.service"
May 10 00:50:30.679236 ignition[836]: INFO : files: op(23): [started] setting preset to enabled for "oem-gce-enable-oslogin.service"
May 10 00:50:30.679236 ignition[836]: INFO : files: op(23): [finished] setting preset to enabled for "oem-gce-enable-oslogin.service"
May 10 00:50:30.679236 ignition[836]: INFO : files: op(24): [started] setting preset to enabled for "prepare-helm.service"
May 10 00:50:30.679236 ignition[836]: INFO : files: op(24): [finished] setting preset to enabled for "prepare-helm.service"
May 10 00:50:30.679236 ignition[836]: INFO : files: createResultFile: createFiles: op(25): [started] writing file "/sysroot/etc/.ignition-result.json"
May 10 00:50:30.679236 ignition[836]: INFO : files: createResultFile: createFiles: op(25): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 10 00:50:30.679236 ignition[836]: INFO : files: files passed
May 10 00:50:30.679236 ignition[836]: INFO : Ignition finished successfully
May 10 00:50:30.685000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:30.938000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:30.173232 systemd[1]: Finished ignition-files.service.
May 10 00:50:30.990000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:30.184612 systemd[1]: Starting initrd-setup-root-after-ignition.service...
May 10 00:50:31.007000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:31.019233 initrd-setup-root-after-ignition[860]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 10 00:50:31.026000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:30.217293 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
May 10 00:50:30.218433 systemd[1]: Starting ignition-quench.service...
May 10 00:50:30.248500 systemd[1]: Finished initrd-setup-root-after-ignition.service.
May 10 00:50:31.089232 iscsid[701]: iscsid shutting down.
May 10 00:50:31.095000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:30.278599 systemd[1]: ignition-quench.service: Deactivated successfully.
May 10 00:50:30.278758 systemd[1]: Finished ignition-quench.service.
May 10 00:50:30.344487 systemd[1]: Reached target ignition-complete.target.
May 10 00:50:31.136000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:30.410244 systemd[1]: Starting initrd-parse-etc.service...
May 10 00:50:31.152000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:31.162162 ignition[875]: INFO : Ignition 2.14.0
May 10 00:50:31.162162 ignition[875]: INFO : Stage: umount
May 10 00:50:31.162162 ignition[875]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
May 10 00:50:31.162162 ignition[875]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6
May 10 00:50:31.162162 ignition[875]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
May 10 00:50:31.162162 ignition[875]: INFO : umount: umount passed
May 10 00:50:31.162162 ignition[875]: INFO : Ignition finished successfully
May 10 00:50:31.174000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:31.181000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:31.198000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:31.223000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:31.223000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:31.254000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:30.438251 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 10 00:50:31.270000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:30.438382 systemd[1]: Finished initrd-parse-etc.service.
May 10 00:50:31.286000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:30.444449 systemd[1]: Reached target initrd-fs.target.
May 10 00:50:31.301000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:30.507327 systemd[1]: Reached target initrd.target.
May 10 00:50:30.525407 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
May 10 00:50:30.526755 systemd[1]: Starting dracut-pre-pivot.service...
May 10 00:50:30.551423 systemd[1]: Finished dracut-pre-pivot.service.
May 10 00:50:30.571573 systemd[1]: Starting initrd-cleanup.service...
May 10 00:50:30.617725 systemd[1]: Stopped target nss-lookup.target.
May 10 00:50:30.631343 systemd[1]: Stopped target remote-cryptsetup.target.
May 10 00:50:30.652427 systemd[1]: Stopped target timers.target.
May 10 00:50:31.396000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:30.670364 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 10 00:50:31.417000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:30.670567 systemd[1]: Stopped dracut-pre-pivot.service.
May 10 00:50:30.687536 systemd[1]: Stopped target initrd.target.
May 10 00:50:31.447000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:30.711415 systemd[1]: Stopped target basic.target.
May 10 00:50:30.735350 systemd[1]: Stopped target ignition-complete.target.
May 10 00:50:30.754374 systemd[1]: Stopped target ignition-diskful.target.
May 10 00:50:30.776348 systemd[1]: Stopped target initrd-root-device.target.
May 10 00:50:30.799369 systemd[1]: Stopped target remote-fs.target.
May 10 00:50:31.514000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:30.821419 systemd[1]: Stopped target remote-fs-pre.target.
May 10 00:50:31.537000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:31.537000 audit: BPF prog-id=6 op=UNLOAD
May 10 00:50:30.842393 systemd[1]: Stopped target sysinit.target.
May 10 00:50:30.865366 systemd[1]: Stopped target local-fs.target.
May 10 00:50:30.879489 systemd[1]: Stopped target local-fs-pre.target.
May 10 00:50:31.583000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:30.906412 systemd[1]: Stopped target swap.target.
May 10 00:50:31.598000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:30.919443 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 10 00:50:31.613000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:30.919635 systemd[1]: Stopped dracut-pre-mount.service.
May 10 00:50:30.940575 systemd[1]: Stopped target cryptsetup.target.
May 10 00:50:31.645000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:30.975472 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 10 00:50:30.975682 systemd[1]: Stopped dracut-initqueue.service.
May 10 00:50:30.992511 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 10 00:50:31.692000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:30.992715 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
May 10 00:50:31.710000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:31.009472 systemd[1]: ignition-files.service: Deactivated successfully.
May 10 00:50:31.726000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:31.009651 systemd[1]: Stopped ignition-files.service.
May 10 00:50:31.029850 systemd[1]: Stopping ignition-mount.service...
May 10 00:50:31.066539 systemd[1]: Stopping iscsid.service...
May 10 00:50:31.765000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:31.080075 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 10 00:50:31.781000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:31.080364 systemd[1]: Stopped kmod-static-nodes.service.
May 10 00:50:31.799000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:31.799000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:31.098763 systemd[1]: Stopping sysroot-boot.service...
May 10 00:50:31.122205 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 10 00:50:31.122476 systemd[1]: Stopped systemd-udev-trigger.service.
May 10 00:50:31.138462 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 10 00:50:31.138642 systemd[1]: Stopped dracut-pre-trigger.service.
May 10 00:50:31.158837 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 10 00:50:31.878140 systemd-journald[190]: Received SIGTERM from PID 1 (n/a).
May 10 00:50:31.160067 systemd[1]: iscsid.service: Deactivated successfully.
May 10 00:50:31.160196 systemd[1]: Stopped iscsid.service.
May 10 00:50:31.176930 systemd[1]: ignition-mount.service: Deactivated successfully.
May 10 00:50:31.177069 systemd[1]: Stopped ignition-mount.service.
May 10 00:50:31.183922 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 10 00:50:31.184051 systemd[1]: Stopped sysroot-boot.service.
May 10 00:50:31.202564 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 10 00:50:31.202688 systemd[1]: Finished initrd-cleanup.service.
May 10 00:50:31.225509 systemd[1]: ignition-disks.service: Deactivated successfully.
May 10 00:50:31.225579 systemd[1]: Stopped ignition-disks.service.
May 10 00:50:31.256308 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 10 00:50:31.256382 systemd[1]: Stopped ignition-kargs.service.
May 10 00:50:31.272358 systemd[1]: ignition-fetch.service: Deactivated successfully.
May 10 00:50:31.272437 systemd[1]: Stopped ignition-fetch.service.
May 10 00:50:31.288312 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 10 00:50:31.288388 systemd[1]: Stopped ignition-fetch-offline.service.
May 10 00:50:31.303276 systemd[1]: Stopped target paths.target.
May 10 00:50:31.317098 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 10 00:50:31.320146 systemd[1]: Stopped systemd-ask-password-console.path.
May 10 00:50:31.324300 systemd[1]: Stopped target slices.target.
May 10 00:50:31.349075 systemd[1]: Stopped target sockets.target.
May 10 00:50:31.370286 systemd[1]: iscsid.socket: Deactivated successfully.
May 10 00:50:31.370350 systemd[1]: Closed iscsid.socket.
May 10 00:50:31.390261 systemd[1]: ignition-setup.service: Deactivated successfully.
May 10 00:50:31.390346 systemd[1]: Stopped ignition-setup.service.
May 10 00:50:31.398370 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 10 00:50:31.398443 systemd[1]: Stopped initrd-setup-root.service.
May 10 00:50:31.419430 systemd[1]: Stopping iscsiuio.service...
May 10 00:50:31.435014 systemd[1]: iscsiuio.service: Deactivated successfully.
May 10 00:50:31.435151 systemd[1]: Stopped iscsiuio.service.
May 10 00:50:31.449413 systemd[1]: Stopped target network.target.
May 10 00:50:31.466172 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 10 00:50:31.466252 systemd[1]: Closed iscsiuio.socket.
May 10 00:50:31.473496 systemd[1]: Stopping systemd-networkd.service...
May 10 00:50:31.477005 systemd-networkd[691]: eth0: DHCPv6 lease lost
May 10 00:50:31.493308 systemd[1]: Stopping systemd-resolved.service...
May 10 00:50:31.508538 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 10 00:50:31.508676 systemd[1]: Stopped systemd-resolved.service.
May 10 00:50:31.517030 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 10 00:50:31.517163 systemd[1]: Stopped systemd-networkd.service.
May 10 00:50:31.539429 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 10 00:50:31.539473 systemd[1]: Closed systemd-networkd.socket.
May 10 00:50:31.556238 systemd[1]: Stopping network-cleanup.service...
May 10 00:50:31.569113 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 10 00:50:31.569239 systemd[1]: Stopped parse-ip-for-networkd.service.
May 10 00:50:31.585234 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 10 00:50:31.585310 systemd[1]: Stopped systemd-sysctl.service.
May 10 00:50:31.600297 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 10 00:50:31.600363 systemd[1]: Stopped systemd-modules-load.service.
May 10 00:50:31.615414 systemd[1]: Stopping systemd-udevd.service...
May 10 00:50:31.633843 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 10 00:50:31.634566 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 10 00:50:31.634728 systemd[1]: Stopped systemd-udevd.service.
May 10 00:50:31.648577 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 10 00:50:31.648677 systemd[1]: Closed systemd-udevd-control.socket.
May 10 00:50:31.662251 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 10 00:50:31.662308 systemd[1]: Closed systemd-udevd-kernel.socket.
May 10 00:50:31.679261 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 10 00:50:31.679343 systemd[1]: Stopped dracut-pre-udev.service.
May 10 00:50:31.694280 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 10 00:50:31.694352 systemd[1]: Stopped dracut-cmdline.service.
May 10 00:50:31.712308 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 10 00:50:31.712380 systemd[1]: Stopped dracut-cmdline-ask.service.
May 10 00:50:31.729335 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
May 10 00:50:31.752096 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 10 00:50:31.752222 systemd[1]: Stopped systemd-vconsole-setup.service.
May 10 00:50:31.767808 systemd[1]: network-cleanup.service: Deactivated successfully.
May 10 00:50:31.767970 systemd[1]: Stopped network-cleanup.service.
May 10 00:50:31.783501 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 10 00:50:31.783642 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
May 10 00:50:31.801398 systemd[1]: Reached target initrd-switch-root.target.
May 10 00:50:31.818267 systemd[1]: Starting initrd-switch-root.service...
May 10 00:50:31.843076 systemd[1]: Switching root.
May 10 00:50:31.890194 systemd-journald[190]: Journal stopped
May 10 00:50:36.691890 kernel: SELinux: Class mctp_socket not defined in policy.
May 10 00:50:36.692127 kernel: SELinux: Class anon_inode not defined in policy.
May 10 00:50:36.692162 kernel: SELinux: the above unknown classes and permissions will be allowed
May 10 00:50:36.692178 kernel: SELinux: policy capability network_peer_controls=1
May 10 00:50:36.692193 kernel: SELinux: policy capability open_perms=1
May 10 00:50:36.692215 kernel: SELinux: policy capability extended_socket_class=1
May 10 00:50:36.692229 kernel: SELinux: policy capability always_check_network=0
May 10 00:50:36.692244 kernel: SELinux: policy capability cgroup_seclabel=1
May 10 00:50:36.692259 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 10 00:50:36.692278 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 10 00:50:36.692292 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 10 00:50:36.692309 systemd[1]: Successfully loaded SELinux policy in 121.155ms.
May 10 00:50:36.692337 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.119ms.
May 10 00:50:36.692357 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
May 10 00:50:36.692374 systemd[1]: Detected virtualization kvm.
May 10 00:50:36.692389 systemd[1]: Detected architecture x86-64.
May 10 00:50:36.692451 systemd[1]: Detected first boot.
May 10 00:50:36.692476 systemd[1]: Initializing machine ID from VM UUID.
May 10 00:50:36.692499 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
May 10 00:50:36.692541 systemd[1]: Populated /etc with preset unit settings.
May 10 00:50:36.692566 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
May 10 00:50:36.692589 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
May 10 00:50:36.692609 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 10 00:50:36.692626 kernel: kauditd_printk_skb: 51 callbacks suppressed
May 10 00:50:36.692643 kernel: audit: type=1334 audit(1746838235.754:88): prog-id=12 op=LOAD
May 10 00:50:36.692658 kernel: audit: type=1334 audit(1746838235.754:89): prog-id=3 op=UNLOAD
May 10 00:50:36.692672 kernel: audit: type=1334 audit(1746838235.766:90): prog-id=13 op=LOAD
May 10 00:50:36.692686 kernel: audit: type=1334 audit(1746838235.773:91): prog-id=14 op=LOAD
May 10 00:50:36.692700 kernel: audit: type=1334 audit(1746838235.773:92): prog-id=4 op=UNLOAD
May 10 00:50:36.692714 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 10 00:50:36.692729 kernel: audit: type=1334 audit(1746838235.773:93): prog-id=5 op=UNLOAD
May 10 00:50:36.692744 kernel: audit: type=1131 audit(1746838235.775:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:36.692761 kernel: audit: type=1334 audit(1746838235.831:95): prog-id=12 op=UNLOAD
May 10 00:50:36.692775 systemd[1]: Stopped initrd-switch-root.service.
May 10 00:50:36.692791 kernel: audit: type=1130 audit(1746838235.854:96): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:36.692805 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 10 00:50:36.692820 kernel: audit: type=1131 audit(1746838235.854:97): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:36.692838 systemd[1]: Created slice system-addon\x2dconfig.slice.
May 10 00:50:36.692853 systemd[1]: Created slice system-addon\x2drun.slice.
May 10 00:50:36.692868 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
May 10 00:50:36.692886 systemd[1]: Created slice system-getty.slice.
May 10 00:50:36.692921 systemd[1]: Created slice system-modprobe.slice.
May 10 00:50:36.692947 systemd[1]: Created slice system-serial\x2dgetty.slice.
May 10 00:50:36.692965 systemd[1]: Created slice system-system\x2dcloudinit.slice.
May 10 00:50:36.692980 systemd[1]: Created slice system-systemd\x2dfsck.slice.
May 10 00:50:36.692995 systemd[1]: Created slice user.slice.
May 10 00:50:36.693010 systemd[1]: Started systemd-ask-password-console.path.
May 10 00:50:36.693024 systemd[1]: Started systemd-ask-password-wall.path.
May 10 00:50:36.693086 systemd[1]: Set up automount boot.automount.
May 10 00:50:36.693110 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
May 10 00:50:36.693134 systemd[1]: Stopped target initrd-switch-root.target.
May 10 00:50:36.693155 systemd[1]: Stopped target initrd-fs.target.
May 10 00:50:36.693177 systemd[1]: Stopped target initrd-root-fs.target.
May 10 00:50:36.693200 systemd[1]: Reached target integritysetup.target.
May 10 00:50:36.693221 systemd[1]: Reached target remote-cryptsetup.target.
May 10 00:50:36.693243 systemd[1]: Reached target remote-fs.target.
May 10 00:50:36.693265 systemd[1]: Reached target slices.target.
May 10 00:50:36.693292 systemd[1]: Reached target swap.target.
May 10 00:50:36.693313 systemd[1]: Reached target torcx.target.
May 10 00:50:36.693335 systemd[1]: Reached target veritysetup.target.
May 10 00:50:36.693357 systemd[1]: Listening on systemd-coredump.socket.
May 10 00:50:36.693380 systemd[1]: Listening on systemd-initctl.socket.
May 10 00:50:36.693403 systemd[1]: Listening on systemd-networkd.socket.
May 10 00:50:36.693426 systemd[1]: Listening on systemd-udevd-control.socket.
May 10 00:50:36.693451 systemd[1]: Listening on systemd-udevd-kernel.socket.
May 10 00:50:36.693619 systemd[1]: Listening on systemd-userdbd.socket.
May 10 00:50:36.693643 systemd[1]: Mounting dev-hugepages.mount...
May 10 00:50:36.693673 systemd[1]: Mounting dev-mqueue.mount...
May 10 00:50:36.693695 systemd[1]: Mounting media.mount...
May 10 00:50:36.693717 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 10 00:50:36.693738 systemd[1]: Mounting sys-kernel-debug.mount...
May 10 00:50:36.693760 systemd[1]: Mounting sys-kernel-tracing.mount...
May 10 00:50:36.693784 systemd[1]: Mounting tmp.mount...
May 10 00:50:36.693808 systemd[1]: Starting flatcar-tmpfiles.service...
May 10 00:50:36.693832 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
May 10 00:50:36.693856 systemd[1]: Starting kmod-static-nodes.service...
May 10 00:50:36.693881 systemd[1]: Starting modprobe@configfs.service...
May 10 00:50:36.694051 systemd[1]: Starting modprobe@dm_mod.service...
May 10 00:50:36.694092 systemd[1]: Starting modprobe@drm.service...
May 10 00:50:36.694109 systemd[1]: Starting modprobe@efi_pstore.service...
May 10 00:50:36.694124 systemd[1]: Starting modprobe@fuse.service...
May 10 00:50:36.694141 systemd[1]: Starting modprobe@loop.service...
May 10 00:50:36.694156 kernel: fuse: init (API version 7.34)
May 10 00:50:36.694173 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 10 00:50:36.694190 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 10 00:50:36.694216 kernel: loop: module loaded
May 10 00:50:36.694231 systemd[1]: Stopped systemd-fsck-root.service.
May 10 00:50:36.694246 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 10 00:50:36.694260 systemd[1]: Stopped systemd-fsck-usr.service.
May 10 00:50:36.694275 systemd[1]: Stopped systemd-journald.service.
May 10 00:50:36.694290 systemd[1]: Starting systemd-journald.service...
May 10 00:50:36.694305 systemd[1]: Starting systemd-modules-load.service...
May 10 00:50:36.694321 systemd[1]: Starting systemd-network-generator.service...
May 10 00:50:36.694344 systemd-journald[999]: Journal started
May 10 00:50:36.694426 systemd-journald[999]: Runtime Journal (/run/log/journal/8b5d6874158a57611deafa7941266321) is 8.0M, max 148.8M, 140.8M free.
May 10 00:50:31.889000 audit: BPF prog-id=9 op=UNLOAD
May 10 00:50:32.221000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
May 10 00:50:32.382000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
May 10 00:50:32.382000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
May 10 00:50:32.382000 audit: BPF prog-id=10 op=LOAD
May 10 00:50:32.382000 audit: BPF prog-id=10 op=UNLOAD
May 10 00:50:32.382000 audit: BPF prog-id=11 op=LOAD
May 10 00:50:32.382000 audit: BPF prog-id=11 op=UNLOAD
May 10 00:50:32.553000 audit[908]: AVC avc: denied { associate } for pid=908 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
May 10 00:50:32.553000 audit[908]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c00014d8a2 a1=c0000cede0 a2=c0000d70c0 a3=32 items=0 ppid=891 pid=908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
May 10 00:50:32.553000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
May 10 00:50:32.564000 audit[908]: AVC avc: denied { associate } for pid=908 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
May 10 00:50:32.564000 audit[908]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c00014d979 a2=1ed a3=0 items=2 ppid=891 pid=908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
May 10 00:50:32.564000 audit: CWD cwd="/"
May 10 00:50:32.564000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:32.564000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:32.564000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
May 10 00:50:35.754000 audit: BPF prog-id=12 op=LOAD
May 10 00:50:35.754000 audit: BPF prog-id=3 op=UNLOAD
May 10 00:50:35.766000 audit: BPF prog-id=13 op=LOAD
May 10 00:50:35.773000 audit: BPF prog-id=14 op=LOAD
May 10 00:50:35.773000 audit: BPF prog-id=4 op=UNLOAD
May 10 00:50:35.773000 audit: BPF prog-id=5 op=UNLOAD
May 10 00:50:35.775000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:35.831000 audit: BPF prog-id=12 op=UNLOAD
May 10 00:50:35.854000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:35.854000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:36.608000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:36.630000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:36.645000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:36.645000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:36.646000 audit: BPF prog-id=15 op=LOAD
May 10 00:50:36.646000 audit: BPF prog-id=16 op=LOAD
May 10 00:50:36.646000 audit: BPF prog-id=17 op=LOAD
May 10 00:50:36.646000 audit: BPF prog-id=13 op=UNLOAD
May 10 00:50:36.646000 audit: BPF prog-id=14 op=UNLOAD
May 10 00:50:36.682000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
May 10 00:50:36.682000 audit[999]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7ffc4b4d3830 a2=4000 a3=7ffc4b4d38cc items=0 ppid=1 pid=999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
May 10 00:50:36.682000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
May 10 00:50:35.753159 systemd[1]: Queued start job for default target multi-user.target.
May 10 00:50:32.549063 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-05-10T00:50:32Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
May 10 00:50:35.753175 systemd[1]: Unnecessary job was removed for dev-sda6.device.
May 10 00:50:32.550393 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-05-10T00:50:32Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
May 10 00:50:35.776655 systemd[1]: systemd-journald.service: Deactivated successfully.
May 10 00:50:32.550432 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-05-10T00:50:32Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
May 10 00:50:32.550489 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-05-10T00:50:32Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
May 10 00:50:32.550510 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-05-10T00:50:32Z" level=debug msg="skipped missing lower profile" missing profile=oem
May 10 00:50:32.550573 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-05-10T00:50:32Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
May 10 00:50:32.550600 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-05-10T00:50:32Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
May 10 00:50:32.550959 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-05-10T00:50:32Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
May 10 00:50:32.551037 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-05-10T00:50:32Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
May 10 00:50:32.551063 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-05-10T00:50:32Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
May 10 00:50:32.554273 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-05-10T00:50:32Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
May 10 00:50:32.554349 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-05-10T00:50:32Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
May 10 00:50:32.554388 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-05-10T00:50:32Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.7: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.7
May 10 00:50:32.554417 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-05-10T00:50:32Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
May 10 00:50:32.554452 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-05-10T00:50:32Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.7: no such file or directory" path=/var/lib/torcx/store/3510.3.7
May 10 00:50:32.554479 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-05-10T00:50:32Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
May 10 00:50:35.084313 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-05-10T00:50:35Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
May 10 00:50:35.084626 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-05-10T00:50:35Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
May 10 00:50:35.084782 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-05-10T00:50:35Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
May 10 00:50:35.085072 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-05-10T00:50:35Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
May 10 00:50:35.085137 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-05-10T00:50:35Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
May 10 00:50:35.085210 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-05-10T00:50:35Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
May 10 00:50:36.707963 systemd[1]: Starting systemd-remount-fs.service...
May 10 00:50:36.723955 systemd[1]: Starting systemd-udev-trigger.service...
May 10 00:50:36.736941 systemd[1]: verity-setup.service: Deactivated successfully.
May 10 00:50:36.737037 systemd[1]: Stopped verity-setup.service.
May 10 00:50:36.747000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:36.762101 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 10 00:50:36.770946 systemd[1]: Started systemd-journald.service.
May 10 00:50:36.777000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:36.780403 systemd[1]: Mounted dev-hugepages.mount.
May 10 00:50:36.787267 systemd[1]: Mounted dev-mqueue.mount.
May 10 00:50:36.794302 systemd[1]: Mounted media.mount.
May 10 00:50:36.801275 systemd[1]: Mounted sys-kernel-debug.mount.
May 10 00:50:36.810338 systemd[1]: Mounted sys-kernel-tracing.mount.
May 10 00:50:36.819324 systemd[1]: Mounted tmp.mount.
May 10 00:50:36.826437 systemd[1]: Finished flatcar-tmpfiles.service.
May 10 00:50:36.833000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:36.835611 systemd[1]: Finished kmod-static-nodes.service.
May 10 00:50:36.842000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:36.844542 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 10 00:50:36.844786 systemd[1]: Finished modprobe@configfs.service.
May 10 00:50:36.852000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:36.852000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:36.854541 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 10 00:50:36.854881 systemd[1]: Finished modprobe@dm_mod.service.
May 10 00:50:36.862000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:36.862000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:36.864621 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 10 00:50:36.864850 systemd[1]: Finished modprobe@drm.service.
May 10 00:50:36.871000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:36.871000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:36.873602 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 10 00:50:36.873829 systemd[1]: Finished modprobe@efi_pstore.service.
May 10 00:50:36.880000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:36.880000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:36.882583 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 10 00:50:36.882813 systemd[1]: Finished modprobe@fuse.service.
May 10 00:50:36.889000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:36.889000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:36.891531 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 10 00:50:36.891767 systemd[1]: Finished modprobe@loop.service.
May 10 00:50:36.898000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:36.898000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:36.900607 systemd[1]: Finished systemd-modules-load.service.
May 10 00:50:36.908000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:36.910613 systemd[1]: Finished systemd-network-generator.service.
May 10 00:50:36.917000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:36.919545 systemd[1]: Finished systemd-remount-fs.service.
May 10 00:50:36.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:36.928562 systemd[1]: Finished systemd-udev-trigger.service.
May 10 00:50:36.935000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:36.937869 systemd[1]: Reached target network-pre.target.
May 10 00:50:36.947686 systemd[1]: Mounting sys-fs-fuse-connections.mount...
May 10 00:50:36.957663 systemd[1]: Mounting sys-kernel-config.mount...
May 10 00:50:36.965114 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 10 00:50:36.968116 systemd[1]: Starting systemd-hwdb-update.service...
May 10 00:50:36.976839 systemd[1]: Starting systemd-journal-flush.service...
May 10 00:50:36.986615 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 10 00:50:36.987961 systemd-journald[999]: Time spent on flushing to /var/log/journal/8b5d6874158a57611deafa7941266321 is 69.770ms for 1155 entries.
May 10 00:50:36.987961 systemd-journald[999]: System Journal (/var/log/journal/8b5d6874158a57611deafa7941266321) is 8.0M, max 584.8M, 576.8M free.
May 10 00:50:37.086280 systemd-journald[999]: Received client request to flush runtime journal.
May 10 00:50:37.058000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:37.068000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:36.988469 systemd[1]: Starting systemd-random-seed.service...
May 10 00:50:37.003149 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
May 10 00:50:37.005035 systemd[1]: Starting systemd-sysctl.service...
May 10 00:50:37.014039 systemd[1]: Starting systemd-sysusers.service...
May 10 00:50:37.022810 systemd[1]: Starting systemd-udev-settle.service...
May 10 00:50:37.088486 udevadm[1013]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
May 10 00:50:37.033456 systemd[1]: Mounted sys-fs-fuse-connections.mount.
May 10 00:50:37.042260 systemd[1]: Mounted sys-kernel-config.mount.
May 10 00:50:37.051443 systemd[1]: Finished systemd-random-seed.service.
May 10 00:50:37.060572 systemd[1]: Finished systemd-sysctl.service.
May 10 00:50:37.074074 systemd[1]: Reached target first-boot-complete.target.
May 10 00:50:37.088014 systemd[1]: Finished systemd-journal-flush.service.
May 10 00:50:37.094000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:37.096656 systemd[1]: Finished systemd-sysusers.service.
May 10 00:50:37.103000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:37.710498 systemd[1]: Finished systemd-hwdb-update.service.
May 10 00:50:37.717000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:37.718000 audit: BPF prog-id=18 op=LOAD
May 10 00:50:37.718000 audit: BPF prog-id=19 op=LOAD
May 10 00:50:37.718000 audit: BPF prog-id=7 op=UNLOAD
May 10 00:50:37.718000 audit: BPF prog-id=8 op=UNLOAD
May 10 00:50:37.721073 systemd[1]: Starting systemd-udevd.service...
May 10 00:50:37.744520 systemd-udevd[1016]: Using default interface naming scheme 'v252'.
May 10 00:50:37.799236 systemd[1]: Started systemd-udevd.service.
May 10 00:50:37.806000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:37.809000 audit: BPF prog-id=20 op=LOAD
May 10 00:50:37.812667 systemd[1]: Starting systemd-networkd.service...
May 10 00:50:37.826000 audit: BPF prog-id=21 op=LOAD
May 10 00:50:37.827000 audit: BPF prog-id=22 op=LOAD
May 10 00:50:37.827000 audit: BPF prog-id=23 op=LOAD
May 10 00:50:37.830219 systemd[1]: Starting systemd-userdbd.service...
May 10 00:50:37.882480 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped.
May 10 00:50:37.899153 systemd[1]: Started systemd-userdbd.service.
May 10 00:50:37.906000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:37.982975 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
May 10 00:50:38.028867 systemd-networkd[1029]: lo: Link UP
May 10 00:50:38.028881 systemd-networkd[1029]: lo: Gained carrier
May 10 00:50:38.030305 systemd-networkd[1029]: Enumeration completed
May 10 00:50:38.030474 systemd[1]: Started systemd-networkd.service.
May 10 00:50:38.031490 systemd-networkd[1029]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 10 00:50:38.033803 systemd-networkd[1029]: eth0: Link UP
May 10 00:50:38.033989 systemd-networkd[1029]: eth0: Gained carrier
May 10 00:50:38.040939 kernel: ACPI: button: Power Button [PWRF]
May 10 00:50:38.041000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:38.045318 systemd-networkd[1029]: eth0: Overlong DHCP hostname received, shortened from 'ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388.c.flatcar-212911.internal' to 'ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388'
May 10 00:50:38.045350 systemd-networkd[1029]: eth0: DHCPv4 address 10.128.0.57/32, gateway 10.128.0.1 acquired from 169.254.169.254
May 10 00:50:38.104945 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3
May 10 00:50:38.070000 audit[1018]: AVC avc: denied { confidentiality } for pid=1018 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
May 10 00:50:38.070000 audit[1018]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=56057f5ac710 a1=338ac a2=7fd0820fdbc5 a3=5 items=110 ppid=1016 pid=1018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
May 10 00:50:38.070000 audit: CWD cwd="/"
May 10 00:50:38.070000 audit: PATH item=0 name=(null) inode=40 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=1 name=(null) inode=14545 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=2 name=(null) inode=14545 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=3 name=(null) inode=14546 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=4 name=(null) inode=14545 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=5 name=(null) inode=14547 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=6 name=(null) inode=14545 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=7 name=(null) inode=14548 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=8 name=(null) inode=14548 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=9 name=(null) inode=14549 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=10 name=(null) inode=14548 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=11 name=(null) inode=14550 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=12 name=(null) inode=14548 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=13 name=(null) inode=14551 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=14 name=(null) inode=14548 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=15 name=(null) inode=14552 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=16 name=(null) inode=14548 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=17 name=(null) inode=14553 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=18 name=(null) inode=14545 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=19 name=(null) inode=14554 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=20 name=(null) inode=14554 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=21 name=(null) inode=14555 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=22 name=(null) inode=14554 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=23 name=(null) inode=14556 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=24 name=(null) inode=14554 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=25 name=(null) inode=14557 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.121943 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
May 10 00:50:38.070000 audit: PATH item=26 name=(null) inode=14554 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=27 name=(null) inode=14558 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=28 name=(null) inode=14554 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=29 name=(null) inode=14559 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=30 name=(null) inode=14545 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=31 name=(null) inode=14560 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=32 name=(null) inode=14560 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=33 name=(null) inode=14561 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=34 name=(null) inode=14560 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=35 name=(null) inode=14562 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=36 name=(null) inode=14560 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=37 name=(null) inode=14563 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=38 name=(null) inode=14560 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=39 name=(null) inode=14564 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=40 name=(null) inode=14560 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=41 name=(null) inode=14565 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=42 name=(null) inode=14545 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=43 name=(null) inode=14566 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=44 name=(null) inode=14566 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=45 name=(null) inode=14567 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=46 name=(null) inode=14566 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=47 name=(null) inode=14568 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=48 name=(null) inode=14566 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=49 name=(null) inode=14569 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=50 name=(null) inode=14566 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=51 name=(null) inode=14570 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=52 name=(null) inode=14566 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=53 name=(null) inode=14571 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=54 name=(null) inode=40 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=55 name=(null) inode=14572 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=56 name=(null) inode=14572 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=57 name=(null) inode=14573 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=58 name=(null) inode=14572 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=59 name=(null) inode=14574 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=60 name=(null) inode=14572 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=61 name=(null) inode=14575 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=62 name=(null) inode=14575 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=63 name=(null) inode=14576 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=64 name=(null) inode=14575 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=65 name=(null) inode=14577 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=66 name=(null) inode=14575 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=67 name=(null) inode=14578 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=68 name=(null) inode=14575 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=69 name=(null) inode=14579 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=70 name=(null) inode=14575 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=71 name=(null) inode=14580 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=72 name=(null) inode=14572 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=73 name=(null) inode=14581 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=74 name=(null) inode=14581 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=75 name=(null) inode=14582 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=76 name=(null) inode=14581 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=77 name=(null) inode=14583 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=78 name=(null) inode=14581 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=79 name=(null) inode=14584 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=80 name=(null) inode=14581 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=81 name=(null) inode=14585 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=82 name=(null) inode=14581 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=83 name=(null) inode=14586 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=84 name=(null) inode=14572 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=85 name=(null) inode=14587 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=86 name=(null) inode=14587 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=87 name=(null) inode=14588 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=88 name=(null) inode=14587 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=89 name=(null) inode=14589 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=90 name=(null) inode=14587 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=91 name=(null) inode=14590 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=92 name=(null) inode=14587 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=93 name=(null) inode=14591 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=94 name=(null) inode=14587 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=95 name=(null) inode=14592 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=96 name=(null) inode=14572 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=97 name=(null) inode=14593 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=98 name=(null) inode=14593 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=99 name=(null) inode=14594 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=100 name=(null) inode=14593 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=101 name=(null) inode=14595 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=102 name=(null) inode=14593 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=103 name=(null) inode=14596 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=104 name=(null) inode=14593 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=105 name=(null) inode=14597 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=106 name=(null) inode=14593 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=107 name=(null) inode=14598 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PATH item=109 name=(null) inode=14599 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:50:38.070000 audit: PROCTITLE proctitle="(udev-worker)"
May 10 00:50:38.141021 kernel: EDAC MC: Ver: 3.0.0
May 10 00:50:38.174064 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
May 10 00:50:38.184934 kernel: ACPI: button: Sleep Button [SLPF]
May 10 00:50:38.204950 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
May 10 00:50:38.215973 kernel: mousedev: PS/2 mouse device common for all mice
May 10 00:50:38.235478 systemd[1]: Finished systemd-udev-settle.service.
May 10 00:50:38.243000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:38.246817 systemd[1]: Starting lvm2-activation-early.service...
May 10 00:50:38.277241 lvm[1053]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 10 00:50:38.310341 systemd[1]: Finished lvm2-activation-early.service.
May 10 00:50:38.317000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:38.319288 systemd[1]: Reached target cryptsetup.target.
May 10 00:50:38.329695 systemd[1]: Starting lvm2-activation.service...
May 10 00:50:38.336081 lvm[1054]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 10 00:50:38.359343 systemd[1]: Finished lvm2-activation.service.
May 10 00:50:38.366000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:38.368369 systemd[1]: Reached target local-fs-pre.target.
May 10 00:50:38.377103 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 10 00:50:38.377185 systemd[1]: Reached target local-fs.target.
May 10 00:50:38.386099 systemd[1]: Reached target machines.target.
May 10 00:50:38.395695 systemd[1]: Starting ldconfig.service...
May 10 00:50:38.404354 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
May 10 00:50:38.404452 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 10 00:50:38.406321 systemd[1]: Starting systemd-boot-update.service...
May 10 00:50:38.415770 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
May 10 00:50:38.425215 systemd[1]: Starting systemd-machine-id-commit.service...
May 10 00:50:38.435802 systemd[1]: Starting systemd-sysext.service...
May 10 00:50:38.443673 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1056 (bootctl)
May 10 00:50:38.446572 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
May 10 00:50:38.461942 systemd[1]: Unmounting usr-share-oem.mount...
May 10 00:50:38.472070 systemd[1]: usr-share-oem.mount: Deactivated successfully.
May 10 00:50:38.472373 systemd[1]: Unmounted usr-share-oem.mount.
May 10 00:50:38.495963 kernel: loop0: detected capacity change from 0 to 210664
May 10 00:50:38.498404 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
May 10 00:50:38.496000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:38.647580 systemd-fsck[1067]: fsck.fat 4.2 (2021-01-31)
May 10 00:50:38.647580 systemd-fsck[1067]: /dev/sda1: 790 files, 120688/258078 clusters
May 10 00:50:38.652003 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
May 10 00:50:38.660000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:38.664009 systemd[1]: Mounting boot.mount...
May 10 00:50:38.688751 systemd[1]: Mounted boot.mount.
May 10 00:50:38.714370 systemd[1]: Finished systemd-boot-update.service.
May 10 00:50:38.721000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:38.918930 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 10 00:50:38.976995 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 10 00:50:38.977990 systemd[1]: Finished systemd-machine-id-commit.service.
May 10 00:50:38.990140 kernel: loop1: detected capacity change from 0 to 210664
May 10 00:50:38.991000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:39.074438 ldconfig[1055]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 10 00:50:39.079771 systemd[1]: Finished ldconfig.service.
May 10 00:50:39.086000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:39.105875 (sd-sysext)[1073]: Using extensions 'kubernetes'.
May 10 00:50:39.106586 (sd-sysext)[1073]: Merged extensions into '/usr'.
May 10 00:50:39.128426 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 10 00:50:39.130564 systemd[1]: Mounting usr-share-oem.mount...
May 10 00:50:39.138274 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
May 10 00:50:39.140980 systemd[1]: Starting modprobe@dm_mod.service...
May 10 00:50:39.150037 systemd[1]: Starting modprobe@efi_pstore.service...
May 10 00:50:39.159873 systemd[1]: Starting modprobe@loop.service...
May 10 00:50:39.167153 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
May 10 00:50:39.167431 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 10 00:50:39.167662 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 10 00:50:39.172070 systemd[1]: Mounted usr-share-oem.mount.
May 10 00:50:39.179679 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 10 00:50:39.179921 systemd[1]: Finished modprobe@dm_mod.service.
May 10 00:50:39.186000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:39.186000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:39.188835 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 10 00:50:39.189094 systemd[1]: Finished modprobe@efi_pstore.service.
May 10 00:50:39.196000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:39.196000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:39.198886 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 10 00:50:39.199174 systemd[1]: Finished modprobe@loop.service.
May 10 00:50:39.206000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:39.206000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:50:39.208919 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 10 00:50:39.209168 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
May 10 00:50:39.210667 systemd[1]: Finished systemd-sysext.service.
May 10 00:50:39.217000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:50:39.220834 systemd[1]: Starting ensure-sysext.service... May 10 00:50:39.226097 systemd-networkd[1029]: eth0: Gained IPv6LL May 10 00:50:39.229510 systemd[1]: Starting systemd-tmpfiles-setup.service... May 10 00:50:39.241724 systemd[1]: Reloading. May 10 00:50:39.261662 systemd-tmpfiles[1080]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. May 10 00:50:39.272753 systemd-tmpfiles[1080]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 10 00:50:39.289512 systemd-tmpfiles[1080]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 10 00:50:39.349705 /usr/lib/systemd/system-generators/torcx-generator[1099]: time="2025-05-10T00:50:39Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 10 00:50:39.349757 /usr/lib/systemd/system-generators/torcx-generator[1099]: time="2025-05-10T00:50:39Z" level=info msg="torcx already run" May 10 00:50:39.511583 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 10 00:50:39.511942 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
May 10 00:50:39.553593 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 10 00:50:39.639000 audit: BPF prog-id=24 op=LOAD May 10 00:50:39.639000 audit: BPF prog-id=25 op=LOAD May 10 00:50:39.639000 audit: BPF prog-id=18 op=UNLOAD May 10 00:50:39.639000 audit: BPF prog-id=19 op=UNLOAD May 10 00:50:39.641000 audit: BPF prog-id=26 op=LOAD May 10 00:50:39.641000 audit: BPF prog-id=15 op=UNLOAD May 10 00:50:39.641000 audit: BPF prog-id=27 op=LOAD May 10 00:50:39.641000 audit: BPF prog-id=28 op=LOAD May 10 00:50:39.641000 audit: BPF prog-id=16 op=UNLOAD May 10 00:50:39.641000 audit: BPF prog-id=17 op=UNLOAD May 10 00:50:39.644000 audit: BPF prog-id=29 op=LOAD May 10 00:50:39.644000 audit: BPF prog-id=21 op=UNLOAD May 10 00:50:39.644000 audit: BPF prog-id=30 op=LOAD May 10 00:50:39.644000 audit: BPF prog-id=31 op=LOAD May 10 00:50:39.644000 audit: BPF prog-id=22 op=UNLOAD May 10 00:50:39.644000 audit: BPF prog-id=23 op=UNLOAD May 10 00:50:39.646000 audit: BPF prog-id=32 op=LOAD May 10 00:50:39.646000 audit: BPF prog-id=20 op=UNLOAD May 10 00:50:39.655585 systemd[1]: Finished systemd-tmpfiles-setup.service. May 10 00:50:39.662000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:50:39.670846 systemd[1]: Starting audit-rules.service... May 10 00:50:39.680235 systemd[1]: Starting clean-ca-certificates.service... May 10 00:50:39.691386 systemd[1]: Starting oem-gce-enable-oslogin.service... May 10 00:50:39.701954 systemd[1]: Starting systemd-journal-catalog-update.service... May 10 00:50:39.709000 audit: BPF prog-id=33 op=LOAD May 10 00:50:39.713155 systemd[1]: Starting systemd-resolved.service... 
May 10 00:50:39.722878 systemd[1]: Starting systemd-timesyncd.service... May 10 00:50:39.719000 audit: BPF prog-id=34 op=LOAD May 10 00:50:39.732523 systemd[1]: Starting systemd-update-utmp.service... May 10 00:50:39.742473 systemd[1]: Finished clean-ca-certificates.service. May 10 00:50:39.741000 audit[1170]: SYSTEM_BOOT pid=1170 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' May 10 00:50:39.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:50:39.751716 systemd[1]: oem-gce-enable-oslogin.service: Deactivated successfully. May 10 00:50:39.751990 systemd[1]: Finished oem-gce-enable-oslogin.service. May 10 00:50:39.757000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 May 10 00:50:39.757000 audit[1174]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffef82f0a90 a2=420 a3=0 items=0 ppid=1144 pid=1174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 00:50:39.757000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 May 10 00:50:39.759762 augenrules[1174]: No rules May 10 00:50:39.761697 systemd[1]: Finished audit-rules.service. May 10 00:50:39.769711 systemd[1]: Finished systemd-journal-catalog-update.service. May 10 00:50:39.784670 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). 
May 10 00:50:39.785210 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 10 00:50:39.790073 systemd[1]: Starting modprobe@dm_mod.service... May 10 00:50:39.799395 systemd[1]: Starting modprobe@efi_pstore.service... May 10 00:50:39.809207 systemd[1]: Starting modprobe@loop.service... May 10 00:50:39.818277 systemd[1]: Starting oem-gce-enable-oslogin.service... May 10 00:50:39.825506 enable-oslogin[1182]: /etc/pam.d/sshd already exists. Not enabling OS Login May 10 00:50:39.827134 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 10 00:50:39.827499 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 10 00:50:39.830222 systemd[1]: Starting systemd-update-done.service... May 10 00:50:39.837064 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 10 00:50:39.837414 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 10 00:50:39.841324 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 10 00:50:39.841569 systemd[1]: Finished modprobe@dm_mod.service. May 10 00:50:39.850988 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 10 00:50:39.851217 systemd[1]: Finished modprobe@efi_pstore.service. May 10 00:50:39.861051 systemd[1]: modprobe@loop.service: Deactivated successfully. May 10 00:50:39.861277 systemd[1]: Finished modprobe@loop.service. May 10 00:50:39.871003 systemd[1]: oem-gce-enable-oslogin.service: Deactivated successfully. May 10 00:50:39.871303 systemd[1]: Finished oem-gce-enable-oslogin.service. May 10 00:50:39.881136 systemd[1]: Finished systemd-update-done.service. 
May 10 00:50:39.891085 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 10 00:50:39.891377 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 10 00:50:39.896524 systemd[1]: Finished systemd-update-utmp.service. May 10 00:50:39.907043 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 10 00:50:39.907504 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 10 00:50:39.909874 systemd[1]: Starting modprobe@dm_mod.service... May 10 00:50:39.912136 systemd-timesyncd[1165]: Contacted time server 169.254.169.254:123 (169.254.169.254). May 10 00:50:39.913543 systemd-timesyncd[1165]: Initial clock synchronization to Sat 2025-05-10 00:50:39.581724 UTC. May 10 00:50:39.918719 systemd[1]: Starting modprobe@efi_pstore.service... May 10 00:50:39.920813 systemd-resolved[1161]: Positive Trust Anchors: May 10 00:50:39.920833 systemd-resolved[1161]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 10 00:50:39.920886 systemd-resolved[1161]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 10 00:50:39.928000 systemd[1]: Starting modprobe@loop.service... May 10 00:50:39.937002 systemd[1]: Starting oem-gce-enable-oslogin.service... May 10 00:50:39.942259 enable-oslogin[1188]: /etc/pam.d/sshd already exists. 
Not enabling OS Login May 10 00:50:39.945140 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 10 00:50:39.945346 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 10 00:50:39.945484 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 10 00:50:39.945572 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 10 00:50:39.946629 systemd[1]: Started systemd-timesyncd.service. May 10 00:50:39.956425 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 10 00:50:39.956647 systemd[1]: Finished modprobe@dm_mod.service. May 10 00:50:39.963666 systemd-resolved[1161]: Defaulting to hostname 'linux'. May 10 00:50:39.965602 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 10 00:50:39.965766 systemd[1]: Finished modprobe@efi_pstore.service. May 10 00:50:39.974525 systemd[1]: Started systemd-resolved.service. May 10 00:50:39.983676 systemd[1]: modprobe@loop.service: Deactivated successfully. May 10 00:50:39.983924 systemd[1]: Finished modprobe@loop.service. May 10 00:50:39.992657 systemd[1]: oem-gce-enable-oslogin.service: Deactivated successfully. May 10 00:50:39.992898 systemd[1]: Finished oem-gce-enable-oslogin.service. May 10 00:50:40.005280 systemd[1]: Reached target network.target. May 10 00:50:40.013240 systemd[1]: Reached target nss-lookup.target. May 10 00:50:40.021248 systemd[1]: Reached target time-set.target. May 10 00:50:40.029207 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 10 00:50:40.029651 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. 
May 10 00:50:40.031662 systemd[1]: Starting modprobe@dm_mod.service... May 10 00:50:40.041072 systemd[1]: Starting modprobe@drm.service... May 10 00:50:40.050133 systemd[1]: Starting modprobe@efi_pstore.service... May 10 00:50:40.059082 systemd[1]: Starting modprobe@loop.service... May 10 00:50:40.068311 systemd[1]: Starting oem-gce-enable-oslogin.service... May 10 00:50:40.072325 enable-oslogin[1193]: /etc/pam.d/sshd already exists. Not enabling OS Login May 10 00:50:40.077205 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 10 00:50:40.077548 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 10 00:50:40.079546 systemd[1]: Starting systemd-networkd-wait-online.service... May 10 00:50:40.088091 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 10 00:50:40.088331 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 10 00:50:40.090825 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 10 00:50:40.091097 systemd[1]: Finished modprobe@dm_mod.service. May 10 00:50:40.099665 systemd[1]: modprobe@drm.service: Deactivated successfully. May 10 00:50:40.099930 systemd[1]: Finished modprobe@drm.service. May 10 00:50:40.108634 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 10 00:50:40.108850 systemd[1]: Finished modprobe@efi_pstore.service. May 10 00:50:40.117642 systemd[1]: modprobe@loop.service: Deactivated successfully. May 10 00:50:40.117846 systemd[1]: Finished modprobe@loop.service. May 10 00:50:40.126834 systemd[1]: oem-gce-enable-oslogin.service: Deactivated successfully. May 10 00:50:40.127168 systemd[1]: Finished oem-gce-enable-oslogin.service. 
May 10 00:50:40.135692 systemd[1]: Finished systemd-networkd-wait-online.service. May 10 00:50:40.146105 systemd[1]: Reached target network-online.target. May 10 00:50:40.154149 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 10 00:50:40.154209 systemd[1]: Reached target sysinit.target. May 10 00:50:40.162228 systemd[1]: Started motdgen.path. May 10 00:50:40.169168 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. May 10 00:50:40.179314 systemd[1]: Started logrotate.timer. May 10 00:50:40.186287 systemd[1]: Started mdadm.timer. May 10 00:50:40.193113 systemd[1]: Started systemd-tmpfiles-clean.timer. May 10 00:50:40.201070 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 10 00:50:40.201129 systemd[1]: Reached target paths.target. May 10 00:50:40.208090 systemd[1]: Reached target timers.target. May 10 00:50:40.215520 systemd[1]: Listening on dbus.socket. May 10 00:50:40.223443 systemd[1]: Starting docker.socket... May 10 00:50:40.234498 systemd[1]: Listening on sshd.socket. May 10 00:50:40.241244 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 10 00:50:40.241352 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 10 00:50:40.242432 systemd[1]: Finished ensure-sysext.service. May 10 00:50:40.251325 systemd[1]: Listening on docker.socket. May 10 00:50:40.259266 systemd[1]: Reached target sockets.target. May 10 00:50:40.267092 systemd[1]: Reached target basic.target. May 10 00:50:40.274127 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. 
May 10 00:50:40.274177 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. May 10 00:50:40.275849 systemd[1]: Starting containerd.service... May 10 00:50:40.284596 systemd[1]: Starting coreos-metadata-sshkeys@core.service... May 10 00:50:40.294912 systemd[1]: Starting dbus.service... May 10 00:50:40.304159 systemd[1]: Starting enable-oem-cloudinit.service... May 10 00:50:40.317887 systemd[1]: Starting extend-filesystems.service... May 10 00:50:40.319541 jq[1200]: false May 10 00:50:40.327535 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). May 10 00:50:40.330478 systemd[1]: Starting kubelet.service... May 10 00:50:40.340645 systemd[1]: Starting motdgen.service... May 10 00:50:40.347341 systemd[1]: Starting oem-gce.service... May 10 00:50:40.354724 systemd[1]: Starting prepare-helm.service... May 10 00:50:40.364479 systemd[1]: Starting ssh-key-proc-cmdline.service... May 10 00:50:40.373413 systemd[1]: Starting sshd-keygen.service... May 10 00:50:40.384304 systemd[1]: Starting systemd-logind.service... May 10 00:50:40.391076 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 10 00:50:40.391212 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionSecurity=!tpm2). May 10 00:50:40.392029 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 10 00:50:40.393371 systemd[1]: Starting update-engine.service... 
May 10 00:50:40.398260 extend-filesystems[1201]: Found loop1 May 10 00:50:40.407174 extend-filesystems[1201]: Found sda May 10 00:50:40.407174 extend-filesystems[1201]: Found sda1 May 10 00:50:40.407174 extend-filesystems[1201]: Found sda2 May 10 00:50:40.407174 extend-filesystems[1201]: Found sda3 May 10 00:50:40.407174 extend-filesystems[1201]: Found usr May 10 00:50:40.407174 extend-filesystems[1201]: Found sda4 May 10 00:50:40.407174 extend-filesystems[1201]: Found sda6 May 10 00:50:40.407174 extend-filesystems[1201]: Found sda7 May 10 00:50:40.407174 extend-filesystems[1201]: Found sda9 May 10 00:50:40.407174 extend-filesystems[1201]: Checking size of /dev/sda9 May 10 00:50:40.402606 systemd[1]: Starting update-ssh-keys-after-ignition.service... May 10 00:50:40.498284 extend-filesystems[1201]: Resized partition /dev/sda9 May 10 00:50:40.507778 jq[1224]: true May 10 00:50:40.419869 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 10 00:50:40.509070 extend-filesystems[1238]: resize2fs 1.46.5 (30-Dec-2021) May 10 00:50:40.420305 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. May 10 00:50:40.508389 dbus-daemon[1199]: [system] SELinux support is enabled May 10 00:50:40.422597 systemd[1]: motdgen.service: Deactivated successfully. May 10 00:50:40.423830 systemd[1]: Finished motdgen.service. May 10 00:50:40.441102 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 10 00:50:40.441407 systemd[1]: Finished ssh-key-proc-cmdline.service. May 10 00:50:40.523373 jq[1235]: true May 10 00:50:40.508618 systemd[1]: Started dbus.service. 
May 10 00:50:40.523759 mkfs.ext4[1237]: mke2fs 1.46.5 (30-Dec-2021) May 10 00:50:40.523759 mkfs.ext4[1237]: Discarding device blocks: done May 10 00:50:40.523759 mkfs.ext4[1237]: Creating filesystem with 262144 4k blocks and 65536 inodes May 10 00:50:40.523759 mkfs.ext4[1237]: Filesystem UUID: 84af8af1-dffc-400f-ab3f-cfc22118a13c May 10 00:50:40.523759 mkfs.ext4[1237]: Superblock backups stored on blocks: May 10 00:50:40.523759 mkfs.ext4[1237]: 32768, 98304, 163840, 229376 May 10 00:50:40.523759 mkfs.ext4[1237]: Allocating group tables: done May 10 00:50:40.523759 mkfs.ext4[1237]: Writing inode tables: done May 10 00:50:40.523759 mkfs.ext4[1237]: Creating journal (8192 blocks): done May 10 00:50:40.523759 mkfs.ext4[1237]: Writing superblocks and filesystem accounting information: done May 10 00:50:40.523656 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 10 00:50:40.523724 systemd[1]: Reached target system-config.target. May 10 00:50:40.527826 dbus-daemon[1199]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1029 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") May 10 00:50:40.536939 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks May 10 00:50:40.537190 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 10 00:50:40.537250 systemd[1]: Reached target user-config.target.
May 10 00:50:40.549507 dbus-daemon[1199]: [system] Successfully activated service 'org.freedesktop.systemd1' May 10 00:50:40.558369 systemd[1]: Starting systemd-hostnamed.service... May 10 00:50:40.583458 tar[1230]: linux-amd64/helm May 10 00:50:40.603930 kernel: EXT4-fs (sda9): resized filesystem to 2538491 May 10 00:50:40.646018 umount[1253]: umount: /var/lib/flatcar-oem-gce.img: not mounted. May 10 00:50:40.639895 systemd[1]: Started update-engine.service. May 10 00:50:40.646376 update_engine[1222]: I0510 00:50:40.632571 1222 main.cc:92] Flatcar Update Engine starting May 10 00:50:40.646376 update_engine[1222]: I0510 00:50:40.639997 1222 update_check_scheduler.cc:74] Next update check in 2m13s May 10 00:50:40.650977 systemd[1]: Started locksmithd.service. May 10 00:50:40.651435 extend-filesystems[1238]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required May 10 00:50:40.651435 extend-filesystems[1238]: old_desc_blocks = 1, new_desc_blocks = 2 May 10 00:50:40.651435 extend-filesystems[1238]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long. May 10 00:50:40.720560 kernel: loop2: detected capacity change from 0 to 2097152 May 10 00:50:40.720633 kernel: EXT4-fs (loop2): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. May 10 00:50:40.659078 systemd[1]: extend-filesystems.service: Deactivated successfully. May 10 00:50:40.720822 bash[1262]: Updated "/home/core/.ssh/authorized_keys" May 10 00:50:40.721004 extend-filesystems[1201]: Resized filesystem in /dev/sda9 May 10 00:50:40.659341 systemd[1]: Finished extend-filesystems.service. May 10 00:50:40.707036 systemd[1]: Finished update-ssh-keys-after-ignition.service. 
May 10 00:50:40.838613 env[1236]: time="2025-05-10T00:50:40.838489505Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 May 10 00:50:40.918501 systemd-logind[1219]: Watching system buttons on /dev/input/event1 (Power Button) May 10 00:50:40.922967 systemd-logind[1219]: Watching system buttons on /dev/input/event3 (Sleep Button) May 10 00:50:40.923220 systemd-logind[1219]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 10 00:50:40.923583 systemd-logind[1219]: New seat seat0. May 10 00:50:40.936338 systemd[1]: Started systemd-logind.service. May 10 00:50:41.084334 env[1236]: time="2025-05-10T00:50:41.084271629Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 10 00:50:41.095850 env[1236]: time="2025-05-10T00:50:41.095743869Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 10 00:50:41.097237 coreos-metadata[1198]: May 10 00:50:41.097 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 May 10 00:50:41.100174 env[1236]: time="2025-05-10T00:50:41.100118230Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.181-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 10 00:50:41.100382 env[1236]: time="2025-05-10T00:50:41.100353641Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 10 00:50:41.100821 env[1236]: time="2025-05-10T00:50:41.100784119Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 10 00:50:41.100998 env[1236]: time="2025-05-10T00:50:41.100971888Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 10 00:50:41.101127 env[1236]: time="2025-05-10T00:50:41.101101251Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" May 10 00:50:41.101973 env[1236]: time="2025-05-10T00:50:41.101942439Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 10 00:50:41.102245 env[1236]: time="2025-05-10T00:50:41.102216485Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 10 00:50:41.103472 env[1236]: time="2025-05-10T00:50:41.103439333Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 10 00:50:41.106016 env[1236]: time="2025-05-10T00:50:41.105976979Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 10 00:50:41.108111 coreos-metadata[1198]: May 10 00:50:41.107 INFO Fetch failed with 404: resource not found May 10 00:50:41.108374 coreos-metadata[1198]: May 10 00:50:41.108 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 May 10 00:50:41.108499 env[1236]: time="2025-05-10T00:50:41.108469827Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 May 10 00:50:41.109065 env[1236]: time="2025-05-10T00:50:41.109033466Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" May 10 00:50:41.109312 env[1236]: time="2025-05-10T00:50:41.109286211Z" level=info msg="metadata content store policy set" policy=shared May 10 00:50:41.109922 coreos-metadata[1198]: May 10 00:50:41.109 INFO Fetch successful May 10 00:50:41.110160 coreos-metadata[1198]: May 10 00:50:41.110 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 May 10 00:50:41.111938 coreos-metadata[1198]: May 10 00:50:41.111 INFO Fetch failed with 404: resource not found May 10 00:50:41.112178 coreos-metadata[1198]: May 10 00:50:41.112 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 May 10 00:50:41.113882 coreos-metadata[1198]: May 10 00:50:41.113 INFO Fetch failed with 404: resource not found May 10 00:50:41.114117 coreos-metadata[1198]: May 10 00:50:41.113 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 May 10 00:50:41.115860 coreos-metadata[1198]: May 10 00:50:41.115 INFO Fetch successful May 10 00:50:41.118072 env[1236]: time="2025-05-10T00:50:41.118010623Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 10 00:50:41.118297 env[1236]: time="2025-05-10T00:50:41.118261226Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 10 00:50:41.118419 env[1236]: time="2025-05-10T00:50:41.118396920Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 10 00:50:41.118609 env[1236]: time="2025-05-10T00:50:41.118577344Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 May 10 00:50:41.118836 env[1236]: time="2025-05-10T00:50:41.118813285Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 10 00:50:41.120013 env[1236]: time="2025-05-10T00:50:41.119968435Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 10 00:50:41.120170 env[1236]: time="2025-05-10T00:50:41.120145204Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 10 00:50:41.120311 env[1236]: time="2025-05-10T00:50:41.120286387Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 10 00:50:41.121209 env[1236]: time="2025-05-10T00:50:41.121162675Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 May 10 00:50:41.121358 env[1236]: time="2025-05-10T00:50:41.121334280Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 10 00:50:41.121506 env[1236]: time="2025-05-10T00:50:41.121483307Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 10 00:50:41.121630 env[1236]: time="2025-05-10T00:50:41.121609264Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 10 00:50:41.122336 env[1236]: time="2025-05-10T00:50:41.122289398Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 10 00:50:41.124327 env[1236]: time="2025-05-10T00:50:41.124296636Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 10 00:50:41.124960 env[1236]: time="2025-05-10T00:50:41.124887724Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 May 10 00:50:41.125933 unknown[1198]: wrote ssh authorized keys file for user: core May 10 00:50:41.127163 env[1236]: time="2025-05-10T00:50:41.127109723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 10 00:50:41.131771 env[1236]: time="2025-05-10T00:50:41.131731067Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 10 00:50:41.141426 env[1236]: time="2025-05-10T00:50:41.141361525Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 10 00:50:41.142592 env[1236]: time="2025-05-10T00:50:41.142539416Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 10 00:50:41.143430 env[1236]: time="2025-05-10T00:50:41.143384965Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 10 00:50:41.143983 env[1236]: time="2025-05-10T00:50:41.143953579Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 10 00:50:41.145334 env[1236]: time="2025-05-10T00:50:41.145301332Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 10 00:50:41.145494 env[1236]: time="2025-05-10T00:50:41.145470015Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 10 00:50:41.145986 env[1236]: time="2025-05-10T00:50:41.145955168Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 10 00:50:41.146123 env[1236]: time="2025-05-10T00:50:41.146100311Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 10 00:50:41.146285 env[1236]: time="2025-05-10T00:50:41.146262407Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 May 10 00:50:41.146663 env[1236]: time="2025-05-10T00:50:41.146635815Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 10 00:50:41.147983 env[1236]: time="2025-05-10T00:50:41.147952889Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 10 00:50:41.149877 env[1236]: time="2025-05-10T00:50:41.149831378Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 10 00:50:41.150051 env[1236]: time="2025-05-10T00:50:41.150024163Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 10 00:50:41.150223 env[1236]: time="2025-05-10T00:50:41.150194164Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 May 10 00:50:41.151013 env[1236]: time="2025-05-10T00:50:41.150983451Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 10 00:50:41.151176 env[1236]: time="2025-05-10T00:50:41.151151460Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" May 10 00:50:41.151343 env[1236]: time="2025-05-10T00:50:41.151311607Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 May 10 00:50:41.151960 env[1236]: time="2025-05-10T00:50:41.151845488Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd 
ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 10 00:50:41.157142 env[1236]: time="2025-05-10T00:50:41.155462666Z" level=info msg="Connect containerd service" May 10 00:50:41.157142 env[1236]: time="2025-05-10T00:50:41.155589386Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 10 00:50:41.157142 env[1236]: time="2025-05-10T00:50:41.156710134Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 10 00:50:41.157142 env[1236]: time="2025-05-10T00:50:41.156848711Z" level=info msg="Start subscribing containerd event" May 10 00:50:41.157142 env[1236]: time="2025-05-10T00:50:41.156926801Z" level=info msg="Start recovering state" May 10 00:50:41.152600 systemd[1]: Started systemd-hostnamed.service. May 10 00:50:41.152332 dbus-daemon[1199]: [system] Successfully activated service 'org.freedesktop.hostname1' May 10 00:50:41.153064 dbus-daemon[1199]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1254 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") May 10 00:50:41.166095 systemd[1]: Starting polkit.service... 
May 10 00:50:41.168025 env[1236]: time="2025-05-10T00:50:41.167868275Z" level=info msg="Start event monitor" May 10 00:50:41.168025 env[1236]: time="2025-05-10T00:50:41.167938066Z" level=info msg="Start snapshots syncer" May 10 00:50:41.168025 env[1236]: time="2025-05-10T00:50:41.167958461Z" level=info msg="Start cni network conf syncer for default" May 10 00:50:41.168025 env[1236]: time="2025-05-10T00:50:41.167971120Z" level=info msg="Start streaming server" May 10 00:50:41.168986 env[1236]: time="2025-05-10T00:50:41.168610182Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 10 00:50:41.168986 env[1236]: time="2025-05-10T00:50:41.168712556Z" level=info msg=serving... address=/run/containerd/containerd.sock May 10 00:50:41.168986 env[1236]: time="2025-05-10T00:50:41.168857026Z" level=info msg="containerd successfully booted in 0.375750s" May 10 00:50:41.173466 systemd[1]: Started containerd.service. May 10 00:50:41.185431 update-ssh-keys[1275]: Updated "/home/core/.ssh/authorized_keys" May 10 00:50:41.186641 systemd[1]: Finished coreos-metadata-sshkeys@core.service. May 10 00:50:41.282420 polkitd[1276]: Started polkitd version 121 May 10 00:50:41.330800 polkitd[1276]: Loading rules from directory /etc/polkit-1/rules.d May 10 00:50:41.331080 polkitd[1276]: Loading rules from directory /usr/share/polkit-1/rules.d May 10 00:50:41.336376 polkitd[1276]: Finished loading, compiling and executing 2 rules May 10 00:50:41.337199 dbus-daemon[1199]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' May 10 00:50:41.337447 systemd[1]: Started polkit.service. May 10 00:50:41.341568 polkitd[1276]: Acquired the name org.freedesktop.PolicyKit1 on the system bus May 10 00:50:41.370470 systemd-hostnamed[1254]: Hostname set to (transient) May 10 00:50:41.373322 systemd-resolved[1161]: System hostname changed to 'ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388'. 
May 10 00:50:42.033390 tar[1230]: linux-amd64/LICENSE May 10 00:50:42.033935 tar[1230]: linux-amd64/README.md May 10 00:50:42.054376 systemd[1]: Finished prepare-helm.service. May 10 00:50:42.805405 systemd[1]: Started kubelet.service. May 10 00:50:43.847639 locksmithd[1265]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 10 00:50:44.286109 kubelet[1306]: E0510 00:50:44.285973 1306 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 10 00:50:44.289235 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 10 00:50:44.289439 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 10 00:50:44.289782 systemd[1]: kubelet.service: Consumed 1.517s CPU time. May 10 00:50:44.501295 sshd_keygen[1225]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 10 00:50:44.545463 systemd[1]: Finished sshd-keygen.service. May 10 00:50:44.555608 systemd[1]: Starting issuegen.service... May 10 00:50:44.567629 systemd[1]: issuegen.service: Deactivated successfully. May 10 00:50:44.567935 systemd[1]: Finished issuegen.service. May 10 00:50:44.577699 systemd[1]: Starting systemd-user-sessions.service... May 10 00:50:44.591312 systemd[1]: Finished systemd-user-sessions.service. May 10 00:50:44.602896 systemd[1]: Started getty@tty1.service. May 10 00:50:44.612590 systemd[1]: Started serial-getty@ttyS0.service. May 10 00:50:44.621473 systemd[1]: Reached target getty.target. May 10 00:50:46.690410 systemd[1]: var-lib-flatcar\x2doem\x2dgce.mount: Deactivated successfully. May 10 00:50:48.217376 systemd[1]: Created slice system-sshd.slice. May 10 00:50:48.228559 systemd[1]: Started sshd@0-10.128.0.57:22-147.75.109.163:55472.service. 
May 10 00:50:48.549479 sshd[1329]: Accepted publickey for core from 147.75.109.163 port 55472 ssh2: RSA SHA256:euqJNJC5P+Wq5j+dl78lhvyXKvYvUXQDTjxbGHC2Bdk May 10 00:50:48.553259 sshd[1329]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:50:48.573752 systemd[1]: Created slice user-500.slice. May 10 00:50:48.585206 systemd[1]: Starting user-runtime-dir@500.service... May 10 00:50:48.598882 systemd-logind[1219]: New session 1 of user core. May 10 00:50:48.610650 systemd[1]: Finished user-runtime-dir@500.service. May 10 00:50:48.621636 systemd[1]: Starting user@500.service... May 10 00:50:48.646784 (systemd)[1332]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 10 00:50:48.706951 kernel: loop2: detected capacity change from 0 to 2097152 May 10 00:50:48.734444 systemd-nspawn[1338]: Spawning container oem-gce on /var/lib/flatcar-oem-gce.img. May 10 00:50:48.734444 systemd-nspawn[1338]: Press ^] three times within 1s to kill container. May 10 00:50:48.749946 kernel: EXT4-fs (loop2): mounted filesystem without journal. Opts: norecovery. Quota mode: none. May 10 00:50:48.775385 systemd[1]: tmp-unifiedF7TFT0.mount: Deactivated successfully. May 10 00:50:48.789359 systemd[1332]: Queued start job for default target default.target. May 10 00:50:48.792797 systemd[1332]: Reached target paths.target. May 10 00:50:48.792849 systemd[1332]: Reached target sockets.target. May 10 00:50:48.792873 systemd[1332]: Reached target timers.target. May 10 00:50:48.792927 systemd[1332]: Reached target basic.target. May 10 00:50:48.793094 systemd[1]: Started user@500.service. May 10 00:50:48.793375 systemd[1332]: Reached target default.target. May 10 00:50:48.793443 systemd[1332]: Startup finished in 133ms. May 10 00:50:48.801670 systemd[1]: Started session-1.scope. May 10 00:50:48.853227 systemd[1]: Started oem-gce.service. May 10 00:50:48.860543 systemd[1]: Reached target multi-user.target. 
May 10 00:50:48.871528 systemd[1]: Starting systemd-update-utmp-runlevel.service... May 10 00:50:48.884533 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. May 10 00:50:48.884811 systemd[1]: Finished systemd-update-utmp-runlevel.service. May 10 00:50:48.894429 systemd[1]: Startup finished in 1.049s (kernel) + 8.268s (initrd) + 16.809s (userspace) = 26.127s. May 10 00:50:48.938613 systemd-nspawn[1338]: + '[' -e /etc/default/instance_configs.cfg.template ']' May 10 00:50:48.938613 systemd-nspawn[1338]: + echo -e '[InstanceSetup]\nset_host_keys = false' May 10 00:50:48.938915 systemd-nspawn[1338]: + /usr/bin/google_instance_setup May 10 00:50:49.030320 systemd[1]: Started sshd@1-10.128.0.57:22-147.75.109.163:55478.service. May 10 00:50:49.321138 sshd[1348]: Accepted publickey for core from 147.75.109.163 port 55478 ssh2: RSA SHA256:euqJNJC5P+Wq5j+dl78lhvyXKvYvUXQDTjxbGHC2Bdk May 10 00:50:49.323864 sshd[1348]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:50:49.331012 systemd-logind[1219]: New session 2 of user core. May 10 00:50:49.331984 systemd[1]: Started session-2.scope. May 10 00:50:49.536215 sshd[1348]: pam_unix(sshd:session): session closed for user core May 10 00:50:49.540712 systemd[1]: sshd@1-10.128.0.57:22-147.75.109.163:55478.service: Deactivated successfully. May 10 00:50:49.542382 systemd[1]: session-2.scope: Deactivated successfully. May 10 00:50:49.542412 systemd-logind[1219]: Session 2 logged out. Waiting for processes to exit. May 10 00:50:49.543972 systemd-logind[1219]: Removed session 2. May 10 00:50:49.582418 systemd[1]: Started sshd@2-10.128.0.57:22-147.75.109.163:55490.service. May 10 00:50:49.669921 instance-setup[1345]: INFO Running google_set_multiqueue. May 10 00:50:49.693254 instance-setup[1345]: INFO Set channels for eth0 to 2. May 10 00:50:49.697186 instance-setup[1345]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1. 
May 10 00:50:49.698660 instance-setup[1345]: INFO /proc/irq/31/smp_affinity_list: real affinity 0 May 10 00:50:49.699260 instance-setup[1345]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1. May 10 00:50:49.700487 instance-setup[1345]: INFO /proc/irq/32/smp_affinity_list: real affinity 0 May 10 00:50:49.701002 instance-setup[1345]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1. May 10 00:50:49.702474 instance-setup[1345]: INFO /proc/irq/33/smp_affinity_list: real affinity 1 May 10 00:50:49.703101 instance-setup[1345]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1. May 10 00:50:49.704751 instance-setup[1345]: INFO /proc/irq/34/smp_affinity_list: real affinity 1 May 10 00:50:49.717052 instance-setup[1345]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus May 10 00:50:49.717691 instance-setup[1345]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus May 10 00:50:49.760254 systemd-nspawn[1338]: + /usr/bin/google_metadata_script_runner --script-type startup May 10 00:50:49.880954 sshd[1357]: Accepted publickey for core from 147.75.109.163 port 55490 ssh2: RSA SHA256:euqJNJC5P+Wq5j+dl78lhvyXKvYvUXQDTjxbGHC2Bdk May 10 00:50:49.882477 sshd[1357]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:50:49.890662 systemd[1]: Started session-3.scope. May 10 00:50:49.891981 systemd-logind[1219]: New session 3 of user core. May 10 00:50:50.090948 sshd[1357]: pam_unix(sshd:session): session closed for user core May 10 00:50:50.096201 systemd-logind[1219]: Session 3 logged out. Waiting for processes to exit. May 10 00:50:50.098381 systemd[1]: sshd@2-10.128.0.57:22-147.75.109.163:55490.service: Deactivated successfully. May 10 00:50:50.099504 systemd[1]: session-3.scope: Deactivated successfully. May 10 00:50:50.102059 systemd-logind[1219]: Removed session 3. May 10 00:50:50.125301 startup-script[1387]: INFO Starting startup scripts. 
May 10 00:50:50.136015 systemd[1]: Started sshd@3-10.128.0.57:22-147.75.109.163:55506.service. May 10 00:50:50.143427 startup-script[1387]: INFO No startup scripts found in metadata. May 10 00:50:50.143618 startup-script[1387]: INFO Finished running startup scripts. May 10 00:50:50.183832 systemd-nspawn[1338]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM May 10 00:50:50.183832 systemd-nspawn[1338]: + daemon_pids=() May 10 00:50:50.184198 systemd-nspawn[1338]: + for d in accounts clock_skew network May 10 00:50:50.184322 systemd-nspawn[1338]: + daemon_pids+=($!) May 10 00:50:50.184322 systemd-nspawn[1338]: + for d in accounts clock_skew network May 10 00:50:50.184707 systemd-nspawn[1338]: + daemon_pids+=($!) May 10 00:50:50.184707 systemd-nspawn[1338]: + for d in accounts clock_skew network May 10 00:50:50.184936 systemd-nspawn[1338]: + /usr/bin/google_accounts_daemon May 10 00:50:50.185137 systemd-nspawn[1338]: + daemon_pids+=($!) May 10 00:50:50.185753 systemd-nspawn[1338]: + /usr/bin/google_network_daemon May 10 00:50:50.185753 systemd-nspawn[1338]: + NOTIFY_SOCKET=/run/systemd/notify May 10 00:50:50.185753 systemd-nspawn[1338]: + /usr/bin/systemd-notify --ready May 10 00:50:50.185753 systemd-nspawn[1338]: + /usr/bin/google_clock_skew_daemon May 10 00:50:50.258541 systemd-nspawn[1338]: + wait -n 36 37 38 May 10 00:50:50.467154 sshd[1394]: Accepted publickey for core from 147.75.109.163 port 55506 ssh2: RSA SHA256:euqJNJC5P+Wq5j+dl78lhvyXKvYvUXQDTjxbGHC2Bdk May 10 00:50:50.469096 sshd[1394]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:50:50.477231 systemd[1]: Started session-4.scope. May 10 00:50:50.478986 systemd-logind[1219]: New session 4 of user core. May 10 00:50:50.687211 sshd[1394]: pam_unix(sshd:session): session closed for user core May 10 00:50:50.691974 systemd[1]: sshd@3-10.128.0.57:22-147.75.109.163:55506.service: Deactivated successfully. 
May 10 00:50:50.693105 systemd[1]: session-4.scope: Deactivated successfully. May 10 00:50:50.695486 systemd-logind[1219]: Session 4 logged out. Waiting for processes to exit. May 10 00:50:50.697341 systemd-logind[1219]: Removed session 4. May 10 00:50:50.731290 systemd[1]: Started sshd@4-10.128.0.57:22-147.75.109.163:55508.service. May 10 00:50:50.911882 google-networking[1398]: INFO Starting Google Networking daemon. May 10 00:50:50.995843 google-clock-skew[1397]: INFO Starting Google Clock Skew daemon. May 10 00:50:51.010362 google-clock-skew[1397]: INFO Clock drift token has changed: 0. May 10 00:50:51.015171 groupadd[1414]: group added to /etc/group: name=google-sudoers, GID=1000 May 10 00:50:51.015692 systemd-nspawn[1338]: hwclock: Cannot access the Hardware Clock via any known method. May 10 00:50:51.015692 systemd-nspawn[1338]: hwclock: Use the --verbose option to see the details of our search for an access method. May 10 00:50:51.016835 google-clock-skew[1397]: WARNING Failed to sync system time with hardware clock. May 10 00:50:51.019504 groupadd[1414]: group added to /etc/gshadow: name=google-sudoers May 10 00:50:51.024960 groupadd[1414]: new group: name=google-sudoers, GID=1000 May 10 00:50:51.036021 sshd[1407]: Accepted publickey for core from 147.75.109.163 port 55508 ssh2: RSA SHA256:euqJNJC5P+Wq5j+dl78lhvyXKvYvUXQDTjxbGHC2Bdk May 10 00:50:51.038731 sshd[1407]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:50:51.043418 google-accounts[1396]: INFO Starting Google Accounts daemon. May 10 00:50:51.046774 systemd[1]: Started session-5.scope. May 10 00:50:51.047664 systemd-logind[1219]: New session 5 of user core. May 10 00:50:51.078782 google-accounts[1396]: WARNING OS Login not installed. May 10 00:50:51.080042 google-accounts[1396]: INFO Creating a new user account for 0. 
May 10 00:50:51.086112 systemd-nspawn[1338]: useradd: invalid user name '0': use --badname to ignore May 10 00:50:51.086883 google-accounts[1396]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. May 10 00:50:51.232663 sudo[1426]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 10 00:50:51.233190 sudo[1426]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 10 00:50:51.267967 systemd[1]: Starting docker.service... May 10 00:50:51.317609 env[1436]: time="2025-05-10T00:50:51.317545333Z" level=info msg="Starting up" May 10 00:50:51.319819 env[1436]: time="2025-05-10T00:50:51.319781825Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 10 00:50:51.320039 env[1436]: time="2025-05-10T00:50:51.320013286Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 10 00:50:51.320151 env[1436]: time="2025-05-10T00:50:51.320130177Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 10 00:50:51.320227 env[1436]: time="2025-05-10T00:50:51.320211687Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 10 00:50:51.322391 env[1436]: time="2025-05-10T00:50:51.322329831Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 10 00:50:51.322391 env[1436]: time="2025-05-10T00:50:51.322362194Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 10 00:50:51.322391 env[1436]: time="2025-05-10T00:50:51.322385149Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 10 00:50:51.322609 env[1436]: time="2025-05-10T00:50:51.322401175Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 10 00:50:51.360121 env[1436]: 
time="2025-05-10T00:50:51.360051013Z" level=info msg="Loading containers: start." May 10 00:50:51.536938 kernel: Initializing XFRM netlink socket May 10 00:50:51.582029 env[1436]: time="2025-05-10T00:50:51.581975592Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" May 10 00:50:51.669819 systemd-networkd[1029]: docker0: Link UP May 10 00:50:51.690289 env[1436]: time="2025-05-10T00:50:51.690229884Z" level=info msg="Loading containers: done." May 10 00:50:51.705384 env[1436]: time="2025-05-10T00:50:51.705324602Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 10 00:50:51.705983 env[1436]: time="2025-05-10T00:50:51.705946259Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 May 10 00:50:51.706312 env[1436]: time="2025-05-10T00:50:51.706255991Z" level=info msg="Daemon has completed initialization" May 10 00:50:51.729527 systemd[1]: Started docker.service. May 10 00:50:51.742128 env[1436]: time="2025-05-10T00:50:51.742046021Z" level=info msg="API listen on /run/docker.sock" May 10 00:50:53.009064 env[1236]: time="2025-05-10T00:50:53.009006933Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\"" May 10 00:50:53.480344 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2787419919.mount: Deactivated successfully. May 10 00:50:54.540742 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 10 00:50:54.541125 systemd[1]: Stopped kubelet.service. May 10 00:50:54.541199 systemd[1]: kubelet.service: Consumed 1.517s CPU time. May 10 00:50:54.543474 systemd[1]: Starting kubelet.service... May 10 00:50:54.789518 systemd[1]: Started kubelet.service. 
May 10 00:50:54.907050 kubelet[1569]: E0510 00:50:54.906992 1569 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 10 00:50:54.911934 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 10 00:50:54.912159 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 10 00:50:55.451102 env[1236]: time="2025-05-10T00:50:55.451027347Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:50:55.454389 env[1236]: time="2025-05-10T00:50:55.454316847Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:50:55.457654 env[1236]: time="2025-05-10T00:50:55.457604457Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:50:55.461288 env[1236]: time="2025-05-10T00:50:55.461215571Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:50:55.462673 env[1236]: time="2025-05-10T00:50:55.462613025Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\"" May 10 00:50:55.477790 env[1236]: time="2025-05-10T00:50:55.477734697Z" level=info msg="PullImage 
\"registry.k8s.io/kube-controller-manager:v1.30.12\"" May 10 00:50:57.270439 env[1236]: time="2025-05-10T00:50:57.270356658Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:50:57.276387 env[1236]: time="2025-05-10T00:50:57.276321383Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:50:57.280258 env[1236]: time="2025-05-10T00:50:57.280181201Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:50:57.282805 env[1236]: time="2025-05-10T00:50:57.282754377Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:50:57.283865 env[1236]: time="2025-05-10T00:50:57.283793178Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\"" May 10 00:50:57.300559 env[1236]: time="2025-05-10T00:50:57.300509905Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\"" May 10 00:50:58.489253 env[1236]: time="2025-05-10T00:50:58.489167341Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:50:58.491913 env[1236]: time="2025-05-10T00:50:58.491844676Z" level=info msg="ImageCreate event 
&ImageCreate{Name:sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:50:58.494691 env[1236]: time="2025-05-10T00:50:58.494647557Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:50:58.497153 env[1236]: time="2025-05-10T00:50:58.497091946Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:50:58.498179 env[1236]: time="2025-05-10T00:50:58.498131569Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\"" May 10 00:50:58.513825 env[1236]: time="2025-05-10T00:50:58.513759955Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\"" May 10 00:50:59.606429 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1924197854.mount: Deactivated successfully. 
May 10 00:51:00.325369 env[1236]: time="2025-05-10T00:51:00.325287451Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:51:00.328534 env[1236]: time="2025-05-10T00:51:00.328473736Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:51:00.331195 env[1236]: time="2025-05-10T00:51:00.331143105Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:51:00.333503 env[1236]: time="2025-05-10T00:51:00.333459727Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:51:00.334123 env[1236]: time="2025-05-10T00:51:00.334078819Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\"" May 10 00:51:00.348003 env[1236]: time="2025-05-10T00:51:00.347950316Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 10 00:51:00.754043 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2631421299.mount: Deactivated successfully. 
May 10 00:51:01.882581 env[1236]: time="2025-05-10T00:51:01.882504009Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:51:01.885951 env[1236]: time="2025-05-10T00:51:01.885874566Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:51:01.888769 env[1236]: time="2025-05-10T00:51:01.888709910Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:51:01.891429 env[1236]: time="2025-05-10T00:51:01.891350970Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:51:01.892508 env[1236]: time="2025-05-10T00:51:01.892448926Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" May 10 00:51:01.909298 env[1236]: time="2025-05-10T00:51:01.909243897Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" May 10 00:51:02.321210 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount466171860.mount: Deactivated successfully. 
May 10 00:51:02.328977 env[1236]: time="2025-05-10T00:51:02.328884016Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:51:02.332820 env[1236]: time="2025-05-10T00:51:02.331964450Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:51:02.334210 env[1236]: time="2025-05-10T00:51:02.334158660Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:51:02.336502 env[1236]: time="2025-05-10T00:51:02.336446631Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:51:02.337340 env[1236]: time="2025-05-10T00:51:02.337286642Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" May 10 00:51:02.352831 env[1236]: time="2025-05-10T00:51:02.352775581Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" May 10 00:51:02.731263 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount558447287.mount: Deactivated successfully. May 10 00:51:05.163286 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 10 00:51:05.163603 systemd[1]: Stopped kubelet.service. May 10 00:51:05.166040 systemd[1]: Starting kubelet.service... May 10 00:51:05.402774 systemd[1]: Started kubelet.service. 
May 10 00:51:05.490622 kubelet[1610]: E0510 00:51:05.490061 1610 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 10 00:51:05.493831 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 10 00:51:05.494072 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 10 00:51:05.517545 env[1236]: time="2025-05-10T00:51:05.517417150Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:51:05.520821 env[1236]: time="2025-05-10T00:51:05.520764155Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:51:05.523620 env[1236]: time="2025-05-10T00:51:05.523565962Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:51:05.526158 env[1236]: time="2025-05-10T00:51:05.526110211Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:51:05.527275 env[1236]: time="2025-05-10T00:51:05.527218492Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" May 10 00:51:09.601558 systemd[1]: Stopped kubelet.service. May 10 00:51:09.605124 systemd[1]: Starting kubelet.service... 
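The `run.go:74` failure above is the kubelet's stock fail-fast behavior when `/var/lib/kubelet/config.yaml` has not yet been written (typically it is generated by `kubeadm init`/`kubeadm join`): the process exits with status 1, systemd records `status=1/FAILURE`, and the restart counter climbs until the file appears. A minimal sketch of that load-or-fail check, assuming nothing beyond the path shown in the log (this is an illustration, not the kubelet's actual code):

```python
import os

def load_kubelet_config(path: str) -> str:
    """Mimic the kubelet's fail-fast startup check: raise if the config file is absent.

    In the real kubelet the equivalent failure exits with status 1, which is what
    produces the 'kubelet.service: Failed with result exit-code' line above."""
    if not os.path.exists(path):
        raise FileNotFoundError(
            f"failed to load Kubelet config file {path}: "
            "no such file or directory")
    with open(path) as f:
        return f.read()
```

Because systemd's `Restart=` policy keeps relaunching the unit, the same error repeats until `kubeadm` writes the config, at which point the later successful `Started kubelet.service` lines take over.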
May 10 00:51:09.634979 systemd[1]: Reloading. May 10 00:51:09.772063 /usr/lib/systemd/system-generators/torcx-generator[1702]: time="2025-05-10T00:51:09Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 10 00:51:09.772118 /usr/lib/systemd/system-generators/torcx-generator[1702]: time="2025-05-10T00:51:09Z" level=info msg="torcx already run" May 10 00:51:09.914298 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 10 00:51:09.914325 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 10 00:51:09.938314 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 10 00:51:10.114755 systemd[1]: Started kubelet.service. May 10 00:51:10.122443 systemd[1]: Stopping kubelet.service... May 10 00:51:10.123413 systemd[1]: kubelet.service: Deactivated successfully. May 10 00:51:10.123708 systemd[1]: Stopped kubelet.service. May 10 00:51:10.126185 systemd[1]: Starting kubelet.service... May 10 00:51:10.411983 systemd[1]: Started kubelet.service. May 10 00:51:10.480361 kubelet[1757]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 10 00:51:10.480361 kubelet[1757]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. May 10 00:51:10.480361 kubelet[1757]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 10 00:51:10.481005 kubelet[1757]: I0510 00:51:10.480442 1757 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 10 00:51:10.884610 kubelet[1757]: I0510 00:51:10.884494 1757 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 10 00:51:10.884820 kubelet[1757]: I0510 00:51:10.884795 1757 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 10 00:51:10.885384 kubelet[1757]: I0510 00:51:10.885364 1757 server.go:927] "Client rotation is on, will bootstrap in background" May 10 00:51:10.927361 kubelet[1757]: I0510 00:51:10.927319 1757 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 10 00:51:10.928766 kubelet[1757]: E0510 00:51:10.928739 1757 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.128.0.57:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.128.0.57:6443: connect: connection refused May 10 00:51:10.946020 kubelet[1757]: I0510 00:51:10.945985 1757 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 10 00:51:10.949075 kubelet[1757]: I0510 00:51:10.949019 1757 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 10 00:51:10.949357 kubelet[1757]: I0510 00:51:10.949076 1757 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 10 00:51:10.949585 kubelet[1757]: I0510 00:51:10.949382 1757 topology_manager.go:138] "Creating topology 
manager with none policy" May 10 00:51:10.949585 kubelet[1757]: I0510 00:51:10.949403 1757 container_manager_linux.go:301] "Creating device plugin manager" May 10 00:51:10.949711 kubelet[1757]: I0510 00:51:10.949588 1757 state_mem.go:36] "Initialized new in-memory state store" May 10 00:51:10.951270 kubelet[1757]: I0510 00:51:10.951241 1757 kubelet.go:400] "Attempting to sync node with API server" May 10 00:51:10.951459 kubelet[1757]: I0510 00:51:10.951436 1757 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 10 00:51:10.951542 kubelet[1757]: I0510 00:51:10.951483 1757 kubelet.go:312] "Adding apiserver pod source" May 10 00:51:10.951542 kubelet[1757]: I0510 00:51:10.951507 1757 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 10 00:51:10.952158 kubelet[1757]: W0510 00:51:10.952079 1757 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.57:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-7-nightly-20250509-2100-23376fd288632b292388&limit=500&resourceVersion=0": dial tcp 10.128.0.57:6443: connect: connection refused May 10 00:51:10.952158 kubelet[1757]: E0510 00:51:10.952156 1757 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.128.0.57:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-7-nightly-20250509-2100-23376fd288632b292388&limit=500&resourceVersion=0": dial tcp 10.128.0.57:6443: connect: connection refused May 10 00:51:10.960420 kubelet[1757]: I0510 00:51:10.960387 1757 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 10 00:51:10.971384 kubelet[1757]: I0510 00:51:10.971324 1757 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 10 00:51:10.971652 kubelet[1757]: W0510 00:51:10.971454 1757 probe.go:272] Flexvolume plugin directory 
at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 10 00:51:10.972752 kubelet[1757]: I0510 00:51:10.972265 1757 server.go:1264] "Started kubelet" May 10 00:51:10.972752 kubelet[1757]: W0510 00:51:10.972472 1757 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.57:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.57:6443: connect: connection refused May 10 00:51:10.972752 kubelet[1757]: E0510 00:51:10.972544 1757 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.128.0.57:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.57:6443: connect: connection refused May 10 00:51:10.996669 kubelet[1757]: I0510 00:51:10.996265 1757 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 10 00:51:10.999639 kubelet[1757]: I0510 00:51:10.999600 1757 server.go:455] "Adding debug handlers to kubelet server" May 10 00:51:11.005966 kubelet[1757]: I0510 00:51:11.000921 1757 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 10 00:51:11.005966 kubelet[1757]: I0510 00:51:11.001229 1757 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 10 00:51:11.009341 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
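During the `Reloading.` pass above, `locksmithd.service` draws two systemd deprecation warnings: `CPUShares=` and `MemoryLimit=` are legacy cgroup-v1 directives, superseded by `CPUWeight=` and `MemoryMax=` under cgroup v2. A hedged sketch of a drop-in that would silence them — the unit's actual values are not visible in the log, so both numbers below are placeholders:

```ini
# /etc/systemd/system/locksmithd.service.d/10-cgroupv2.conf  (hypothetical drop-in)
[Service]
# CPUShares= (default 1024) roughly maps onto CPUWeight= (default 100);
# the weight here is a placeholder, not taken from the unit file.
CPUWeight=100
# MemoryLimit= becomes MemoryMax=; value is likewise a placeholder.
MemoryMax=128M
```

The neighboring `docker.socket` notice is the same migration theme: `ListenStream=` still points at the legacy `/var/run/docker.sock` path, which systemd rewrites to `/run/docker.sock` at load time.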
May 10 00:51:11.010427 kubelet[1757]: I0510 00:51:11.009638 1757 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 10 00:51:11.011609 kubelet[1757]: I0510 00:51:11.011579 1757 volume_manager.go:291] "Starting Kubelet Volume Manager" May 10 00:51:11.012243 kubelet[1757]: I0510 00:51:11.012213 1757 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 10 00:51:11.012380 kubelet[1757]: I0510 00:51:11.012313 1757 reconciler.go:26] "Reconciler: start to sync state" May 10 00:51:11.016369 kubelet[1757]: E0510 00:51:11.015593 1757 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.57:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388?timeout=10s\": dial tcp 10.128.0.57:6443: connect: connection refused" interval="200ms" May 10 00:51:11.016369 kubelet[1757]: W0510 00:51:11.015874 1757 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.57:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.57:6443: connect: connection refused May 10 00:51:11.016369 kubelet[1757]: E0510 00:51:11.015975 1757 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.128.0.57:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.57:6443: connect: connection refused May 10 00:51:11.016369 kubelet[1757]: E0510 00:51:11.011869 1757 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.57:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.57:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388.183e0426fd5b7d5e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388,UID:ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388,},FirstTimestamp:2025-05-10 00:51:10.972231006 +0000 UTC m=+0.553438642,LastTimestamp:2025-05-10 00:51:10.972231006 +0000 UTC m=+0.553438642,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388,}" May 10 00:51:11.016756 kubelet[1757]: I0510 00:51:11.016680 1757 factory.go:221] Registration of the systemd container factory successfully May 10 00:51:11.016831 kubelet[1757]: I0510 00:51:11.016774 1757 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 10 00:51:11.018701 kubelet[1757]: I0510 00:51:11.018676 1757 factory.go:221] Registration of the containerd container factory successfully May 10 00:51:11.046100 kubelet[1757]: I0510 00:51:11.046072 1757 cpu_manager.go:214] "Starting CPU manager" policy="none" May 10 00:51:11.046345 kubelet[1757]: I0510 00:51:11.046326 1757 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 10 00:51:11.046495 kubelet[1757]: I0510 00:51:11.046480 1757 state_mem.go:36] "Initialized new in-memory state store" May 10 00:51:11.049403 kubelet[1757]: I0510 00:51:11.049372 1757 policy_none.go:49] "None policy: Start" May 10 00:51:11.050507 kubelet[1757]: I0510 00:51:11.050485 1757 memory_manager.go:170] "Starting memorymanager" policy="None" May 10 00:51:11.050712 kubelet[1757]: I0510 00:51:11.050695 1757 state_mem.go:35] "Initializing new in-memory state store" May 10 00:51:11.058886 kubelet[1757]: I0510 
00:51:11.058836 1757 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 10 00:51:11.061362 kubelet[1757]: I0510 00:51:11.061326 1757 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 10 00:51:11.061362 kubelet[1757]: I0510 00:51:11.061364 1757 status_manager.go:217] "Starting to sync pod status with apiserver" May 10 00:51:11.062187 kubelet[1757]: I0510 00:51:11.062131 1757 kubelet.go:2337] "Starting kubelet main sync loop" May 10 00:51:11.062306 kubelet[1757]: E0510 00:51:11.062213 1757 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 10 00:51:11.064843 kubelet[1757]: W0510 00:51:11.064806 1757 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.57:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.57:6443: connect: connection refused May 10 00:51:11.065770 kubelet[1757]: E0510 00:51:11.065733 1757 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.128.0.57:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.57:6443: connect: connection refused May 10 00:51:11.066480 systemd[1]: Created slice kubepods.slice. May 10 00:51:11.075164 systemd[1]: Created slice kubepods-besteffort.slice. May 10 00:51:11.086233 systemd[1]: Created slice kubepods-burstable.slice. 
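The `HardEvictionThresholds` in the container-manager nodeConfig dump above (`container_manager_linux.go:270`) are the kubelet defaults: evict when `memory.available` < 100Mi, `nodefs.available` < 10%, `nodefs.inodesFree` < 5%, `imagefs.available` < 15%, or `imagefs.inodesFree` < 5%. A minimal sketch of how such thresholds are evaluated (an illustration, not the kubelet's actual eviction manager):

```python
# Hard-eviction thresholds copied from the nodeConfig dump above.
# "quantity" is absolute bytes; "percentage" is a fraction of capacity.
THRESHOLDS = {
    "memory.available":   {"quantity": 100 * 1024**2},  # 100Mi
    "nodefs.available":   {"percentage": 0.10},
    "nodefs.inodesFree":  {"percentage": 0.05},
    "imagefs.available":  {"percentage": 0.15},
    "imagefs.inodesFree": {"percentage": 0.05},
}

def signals_under_pressure(observed: dict) -> list:
    """Return the eviction signals whose observed (available, capacity) pair
    falls below its threshold. observed maps signal -> (available, capacity)."""
    breached = []
    for signal, threshold in THRESHOLDS.items():
        if signal not in observed:
            continue
        available, capacity = observed[signal]
        if "quantity" in threshold:
            limit = threshold["quantity"]          # absolute cutoff, e.g. 100Mi
        else:
            limit = threshold["percentage"] * capacity  # relative cutoff
        if available < limit:
            breached.append(signal)
    return breached
```

For example, a node reporting 80Mi of available memory breaches `memory.available`, while a node filesystem with 20% free space stays clear of the 10% `nodefs.available` cutoff. Note the eviction manager cannot run yet at this point in the log, since node stats require the node object that registration keeps failing to create.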
May 10 00:51:11.088567 kubelet[1757]: I0510 00:51:11.088540 1757 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 10 00:51:11.088942 kubelet[1757]: I0510 00:51:11.088876 1757 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 10 00:51:11.089172 kubelet[1757]: I0510 00:51:11.089159 1757 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 10 00:51:11.091761 kubelet[1757]: E0510 00:51:11.091417 1757 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388\" not found" May 10 00:51:11.124587 kubelet[1757]: I0510 00:51:11.124534 1757 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388" May 10 00:51:11.125023 kubelet[1757]: E0510 00:51:11.124974 1757 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.57:6443/api/v1/nodes\": dial tcp 10.128.0.57:6443: connect: connection refused" node="ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388" May 10 00:51:11.163753 kubelet[1757]: I0510 00:51:11.163205 1757 topology_manager.go:215] "Topology Admit Handler" podUID="1b91be5520f5f7c60e188c0973f69f37" podNamespace="kube-system" podName="kube-apiserver-ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388" May 10 00:51:11.170318 kubelet[1757]: I0510 00:51:11.170251 1757 topology_manager.go:215] "Topology Admit Handler" podUID="2f1398ec1acd18ef33a022ee5067f366" podNamespace="kube-system" podName="kube-controller-manager-ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388" May 10 00:51:11.179517 kubelet[1757]: I0510 00:51:11.179467 1757 topology_manager.go:215] "Topology Admit Handler" podUID="69e1cf39c38483f55cf97326e7c07681" podNamespace="kube-system" 
podName="kube-scheduler-ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388" May 10 00:51:11.185943 systemd[1]: Created slice kubepods-burstable-pod1b91be5520f5f7c60e188c0973f69f37.slice. May 10 00:51:11.210517 systemd[1]: Created slice kubepods-burstable-pod2f1398ec1acd18ef33a022ee5067f366.slice. May 10 00:51:11.214483 kubelet[1757]: I0510 00:51:11.214441 1757 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2f1398ec1acd18ef33a022ee5067f366-k8s-certs\") pod \"kube-controller-manager-ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388\" (UID: \"2f1398ec1acd18ef33a022ee5067f366\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388" May 10 00:51:11.214654 kubelet[1757]: I0510 00:51:11.214490 1757 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2f1398ec1acd18ef33a022ee5067f366-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388\" (UID: \"2f1398ec1acd18ef33a022ee5067f366\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388" May 10 00:51:11.214654 kubelet[1757]: I0510 00:51:11.214521 1757 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1b91be5520f5f7c60e188c0973f69f37-ca-certs\") pod \"kube-apiserver-ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388\" (UID: \"1b91be5520f5f7c60e188c0973f69f37\") " pod="kube-system/kube-apiserver-ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388" May 10 00:51:11.214654 kubelet[1757]: I0510 00:51:11.214548 1757 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/1b91be5520f5f7c60e188c0973f69f37-k8s-certs\") pod \"kube-apiserver-ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388\" (UID: \"1b91be5520f5f7c60e188c0973f69f37\") " pod="kube-system/kube-apiserver-ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388" May 10 00:51:11.214654 kubelet[1757]: I0510 00:51:11.214575 1757 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1b91be5520f5f7c60e188c0973f69f37-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388\" (UID: \"1b91be5520f5f7c60e188c0973f69f37\") " pod="kube-system/kube-apiserver-ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388" May 10 00:51:11.214884 kubelet[1757]: I0510 00:51:11.214605 1757 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2f1398ec1acd18ef33a022ee5067f366-ca-certs\") pod \"kube-controller-manager-ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388\" (UID: \"2f1398ec1acd18ef33a022ee5067f366\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388" May 10 00:51:11.214884 kubelet[1757]: I0510 00:51:11.214632 1757 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2f1398ec1acd18ef33a022ee5067f366-flexvolume-dir\") pod \"kube-controller-manager-ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388\" (UID: \"2f1398ec1acd18ef33a022ee5067f366\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388" May 10 00:51:11.214884 kubelet[1757]: I0510 00:51:11.214658 1757 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/69e1cf39c38483f55cf97326e7c07681-kubeconfig\") pod \"kube-scheduler-ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388\" (UID: \"69e1cf39c38483f55cf97326e7c07681\") " pod="kube-system/kube-scheduler-ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388" May 10 00:51:11.214884 kubelet[1757]: I0510 00:51:11.214686 1757 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2f1398ec1acd18ef33a022ee5067f366-kubeconfig\") pod \"kube-controller-manager-ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388\" (UID: \"2f1398ec1acd18ef33a022ee5067f366\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388" May 10 00:51:11.216884 kubelet[1757]: E0510 00:51:11.216827 1757 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.57:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388?timeout=10s\": dial tcp 10.128.0.57:6443: connect: connection refused" interval="400ms" May 10 00:51:11.222433 systemd[1]: Created slice kubepods-burstable-pod69e1cf39c38483f55cf97326e7c07681.slice. May 10 00:51:11.333455 kubelet[1757]: I0510 00:51:11.333418 1757 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388" May 10 00:51:11.334132 kubelet[1757]: E0510 00:51:11.334075 1757 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.57:6443/api/v1/nodes\": dial tcp 10.128.0.57:6443: connect: connection refused" node="ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388" May 10 00:51:11.380583 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
May 10 00:51:11.508383 env[1236]: time="2025-05-10T00:51:11.507365042Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388,Uid:1b91be5520f5f7c60e188c0973f69f37,Namespace:kube-system,Attempt:0,}" May 10 00:51:11.516351 env[1236]: time="2025-05-10T00:51:11.516258739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388,Uid:2f1398ec1acd18ef33a022ee5067f366,Namespace:kube-system,Attempt:0,}" May 10 00:51:11.526821 env[1236]: time="2025-05-10T00:51:11.526760206Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388,Uid:69e1cf39c38483f55cf97326e7c07681,Namespace:kube-system,Attempt:0,}" May 10 00:51:11.617759 kubelet[1757]: E0510 00:51:11.617666 1757 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.57:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388?timeout=10s\": dial tcp 10.128.0.57:6443: connect: connection refused" interval="800ms" May 10 00:51:11.742484 kubelet[1757]: I0510 00:51:11.742443 1757 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388" May 10 00:51:11.742951 kubelet[1757]: E0510 00:51:11.742881 1757 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.57:6443/api/v1/nodes\": dial tcp 10.128.0.57:6443: connect: connection refused" node="ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388" May 10 00:51:11.945666 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2409868293.mount: Deactivated successfully. 
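The repeated "Failed to ensure lease exists, will retry" errors show their retry interval doubling across attempts — 200ms at 00:51:11.015, 400ms at 00:51:11.216, 800ms at 00:51:11.617 — consistent with exponential backoff while the API server at 10.128.0.57:6443 is still refusing connections. A minimal backoff sketch (an illustration of the pattern, not the node-lease controller's actual code; the cap value is an assumption, only the doubling is visible in the log):

```python
def backoff_intervals(base_ms: int = 200, factor: float = 2.0, cap_ms: int = 7000):
    """Yield geometrically growing retry intervals in milliseconds, capped at cap_ms.

    base_ms and factor match the 200ms -> 400ms -> 800ms progression in the log;
    cap_ms is a hypothetical ceiling, not taken from the log."""
    interval = base_ms
    while True:
        yield min(interval, cap_ms)
        interval = int(interval * factor)

gen = backoff_intervals()
first_three = [next(gen) for _ in range(3)]  # the 200, 400, 800 progression seen above
```

Once the static kube-apiserver pod (whose sandbox is created in the surrounding lines) comes up, the lease request succeeds and the backoff resets.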
May 10 00:51:11.956301 kubelet[1757]: W0510 00:51:11.956197 1757 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.57:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.57:6443: connect: connection refused May 10 00:51:11.956520 kubelet[1757]: E0510 00:51:11.956321 1757 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.128.0.57:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.57:6443: connect: connection refused May 10 00:51:11.958014 env[1236]: time="2025-05-10T00:51:11.957944054Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:51:11.962843 env[1236]: time="2025-05-10T00:51:11.962771779Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:51:11.963994 env[1236]: time="2025-05-10T00:51:11.963946656Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:51:11.965832 env[1236]: time="2025-05-10T00:51:11.965773637Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:51:11.969050 env[1236]: time="2025-05-10T00:51:11.969005492Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:51:11.970792 env[1236]: time="2025-05-10T00:51:11.970730334Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:51:11.971648 env[1236]: time="2025-05-10T00:51:11.971607815Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:51:11.974149 env[1236]: time="2025-05-10T00:51:11.974093166Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:51:11.975199 env[1236]: time="2025-05-10T00:51:11.975148491Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:51:11.976127 env[1236]: time="2025-05-10T00:51:11.976087288Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:51:11.980359 env[1236]: time="2025-05-10T00:51:11.980286340Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:51:11.982973 env[1236]: time="2025-05-10T00:51:11.982894695Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:51:12.041834 env[1236]: time="2025-05-10T00:51:12.041735340Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:51:12.042169 env[1236]: time="2025-05-10T00:51:12.042124541Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:51:12.042339 env[1236]: time="2025-05-10T00:51:12.042302423Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:51:12.042706 env[1236]: time="2025-05-10T00:51:12.042648432Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c7b5bb52d9249e72c4b55539f71593184071b218aa70b39beb337d6a125f6a41 pid=1806 runtime=io.containerd.runc.v2 May 10 00:51:12.045568 env[1236]: time="2025-05-10T00:51:12.045490726Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:51:12.045712 env[1236]: time="2025-05-10T00:51:12.045588262Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:51:12.045712 env[1236]: time="2025-05-10T00:51:12.045632024Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:51:12.045960 env[1236]: time="2025-05-10T00:51:12.045889732Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0d3dfd29e3bd0108ffca883b689ae67fda6143821e577b92d437c136ffcf2ae3 pid=1803 runtime=io.containerd.runc.v2 May 10 00:51:12.055499 env[1236]: time="2025-05-10T00:51:12.055292094Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:51:12.055699 env[1236]: time="2025-05-10T00:51:12.055548436Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:51:12.055699 env[1236]: time="2025-05-10T00:51:12.055654661Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:51:12.056225 env[1236]: time="2025-05-10T00:51:12.056159724Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3772e54d49335032a2f933d401af612f2f9eb8583a4b82b1a1d21b465a8b8b26 pid=1823 runtime=io.containerd.runc.v2 May 10 00:51:12.083952 systemd[1]: Started cri-containerd-0d3dfd29e3bd0108ffca883b689ae67fda6143821e577b92d437c136ffcf2ae3.scope. May 10 00:51:12.108950 kubelet[1757]: W0510 00:51:12.106830 1757 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.57:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.57:6443: connect: connection refused May 10 00:51:12.108950 kubelet[1757]: E0510 00:51:12.106962 1757 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.128.0.57:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.57:6443: connect: connection refused May 10 00:51:12.138022 systemd[1]: Started cri-containerd-c7b5bb52d9249e72c4b55539f71593184071b218aa70b39beb337d6a125f6a41.scope. May 10 00:51:12.155803 systemd[1]: Started cri-containerd-3772e54d49335032a2f933d401af612f2f9eb8583a4b82b1a1d21b465a8b8b26.scope. 
May 10 00:51:12.201695 env[1236]: time="2025-05-10T00:51:12.201541840Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388,Uid:69e1cf39c38483f55cf97326e7c07681,Namespace:kube-system,Attempt:0,} returns sandbox id \"0d3dfd29e3bd0108ffca883b689ae67fda6143821e577b92d437c136ffcf2ae3\""
May 10 00:51:12.207759 kubelet[1757]: E0510 00:51:12.207698 1757 kubelet_pods.go:513] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-3510-3-7-nightly-20250509-2100-23376fd288632b"
May 10 00:51:12.216332 env[1236]: time="2025-05-10T00:51:12.216280638Z" level=info msg="CreateContainer within sandbox \"0d3dfd29e3bd0108ffca883b689ae67fda6143821e577b92d437c136ffcf2ae3\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
May 10 00:51:12.242971 env[1236]: time="2025-05-10T00:51:12.242478411Z" level=info msg="CreateContainer within sandbox \"0d3dfd29e3bd0108ffca883b689ae67fda6143821e577b92d437c136ffcf2ae3\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1acc49a4d7e3ad74b76f6e59f696a5e5111ad89ea8f7677e48b6ab8d166fc042\""
May 10 00:51:12.244535 env[1236]: time="2025-05-10T00:51:12.244491811Z" level=info msg="StartContainer for \"1acc49a4d7e3ad74b76f6e59f696a5e5111ad89ea8f7677e48b6ab8d166fc042\""
May 10 00:51:12.274882 env[1236]: time="2025-05-10T00:51:12.274820275Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388,Uid:1b91be5520f5f7c60e188c0973f69f37,Namespace:kube-system,Attempt:0,} returns sandbox id \"3772e54d49335032a2f933d401af612f2f9eb8583a4b82b1a1d21b465a8b8b26\""
May 10 00:51:12.276867 kubelet[1757]: E0510 00:51:12.276810 1757 kubelet_pods.go:513] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-3510-3-7-nightly-20250509-2100-23376fd288632b"
May 10 00:51:12.279777 env[1236]: time="2025-05-10T00:51:12.279732027Z" level=info msg="CreateContainer within sandbox \"3772e54d49335032a2f933d401af612f2f9eb8583a4b82b1a1d21b465a8b8b26\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
May 10 00:51:12.285890 env[1236]: time="2025-05-10T00:51:12.285823824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388,Uid:2f1398ec1acd18ef33a022ee5067f366,Namespace:kube-system,Attempt:0,} returns sandbox id \"c7b5bb52d9249e72c4b55539f71593184071b218aa70b39beb337d6a125f6a41\""
May 10 00:51:12.288892 kubelet[1757]: E0510 00:51:12.288852 1757 kubelet_pods.go:513] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-3510-3-7-nightly-20250509-2100-23376"
May 10 00:51:12.291723 env[1236]: time="2025-05-10T00:51:12.291676122Z" level=info msg="CreateContainer within sandbox \"c7b5bb52d9249e72c4b55539f71593184071b218aa70b39beb337d6a125f6a41\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
May 10 00:51:12.298355 systemd[1]: Started cri-containerd-1acc49a4d7e3ad74b76f6e59f696a5e5111ad89ea8f7677e48b6ab8d166fc042.scope.
May 10 00:51:12.309145 env[1236]: time="2025-05-10T00:51:12.309050302Z" level=info msg="CreateContainer within sandbox \"3772e54d49335032a2f933d401af612f2f9eb8583a4b82b1a1d21b465a8b8b26\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c337008439d15d3a8fc1c3195f391cb46a8b6057c5e1467a97292b09c3157d2b\""
May 10 00:51:12.313091 env[1236]: time="2025-05-10T00:51:12.313045065Z" level=info msg="StartContainer for \"c337008439d15d3a8fc1c3195f391cb46a8b6057c5e1467a97292b09c3157d2b\""
May 10 00:51:12.322266 env[1236]: time="2025-05-10T00:51:12.322190928Z" level=info msg="CreateContainer within sandbox \"c7b5bb52d9249e72c4b55539f71593184071b218aa70b39beb337d6a125f6a41\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"92991fb0d683320d62ab59160a1e6869fc0ceaab690df6b649672db7c41001f9\""
May 10 00:51:12.322969 env[1236]: time="2025-05-10T00:51:12.322889590Z" level=info msg="StartContainer for \"92991fb0d683320d62ab59160a1e6869fc0ceaab690df6b649672db7c41001f9\""
May 10 00:51:12.353433 kubelet[1757]: W0510 00:51:12.353299 1757 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.57:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-7-nightly-20250509-2100-23376fd288632b292388&limit=500&resourceVersion=0": dial tcp 10.128.0.57:6443: connect: connection refused
May 10 00:51:12.353433 kubelet[1757]: E0510 00:51:12.353390 1757 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.128.0.57:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-7-nightly-20250509-2100-23376fd288632b292388&limit=500&resourceVersion=0": dial tcp 10.128.0.57:6443: connect: connection refused
May 10 00:51:12.355513 systemd[1]: Started cri-containerd-c337008439d15d3a8fc1c3195f391cb46a8b6057c5e1467a97292b09c3157d2b.scope.
May 10 00:51:12.380889 systemd[1]: Started cri-containerd-92991fb0d683320d62ab59160a1e6869fc0ceaab690df6b649672db7c41001f9.scope.
May 10 00:51:12.418499 kubelet[1757]: E0510 00:51:12.418423 1757 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.57:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388?timeout=10s\": dial tcp 10.128.0.57:6443: connect: connection refused" interval="1.6s"
May 10 00:51:12.453538 env[1236]: time="2025-05-10T00:51:12.453378822Z" level=info msg="StartContainer for \"1acc49a4d7e3ad74b76f6e59f696a5e5111ad89ea8f7677e48b6ab8d166fc042\" returns successfully"
May 10 00:51:12.469999 env[1236]: time="2025-05-10T00:51:12.469930880Z" level=info msg="StartContainer for \"c337008439d15d3a8fc1c3195f391cb46a8b6057c5e1467a97292b09c3157d2b\" returns successfully"
May 10 00:51:12.529066 env[1236]: time="2025-05-10T00:51:12.529003877Z" level=info msg="StartContainer for \"92991fb0d683320d62ab59160a1e6869fc0ceaab690df6b649672db7c41001f9\" returns successfully"
May 10 00:51:12.545515 kubelet[1757]: W0510 00:51:12.545410 1757 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.57:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.57:6443: connect: connection refused
May 10 00:51:12.545515 kubelet[1757]: E0510 00:51:12.545524 1757 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.128.0.57:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.57:6443: connect: connection refused
May 10 00:51:12.549168 kubelet[1757]: I0510 00:51:12.549122 1757 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388"
May 10 00:51:12.549580 kubelet[1757]: E0510 00:51:12.549542 1757 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.57:6443/api/v1/nodes\": dial tcp 10.128.0.57:6443: connect: connection refused" node="ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388"
May 10 00:51:14.156376 kubelet[1757]: I0510 00:51:14.156302 1757 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388"
May 10 00:51:15.878836 kubelet[1757]: E0510 00:51:15.878776 1757 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388\" not found" node="ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388"
May 10 00:51:15.955520 kubelet[1757]: I0510 00:51:15.955469 1757 apiserver.go:52] "Watching apiserver"
May 10 00:51:15.970022 kubelet[1757]: I0510 00:51:15.969977 1757 kubelet_node_status.go:76] "Successfully registered node" node="ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388"
May 10 00:51:16.013417 kubelet[1757]: I0510 00:51:16.013368 1757 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
May 10 00:51:17.880042 systemd[1]: Reloading.
May 10 00:51:18.007616 /usr/lib/systemd/system-generators/torcx-generator[2050]: time="2025-05-10T00:51:18Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
May 10 00:51:18.007668 /usr/lib/systemd/system-generators/torcx-generator[2050]: time="2025-05-10T00:51:18Z" level=info msg="torcx already run"
May 10 00:51:18.125678 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
May 10 00:51:18.125706 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
May 10 00:51:18.152519 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 10 00:51:18.317930 kubelet[1757]: E0510 00:51:18.317713 1757 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388.183e0426fd5b7d5e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388,UID:ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388,},FirstTimestamp:2025-05-10 00:51:10.972231006 +0000 UTC m=+0.553438642,LastTimestamp:2025-05-10 00:51:10.972231006 +0000 UTC m=+0.553438642,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388,}"
May 10 00:51:18.318424 systemd[1]: Stopping kubelet.service...
May 10 00:51:18.335138 systemd[1]: kubelet.service: Deactivated successfully.
May 10 00:51:18.335457 systemd[1]: Stopped kubelet.service.
May 10 00:51:18.335546 systemd[1]: kubelet.service: Consumed 1.024s CPU time.
May 10 00:51:18.339788 systemd[1]: Starting kubelet.service...
May 10 00:51:18.603845 systemd[1]: Started kubelet.service.
May 10 00:51:18.690480 kubelet[2098]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 10 00:51:18.690480 kubelet[2098]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 10 00:51:18.690480 kubelet[2098]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 10 00:51:18.691177 kubelet[2098]: I0510 00:51:18.690562 2098 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 10 00:51:18.700376 kubelet[2098]: I0510 00:51:18.700323 2098 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
May 10 00:51:18.700376 kubelet[2098]: I0510 00:51:18.700354 2098 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 10 00:51:18.700724 kubelet[2098]: I0510 00:51:18.700687 2098 server.go:927] "Client rotation is on, will bootstrap in background"
May 10 00:51:18.702438 kubelet[2098]: I0510 00:51:18.702384 2098 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
May 10 00:51:18.703983 kubelet[2098]: I0510 00:51:18.703945 2098 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 10 00:51:18.713932 kubelet[2098]: I0510 00:51:18.713887 2098 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 10 00:51:18.714335 kubelet[2098]: I0510 00:51:18.714264 2098 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 10 00:51:18.714594 kubelet[2098]: I0510 00:51:18.714342 2098 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
May 10 00:51:18.714780 kubelet[2098]: I0510 00:51:18.714613 2098 topology_manager.go:138] "Creating topology manager with none policy"
May 10 00:51:18.714780 kubelet[2098]: I0510 00:51:18.714630 2098 container_manager_linux.go:301] "Creating device plugin manager"
May 10 00:51:18.714780 kubelet[2098]: I0510 00:51:18.714691 2098 state_mem.go:36] "Initialized new in-memory state store"
May 10 00:51:18.715031 kubelet[2098]: I0510 00:51:18.714815 2098 kubelet.go:400] "Attempting to sync node with API server"
May 10 00:51:18.715031 kubelet[2098]: I0510 00:51:18.714831 2098 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
May 10 00:51:18.715031 kubelet[2098]: I0510 00:51:18.714864 2098 kubelet.go:312] "Adding apiserver pod source"
May 10 00:51:18.715031 kubelet[2098]: I0510 00:51:18.714890 2098 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 10 00:51:18.721794 kubelet[2098]: I0510 00:51:18.717557 2098 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
May 10 00:51:18.721794 kubelet[2098]: I0510 00:51:18.717787 2098 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 10 00:51:18.721794 kubelet[2098]: I0510 00:51:18.718398 2098 server.go:1264] "Started kubelet"
May 10 00:51:18.724447 kubelet[2098]: I0510 00:51:18.724418 2098 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 10 00:51:18.734026 kubelet[2098]: I0510 00:51:18.733963 2098 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
May 10 00:51:18.735364 kubelet[2098]: I0510 00:51:18.735333 2098 server.go:455] "Adding debug handlers to kubelet server"
May 10 00:51:18.740601 kubelet[2098]: I0510 00:51:18.740227 2098 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 10 00:51:18.740601 kubelet[2098]: I0510 00:51:18.740544 2098 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 10 00:51:18.745053 kubelet[2098]: I0510 00:51:18.745021 2098 volume_manager.go:291] "Starting Kubelet Volume Manager"
May 10 00:51:18.749723 kubelet[2098]: I0510 00:51:18.749692 2098 reconciler.go:26] "Reconciler: start to sync state"
May 10 00:51:18.752241 kubelet[2098]: I0510 00:51:18.752186 2098 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 10 00:51:18.754025 kubelet[2098]: I0510 00:51:18.753994 2098 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 10 00:51:18.754156 kubelet[2098]: I0510 00:51:18.754041 2098 status_manager.go:217] "Starting to sync pod status with apiserver"
May 10 00:51:18.754156 kubelet[2098]: I0510 00:51:18.754072 2098 kubelet.go:2337] "Starting kubelet main sync loop"
May 10 00:51:18.754156 kubelet[2098]: E0510 00:51:18.754135 2098 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 10 00:51:18.773121 kubelet[2098]: I0510 00:51:18.773080 2098 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
May 10 00:51:18.782284 kubelet[2098]: I0510 00:51:18.782238 2098 factory.go:221] Registration of the containerd container factory successfully
May 10 00:51:18.782284 kubelet[2098]: I0510 00:51:18.782277 2098 factory.go:221] Registration of the systemd container factory successfully
May 10 00:51:18.782545 kubelet[2098]: I0510 00:51:18.782376 2098 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 10 00:51:18.792063 kubelet[2098]: E0510 00:51:18.792025 2098 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 10 00:51:18.856183 kubelet[2098]: E0510 00:51:18.854484 2098 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
May 10 00:51:18.856183 kubelet[2098]: I0510 00:51:18.855158 2098 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388"
May 10 00:51:18.880037 kubelet[2098]: I0510 00:51:18.879970 2098 cpu_manager.go:214] "Starting CPU manager" policy="none"
May 10 00:51:18.880299 kubelet[2098]: I0510 00:51:18.880275 2098 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
May 10 00:51:18.880479 kubelet[2098]: I0510 00:51:18.880459 2098 state_mem.go:36] "Initialized new in-memory state store"
May 10 00:51:18.881228 kubelet[2098]: I0510 00:51:18.881200 2098 state_mem.go:88] "Updated default CPUSet" cpuSet=""
May 10 00:51:18.881478 kubelet[2098]: I0510 00:51:18.881416 2098 state_mem.go:96] "Updated CPUSet assignments" assignments={}
May 10 00:51:18.881640 kubelet[2098]: I0510 00:51:18.881623 2098 policy_none.go:49] "None policy: Start"
May 10 00:51:18.887180 kubelet[2098]: I0510 00:51:18.887145 2098 kubelet_node_status.go:112] "Node was previously registered" node="ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388"
May 10 00:51:18.887472 kubelet[2098]: I0510 00:51:18.887452 2098 kubelet_node_status.go:76] "Successfully registered node" node="ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388"
May 10 00:51:18.892809 kubelet[2098]: I0510 00:51:18.892779 2098 memory_manager.go:170] "Starting memorymanager" policy="None"
May 10 00:51:18.893194 kubelet[2098]: I0510 00:51:18.893174 2098 state_mem.go:35] "Initializing new in-memory state store"
May 10 00:51:18.894566 kubelet[2098]: I0510 00:51:18.894541 2098 state_mem.go:75] "Updated machine memory state"
May 10 00:51:18.903470 sudo[2128]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
May 10 00:51:18.904202 sudo[2128]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
May 10 00:51:18.919296 kubelet[2098]: I0510 00:51:18.919253 2098 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 10 00:51:18.919572 kubelet[2098]: I0510 00:51:18.919509 2098 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 10 00:51:18.920424 kubelet[2098]: I0510 00:51:18.920403 2098 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 10 00:51:19.055314 kubelet[2098]: I0510 00:51:19.055249 2098 topology_manager.go:215] "Topology Admit Handler" podUID="1b91be5520f5f7c60e188c0973f69f37" podNamespace="kube-system" podName="kube-apiserver-ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388"
May 10 00:51:19.055532 kubelet[2098]: I0510 00:51:19.055403 2098 topology_manager.go:215] "Topology Admit Handler" podUID="2f1398ec1acd18ef33a022ee5067f366" podNamespace="kube-system" podName="kube-controller-manager-ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388"
May 10 00:51:19.055532 kubelet[2098]: I0510 00:51:19.055491 2098 topology_manager.go:215] "Topology Admit Handler" podUID="69e1cf39c38483f55cf97326e7c07681" podNamespace="kube-system" podName="kube-scheduler-ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388"
May 10 00:51:19.075853 kubelet[2098]: W0510 00:51:19.075803 2098 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]
May 10 00:51:19.077231 kubelet[2098]: W0510 00:51:19.077186 2098 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]
May 10 00:51:19.077945 kubelet[2098]: W0510 00:51:19.077921 2098 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]
May 10 00:51:19.153293 kubelet[2098]: I0510 00:51:19.153237 2098 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2f1398ec1acd18ef33a022ee5067f366-flexvolume-dir\") pod \"kube-controller-manager-ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388\" (UID: \"2f1398ec1acd18ef33a022ee5067f366\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388"
May 10 00:51:19.153632 kubelet[2098]: I0510 00:51:19.153566 2098 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2f1398ec1acd18ef33a022ee5067f366-kubeconfig\") pod \"kube-controller-manager-ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388\" (UID: \"2f1398ec1acd18ef33a022ee5067f366\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388"
May 10 00:51:19.153760 kubelet[2098]: I0510 00:51:19.153637 2098 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2f1398ec1acd18ef33a022ee5067f366-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388\" (UID: \"2f1398ec1acd18ef33a022ee5067f366\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388"
May 10 00:51:19.153760 kubelet[2098]: I0510 00:51:19.153671 2098 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1b91be5520f5f7c60e188c0973f69f37-k8s-certs\") pod \"kube-apiserver-ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388\" (UID: \"1b91be5520f5f7c60e188c0973f69f37\") " pod="kube-system/kube-apiserver-ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388"
May 10 00:51:19.153760 kubelet[2098]: I0510 00:51:19.153727 2098 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2f1398ec1acd18ef33a022ee5067f366-ca-certs\") pod \"kube-controller-manager-ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388\" (UID: \"2f1398ec1acd18ef33a022ee5067f366\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388"
May 10 00:51:19.153973 kubelet[2098]: I0510 00:51:19.153779 2098 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1b91be5520f5f7c60e188c0973f69f37-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388\" (UID: \"1b91be5520f5f7c60e188c0973f69f37\") " pod="kube-system/kube-apiserver-ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388"
May 10 00:51:19.153973 kubelet[2098]: I0510 00:51:19.153867 2098 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2f1398ec1acd18ef33a022ee5067f366-k8s-certs\") pod \"kube-controller-manager-ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388\" (UID: \"2f1398ec1acd18ef33a022ee5067f366\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388"
May 10 00:51:19.154104 kubelet[2098]: I0510 00:51:19.154020 2098 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/69e1cf39c38483f55cf97326e7c07681-kubeconfig\") pod \"kube-scheduler-ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388\" (UID: \"69e1cf39c38483f55cf97326e7c07681\") " pod="kube-system/kube-scheduler-ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388"
May 10 00:51:19.154104 kubelet[2098]: I0510 00:51:19.154057 2098 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1b91be5520f5f7c60e188c0973f69f37-ca-certs\") pod \"kube-apiserver-ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388\" (UID: \"1b91be5520f5f7c60e188c0973f69f37\") " pod="kube-system/kube-apiserver-ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388"
May 10 00:51:19.688609 sudo[2128]: pam_unix(sudo:session): session closed for user root
May 10 00:51:19.725568 kubelet[2098]: I0510 00:51:19.725509 2098 apiserver.go:52] "Watching apiserver"
May 10 00:51:19.774118 kubelet[2098]: I0510 00:51:19.774074 2098 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
May 10 00:51:19.854586 kubelet[2098]: I0510 00:51:19.854493 2098 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388" podStartSLOduration=0.854467012 podStartE2EDuration="854.467012ms" podCreationTimestamp="2025-05-10 00:51:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:51:19.852354041 +0000 UTC m=+1.239849864" watchObservedRunningTime="2025-05-10 00:51:19.854467012 +0000 UTC m=+1.241962830"
May 10 00:51:19.889650 kubelet[2098]: I0510 00:51:19.889563 2098 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388" podStartSLOduration=0.889535037 podStartE2EDuration="889.535037ms" podCreationTimestamp="2025-05-10 00:51:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:51:19.868200011 +0000 UTC m=+1.255695836" watchObservedRunningTime="2025-05-10 00:51:19.889535037 +0000 UTC m=+1.277030854"
May 10 00:51:21.864019 sudo[1426]: pam_unix(sudo:session): session closed for user root
May 10 00:51:21.906747 sshd[1407]: pam_unix(sshd:session): session closed for user core
May 10 00:51:21.911737 systemd-logind[1219]: Session 5 logged out. Waiting for processes to exit.
May 10 00:51:21.912063 systemd[1]: sshd@4-10.128.0.57:22-147.75.109.163:55508.service: Deactivated successfully.
May 10 00:51:21.913270 systemd[1]: session-5.scope: Deactivated successfully.
May 10 00:51:21.913495 systemd[1]: session-5.scope: Consumed 7.074s CPU time.
May 10 00:51:21.914689 systemd-logind[1219]: Removed session 5.
May 10 00:51:25.771524 update_engine[1222]: I0510 00:51:25.771431 1222 update_attempter.cc:509] Updating boot flags...
May 10 00:51:25.955694 kubelet[2098]: I0510 00:51:25.955604 2098 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388" podStartSLOduration=6.955487642 podStartE2EDuration="6.955487642s" podCreationTimestamp="2025-05-10 00:51:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:51:19.890370219 +0000 UTC m=+1.277866040" watchObservedRunningTime="2025-05-10 00:51:25.955487642 +0000 UTC m=+7.342983466"
May 10 00:51:32.790353 kubelet[2098]: I0510 00:51:32.790305 2098 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
May 10 00:51:32.790978 env[1236]: time="2025-05-10T00:51:32.790777253Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
May 10 00:51:32.791431 kubelet[2098]: I0510 00:51:32.791045    2098 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
May 10 00:51:33.399419 kubelet[2098]: I0510 00:51:33.399361    2098 topology_manager.go:215] "Topology Admit Handler" podUID="af5f20c6-05d8-462b-9777-0e4a01334891" podNamespace="kube-system" podName="kube-proxy-vb2pz"
May 10 00:51:33.408039 systemd[1]: Created slice kubepods-besteffort-podaf5f20c6_05d8_462b_9777_0e4a01334891.slice.
May 10 00:51:33.427535 kubelet[2098]: I0510 00:51:33.427466    2098 topology_manager.go:215] "Topology Admit Handler" podUID="f2059765-1614-4308-aaab-b28039c37725" podNamespace="kube-system" podName="cilium-qzp4m"
May 10 00:51:33.438480 kubelet[2098]: W0510 00:51:33.438424    2098 reflector.go:547] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388' and this object
May 10 00:51:33.438688 kubelet[2098]: E0510 00:51:33.438499    2098 reflector.go:150] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388' and this object
May 10 00:51:33.438688 kubelet[2098]: W0510 00:51:33.438626    2098 reflector.go:547] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388' and this object
May 10 00:51:33.438688 kubelet[2098]: E0510 00:51:33.438648    2098 reflector.go:150] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388' and this object
May 10 00:51:33.452516 systemd[1]: Created slice kubepods-burstable-podf2059765_1614_4308_aaab_b28039c37725.slice.
May 10 00:51:33.545107 kubelet[2098]: I0510 00:51:33.545056    2098 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/af5f20c6-05d8-462b-9777-0e4a01334891-xtables-lock\") pod \"kube-proxy-vb2pz\" (UID: \"af5f20c6-05d8-462b-9777-0e4a01334891\") " pod="kube-system/kube-proxy-vb2pz"
May 10 00:51:33.545481 kubelet[2098]: I0510 00:51:33.545437    2098 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f2059765-1614-4308-aaab-b28039c37725-lib-modules\") pod \"cilium-qzp4m\" (UID: \"f2059765-1614-4308-aaab-b28039c37725\") " pod="kube-system/cilium-qzp4m"
May 10 00:51:33.545693 kubelet[2098]: I0510 00:51:33.545666    2098 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f2059765-1614-4308-aaab-b28039c37725-host-proc-sys-kernel\") pod \"cilium-qzp4m\" (UID: \"f2059765-1614-4308-aaab-b28039c37725\") " pod="kube-system/cilium-qzp4m"
May 10 00:51:33.545856 kubelet[2098]: I0510 00:51:33.545832    2098 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f2059765-1614-4308-aaab-b28039c37725-etc-cni-netd\") pod \"cilium-qzp4m\" (UID: \"f2059765-1614-4308-aaab-b28039c37725\") " pod="kube-system/cilium-qzp4m"
May 10 00:51:33.546046 kubelet[2098]: I0510 00:51:33.546022    2098 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/af5f20c6-05d8-462b-9777-0e4a01334891-lib-modules\") pod \"kube-proxy-vb2pz\" (UID: \"af5f20c6-05d8-462b-9777-0e4a01334891\") " pod="kube-system/kube-proxy-vb2pz"
May 10 00:51:33.546219 kubelet[2098]: I0510 00:51:33.546193    2098 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f2059765-1614-4308-aaab-b28039c37725-cilium-cgroup\") pod \"cilium-qzp4m\" (UID: \"f2059765-1614-4308-aaab-b28039c37725\") " pod="kube-system/cilium-qzp4m"
May 10 00:51:33.546370 kubelet[2098]: I0510 00:51:33.546345    2098 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f2059765-1614-4308-aaab-b28039c37725-hostproc\") pod \"cilium-qzp4m\" (UID: \"f2059765-1614-4308-aaab-b28039c37725\") " pod="kube-system/cilium-qzp4m"
May 10 00:51:33.546540 kubelet[2098]: I0510 00:51:33.546516    2098 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f2059765-1614-4308-aaab-b28039c37725-cilium-run\") pod \"cilium-qzp4m\" (UID: \"f2059765-1614-4308-aaab-b28039c37725\") " pod="kube-system/cilium-qzp4m"
May 10 00:51:33.546711 kubelet[2098]: I0510 00:51:33.546686    2098 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f2059765-1614-4308-aaab-b28039c37725-cni-path\") pod \"cilium-qzp4m\" (UID: \"f2059765-1614-4308-aaab-b28039c37725\") " pod="kube-system/cilium-qzp4m"
May 10 00:51:33.546972 kubelet[2098]: I0510 00:51:33.546946    2098 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f2059765-1614-4308-aaab-b28039c37725-cilium-config-path\") pod \"cilium-qzp4m\" (UID: \"f2059765-1614-4308-aaab-b28039c37725\") " pod="kube-system/cilium-qzp4m"
May 10 00:51:33.547156 kubelet[2098]: I0510 00:51:33.547133    2098 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f2059765-1614-4308-aaab-b28039c37725-xtables-lock\") pod \"cilium-qzp4m\" (UID: \"f2059765-1614-4308-aaab-b28039c37725\") " pod="kube-system/cilium-qzp4m"
May 10 00:51:33.547379 kubelet[2098]: I0510 00:51:33.547352    2098 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/af5f20c6-05d8-462b-9777-0e4a01334891-kube-proxy\") pod \"kube-proxy-vb2pz\" (UID: \"af5f20c6-05d8-462b-9777-0e4a01334891\") " pod="kube-system/kube-proxy-vb2pz"
May 10 00:51:33.547589 kubelet[2098]: I0510 00:51:33.547566    2098 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f2059765-1614-4308-aaab-b28039c37725-clustermesh-secrets\") pod \"cilium-qzp4m\" (UID: \"f2059765-1614-4308-aaab-b28039c37725\") " pod="kube-system/cilium-qzp4m"
May 10 00:51:33.547790 kubelet[2098]: I0510 00:51:33.547764    2098 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-th7rh\" (UniqueName: \"kubernetes.io/projected/f2059765-1614-4308-aaab-b28039c37725-kube-api-access-th7rh\") pod \"cilium-qzp4m\" (UID: \"f2059765-1614-4308-aaab-b28039c37725\") " pod="kube-system/cilium-qzp4m"
May 10 00:51:33.548045 kubelet[2098]: I0510 00:51:33.548022    2098 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f2059765-1614-4308-aaab-b28039c37725-host-proc-sys-net\") pod \"cilium-qzp4m\" (UID: \"f2059765-1614-4308-aaab-b28039c37725\") " pod="kube-system/cilium-qzp4m"
May 10 00:51:33.548263 kubelet[2098]: I0510 00:51:33.548234    2098 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2bzd\" (UniqueName: \"kubernetes.io/projected/af5f20c6-05d8-462b-9777-0e4a01334891-kube-api-access-z2bzd\") pod \"kube-proxy-vb2pz\" (UID: \"af5f20c6-05d8-462b-9777-0e4a01334891\") " pod="kube-system/kube-proxy-vb2pz"
May 10 00:51:33.548474 kubelet[2098]: I0510 00:51:33.548449    2098 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f2059765-1614-4308-aaab-b28039c37725-bpf-maps\") pod \"cilium-qzp4m\" (UID: \"f2059765-1614-4308-aaab-b28039c37725\") " pod="kube-system/cilium-qzp4m"
May 10 00:51:33.548665 kubelet[2098]: I0510 00:51:33.548644    2098 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f2059765-1614-4308-aaab-b28039c37725-hubble-tls\") pod \"cilium-qzp4m\" (UID: \"f2059765-1614-4308-aaab-b28039c37725\") " pod="kube-system/cilium-qzp4m"
May 10 00:51:33.718487 env[1236]: time="2025-05-10T00:51:33.718330726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vb2pz,Uid:af5f20c6-05d8-462b-9777-0e4a01334891,Namespace:kube-system,Attempt:0,}"
May 10 00:51:33.748390 env[1236]: time="2025-05-10T00:51:33.748285545Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 10 00:51:33.748647 env[1236]: time="2025-05-10T00:51:33.748356543Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 10 00:51:33.748647 env[1236]: time="2025-05-10T00:51:33.748375230Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 10 00:51:33.749058 env[1236]: time="2025-05-10T00:51:33.748985489Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/121de917a1cc6c9a6c87e30dc323f55c33e5a46f44c37abc9c981927bb8d2d91 pid=2195 runtime=io.containerd.runc.v2
May 10 00:51:33.780726 systemd[1]: run-containerd-runc-k8s.io-121de917a1cc6c9a6c87e30dc323f55c33e5a46f44c37abc9c981927bb8d2d91-runc.qR621e.mount: Deactivated successfully.
May 10 00:51:33.787468 systemd[1]: Started cri-containerd-121de917a1cc6c9a6c87e30dc323f55c33e5a46f44c37abc9c981927bb8d2d91.scope.
May 10 00:51:33.831570 env[1236]: time="2025-05-10T00:51:33.831508821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vb2pz,Uid:af5f20c6-05d8-462b-9777-0e4a01334891,Namespace:kube-system,Attempt:0,} returns sandbox id \"121de917a1cc6c9a6c87e30dc323f55c33e5a46f44c37abc9c981927bb8d2d91\""
May 10 00:51:33.837669 env[1236]: time="2025-05-10T00:51:33.837613621Z" level=info msg="CreateContainer within sandbox \"121de917a1cc6c9a6c87e30dc323f55c33e5a46f44c37abc9c981927bb8d2d91\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
May 10 00:51:33.864404 kubelet[2098]: I0510 00:51:33.864345    2098 topology_manager.go:215] "Topology Admit Handler" podUID="44d7b4c1-77e3-4069-9436-be061fc50517" podNamespace="kube-system" podName="cilium-operator-599987898-757bt"
May 10 00:51:33.873294 systemd[1]: Created slice kubepods-besteffort-pod44d7b4c1_77e3_4069_9436_be061fc50517.slice.
May 10 00:51:33.884594 env[1236]: time="2025-05-10T00:51:33.884537470Z" level=info msg="CreateContainer within sandbox \"121de917a1cc6c9a6c87e30dc323f55c33e5a46f44c37abc9c981927bb8d2d91\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2a95cfe40bafc52108143fc33f58f38cfba72c33d17514baf15825eda74adc7a\""
May 10 00:51:33.886231 env[1236]: time="2025-05-10T00:51:33.886186542Z" level=info msg="StartContainer for \"2a95cfe40bafc52108143fc33f58f38cfba72c33d17514baf15825eda74adc7a\""
May 10 00:51:33.929026 systemd[1]: Started cri-containerd-2a95cfe40bafc52108143fc33f58f38cfba72c33d17514baf15825eda74adc7a.scope.
May 10 00:51:33.958203 kubelet[2098]: I0510 00:51:33.958019    2098 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/44d7b4c1-77e3-4069-9436-be061fc50517-cilium-config-path\") pod \"cilium-operator-599987898-757bt\" (UID: \"44d7b4c1-77e3-4069-9436-be061fc50517\") " pod="kube-system/cilium-operator-599987898-757bt"
May 10 00:51:33.958203 kubelet[2098]: I0510 00:51:33.958088    2098 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pg9vt\" (UniqueName: \"kubernetes.io/projected/44d7b4c1-77e3-4069-9436-be061fc50517-kube-api-access-pg9vt\") pod \"cilium-operator-599987898-757bt\" (UID: \"44d7b4c1-77e3-4069-9436-be061fc50517\") " pod="kube-system/cilium-operator-599987898-757bt"
May 10 00:51:34.008283 env[1236]: time="2025-05-10T00:51:34.008133815Z" level=info msg="StartContainer for \"2a95cfe40bafc52108143fc33f58f38cfba72c33d17514baf15825eda74adc7a\" returns successfully"
May 10 00:51:34.179007 env[1236]: time="2025-05-10T00:51:34.178938308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-757bt,Uid:44d7b4c1-77e3-4069-9436-be061fc50517,Namespace:kube-system,Attempt:0,}"
May 10 00:51:34.207276 env[1236]: time="2025-05-10T00:51:34.207150257Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 10 00:51:34.207276 env[1236]: time="2025-05-10T00:51:34.207209038Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 10 00:51:34.207619 env[1236]: time="2025-05-10T00:51:34.207229429Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 10 00:51:34.208179 env[1236]: time="2025-05-10T00:51:34.208092539Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/561443205474ebf842e1e9597ff36819fe5e66bb713650e99ec5f810470f844c pid=2306 runtime=io.containerd.runc.v2
May 10 00:51:34.228794 systemd[1]: Started cri-containerd-561443205474ebf842e1e9597ff36819fe5e66bb713650e99ec5f810470f844c.scope.
May 10 00:51:34.323774 env[1236]: time="2025-05-10T00:51:34.323269166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-757bt,Uid:44d7b4c1-77e3-4069-9436-be061fc50517,Namespace:kube-system,Attempt:0,} returns sandbox id \"561443205474ebf842e1e9597ff36819fe5e66bb713650e99ec5f810470f844c\""
May 10 00:51:34.328653 env[1236]: time="2025-05-10T00:51:34.328360200Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
May 10 00:51:34.650983 kubelet[2098]: E0510 00:51:34.650915    2098 projected.go:269] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition
May 10 00:51:34.650983 kubelet[2098]: E0510 00:51:34.650963    2098 projected.go:200] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-qzp4m: failed to sync secret cache: timed out waiting for the condition
May 10 00:51:34.651298 kubelet[2098]: E0510 00:51:34.651117    2098 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f2059765-1614-4308-aaab-b28039c37725-hubble-tls podName:f2059765-1614-4308-aaab-b28039c37725 nodeName:}" failed. No retries permitted until 2025-05-10 00:51:35.151042552 +0000 UTC m=+16.538538376 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/f2059765-1614-4308-aaab-b28039c37725-hubble-tls") pod "cilium-qzp4m" (UID: "f2059765-1614-4308-aaab-b28039c37725") : failed to sync secret cache: timed out waiting for the condition
May 10 00:51:35.259789 env[1236]: time="2025-05-10T00:51:35.259708434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qzp4m,Uid:f2059765-1614-4308-aaab-b28039c37725,Namespace:kube-system,Attempt:0,}"
May 10 00:51:35.292891 env[1236]: time="2025-05-10T00:51:35.292772304Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 10 00:51:35.292891 env[1236]: time="2025-05-10T00:51:35.292838636Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 10 00:51:35.292891 env[1236]: time="2025-05-10T00:51:35.292857719Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 10 00:51:35.293616 env[1236]: time="2025-05-10T00:51:35.293533398Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7b4742828bf0914845fdb8c7472f1fa98c085d739904da23f11585b8bb0f2b54 pid=2437 runtime=io.containerd.runc.v2
May 10 00:51:35.326867 systemd[1]: Started cri-containerd-7b4742828bf0914845fdb8c7472f1fa98c085d739904da23f11585b8bb0f2b54.scope.
May 10 00:51:35.367318 env[1236]: time="2025-05-10T00:51:35.367248908Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qzp4m,Uid:f2059765-1614-4308-aaab-b28039c37725,Namespace:kube-system,Attempt:0,} returns sandbox id \"7b4742828bf0914845fdb8c7472f1fa98c085d739904da23f11585b8bb0f2b54\""
May 10 00:51:35.676170 systemd[1]: run-containerd-runc-k8s.io-7b4742828bf0914845fdb8c7472f1fa98c085d739904da23f11585b8bb0f2b54-runc.WxvgjU.mount: Deactivated successfully.
May 10 00:51:35.769069 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1998591673.mount: Deactivated successfully.
May 10 00:51:36.674407 env[1236]: time="2025-05-10T00:51:36.674327215Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 10 00:51:36.677663 env[1236]: time="2025-05-10T00:51:36.677605084Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 10 00:51:36.680520 env[1236]: time="2025-05-10T00:51:36.680441031Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 10 00:51:36.681649 env[1236]: time="2025-05-10T00:51:36.681579440Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
May 10 00:51:36.686042 env[1236]: time="2025-05-10T00:51:36.685682952Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
May 10 00:51:36.688283 env[1236]: time="2025-05-10T00:51:36.687796575Z" level=info msg="CreateContainer within sandbox \"561443205474ebf842e1e9597ff36819fe5e66bb713650e99ec5f810470f844c\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
May 10 00:51:36.708994 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount130152740.mount: Deactivated successfully.
May 10 00:51:36.718807 env[1236]: time="2025-05-10T00:51:36.718728921Z" level=info msg="CreateContainer within sandbox \"561443205474ebf842e1e9597ff36819fe5e66bb713650e99ec5f810470f844c\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"1dbcc209ed8ae473024d779ac1dc520788954d1b0a0f390b9d0980e508b478f3\""
May 10 00:51:36.720545 env[1236]: time="2025-05-10T00:51:36.719431398Z" level=info msg="StartContainer for \"1dbcc209ed8ae473024d779ac1dc520788954d1b0a0f390b9d0980e508b478f3\""
May 10 00:51:36.763007 systemd[1]: Started cri-containerd-1dbcc209ed8ae473024d779ac1dc520788954d1b0a0f390b9d0980e508b478f3.scope.
May 10 00:51:36.813074 env[1236]: time="2025-05-10T00:51:36.812993087Z" level=info msg="StartContainer for \"1dbcc209ed8ae473024d779ac1dc520788954d1b0a0f390b9d0980e508b478f3\" returns successfully"
May 10 00:51:36.884083 kubelet[2098]: I0510 00:51:36.883470    2098 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-757bt" podStartSLOduration=1.525875885 podStartE2EDuration="3.883442766s" podCreationTimestamp="2025-05-10 00:51:33 +0000 UTC" firstStartedPulling="2025-05-10 00:51:34.325791555 +0000 UTC m=+15.713287370" lastFinishedPulling="2025-05-10 00:51:36.683358436 +0000 UTC m=+18.070854251" observedRunningTime="2025-05-10 00:51:36.882797447 +0000 UTC m=+18.270293271" watchObservedRunningTime="2025-05-10 00:51:36.883442766 +0000 UTC m=+18.270938598"
May 10 00:51:36.884083 kubelet[2098]: I0510 00:51:36.883763    2098 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vb2pz" podStartSLOduration=3.883743943 podStartE2EDuration="3.883743943s" podCreationTimestamp="2025-05-10 00:51:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:51:34.867964038 +0000 UTC m=+16.255459866" watchObservedRunningTime="2025-05-10 00:51:36.883743943 +0000 UTC m=+18.271239771"
May 10 00:51:37.702749 systemd[1]: run-containerd-runc-k8s.io-1dbcc209ed8ae473024d779ac1dc520788954d1b0a0f390b9d0980e508b478f3-runc.mBMJFO.mount: Deactivated successfully.
May 10 00:51:42.544185 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2401839723.mount: Deactivated successfully.
May 10 00:51:46.038638 env[1236]: time="2025-05-10T00:51:46.038568425Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 10 00:51:46.042258 env[1236]: time="2025-05-10T00:51:46.042206441Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 10 00:51:46.044766 env[1236]: time="2025-05-10T00:51:46.044711459Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 10 00:51:46.045745 env[1236]: time="2025-05-10T00:51:46.045692368Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
May 10 00:51:46.050413 env[1236]: time="2025-05-10T00:51:46.050348942Z" level=info msg="CreateContainer within sandbox \"7b4742828bf0914845fdb8c7472f1fa98c085d739904da23f11585b8bb0f2b54\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 10 00:51:46.071465 env[1236]: time="2025-05-10T00:51:46.071374739Z" level=info msg="CreateContainer within sandbox \"7b4742828bf0914845fdb8c7472f1fa98c085d739904da23f11585b8bb0f2b54\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"de6477869a2405a3488e18264955987fec3525dd79c988143a0b2738f43033b8\""
May 10 00:51:46.074491 env[1236]: time="2025-05-10T00:51:46.072755252Z" level=info msg="StartContainer for \"de6477869a2405a3488e18264955987fec3525dd79c988143a0b2738f43033b8\""
May 10 00:51:46.114422 systemd[1]: run-containerd-runc-k8s.io-de6477869a2405a3488e18264955987fec3525dd79c988143a0b2738f43033b8-runc.OHhwgv.mount: Deactivated successfully.
May 10 00:51:46.121707 systemd[1]: Started cri-containerd-de6477869a2405a3488e18264955987fec3525dd79c988143a0b2738f43033b8.scope.
May 10 00:51:46.161634 env[1236]: time="2025-05-10T00:51:46.161226427Z" level=info msg="StartContainer for \"de6477869a2405a3488e18264955987fec3525dd79c988143a0b2738f43033b8\" returns successfully"
May 10 00:51:46.177233 systemd[1]: cri-containerd-de6477869a2405a3488e18264955987fec3525dd79c988143a0b2738f43033b8.scope: Deactivated successfully.
May 10 00:51:47.063310 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-de6477869a2405a3488e18264955987fec3525dd79c988143a0b2738f43033b8-rootfs.mount: Deactivated successfully.
May 10 00:51:48.257969 env[1236]: time="2025-05-10T00:51:48.257660127Z" level=info msg="shim disconnected" id=de6477869a2405a3488e18264955987fec3525dd79c988143a0b2738f43033b8
May 10 00:51:48.258671 env[1236]: time="2025-05-10T00:51:48.258624503Z" level=warning msg="cleaning up after shim disconnected" id=de6477869a2405a3488e18264955987fec3525dd79c988143a0b2738f43033b8 namespace=k8s.io
May 10 00:51:48.258671 env[1236]: time="2025-05-10T00:51:48.258663245Z" level=info msg="cleaning up dead shim"
May 10 00:51:48.270459 env[1236]: time="2025-05-10T00:51:48.270386643Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:51:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2560 runtime=io.containerd.runc.v2\n"
May 10 00:51:48.900828 env[1236]: time="2025-05-10T00:51:48.900404267Z" level=info msg="CreateContainer within sandbox \"7b4742828bf0914845fdb8c7472f1fa98c085d739904da23f11585b8bb0f2b54\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 10 00:51:48.924530 env[1236]: time="2025-05-10T00:51:48.924440609Z" level=info msg="CreateContainer within sandbox \"7b4742828bf0914845fdb8c7472f1fa98c085d739904da23f11585b8bb0f2b54\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e7fbb6095ac65223c307c47c506c8e8809eeb3e4690c2e4d55c04f8ca58d42cc\""
May 10 00:51:48.925873 env[1236]: time="2025-05-10T00:51:48.925830718Z" level=info msg="StartContainer for \"e7fbb6095ac65223c307c47c506c8e8809eeb3e4690c2e4d55c04f8ca58d42cc\""
May 10 00:51:48.967353 systemd[1]: Started cri-containerd-e7fbb6095ac65223c307c47c506c8e8809eeb3e4690c2e4d55c04f8ca58d42cc.scope.
May 10 00:51:49.032729 env[1236]: time="2025-05-10T00:51:49.031884984Z" level=info msg="StartContainer for \"e7fbb6095ac65223c307c47c506c8e8809eeb3e4690c2e4d55c04f8ca58d42cc\" returns successfully"
May 10 00:51:49.040230 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 10 00:51:49.041417 systemd[1]: Stopped systemd-sysctl.service.
May 10 00:51:49.045092 systemd[1]: Stopping systemd-sysctl.service...
May 10 00:51:49.048185 systemd[1]: Starting systemd-sysctl.service...
May 10 00:51:49.054674 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 10 00:51:49.056279 systemd[1]: cri-containerd-e7fbb6095ac65223c307c47c506c8e8809eeb3e4690c2e4d55c04f8ca58d42cc.scope: Deactivated successfully.
May 10 00:51:49.068496 systemd[1]: Finished systemd-sysctl.service.
May 10 00:51:49.093982 env[1236]: time="2025-05-10T00:51:49.093913585Z" level=info msg="shim disconnected" id=e7fbb6095ac65223c307c47c506c8e8809eeb3e4690c2e4d55c04f8ca58d42cc
May 10 00:51:49.093982 env[1236]: time="2025-05-10T00:51:49.093981801Z" level=warning msg="cleaning up after shim disconnected" id=e7fbb6095ac65223c307c47c506c8e8809eeb3e4690c2e4d55c04f8ca58d42cc namespace=k8s.io
May 10 00:51:49.094364 env[1236]: time="2025-05-10T00:51:49.093997799Z" level=info msg="cleaning up dead shim"
May 10 00:51:49.105417 env[1236]: time="2025-05-10T00:51:49.105359533Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:51:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2624 runtime=io.containerd.runc.v2\n"
May 10 00:51:49.906703 env[1236]: time="2025-05-10T00:51:49.906637589Z" level=info msg="CreateContainer within sandbox \"7b4742828bf0914845fdb8c7472f1fa98c085d739904da23f11585b8bb0f2b54\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 10 00:51:49.914506 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e7fbb6095ac65223c307c47c506c8e8809eeb3e4690c2e4d55c04f8ca58d42cc-rootfs.mount: Deactivated successfully.
May 10 00:51:49.939383 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2904665910.mount: Deactivated successfully.
May 10 00:51:49.950931 env[1236]: time="2025-05-10T00:51:49.948009499Z" level=info msg="CreateContainer within sandbox \"7b4742828bf0914845fdb8c7472f1fa98c085d739904da23f11585b8bb0f2b54\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"004aec94a8ef51e51cb44094cd0c36545534b581424b53b7e92d9a6f3d3be872\""
May 10 00:51:49.952563 env[1236]: time="2025-05-10T00:51:49.951938078Z" level=info msg="StartContainer for \"004aec94a8ef51e51cb44094cd0c36545534b581424b53b7e92d9a6f3d3be872\""
May 10 00:51:49.996945 systemd[1]: Started cri-containerd-004aec94a8ef51e51cb44094cd0c36545534b581424b53b7e92d9a6f3d3be872.scope.
May 10 00:51:50.047497 systemd[1]: cri-containerd-004aec94a8ef51e51cb44094cd0c36545534b581424b53b7e92d9a6f3d3be872.scope: Deactivated successfully.
May 10 00:51:50.051488 env[1236]: time="2025-05-10T00:51:50.051426972Z" level=info msg="StartContainer for \"004aec94a8ef51e51cb44094cd0c36545534b581424b53b7e92d9a6f3d3be872\" returns successfully"
May 10 00:51:50.088876 env[1236]: time="2025-05-10T00:51:50.088807018Z" level=info msg="shim disconnected" id=004aec94a8ef51e51cb44094cd0c36545534b581424b53b7e92d9a6f3d3be872
May 10 00:51:50.089228 env[1236]: time="2025-05-10T00:51:50.088882559Z" level=warning msg="cleaning up after shim disconnected" id=004aec94a8ef51e51cb44094cd0c36545534b581424b53b7e92d9a6f3d3be872 namespace=k8s.io
May 10 00:51:50.089228 env[1236]: time="2025-05-10T00:51:50.088898593Z" level=info msg="cleaning up dead shim"
May 10 00:51:50.102186 env[1236]: time="2025-05-10T00:51:50.102125702Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:51:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2684 runtime=io.containerd.runc.v2\n"
May 10 00:51:50.910480 env[1236]: time="2025-05-10T00:51:50.910422460Z" level=info msg="CreateContainer within sandbox \"7b4742828bf0914845fdb8c7472f1fa98c085d739904da23f11585b8bb0f2b54\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 10 00:51:50.914010 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-004aec94a8ef51e51cb44094cd0c36545534b581424b53b7e92d9a6f3d3be872-rootfs.mount: Deactivated successfully.
May 10 00:51:50.952036 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3168898392.mount: Deactivated successfully.
May 10 00:51:50.962584 env[1236]: time="2025-05-10T00:51:50.962511606Z" level=info msg="CreateContainer within sandbox \"7b4742828bf0914845fdb8c7472f1fa98c085d739904da23f11585b8bb0f2b54\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"41bdfae37a5ec95555f920eb392c443e82f84795a8e56ece83f2f90d70cf0282\""
May 10 00:51:50.963646 env[1236]: time="2025-05-10T00:51:50.963602042Z" level=info msg="StartContainer for \"41bdfae37a5ec95555f920eb392c443e82f84795a8e56ece83f2f90d70cf0282\""
May 10 00:51:51.011111 systemd[1]: Started cri-containerd-41bdfae37a5ec95555f920eb392c443e82f84795a8e56ece83f2f90d70cf0282.scope.
May 10 00:51:51.051361 systemd[1]: cri-containerd-41bdfae37a5ec95555f920eb392c443e82f84795a8e56ece83f2f90d70cf0282.scope: Deactivated successfully.
May 10 00:51:51.055196 env[1236]: time="2025-05-10T00:51:51.055139068Z" level=info msg="StartContainer for \"41bdfae37a5ec95555f920eb392c443e82f84795a8e56ece83f2f90d70cf0282\" returns successfully"
May 10 00:51:51.087059 env[1236]: time="2025-05-10T00:51:51.086993518Z" level=info msg="shim disconnected" id=41bdfae37a5ec95555f920eb392c443e82f84795a8e56ece83f2f90d70cf0282
May 10 00:51:51.087372 env[1236]: time="2025-05-10T00:51:51.087062188Z" level=warning msg="cleaning up after shim disconnected" id=41bdfae37a5ec95555f920eb392c443e82f84795a8e56ece83f2f90d70cf0282 namespace=k8s.io
May 10 00:51:51.087372 env[1236]: time="2025-05-10T00:51:51.087140673Z" level=info msg="cleaning up dead shim"
May 10 00:51:51.100329 env[1236]: time="2025-05-10T00:51:51.100261990Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:51:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2740 runtime=io.containerd.runc.v2\n"
May 10 00:51:51.914093 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-41bdfae37a5ec95555f920eb392c443e82f84795a8e56ece83f2f90d70cf0282-rootfs.mount: Deactivated successfully.
May 10 00:51:51.920352 env[1236]: time="2025-05-10T00:51:51.920283858Z" level=info msg="CreateContainer within sandbox \"7b4742828bf0914845fdb8c7472f1fa98c085d739904da23f11585b8bb0f2b54\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 10 00:51:51.949148 env[1236]: time="2025-05-10T00:51:51.949072207Z" level=info msg="CreateContainer within sandbox \"7b4742828bf0914845fdb8c7472f1fa98c085d739904da23f11585b8bb0f2b54\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b82ac4b943995e02233b630c671269a1d2b3f870f166530c201724ec046f4f4a\""
May 10 00:51:51.956381 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3323548027.mount: Deactivated successfully.
May 10 00:51:51.958500 env[1236]: time="2025-05-10T00:51:51.958445832Z" level=info msg="StartContainer for \"b82ac4b943995e02233b630c671269a1d2b3f870f166530c201724ec046f4f4a\""
May 10 00:51:52.000520 systemd[1]: Started cri-containerd-b82ac4b943995e02233b630c671269a1d2b3f870f166530c201724ec046f4f4a.scope.
May 10 00:51:52.046422 env[1236]: time="2025-05-10T00:51:52.046361765Z" level=info msg="StartContainer for \"b82ac4b943995e02233b630c671269a1d2b3f870f166530c201724ec046f4f4a\" returns successfully"
May 10 00:51:52.182011 kubelet[2098]: I0510 00:51:52.181841    2098 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
May 10 00:51:52.221014 kubelet[2098]: I0510 00:51:52.220948    2098 topology_manager.go:215] "Topology Admit Handler" podUID="3c9a93fb-8e8d-4a5a-8adb-8a081dcec67e" podNamespace="kube-system" podName="coredns-7db6d8ff4d-75tqc"
May 10 00:51:52.229659 systemd[1]: Created slice kubepods-burstable-pod3c9a93fb_8e8d_4a5a_8adb_8a081dcec67e.slice.
May 10 00:51:52.241988 kubelet[2098]: I0510 00:51:52.241943 2098 topology_manager.go:215] "Topology Admit Handler" podUID="e5258db3-1a1a-43dd-b0f7-78af2a1393ab" podNamespace="kube-system" podName="coredns-7db6d8ff4d-fkvbf" May 10 00:51:52.251073 systemd[1]: Created slice kubepods-burstable-pode5258db3_1a1a_43dd_b0f7_78af2a1393ab.slice. May 10 00:51:52.399497 kubelet[2098]: I0510 00:51:52.399446 2098 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3c9a93fb-8e8d-4a5a-8adb-8a081dcec67e-config-volume\") pod \"coredns-7db6d8ff4d-75tqc\" (UID: \"3c9a93fb-8e8d-4a5a-8adb-8a081dcec67e\") " pod="kube-system/coredns-7db6d8ff4d-75tqc" May 10 00:51:52.399856 kubelet[2098]: I0510 00:51:52.399823 2098 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsvfz\" (UniqueName: \"kubernetes.io/projected/e5258db3-1a1a-43dd-b0f7-78af2a1393ab-kube-api-access-xsvfz\") pod \"coredns-7db6d8ff4d-fkvbf\" (UID: \"e5258db3-1a1a-43dd-b0f7-78af2a1393ab\") " pod="kube-system/coredns-7db6d8ff4d-fkvbf" May 10 00:51:52.400127 kubelet[2098]: I0510 00:51:52.400099 2098 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgb8j\" (UniqueName: \"kubernetes.io/projected/3c9a93fb-8e8d-4a5a-8adb-8a081dcec67e-kube-api-access-wgb8j\") pod \"coredns-7db6d8ff4d-75tqc\" (UID: \"3c9a93fb-8e8d-4a5a-8adb-8a081dcec67e\") " pod="kube-system/coredns-7db6d8ff4d-75tqc" May 10 00:51:52.400347 kubelet[2098]: I0510 00:51:52.400312 2098 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e5258db3-1a1a-43dd-b0f7-78af2a1393ab-config-volume\") pod \"coredns-7db6d8ff4d-fkvbf\" (UID: \"e5258db3-1a1a-43dd-b0f7-78af2a1393ab\") " pod="kube-system/coredns-7db6d8ff4d-fkvbf" May 10 00:51:52.545329 env[1236]: 
time="2025-05-10T00:51:52.545175432Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-75tqc,Uid:3c9a93fb-8e8d-4a5a-8adb-8a081dcec67e,Namespace:kube-system,Attempt:0,}" May 10 00:51:52.557631 env[1236]: time="2025-05-10T00:51:52.557559456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-fkvbf,Uid:e5258db3-1a1a-43dd-b0f7-78af2a1393ab,Namespace:kube-system,Attempt:0,}" May 10 00:51:52.926115 systemd[1]: run-containerd-runc-k8s.io-b82ac4b943995e02233b630c671269a1d2b3f870f166530c201724ec046f4f4a-runc.EYiS0I.mount: Deactivated successfully. May 10 00:51:54.388848 systemd-networkd[1029]: cilium_host: Link UP May 10 00:51:54.389087 systemd-networkd[1029]: cilium_net: Link UP May 10 00:51:54.395994 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready May 10 00:51:54.396776 systemd-networkd[1029]: cilium_net: Gained carrier May 10 00:51:54.404018 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready May 10 00:51:54.404315 systemd-networkd[1029]: cilium_host: Gained carrier May 10 00:51:54.408985 systemd-networkd[1029]: cilium_net: Gained IPv6LL May 10 00:51:54.561179 systemd-networkd[1029]: cilium_vxlan: Link UP May 10 00:51:54.561199 systemd-networkd[1029]: cilium_vxlan: Gained carrier May 10 00:51:54.690562 systemd-networkd[1029]: cilium_host: Gained IPv6LL May 10 00:51:54.854949 kernel: NET: Registered PF_ALG protocol family May 10 00:51:55.769188 systemd-networkd[1029]: lxc_health: Link UP May 10 00:51:55.809961 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready May 10 00:51:55.810494 systemd-networkd[1029]: lxc_health: Gained carrier May 10 00:51:56.122869 systemd-networkd[1029]: lxc401eda4e113b: Link UP May 10 00:51:56.133961 kernel: eth0: renamed from tmpca5e7 May 10 00:51:56.150951 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc401eda4e113b: link becomes ready May 10 00:51:56.154728 systemd-networkd[1029]: lxc401eda4e113b: Gained carrier May 10 00:51:56.160025 
systemd-networkd[1029]: lxcf5229d2170f6: Link UP May 10 00:51:56.172054 kernel: eth0: renamed from tmp82c7b May 10 00:51:56.192194 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcf5229d2170f6: link becomes ready May 10 00:51:56.192564 systemd-networkd[1029]: lxcf5229d2170f6: Gained carrier May 10 00:51:56.282182 systemd-networkd[1029]: cilium_vxlan: Gained IPv6LL May 10 00:51:57.178096 systemd-networkd[1029]: lxc_health: Gained IPv6LL May 10 00:51:57.292940 kubelet[2098]: I0510 00:51:57.292842 2098 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-qzp4m" podStartSLOduration=13.615076977 podStartE2EDuration="24.292815694s" podCreationTimestamp="2025-05-10 00:51:33 +0000 UTC" firstStartedPulling="2025-05-10 00:51:35.369571272 +0000 UTC m=+16.757067087" lastFinishedPulling="2025-05-10 00:51:46.047309999 +0000 UTC m=+27.434805804" observedRunningTime="2025-05-10 00:51:52.956237316 +0000 UTC m=+34.343733141" watchObservedRunningTime="2025-05-10 00:51:57.292815694 +0000 UTC m=+38.680311517" May 10 00:51:57.690204 systemd-networkd[1029]: lxcf5229d2170f6: Gained IPv6LL May 10 00:51:58.074262 systemd-networkd[1029]: lxc401eda4e113b: Gained IPv6LL May 10 00:52:01.442204 env[1236]: time="2025-05-10T00:52:01.442110365Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:52:01.442967 env[1236]: time="2025-05-10T00:52:01.442896951Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:52:01.443175 env[1236]: time="2025-05-10T00:52:01.443138381Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:52:01.444081 env[1236]: time="2025-05-10T00:52:01.444020372Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/82c7beb2fa9613d961a9104e6d8ff5cd51be5a684cbd82aaf48952f78a3d74f5 pid=3286 runtime=io.containerd.runc.v2 May 10 00:52:01.478694 systemd[1]: Started cri-containerd-82c7beb2fa9613d961a9104e6d8ff5cd51be5a684cbd82aaf48952f78a3d74f5.scope. May 10 00:52:01.496568 env[1236]: time="2025-05-10T00:52:01.496470115Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:52:01.496881 env[1236]: time="2025-05-10T00:52:01.496827829Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:52:01.497117 env[1236]: time="2025-05-10T00:52:01.497076162Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:52:01.497507 env[1236]: time="2025-05-10T00:52:01.497459570Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ca5e7939c9871c44ed8b2cc5a08a1dbe71781fc4385f3f6835ad9b623b856938 pid=3312 runtime=io.containerd.runc.v2 May 10 00:52:01.555872 systemd[1]: Started cri-containerd-ca5e7939c9871c44ed8b2cc5a08a1dbe71781fc4385f3f6835ad9b623b856938.scope. 
May 10 00:52:01.640686 env[1236]: time="2025-05-10T00:52:01.640614335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-fkvbf,Uid:e5258db3-1a1a-43dd-b0f7-78af2a1393ab,Namespace:kube-system,Attempt:0,} returns sandbox id \"82c7beb2fa9613d961a9104e6d8ff5cd51be5a684cbd82aaf48952f78a3d74f5\"" May 10 00:52:01.646620 env[1236]: time="2025-05-10T00:52:01.646555943Z" level=info msg="CreateContainer within sandbox \"82c7beb2fa9613d961a9104e6d8ff5cd51be5a684cbd82aaf48952f78a3d74f5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 10 00:52:01.666533 env[1236]: time="2025-05-10T00:52:01.666467928Z" level=info msg="CreateContainer within sandbox \"82c7beb2fa9613d961a9104e6d8ff5cd51be5a684cbd82aaf48952f78a3d74f5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6e1771b4b970dba34074ec3ae0faf154d4a46976de0f37acc759256fb16225c4\"" May 10 00:52:01.667734 env[1236]: time="2025-05-10T00:52:01.667682362Z" level=info msg="StartContainer for \"6e1771b4b970dba34074ec3ae0faf154d4a46976de0f37acc759256fb16225c4\"" May 10 00:52:01.698967 systemd[1]: Started cri-containerd-6e1771b4b970dba34074ec3ae0faf154d4a46976de0f37acc759256fb16225c4.scope. 
May 10 00:52:01.735383 env[1236]: time="2025-05-10T00:52:01.735323997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-75tqc,Uid:3c9a93fb-8e8d-4a5a-8adb-8a081dcec67e,Namespace:kube-system,Attempt:0,} returns sandbox id \"ca5e7939c9871c44ed8b2cc5a08a1dbe71781fc4385f3f6835ad9b623b856938\"" May 10 00:52:01.743338 env[1236]: time="2025-05-10T00:52:01.743275016Z" level=info msg="CreateContainer within sandbox \"ca5e7939c9871c44ed8b2cc5a08a1dbe71781fc4385f3f6835ad9b623b856938\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 10 00:52:01.769228 env[1236]: time="2025-05-10T00:52:01.769148666Z" level=info msg="CreateContainer within sandbox \"ca5e7939c9871c44ed8b2cc5a08a1dbe71781fc4385f3f6835ad9b623b856938\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"80edb24f2772dd51adfe4e7e686177d79b4f79731d4433933287d65256b7fe1c\"" May 10 00:52:01.770625 env[1236]: time="2025-05-10T00:52:01.770540607Z" level=info msg="StartContainer for \"80edb24f2772dd51adfe4e7e686177d79b4f79731d4433933287d65256b7fe1c\"" May 10 00:52:01.807657 env[1236]: time="2025-05-10T00:52:01.807599168Z" level=info msg="StartContainer for \"6e1771b4b970dba34074ec3ae0faf154d4a46976de0f37acc759256fb16225c4\" returns successfully" May 10 00:52:01.831726 systemd[1]: Started cri-containerd-80edb24f2772dd51adfe4e7e686177d79b4f79731d4433933287d65256b7fe1c.scope. 
May 10 00:52:01.909207 env[1236]: time="2025-05-10T00:52:01.909134909Z" level=info msg="StartContainer for \"80edb24f2772dd51adfe4e7e686177d79b4f79731d4433933287d65256b7fe1c\" returns successfully" May 10 00:52:02.009714 kubelet[2098]: I0510 00:52:02.009521 2098 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-75tqc" podStartSLOduration=29.009474916 podStartE2EDuration="29.009474916s" podCreationTimestamp="2025-05-10 00:51:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:52:02.005758494 +0000 UTC m=+43.393254320" watchObservedRunningTime="2025-05-10 00:52:02.009474916 +0000 UTC m=+43.396970739" May 10 00:52:02.010378 kubelet[2098]: I0510 00:52:02.009719 2098 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-fkvbf" podStartSLOduration=29.009706749 podStartE2EDuration="29.009706749s" podCreationTimestamp="2025-05-10 00:51:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:52:01.980194839 +0000 UTC m=+43.367690692" watchObservedRunningTime="2025-05-10 00:52:02.009706749 +0000 UTC m=+43.397202574" May 10 00:52:02.454555 systemd[1]: run-containerd-runc-k8s.io-ca5e7939c9871c44ed8b2cc5a08a1dbe71781fc4385f3f6835ad9b623b856938-runc.WYvmAA.mount: Deactivated successfully. May 10 00:52:04.770120 kubelet[2098]: I0510 00:52:04.770057 2098 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 10 00:52:15.023055 systemd[1]: Started sshd@5-10.128.0.57:22-147.75.109.163:47544.service. 
May 10 00:52:15.313587 sshd[3454]: Accepted publickey for core from 147.75.109.163 port 47544 ssh2: RSA SHA256:euqJNJC5P+Wq5j+dl78lhvyXKvYvUXQDTjxbGHC2Bdk May 10 00:52:15.316062 sshd[3454]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:52:15.323455 systemd[1]: Started session-6.scope. May 10 00:52:15.324194 systemd-logind[1219]: New session 6 of user core. May 10 00:52:15.618369 sshd[3454]: pam_unix(sshd:session): session closed for user core May 10 00:52:15.623643 systemd[1]: sshd@5-10.128.0.57:22-147.75.109.163:47544.service: Deactivated successfully. May 10 00:52:15.624943 systemd[1]: session-6.scope: Deactivated successfully. May 10 00:52:15.626087 systemd-logind[1219]: Session 6 logged out. Waiting for processes to exit. May 10 00:52:15.629330 systemd-logind[1219]: Removed session 6. May 10 00:52:20.667109 systemd[1]: Started sshd@6-10.128.0.57:22-147.75.109.163:41096.service. May 10 00:52:20.959114 sshd[3469]: Accepted publickey for core from 147.75.109.163 port 41096 ssh2: RSA SHA256:euqJNJC5P+Wq5j+dl78lhvyXKvYvUXQDTjxbGHC2Bdk May 10 00:52:20.961462 sshd[3469]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:52:20.968768 systemd[1]: Started session-7.scope. May 10 00:52:20.969496 systemd-logind[1219]: New session 7 of user core. May 10 00:52:21.254698 sshd[3469]: pam_unix(sshd:session): session closed for user core May 10 00:52:21.259136 systemd[1]: sshd@6-10.128.0.57:22-147.75.109.163:41096.service: Deactivated successfully. May 10 00:52:21.260360 systemd[1]: session-7.scope: Deactivated successfully. May 10 00:52:21.261605 systemd-logind[1219]: Session 7 logged out. Waiting for processes to exit. May 10 00:52:21.263194 systemd-logind[1219]: Removed session 7. May 10 00:52:26.303693 systemd[1]: Started sshd@7-10.128.0.57:22-147.75.109.163:41108.service. 
May 10 00:52:26.594502 sshd[3482]: Accepted publickey for core from 147.75.109.163 port 41108 ssh2: RSA SHA256:euqJNJC5P+Wq5j+dl78lhvyXKvYvUXQDTjxbGHC2Bdk May 10 00:52:26.596710 sshd[3482]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:52:26.603997 systemd[1]: Started session-8.scope. May 10 00:52:26.604626 systemd-logind[1219]: New session 8 of user core. May 10 00:52:26.887039 sshd[3482]: pam_unix(sshd:session): session closed for user core May 10 00:52:26.892017 systemd[1]: sshd@7-10.128.0.57:22-147.75.109.163:41108.service: Deactivated successfully. May 10 00:52:26.893281 systemd[1]: session-8.scope: Deactivated successfully. May 10 00:52:26.894408 systemd-logind[1219]: Session 8 logged out. Waiting for processes to exit. May 10 00:52:26.896115 systemd-logind[1219]: Removed session 8. May 10 00:52:31.934900 systemd[1]: Started sshd@8-10.128.0.57:22-147.75.109.163:40868.service. May 10 00:52:32.227618 sshd[3494]: Accepted publickey for core from 147.75.109.163 port 40868 ssh2: RSA SHA256:euqJNJC5P+Wq5j+dl78lhvyXKvYvUXQDTjxbGHC2Bdk May 10 00:52:32.229770 sshd[3494]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:52:32.237289 systemd-logind[1219]: New session 9 of user core. May 10 00:52:32.238131 systemd[1]: Started session-9.scope. May 10 00:52:32.520956 sshd[3494]: pam_unix(sshd:session): session closed for user core May 10 00:52:32.526064 systemd[1]: sshd@8-10.128.0.57:22-147.75.109.163:40868.service: Deactivated successfully. May 10 00:52:32.527186 systemd[1]: session-9.scope: Deactivated successfully. May 10 00:52:32.528666 systemd-logind[1219]: Session 9 logged out. Waiting for processes to exit. May 10 00:52:32.530085 systemd-logind[1219]: Removed session 9. May 10 00:52:32.572306 systemd[1]: Started sshd@9-10.128.0.57:22-147.75.109.163:40870.service. 
May 10 00:52:32.876191 sshd[3507]: Accepted publickey for core from 147.75.109.163 port 40870 ssh2: RSA SHA256:euqJNJC5P+Wq5j+dl78lhvyXKvYvUXQDTjxbGHC2Bdk May 10 00:52:32.877883 sshd[3507]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:52:32.885532 systemd[1]: Started session-10.scope. May 10 00:52:32.886249 systemd-logind[1219]: New session 10 of user core. May 10 00:52:33.221776 sshd[3507]: pam_unix(sshd:session): session closed for user core May 10 00:52:33.228095 systemd[1]: sshd@9-10.128.0.57:22-147.75.109.163:40870.service: Deactivated successfully. May 10 00:52:33.229308 systemd[1]: session-10.scope: Deactivated successfully. May 10 00:52:33.230370 systemd-logind[1219]: Session 10 logged out. Waiting for processes to exit. May 10 00:52:33.231700 systemd-logind[1219]: Removed session 10. May 10 00:52:33.266793 systemd[1]: Started sshd@10-10.128.0.57:22-147.75.109.163:40872.service. May 10 00:52:33.555461 sshd[3517]: Accepted publickey for core from 147.75.109.163 port 40872 ssh2: RSA SHA256:euqJNJC5P+Wq5j+dl78lhvyXKvYvUXQDTjxbGHC2Bdk May 10 00:52:33.557726 sshd[3517]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:52:33.564861 systemd[1]: Started session-11.scope. May 10 00:52:33.565897 systemd-logind[1219]: New session 11 of user core. May 10 00:52:33.850282 sshd[3517]: pam_unix(sshd:session): session closed for user core May 10 00:52:33.855235 systemd-logind[1219]: Session 11 logged out. Waiting for processes to exit. May 10 00:52:33.855495 systemd[1]: sshd@10-10.128.0.57:22-147.75.109.163:40872.service: Deactivated successfully. May 10 00:52:33.856726 systemd[1]: session-11.scope: Deactivated successfully. May 10 00:52:33.858065 systemd-logind[1219]: Removed session 11. May 10 00:52:38.897232 systemd[1]: Started sshd@11-10.128.0.57:22-147.75.109.163:56174.service. 
May 10 00:52:39.187556 sshd[3533]: Accepted publickey for core from 147.75.109.163 port 56174 ssh2: RSA SHA256:euqJNJC5P+Wq5j+dl78lhvyXKvYvUXQDTjxbGHC2Bdk May 10 00:52:39.188939 sshd[3533]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:52:39.196737 systemd[1]: Started session-12.scope. May 10 00:52:39.197665 systemd-logind[1219]: New session 12 of user core. May 10 00:52:39.475069 sshd[3533]: pam_unix(sshd:session): session closed for user core May 10 00:52:39.480021 systemd-logind[1219]: Session 12 logged out. Waiting for processes to exit. May 10 00:52:39.480501 systemd[1]: sshd@11-10.128.0.57:22-147.75.109.163:56174.service: Deactivated successfully. May 10 00:52:39.481714 systemd[1]: session-12.scope: Deactivated successfully. May 10 00:52:39.483353 systemd-logind[1219]: Removed session 12. May 10 00:52:44.525673 systemd[1]: Started sshd@12-10.128.0.57:22-147.75.109.163:56180.service. May 10 00:52:44.822835 sshd[3546]: Accepted publickey for core from 147.75.109.163 port 56180 ssh2: RSA SHA256:euqJNJC5P+Wq5j+dl78lhvyXKvYvUXQDTjxbGHC2Bdk May 10 00:52:44.825251 sshd[3546]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:52:44.832644 systemd[1]: Started session-13.scope. May 10 00:52:44.833664 systemd-logind[1219]: New session 13 of user core. May 10 00:52:45.117028 sshd[3546]: pam_unix(sshd:session): session closed for user core May 10 00:52:45.121820 systemd-logind[1219]: Session 13 logged out. Waiting for processes to exit. May 10 00:52:45.122356 systemd[1]: sshd@12-10.128.0.57:22-147.75.109.163:56180.service: Deactivated successfully. May 10 00:52:45.123566 systemd[1]: session-13.scope: Deactivated successfully. May 10 00:52:45.124948 systemd-logind[1219]: Removed session 13. May 10 00:52:45.162866 systemd[1]: Started sshd@13-10.128.0.57:22-147.75.109.163:56190.service. 
May 10 00:52:45.484614 sshd[3558]: Accepted publickey for core from 147.75.109.163 port 56190 ssh2: RSA SHA256:euqJNJC5P+Wq5j+dl78lhvyXKvYvUXQDTjxbGHC2Bdk May 10 00:52:45.486818 sshd[3558]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:52:45.494369 systemd[1]: Started session-14.scope. May 10 00:52:45.495380 systemd-logind[1219]: New session 14 of user core. May 10 00:52:45.852176 sshd[3558]: pam_unix(sshd:session): session closed for user core May 10 00:52:45.857214 systemd[1]: sshd@13-10.128.0.57:22-147.75.109.163:56190.service: Deactivated successfully. May 10 00:52:45.858478 systemd[1]: session-14.scope: Deactivated successfully. May 10 00:52:45.859387 systemd-logind[1219]: Session 14 logged out. Waiting for processes to exit. May 10 00:52:45.860780 systemd-logind[1219]: Removed session 14. May 10 00:52:45.900603 systemd[1]: Started sshd@14-10.128.0.57:22-147.75.109.163:56202.service. May 10 00:52:46.193455 sshd[3567]: Accepted publickey for core from 147.75.109.163 port 56202 ssh2: RSA SHA256:euqJNJC5P+Wq5j+dl78lhvyXKvYvUXQDTjxbGHC2Bdk May 10 00:52:46.196002 sshd[3567]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:52:46.203955 systemd[1]: Started session-15.scope. May 10 00:52:46.204876 systemd-logind[1219]: New session 15 of user core. May 10 00:52:48.102019 sshd[3567]: pam_unix(sshd:session): session closed for user core May 10 00:52:48.107632 systemd-logind[1219]: Session 15 logged out. Waiting for processes to exit. May 10 00:52:48.110648 systemd[1]: sshd@14-10.128.0.57:22-147.75.109.163:56202.service: Deactivated successfully. May 10 00:52:48.111881 systemd[1]: session-15.scope: Deactivated successfully. May 10 00:52:48.114503 systemd-logind[1219]: Removed session 15. May 10 00:52:48.148091 systemd[1]: Started sshd@15-10.128.0.57:22-147.75.109.163:33264.service. 
May 10 00:52:48.433489 sshd[3584]: Accepted publickey for core from 147.75.109.163 port 33264 ssh2: RSA SHA256:euqJNJC5P+Wq5j+dl78lhvyXKvYvUXQDTjxbGHC2Bdk May 10 00:52:48.435653 sshd[3584]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:52:48.443184 systemd[1]: Started session-16.scope. May 10 00:52:48.444133 systemd-logind[1219]: New session 16 of user core. May 10 00:52:48.882365 sshd[3584]: pam_unix(sshd:session): session closed for user core May 10 00:52:48.887727 systemd-logind[1219]: Session 16 logged out. Waiting for processes to exit. May 10 00:52:48.888361 systemd[1]: sshd@15-10.128.0.57:22-147.75.109.163:33264.service: Deactivated successfully. May 10 00:52:48.889507 systemd[1]: session-16.scope: Deactivated successfully. May 10 00:52:48.891821 systemd-logind[1219]: Removed session 16. May 10 00:52:48.930548 systemd[1]: Started sshd@16-10.128.0.57:22-147.75.109.163:33280.service. May 10 00:52:49.227461 sshd[3594]: Accepted publickey for core from 147.75.109.163 port 33280 ssh2: RSA SHA256:euqJNJC5P+Wq5j+dl78lhvyXKvYvUXQDTjxbGHC2Bdk May 10 00:52:49.229953 sshd[3594]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:52:49.238376 systemd[1]: Started session-17.scope. May 10 00:52:49.239983 systemd-logind[1219]: New session 17 of user core. May 10 00:52:49.520243 sshd[3594]: pam_unix(sshd:session): session closed for user core May 10 00:52:49.525467 systemd[1]: sshd@16-10.128.0.57:22-147.75.109.163:33280.service: Deactivated successfully. May 10 00:52:49.526710 systemd[1]: session-17.scope: Deactivated successfully. May 10 00:52:49.527775 systemd-logind[1219]: Session 17 logged out. Waiting for processes to exit. May 10 00:52:49.529472 systemd-logind[1219]: Removed session 17. 
May 10 00:52:53.769413 update_engine[1222]: I0510 00:52:53.769331 1222 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs May 10 00:52:53.769413 update_engine[1222]: I0510 00:52:53.769395 1222 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs May 10 00:52:53.770639 update_engine[1222]: I0510 00:52:53.770591 1222 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs May 10 00:52:53.771315 update_engine[1222]: I0510 00:52:53.771268 1222 omaha_request_params.cc:62] Current group set to lts May 10 00:52:53.772166 update_engine[1222]: I0510 00:52:53.771517 1222 update_attempter.cc:499] Already updated boot flags. Skipping. May 10 00:52:53.772166 update_engine[1222]: I0510 00:52:53.771534 1222 update_attempter.cc:643] Scheduling an action processor start. May 10 00:52:53.772166 update_engine[1222]: I0510 00:52:53.771564 1222 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction May 10 00:52:53.772166 update_engine[1222]: I0510 00:52:53.771605 1222 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs May 10 00:52:53.772166 update_engine[1222]: I0510 00:52:53.771699 1222 omaha_request_action.cc:270] Posting an Omaha request to disabled May 10 00:52:53.772166 update_engine[1222]: I0510 00:52:53.771708 1222 omaha_request_action.cc:271] Request: May 10 00:52:53.772166 update_engine[1222]: May 10 00:52:53.772166 update_engine[1222]: May 10 00:52:53.772166 update_engine[1222]: May 10 00:52:53.772166 update_engine[1222]: May 10 00:52:53.772166 update_engine[1222]: May 10 00:52:53.772166 update_engine[1222]: May 10 00:52:53.772166 update_engine[1222]: May 10 00:52:53.772166 update_engine[1222]: May 10 00:52:53.772166 update_engine[1222]: I0510 00:52:53.771716 1222 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 10 00:52:53.773571 locksmithd[1265]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" 
NewVersion=0.0.0 NewSize=0 May 10 00:52:53.773878 update_engine[1222]: I0510 00:52:53.773468 1222 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 10 00:52:53.773878 update_engine[1222]: I0510 00:52:53.773736 1222 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 10 00:52:53.811145 update_engine[1222]: E0510 00:52:53.811070 1222 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 10 00:52:53.811338 update_engine[1222]: I0510 00:52:53.811240 1222 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 May 10 00:52:54.568610 systemd[1]: Started sshd@17-10.128.0.57:22-147.75.109.163:33294.service. May 10 00:52:54.861053 sshd[3606]: Accepted publickey for core from 147.75.109.163 port 33294 ssh2: RSA SHA256:euqJNJC5P+Wq5j+dl78lhvyXKvYvUXQDTjxbGHC2Bdk May 10 00:52:54.862893 sshd[3606]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:52:54.869514 systemd-logind[1219]: New session 18 of user core. May 10 00:52:54.870227 systemd[1]: Started session-18.scope. May 10 00:52:55.153443 sshd[3606]: pam_unix(sshd:session): session closed for user core May 10 00:52:55.157648 systemd[1]: sshd@17-10.128.0.57:22-147.75.109.163:33294.service: Deactivated successfully. May 10 00:52:55.158879 systemd[1]: session-18.scope: Deactivated successfully. May 10 00:52:55.160044 systemd-logind[1219]: Session 18 logged out. Waiting for processes to exit. May 10 00:52:55.161441 systemd-logind[1219]: Removed session 18. May 10 00:53:00.201202 systemd[1]: Started sshd@18-10.128.0.57:22-147.75.109.163:35442.service. May 10 00:53:00.491028 sshd[3621]: Accepted publickey for core from 147.75.109.163 port 35442 ssh2: RSA SHA256:euqJNJC5P+Wq5j+dl78lhvyXKvYvUXQDTjxbGHC2Bdk May 10 00:53:00.493710 sshd[3621]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:53:00.501011 systemd-logind[1219]: New session 19 of user core. 
May 10 00:53:00.501474 systemd[1]: Started session-19.scope. May 10 00:53:00.771135 sshd[3621]: pam_unix(sshd:session): session closed for user core May 10 00:53:00.778591 systemd[1]: sshd@18-10.128.0.57:22-147.75.109.163:35442.service: Deactivated successfully. May 10 00:53:00.779933 systemd[1]: session-19.scope: Deactivated successfully. May 10 00:53:00.780241 systemd-logind[1219]: Session 19 logged out. Waiting for processes to exit. May 10 00:53:00.782647 systemd-logind[1219]: Removed session 19. May 10 00:53:03.771009 update_engine[1222]: I0510 00:53:03.770942 1222 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 10 00:53:03.771582 update_engine[1222]: I0510 00:53:03.771332 1222 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 10 00:53:03.771582 update_engine[1222]: I0510 00:53:03.771575 1222 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 10 00:53:03.782954 update_engine[1222]: E0510 00:53:03.782879 1222 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 10 00:53:03.783129 update_engine[1222]: I0510 00:53:03.783048 1222 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 May 10 00:53:05.819825 systemd[1]: Started sshd@19-10.128.0.57:22-147.75.109.163:35446.service. May 10 00:53:06.112892 sshd[3635]: Accepted publickey for core from 147.75.109.163 port 35446 ssh2: RSA SHA256:euqJNJC5P+Wq5j+dl78lhvyXKvYvUXQDTjxbGHC2Bdk May 10 00:53:06.115296 sshd[3635]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:53:06.124817 systemd[1]: Started session-20.scope. May 10 00:53:06.125504 systemd-logind[1219]: New session 20 of user core. May 10 00:53:06.403465 sshd[3635]: pam_unix(sshd:session): session closed for user core May 10 00:53:06.407870 systemd[1]: sshd@19-10.128.0.57:22-147.75.109.163:35446.service: Deactivated successfully. May 10 00:53:06.409135 systemd[1]: session-20.scope: Deactivated successfully. 
May 10 00:53:06.410076 systemd-logind[1219]: Session 20 logged out. Waiting for processes to exit. May 10 00:53:06.411449 systemd-logind[1219]: Removed session 20. May 10 00:53:06.449446 systemd[1]: Started sshd@20-10.128.0.57:22-147.75.109.163:35456.service. May 10 00:53:06.737313 sshd[3647]: Accepted publickey for core from 147.75.109.163 port 35456 ssh2: RSA SHA256:euqJNJC5P+Wq5j+dl78lhvyXKvYvUXQDTjxbGHC2Bdk May 10 00:53:06.739238 sshd[3647]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:53:06.747588 systemd[1]: Started session-21.scope. May 10 00:53:06.748362 systemd-logind[1219]: New session 21 of user core. May 10 00:53:08.354255 env[1236]: time="2025-05-10T00:53:08.354184607Z" level=info msg="StopContainer for \"1dbcc209ed8ae473024d779ac1dc520788954d1b0a0f390b9d0980e508b478f3\" with timeout 30 (s)" May 10 00:53:08.354975 env[1236]: time="2025-05-10T00:53:08.354885605Z" level=info msg="Stop container \"1dbcc209ed8ae473024d779ac1dc520788954d1b0a0f390b9d0980e508b478f3\" with signal terminated" May 10 00:53:08.379231 systemd[1]: run-containerd-runc-k8s.io-b82ac4b943995e02233b630c671269a1d2b3f870f166530c201724ec046f4f4a-runc.5YWou2.mount: Deactivated successfully. May 10 00:53:08.389709 systemd[1]: cri-containerd-1dbcc209ed8ae473024d779ac1dc520788954d1b0a0f390b9d0980e508b478f3.scope: Deactivated successfully. May 10 00:53:08.425831 env[1236]: time="2025-05-10T00:53:08.425751211Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 10 00:53:08.430788 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1dbcc209ed8ae473024d779ac1dc520788954d1b0a0f390b9d0980e508b478f3-rootfs.mount: Deactivated successfully. 
May 10 00:53:08.442514 env[1236]: time="2025-05-10T00:53:08.442460806Z" level=info msg="StopContainer for \"b82ac4b943995e02233b630c671269a1d2b3f870f166530c201724ec046f4f4a\" with timeout 2 (s)" May 10 00:53:08.443098 env[1236]: time="2025-05-10T00:53:08.443045451Z" level=info msg="Stop container \"b82ac4b943995e02233b630c671269a1d2b3f870f166530c201724ec046f4f4a\" with signal terminated" May 10 00:53:08.454076 systemd-networkd[1029]: lxc_health: Link DOWN May 10 00:53:08.454088 systemd-networkd[1029]: lxc_health: Lost carrier May 10 00:53:08.477236 env[1236]: time="2025-05-10T00:53:08.472539256Z" level=info msg="shim disconnected" id=1dbcc209ed8ae473024d779ac1dc520788954d1b0a0f390b9d0980e508b478f3 May 10 00:53:08.477236 env[1236]: time="2025-05-10T00:53:08.472610608Z" level=warning msg="cleaning up after shim disconnected" id=1dbcc209ed8ae473024d779ac1dc520788954d1b0a0f390b9d0980e508b478f3 namespace=k8s.io May 10 00:53:08.477236 env[1236]: time="2025-05-10T00:53:08.472636936Z" level=info msg="cleaning up dead shim" May 10 00:53:08.481440 systemd[1]: cri-containerd-b82ac4b943995e02233b630c671269a1d2b3f870f166530c201724ec046f4f4a.scope: Deactivated successfully. May 10 00:53:08.481809 systemd[1]: cri-containerd-b82ac4b943995e02233b630c671269a1d2b3f870f166530c201724ec046f4f4a.scope: Consumed 9.609s CPU time. 
May 10 00:53:08.503495 env[1236]: time="2025-05-10T00:53:08.494382100Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:53:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3704 runtime=io.containerd.runc.v2\n" May 10 00:53:08.503495 env[1236]: time="2025-05-10T00:53:08.501207548Z" level=info msg="StopContainer for \"1dbcc209ed8ae473024d779ac1dc520788954d1b0a0f390b9d0980e508b478f3\" returns successfully" May 10 00:53:08.504315 env[1236]: time="2025-05-10T00:53:08.504252418Z" level=info msg="StopPodSandbox for \"561443205474ebf842e1e9597ff36819fe5e66bb713650e99ec5f810470f844c\"" May 10 00:53:08.505523 env[1236]: time="2025-05-10T00:53:08.505477498Z" level=info msg="Container to stop \"1dbcc209ed8ae473024d779ac1dc520788954d1b0a0f390b9d0980e508b478f3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 10 00:53:08.509490 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-561443205474ebf842e1e9597ff36819fe5e66bb713650e99ec5f810470f844c-shm.mount: Deactivated successfully. May 10 00:53:08.530475 systemd[1]: cri-containerd-561443205474ebf842e1e9597ff36819fe5e66bb713650e99ec5f810470f844c.scope: Deactivated successfully. 
May 10 00:53:08.547733 env[1236]: time="2025-05-10T00:53:08.547664993Z" level=info msg="shim disconnected" id=b82ac4b943995e02233b630c671269a1d2b3f870f166530c201724ec046f4f4a May 10 00:53:08.547733 env[1236]: time="2025-05-10T00:53:08.547733329Z" level=warning msg="cleaning up after shim disconnected" id=b82ac4b943995e02233b630c671269a1d2b3f870f166530c201724ec046f4f4a namespace=k8s.io May 10 00:53:08.547733 env[1236]: time="2025-05-10T00:53:08.547750756Z" level=info msg="cleaning up dead shim" May 10 00:53:08.573804 env[1236]: time="2025-05-10T00:53:08.573749356Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:53:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3745 runtime=io.containerd.runc.v2\n" May 10 00:53:08.575402 env[1236]: time="2025-05-10T00:53:08.575344158Z" level=info msg="shim disconnected" id=561443205474ebf842e1e9597ff36819fe5e66bb713650e99ec5f810470f844c May 10 00:53:08.576016 env[1236]: time="2025-05-10T00:53:08.575408109Z" level=warning msg="cleaning up after shim disconnected" id=561443205474ebf842e1e9597ff36819fe5e66bb713650e99ec5f810470f844c namespace=k8s.io May 10 00:53:08.576016 env[1236]: time="2025-05-10T00:53:08.575424186Z" level=info msg="cleaning up dead shim" May 10 00:53:08.576731 env[1236]: time="2025-05-10T00:53:08.576683632Z" level=info msg="StopContainer for \"b82ac4b943995e02233b630c671269a1d2b3f870f166530c201724ec046f4f4a\" returns successfully" May 10 00:53:08.577646 env[1236]: time="2025-05-10T00:53:08.577597130Z" level=info msg="StopPodSandbox for \"7b4742828bf0914845fdb8c7472f1fa98c085d739904da23f11585b8bb0f2b54\"" May 10 00:53:08.577758 env[1236]: time="2025-05-10T00:53:08.577702357Z" level=info msg="Container to stop \"004aec94a8ef51e51cb44094cd0c36545534b581424b53b7e92d9a6f3d3be872\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 10 00:53:08.577758 env[1236]: time="2025-05-10T00:53:08.577731131Z" level=info msg="Container to stop 
\"41bdfae37a5ec95555f920eb392c443e82f84795a8e56ece83f2f90d70cf0282\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 10 00:53:08.577758 env[1236]: time="2025-05-10T00:53:08.577751140Z" level=info msg="Container to stop \"b82ac4b943995e02233b630c671269a1d2b3f870f166530c201724ec046f4f4a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 10 00:53:08.577947 env[1236]: time="2025-05-10T00:53:08.577771005Z" level=info msg="Container to stop \"de6477869a2405a3488e18264955987fec3525dd79c988143a0b2738f43033b8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 10 00:53:08.577947 env[1236]: time="2025-05-10T00:53:08.577790124Z" level=info msg="Container to stop \"e7fbb6095ac65223c307c47c506c8e8809eeb3e4690c2e4d55c04f8ca58d42cc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 10 00:53:08.589608 systemd[1]: cri-containerd-7b4742828bf0914845fdb8c7472f1fa98c085d739904da23f11585b8bb0f2b54.scope: Deactivated successfully. 
May 10 00:53:08.597123 env[1236]: time="2025-05-10T00:53:08.597050457Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:53:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3766 runtime=io.containerd.runc.v2\n" May 10 00:53:08.597559 env[1236]: time="2025-05-10T00:53:08.597495541Z" level=info msg="TearDown network for sandbox \"561443205474ebf842e1e9597ff36819fe5e66bb713650e99ec5f810470f844c\" successfully" May 10 00:53:08.598068 env[1236]: time="2025-05-10T00:53:08.597537238Z" level=info msg="StopPodSandbox for \"561443205474ebf842e1e9597ff36819fe5e66bb713650e99ec5f810470f844c\" returns successfully" May 10 00:53:08.641667 env[1236]: time="2025-05-10T00:53:08.639147293Z" level=info msg="shim disconnected" id=7b4742828bf0914845fdb8c7472f1fa98c085d739904da23f11585b8bb0f2b54 May 10 00:53:08.641667 env[1236]: time="2025-05-10T00:53:08.639289170Z" level=warning msg="cleaning up after shim disconnected" id=7b4742828bf0914845fdb8c7472f1fa98c085d739904da23f11585b8bb0f2b54 namespace=k8s.io May 10 00:53:08.641667 env[1236]: time="2025-05-10T00:53:08.639318539Z" level=info msg="cleaning up dead shim" May 10 00:53:08.652732 env[1236]: time="2025-05-10T00:53:08.652663846Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:53:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3798 runtime=io.containerd.runc.v2\n" May 10 00:53:08.653368 env[1236]: time="2025-05-10T00:53:08.653292232Z" level=info msg="TearDown network for sandbox \"7b4742828bf0914845fdb8c7472f1fa98c085d739904da23f11585b8bb0f2b54\" successfully" May 10 00:53:08.653551 env[1236]: time="2025-05-10T00:53:08.653389996Z" level=info msg="StopPodSandbox for \"7b4742828bf0914845fdb8c7472f1fa98c085d739904da23f11585b8bb0f2b54\" returns successfully" May 10 00:53:08.716240 kubelet[2098]: I0510 00:53:08.716165 2098 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/44d7b4c1-77e3-4069-9436-be061fc50517-cilium-config-path\") pod \"44d7b4c1-77e3-4069-9436-be061fc50517\" (UID: \"44d7b4c1-77e3-4069-9436-be061fc50517\") " May 10 00:53:08.716240 kubelet[2098]: I0510 00:53:08.716239 2098 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pg9vt\" (UniqueName: \"kubernetes.io/projected/44d7b4c1-77e3-4069-9436-be061fc50517-kube-api-access-pg9vt\") pod \"44d7b4c1-77e3-4069-9436-be061fc50517\" (UID: \"44d7b4c1-77e3-4069-9436-be061fc50517\") " May 10 00:53:08.721304 kubelet[2098]: I0510 00:53:08.721236 2098 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44d7b4c1-77e3-4069-9436-be061fc50517-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "44d7b4c1-77e3-4069-9436-be061fc50517" (UID: "44d7b4c1-77e3-4069-9436-be061fc50517"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 10 00:53:08.721691 kubelet[2098]: I0510 00:53:08.721542 2098 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44d7b4c1-77e3-4069-9436-be061fc50517-kube-api-access-pg9vt" (OuterVolumeSpecName: "kube-api-access-pg9vt") pod "44d7b4c1-77e3-4069-9436-be061fc50517" (UID: "44d7b4c1-77e3-4069-9436-be061fc50517"). InnerVolumeSpecName "kube-api-access-pg9vt". PluginName "kubernetes.io/projected", VolumeGidValue "" May 10 00:53:08.769609 systemd[1]: Removed slice kubepods-besteffort-pod44d7b4c1_77e3_4069_9436_be061fc50517.slice. 
May 10 00:53:08.816813 kubelet[2098]: I0510 00:53:08.816738 2098 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f2059765-1614-4308-aaab-b28039c37725-cilium-cgroup\") pod \"f2059765-1614-4308-aaab-b28039c37725\" (UID: \"f2059765-1614-4308-aaab-b28039c37725\") " May 10 00:53:08.816813 kubelet[2098]: I0510 00:53:08.816818 2098 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-th7rh\" (UniqueName: \"kubernetes.io/projected/f2059765-1614-4308-aaab-b28039c37725-kube-api-access-th7rh\") pod \"f2059765-1614-4308-aaab-b28039c37725\" (UID: \"f2059765-1614-4308-aaab-b28039c37725\") " May 10 00:53:08.817196 kubelet[2098]: I0510 00:53:08.816851 2098 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f2059765-1614-4308-aaab-b28039c37725-cilium-run\") pod \"f2059765-1614-4308-aaab-b28039c37725\" (UID: \"f2059765-1614-4308-aaab-b28039c37725\") " May 10 00:53:08.817196 kubelet[2098]: I0510 00:53:08.816874 2098 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f2059765-1614-4308-aaab-b28039c37725-xtables-lock\") pod \"f2059765-1614-4308-aaab-b28039c37725\" (UID: \"f2059765-1614-4308-aaab-b28039c37725\") " May 10 00:53:08.817196 kubelet[2098]: I0510 00:53:08.816928 2098 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f2059765-1614-4308-aaab-b28039c37725-clustermesh-secrets\") pod \"f2059765-1614-4308-aaab-b28039c37725\" (UID: \"f2059765-1614-4308-aaab-b28039c37725\") " May 10 00:53:08.817196 kubelet[2098]: I0510 00:53:08.816958 2098 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/f2059765-1614-4308-aaab-b28039c37725-lib-modules\") pod \"f2059765-1614-4308-aaab-b28039c37725\" (UID: \"f2059765-1614-4308-aaab-b28039c37725\") " May 10 00:53:08.817196 kubelet[2098]: I0510 00:53:08.816980 2098 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f2059765-1614-4308-aaab-b28039c37725-bpf-maps\") pod \"f2059765-1614-4308-aaab-b28039c37725\" (UID: \"f2059765-1614-4308-aaab-b28039c37725\") " May 10 00:53:08.817196 kubelet[2098]: I0510 00:53:08.817005 2098 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f2059765-1614-4308-aaab-b28039c37725-hostproc\") pod \"f2059765-1614-4308-aaab-b28039c37725\" (UID: \"f2059765-1614-4308-aaab-b28039c37725\") " May 10 00:53:08.817525 kubelet[2098]: I0510 00:53:08.817028 2098 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f2059765-1614-4308-aaab-b28039c37725-cni-path\") pod \"f2059765-1614-4308-aaab-b28039c37725\" (UID: \"f2059765-1614-4308-aaab-b28039c37725\") " May 10 00:53:08.817525 kubelet[2098]: I0510 00:53:08.817067 2098 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f2059765-1614-4308-aaab-b28039c37725-etc-cni-netd\") pod \"f2059765-1614-4308-aaab-b28039c37725\" (UID: \"f2059765-1614-4308-aaab-b28039c37725\") " May 10 00:53:08.817525 kubelet[2098]: I0510 00:53:08.817096 2098 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f2059765-1614-4308-aaab-b28039c37725-cilium-config-path\") pod \"f2059765-1614-4308-aaab-b28039c37725\" (UID: \"f2059765-1614-4308-aaab-b28039c37725\") " May 10 00:53:08.817525 kubelet[2098]: I0510 00:53:08.817119 2098 reconciler_common.go:161] 
"operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f2059765-1614-4308-aaab-b28039c37725-host-proc-sys-net\") pod \"f2059765-1614-4308-aaab-b28039c37725\" (UID: \"f2059765-1614-4308-aaab-b28039c37725\") " May 10 00:53:08.817525 kubelet[2098]: I0510 00:53:08.817144 2098 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f2059765-1614-4308-aaab-b28039c37725-hubble-tls\") pod \"f2059765-1614-4308-aaab-b28039c37725\" (UID: \"f2059765-1614-4308-aaab-b28039c37725\") " May 10 00:53:08.817525 kubelet[2098]: I0510 00:53:08.817170 2098 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f2059765-1614-4308-aaab-b28039c37725-host-proc-sys-kernel\") pod \"f2059765-1614-4308-aaab-b28039c37725\" (UID: \"f2059765-1614-4308-aaab-b28039c37725\") " May 10 00:53:08.817837 kubelet[2098]: I0510 00:53:08.817230 2098 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/44d7b4c1-77e3-4069-9436-be061fc50517-cilium-config-path\") on node \"ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388\" DevicePath \"\"" May 10 00:53:08.817837 kubelet[2098]: I0510 00:53:08.817274 2098 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-pg9vt\" (UniqueName: \"kubernetes.io/projected/44d7b4c1-77e3-4069-9436-be061fc50517-kube-api-access-pg9vt\") on node \"ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388\" DevicePath \"\"" May 10 00:53:08.817837 kubelet[2098]: I0510 00:53:08.817335 2098 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f2059765-1614-4308-aaab-b28039c37725-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "f2059765-1614-4308-aaab-b28039c37725" (UID: "f2059765-1614-4308-aaab-b28039c37725"). 
InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:53:08.817837 kubelet[2098]: I0510 00:53:08.817389 2098 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f2059765-1614-4308-aaab-b28039c37725-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "f2059765-1614-4308-aaab-b28039c37725" (UID: "f2059765-1614-4308-aaab-b28039c37725"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:53:08.818216 kubelet[2098]: I0510 00:53:08.818178 2098 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f2059765-1614-4308-aaab-b28039c37725-hostproc" (OuterVolumeSpecName: "hostproc") pod "f2059765-1614-4308-aaab-b28039c37725" (UID: "f2059765-1614-4308-aaab-b28039c37725"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:53:08.818380 kubelet[2098]: I0510 00:53:08.818358 2098 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f2059765-1614-4308-aaab-b28039c37725-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "f2059765-1614-4308-aaab-b28039c37725" (UID: "f2059765-1614-4308-aaab-b28039c37725"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:53:08.818526 kubelet[2098]: I0510 00:53:08.818503 2098 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f2059765-1614-4308-aaab-b28039c37725-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "f2059765-1614-4308-aaab-b28039c37725" (UID: "f2059765-1614-4308-aaab-b28039c37725"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:53:08.821895 kubelet[2098]: I0510 00:53:08.818692 2098 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f2059765-1614-4308-aaab-b28039c37725-cni-path" (OuterVolumeSpecName: "cni-path") pod "f2059765-1614-4308-aaab-b28039c37725" (UID: "f2059765-1614-4308-aaab-b28039c37725"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:53:08.822149 kubelet[2098]: I0510 00:53:08.818727 2098 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f2059765-1614-4308-aaab-b28039c37725-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "f2059765-1614-4308-aaab-b28039c37725" (UID: "f2059765-1614-4308-aaab-b28039c37725"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:53:08.822348 kubelet[2098]: I0510 00:53:08.821680 2098 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f2059765-1614-4308-aaab-b28039c37725-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "f2059765-1614-4308-aaab-b28039c37725" (UID: "f2059765-1614-4308-aaab-b28039c37725"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:53:08.822854 kubelet[2098]: I0510 00:53:08.822816 2098 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f2059765-1614-4308-aaab-b28039c37725-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f2059765-1614-4308-aaab-b28039c37725" (UID: "f2059765-1614-4308-aaab-b28039c37725"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:53:08.823048 kubelet[2098]: I0510 00:53:08.822871 2098 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f2059765-1614-4308-aaab-b28039c37725-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "f2059765-1614-4308-aaab-b28039c37725" (UID: "f2059765-1614-4308-aaab-b28039c37725"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:53:08.823348 kubelet[2098]: I0510 00:53:08.823314 2098 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2059765-1614-4308-aaab-b28039c37725-kube-api-access-th7rh" (OuterVolumeSpecName: "kube-api-access-th7rh") pod "f2059765-1614-4308-aaab-b28039c37725" (UID: "f2059765-1614-4308-aaab-b28039c37725"). InnerVolumeSpecName "kube-api-access-th7rh". PluginName "kubernetes.io/projected", VolumeGidValue "" May 10 00:53:08.824709 kubelet[2098]: I0510 00:53:08.824669 2098 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2059765-1614-4308-aaab-b28039c37725-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f2059765-1614-4308-aaab-b28039c37725" (UID: "f2059765-1614-4308-aaab-b28039c37725"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 10 00:53:08.827409 kubelet[2098]: I0510 00:53:08.827328 2098 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2059765-1614-4308-aaab-b28039c37725-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "f2059765-1614-4308-aaab-b28039c37725" (UID: "f2059765-1614-4308-aaab-b28039c37725"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" May 10 00:53:08.830937 kubelet[2098]: I0510 00:53:08.830875 2098 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2059765-1614-4308-aaab-b28039c37725-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "f2059765-1614-4308-aaab-b28039c37725" (UID: "f2059765-1614-4308-aaab-b28039c37725"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" May 10 00:53:08.918625 kubelet[2098]: I0510 00:53:08.918176 2098 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f2059765-1614-4308-aaab-b28039c37725-cilium-cgroup\") on node \"ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388\" DevicePath \"\"" May 10 00:53:08.918625 kubelet[2098]: I0510 00:53:08.918331 2098 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-th7rh\" (UniqueName: \"kubernetes.io/projected/f2059765-1614-4308-aaab-b28039c37725-kube-api-access-th7rh\") on node \"ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388\" DevicePath \"\"" May 10 00:53:08.918625 kubelet[2098]: I0510 00:53:08.918376 2098 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f2059765-1614-4308-aaab-b28039c37725-cilium-run\") on node \"ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388\" DevicePath \"\"" May 10 00:53:08.918625 kubelet[2098]: I0510 00:53:08.918393 2098 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f2059765-1614-4308-aaab-b28039c37725-xtables-lock\") on node \"ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388\" DevicePath \"\"" May 10 00:53:08.918625 kubelet[2098]: I0510 00:53:08.918408 2098 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f2059765-1614-4308-aaab-b28039c37725-lib-modules\") on node 
\"ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388\" DevicePath \"\"" May 10 00:53:08.918625 kubelet[2098]: I0510 00:53:08.918426 2098 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f2059765-1614-4308-aaab-b28039c37725-clustermesh-secrets\") on node \"ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388\" DevicePath \"\"" May 10 00:53:08.918625 kubelet[2098]: I0510 00:53:08.918443 2098 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f2059765-1614-4308-aaab-b28039c37725-bpf-maps\") on node \"ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388\" DevicePath \"\"" May 10 00:53:08.919212 kubelet[2098]: I0510 00:53:08.918459 2098 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f2059765-1614-4308-aaab-b28039c37725-cni-path\") on node \"ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388\" DevicePath \"\"" May 10 00:53:08.919212 kubelet[2098]: I0510 00:53:08.918475 2098 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f2059765-1614-4308-aaab-b28039c37725-hostproc\") on node \"ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388\" DevicePath \"\"" May 10 00:53:08.919212 kubelet[2098]: I0510 00:53:08.918492 2098 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f2059765-1614-4308-aaab-b28039c37725-hubble-tls\") on node \"ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388\" DevicePath \"\"" May 10 00:53:08.919212 kubelet[2098]: I0510 00:53:08.918508 2098 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f2059765-1614-4308-aaab-b28039c37725-etc-cni-netd\") on node \"ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388\" DevicePath \"\"" May 10 00:53:08.919212 kubelet[2098]: I0510 00:53:08.918525 2098 
reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f2059765-1614-4308-aaab-b28039c37725-cilium-config-path\") on node \"ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388\" DevicePath \"\"" May 10 00:53:08.919212 kubelet[2098]: I0510 00:53:08.918542 2098 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f2059765-1614-4308-aaab-b28039c37725-host-proc-sys-net\") on node \"ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388\" DevicePath \"\"" May 10 00:53:08.919212 kubelet[2098]: I0510 00:53:08.918559 2098 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f2059765-1614-4308-aaab-b28039c37725-host-proc-sys-kernel\") on node \"ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388\" DevicePath \"\"" May 10 00:53:08.947239 kubelet[2098]: E0510 00:53:08.947157 2098 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 10 00:53:09.127032 kubelet[2098]: I0510 00:53:09.126983 2098 scope.go:117] "RemoveContainer" containerID="1dbcc209ed8ae473024d779ac1dc520788954d1b0a0f390b9d0980e508b478f3" May 10 00:53:09.132740 env[1236]: time="2025-05-10T00:53:09.132441927Z" level=info msg="RemoveContainer for \"1dbcc209ed8ae473024d779ac1dc520788954d1b0a0f390b9d0980e508b478f3\"" May 10 00:53:09.141660 env[1236]: time="2025-05-10T00:53:09.141586673Z" level=info msg="RemoveContainer for \"1dbcc209ed8ae473024d779ac1dc520788954d1b0a0f390b9d0980e508b478f3\" returns successfully" May 10 00:53:09.142228 kubelet[2098]: I0510 00:53:09.142176 2098 scope.go:117] "RemoveContainer" containerID="1dbcc209ed8ae473024d779ac1dc520788954d1b0a0f390b9d0980e508b478f3" May 10 00:53:09.143064 env[1236]: time="2025-05-10T00:53:09.142876140Z" level=error msg="ContainerStatus for 
\"1dbcc209ed8ae473024d779ac1dc520788954d1b0a0f390b9d0980e508b478f3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1dbcc209ed8ae473024d779ac1dc520788954d1b0a0f390b9d0980e508b478f3\": not found" May 10 00:53:09.144811 kubelet[2098]: E0510 00:53:09.144762 2098 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1dbcc209ed8ae473024d779ac1dc520788954d1b0a0f390b9d0980e508b478f3\": not found" containerID="1dbcc209ed8ae473024d779ac1dc520788954d1b0a0f390b9d0980e508b478f3" May 10 00:53:09.145337 kubelet[2098]: I0510 00:53:09.144945 2098 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1dbcc209ed8ae473024d779ac1dc520788954d1b0a0f390b9d0980e508b478f3"} err="failed to get container status \"1dbcc209ed8ae473024d779ac1dc520788954d1b0a0f390b9d0980e508b478f3\": rpc error: code = NotFound desc = an error occurred when try to find container \"1dbcc209ed8ae473024d779ac1dc520788954d1b0a0f390b9d0980e508b478f3\": not found" May 10 00:53:09.145337 kubelet[2098]: I0510 00:53:09.145335 2098 scope.go:117] "RemoveContainer" containerID="b82ac4b943995e02233b630c671269a1d2b3f870f166530c201724ec046f4f4a" May 10 00:53:09.147857 env[1236]: time="2025-05-10T00:53:09.147781117Z" level=info msg="RemoveContainer for \"b82ac4b943995e02233b630c671269a1d2b3f870f166530c201724ec046f4f4a\"" May 10 00:53:09.153850 env[1236]: time="2025-05-10T00:53:09.153757067Z" level=info msg="RemoveContainer for \"b82ac4b943995e02233b630c671269a1d2b3f870f166530c201724ec046f4f4a\" returns successfully" May 10 00:53:09.154324 kubelet[2098]: I0510 00:53:09.154241 2098 scope.go:117] "RemoveContainer" containerID="41bdfae37a5ec95555f920eb392c443e82f84795a8e56ece83f2f90d70cf0282" May 10 00:53:09.160427 systemd[1]: Removed slice kubepods-burstable-podf2059765_1614_4308_aaab_b28039c37725.slice. 
May 10 00:53:09.160607 systemd[1]: kubepods-burstable-podf2059765_1614_4308_aaab_b28039c37725.slice: Consumed 9.744s CPU time. May 10 00:53:09.164020 env[1236]: time="2025-05-10T00:53:09.163570535Z" level=info msg="RemoveContainer for \"41bdfae37a5ec95555f920eb392c443e82f84795a8e56ece83f2f90d70cf0282\"" May 10 00:53:09.171021 env[1236]: time="2025-05-10T00:53:09.169733109Z" level=info msg="RemoveContainer for \"41bdfae37a5ec95555f920eb392c443e82f84795a8e56ece83f2f90d70cf0282\" returns successfully" May 10 00:53:09.171205 kubelet[2098]: I0510 00:53:09.170236 2098 scope.go:117] "RemoveContainer" containerID="004aec94a8ef51e51cb44094cd0c36545534b581424b53b7e92d9a6f3d3be872" May 10 00:53:09.175656 env[1236]: time="2025-05-10T00:53:09.175593305Z" level=info msg="RemoveContainer for \"004aec94a8ef51e51cb44094cd0c36545534b581424b53b7e92d9a6f3d3be872\"" May 10 00:53:09.185580 env[1236]: time="2025-05-10T00:53:09.185504488Z" level=info msg="RemoveContainer for \"004aec94a8ef51e51cb44094cd0c36545534b581424b53b7e92d9a6f3d3be872\" returns successfully" May 10 00:53:09.185984 kubelet[2098]: I0510 00:53:09.185939 2098 scope.go:117] "RemoveContainer" containerID="e7fbb6095ac65223c307c47c506c8e8809eeb3e4690c2e4d55c04f8ca58d42cc" May 10 00:53:09.188712 env[1236]: time="2025-05-10T00:53:09.188301470Z" level=info msg="RemoveContainer for \"e7fbb6095ac65223c307c47c506c8e8809eeb3e4690c2e4d55c04f8ca58d42cc\"" May 10 00:53:09.195331 env[1236]: time="2025-05-10T00:53:09.195269736Z" level=info msg="RemoveContainer for \"e7fbb6095ac65223c307c47c506c8e8809eeb3e4690c2e4d55c04f8ca58d42cc\" returns successfully" May 10 00:53:09.195703 kubelet[2098]: I0510 00:53:09.195670 2098 scope.go:117] "RemoveContainer" containerID="de6477869a2405a3488e18264955987fec3525dd79c988143a0b2738f43033b8" May 10 00:53:09.197741 env[1236]: time="2025-05-10T00:53:09.197692575Z" level=info msg="RemoveContainer for \"de6477869a2405a3488e18264955987fec3525dd79c988143a0b2738f43033b8\"" May 10 00:53:09.203440 env[1236]: 
time="2025-05-10T00:53:09.203394052Z" level=info msg="RemoveContainer for \"de6477869a2405a3488e18264955987fec3525dd79c988143a0b2738f43033b8\" returns successfully" May 10 00:53:09.204088 kubelet[2098]: I0510 00:53:09.204058 2098 scope.go:117] "RemoveContainer" containerID="b82ac4b943995e02233b630c671269a1d2b3f870f166530c201724ec046f4f4a" May 10 00:53:09.204913 env[1236]: time="2025-05-10T00:53:09.204791629Z" level=error msg="ContainerStatus for \"b82ac4b943995e02233b630c671269a1d2b3f870f166530c201724ec046f4f4a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b82ac4b943995e02233b630c671269a1d2b3f870f166530c201724ec046f4f4a\": not found" May 10 00:53:09.205460 kubelet[2098]: E0510 00:53:09.205404 2098 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b82ac4b943995e02233b630c671269a1d2b3f870f166530c201724ec046f4f4a\": not found" containerID="b82ac4b943995e02233b630c671269a1d2b3f870f166530c201724ec046f4f4a" May 10 00:53:09.205609 kubelet[2098]: I0510 00:53:09.205457 2098 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b82ac4b943995e02233b630c671269a1d2b3f870f166530c201724ec046f4f4a"} err="failed to get container status \"b82ac4b943995e02233b630c671269a1d2b3f870f166530c201724ec046f4f4a\": rpc error: code = NotFound desc = an error occurred when try to find container \"b82ac4b943995e02233b630c671269a1d2b3f870f166530c201724ec046f4f4a\": not found" May 10 00:53:09.205609 kubelet[2098]: I0510 00:53:09.205490 2098 scope.go:117] "RemoveContainer" containerID="41bdfae37a5ec95555f920eb392c443e82f84795a8e56ece83f2f90d70cf0282" May 10 00:53:09.206154 env[1236]: time="2025-05-10T00:53:09.206081004Z" level=error msg="ContainerStatus for \"41bdfae37a5ec95555f920eb392c443e82f84795a8e56ece83f2f90d70cf0282\" failed" error="rpc error: code = NotFound desc = an error occurred when 
try to find container \"41bdfae37a5ec95555f920eb392c443e82f84795a8e56ece83f2f90d70cf0282\": not found" May 10 00:53:09.206498 kubelet[2098]: E0510 00:53:09.206475 2098 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"41bdfae37a5ec95555f920eb392c443e82f84795a8e56ece83f2f90d70cf0282\": not found" containerID="41bdfae37a5ec95555f920eb392c443e82f84795a8e56ece83f2f90d70cf0282" May 10 00:53:09.206678 kubelet[2098]: I0510 00:53:09.206648 2098 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"41bdfae37a5ec95555f920eb392c443e82f84795a8e56ece83f2f90d70cf0282"} err="failed to get container status \"41bdfae37a5ec95555f920eb392c443e82f84795a8e56ece83f2f90d70cf0282\": rpc error: code = NotFound desc = an error occurred when try to find container \"41bdfae37a5ec95555f920eb392c443e82f84795a8e56ece83f2f90d70cf0282\": not found" May 10 00:53:09.206863 kubelet[2098]: I0510 00:53:09.206838 2098 scope.go:117] "RemoveContainer" containerID="004aec94a8ef51e51cb44094cd0c36545534b581424b53b7e92d9a6f3d3be872" May 10 00:53:09.207377 env[1236]: time="2025-05-10T00:53:09.207277709Z" level=error msg="ContainerStatus for \"004aec94a8ef51e51cb44094cd0c36545534b581424b53b7e92d9a6f3d3be872\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"004aec94a8ef51e51cb44094cd0c36545534b581424b53b7e92d9a6f3d3be872\": not found" May 10 00:53:09.207654 kubelet[2098]: E0510 00:53:09.207626 2098 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"004aec94a8ef51e51cb44094cd0c36545534b581424b53b7e92d9a6f3d3be872\": not found" containerID="004aec94a8ef51e51cb44094cd0c36545534b581424b53b7e92d9a6f3d3be872" May 10 00:53:09.207824 kubelet[2098]: I0510 00:53:09.207794 2098 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"containerd","ID":"004aec94a8ef51e51cb44094cd0c36545534b581424b53b7e92d9a6f3d3be872"} err="failed to get container status \"004aec94a8ef51e51cb44094cd0c36545534b581424b53b7e92d9a6f3d3be872\": rpc error: code = NotFound desc = an error occurred when try to find container \"004aec94a8ef51e51cb44094cd0c36545534b581424b53b7e92d9a6f3d3be872\": not found" May 10 00:53:09.207981 kubelet[2098]: I0510 00:53:09.207960 2098 scope.go:117] "RemoveContainer" containerID="e7fbb6095ac65223c307c47c506c8e8809eeb3e4690c2e4d55c04f8ca58d42cc" May 10 00:53:09.208503 env[1236]: time="2025-05-10T00:53:09.208413178Z" level=error msg="ContainerStatus for \"e7fbb6095ac65223c307c47c506c8e8809eeb3e4690c2e4d55c04f8ca58d42cc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e7fbb6095ac65223c307c47c506c8e8809eeb3e4690c2e4d55c04f8ca58d42cc\": not found" May 10 00:53:09.208839 kubelet[2098]: E0510 00:53:09.208809 2098 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e7fbb6095ac65223c307c47c506c8e8809eeb3e4690c2e4d55c04f8ca58d42cc\": not found" containerID="e7fbb6095ac65223c307c47c506c8e8809eeb3e4690c2e4d55c04f8ca58d42cc" May 10 00:53:09.209069 kubelet[2098]: I0510 00:53:09.209028 2098 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e7fbb6095ac65223c307c47c506c8e8809eeb3e4690c2e4d55c04f8ca58d42cc"} err="failed to get container status \"e7fbb6095ac65223c307c47c506c8e8809eeb3e4690c2e4d55c04f8ca58d42cc\": rpc error: code = NotFound desc = an error occurred when try to find container \"e7fbb6095ac65223c307c47c506c8e8809eeb3e4690c2e4d55c04f8ca58d42cc\": not found" May 10 00:53:09.209192 kubelet[2098]: I0510 00:53:09.209172 2098 scope.go:117] "RemoveContainer" containerID="de6477869a2405a3488e18264955987fec3525dd79c988143a0b2738f43033b8" May 10 00:53:09.209633 
env[1236]: time="2025-05-10T00:53:09.209556697Z" level=error msg="ContainerStatus for \"de6477869a2405a3488e18264955987fec3525dd79c988143a0b2738f43033b8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"de6477869a2405a3488e18264955987fec3525dd79c988143a0b2738f43033b8\": not found" May 10 00:53:09.209898 kubelet[2098]: E0510 00:53:09.209865 2098 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"de6477869a2405a3488e18264955987fec3525dd79c988143a0b2738f43033b8\": not found" containerID="de6477869a2405a3488e18264955987fec3525dd79c988143a0b2738f43033b8" May 10 00:53:09.210033 kubelet[2098]: I0510 00:53:09.209937 2098 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"de6477869a2405a3488e18264955987fec3525dd79c988143a0b2738f43033b8"} err="failed to get container status \"de6477869a2405a3488e18264955987fec3525dd79c988143a0b2738f43033b8\": rpc error: code = NotFound desc = an error occurred when try to find container \"de6477869a2405a3488e18264955987fec3525dd79c988143a0b2738f43033b8\": not found" May 10 00:53:09.368396 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b82ac4b943995e02233b630c671269a1d2b3f870f166530c201724ec046f4f4a-rootfs.mount: Deactivated successfully. May 10 00:53:09.368552 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7b4742828bf0914845fdb8c7472f1fa98c085d739904da23f11585b8bb0f2b54-rootfs.mount: Deactivated successfully. May 10 00:53:09.368652 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7b4742828bf0914845fdb8c7472f1fa98c085d739904da23f11585b8bb0f2b54-shm.mount: Deactivated successfully. May 10 00:53:09.368770 systemd[1]: var-lib-kubelet-pods-f2059765\x2d1614\x2d4308\x2daaab\x2db28039c37725-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
May 10 00:53:09.368870 systemd[1]: var-lib-kubelet-pods-f2059765\x2d1614\x2d4308\x2daaab\x2db28039c37725-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 10 00:53:09.369029 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-561443205474ebf842e1e9597ff36819fe5e66bb713650e99ec5f810470f844c-rootfs.mount: Deactivated successfully. May 10 00:53:09.369151 systemd[1]: var-lib-kubelet-pods-44d7b4c1\x2d77e3\x2d4069\x2d9436\x2dbe061fc50517-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpg9vt.mount: Deactivated successfully. May 10 00:53:09.369285 systemd[1]: var-lib-kubelet-pods-f2059765\x2d1614\x2d4308\x2daaab\x2db28039c37725-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dth7rh.mount: Deactivated successfully. May 10 00:53:10.340880 sshd[3647]: pam_unix(sshd:session): session closed for user core May 10 00:53:10.346597 systemd[1]: sshd@20-10.128.0.57:22-147.75.109.163:35456.service: Deactivated successfully. May 10 00:53:10.347879 systemd[1]: session-21.scope: Deactivated successfully. May 10 00:53:10.348942 systemd-logind[1219]: Session 21 logged out. Waiting for processes to exit. May 10 00:53:10.350532 systemd-logind[1219]: Removed session 21. May 10 00:53:10.386636 systemd[1]: Started sshd@21-10.128.0.57:22-147.75.109.163:50494.service. May 10 00:53:10.677374 sshd[3820]: Accepted publickey for core from 147.75.109.163 port 50494 ssh2: RSA SHA256:euqJNJC5P+Wq5j+dl78lhvyXKvYvUXQDTjxbGHC2Bdk May 10 00:53:10.679549 sshd[3820]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:53:10.686837 systemd[1]: Started session-22.scope. May 10 00:53:10.687998 systemd-logind[1219]: New session 22 of user core. 
May 10 00:53:10.759118 kubelet[2098]: I0510 00:53:10.759051 2098 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44d7b4c1-77e3-4069-9436-be061fc50517" path="/var/lib/kubelet/pods/44d7b4c1-77e3-4069-9436-be061fc50517/volumes" May 10 00:53:10.759953 kubelet[2098]: I0510 00:53:10.759890 2098 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2059765-1614-4308-aaab-b28039c37725" path="/var/lib/kubelet/pods/f2059765-1614-4308-aaab-b28039c37725/volumes" May 10 00:53:11.550348 kubelet[2098]: I0510 00:53:11.550105 2098 topology_manager.go:215] "Topology Admit Handler" podUID="1ed0e620-280e-4cf3-9684-f29b20b91168" podNamespace="kube-system" podName="cilium-7gvpx" May 10 00:53:11.550625 kubelet[2098]: E0510 00:53:11.550479 2098 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f2059765-1614-4308-aaab-b28039c37725" containerName="clean-cilium-state" May 10 00:53:11.550625 kubelet[2098]: E0510 00:53:11.550509 2098 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f2059765-1614-4308-aaab-b28039c37725" containerName="cilium-agent" May 10 00:53:11.550625 kubelet[2098]: E0510 00:53:11.550522 2098 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="44d7b4c1-77e3-4069-9436-be061fc50517" containerName="cilium-operator" May 10 00:53:11.550625 kubelet[2098]: E0510 00:53:11.550553 2098 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f2059765-1614-4308-aaab-b28039c37725" containerName="mount-cgroup" May 10 00:53:11.550625 kubelet[2098]: E0510 00:53:11.550564 2098 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f2059765-1614-4308-aaab-b28039c37725" containerName="apply-sysctl-overwrites" May 10 00:53:11.550625 kubelet[2098]: E0510 00:53:11.550575 2098 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f2059765-1614-4308-aaab-b28039c37725" containerName="mount-bpf-fs" May 10 00:53:11.551011 kubelet[2098]: I0510 00:53:11.550635 2098 
memory_manager.go:354] "RemoveStaleState removing state" podUID="44d7b4c1-77e3-4069-9436-be061fc50517" containerName="cilium-operator" May 10 00:53:11.551011 kubelet[2098]: I0510 00:53:11.550647 2098 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2059765-1614-4308-aaab-b28039c37725" containerName="cilium-agent" May 10 00:53:11.560570 systemd[1]: Created slice kubepods-burstable-pod1ed0e620_280e_4cf3_9684_f29b20b91168.slice. May 10 00:53:11.570663 sshd[3820]: pam_unix(sshd:session): session closed for user core May 10 00:53:11.575997 systemd[1]: sshd@21-10.128.0.57:22-147.75.109.163:50494.service: Deactivated successfully. May 10 00:53:11.577220 systemd[1]: session-22.scope: Deactivated successfully. May 10 00:53:11.578634 systemd-logind[1219]: Session 22 logged out. Waiting for processes to exit. May 10 00:53:11.580234 systemd-logind[1219]: Removed session 22. May 10 00:53:11.620238 systemd[1]: Started sshd@22-10.128.0.57:22-147.75.109.163:50496.service. May 10 00:53:11.635953 kubelet[2098]: I0510 00:53:11.634121 2098 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1ed0e620-280e-4cf3-9684-f29b20b91168-cilium-run\") pod \"cilium-7gvpx\" (UID: \"1ed0e620-280e-4cf3-9684-f29b20b91168\") " pod="kube-system/cilium-7gvpx" May 10 00:53:11.635953 kubelet[2098]: I0510 00:53:11.634199 2098 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1ed0e620-280e-4cf3-9684-f29b20b91168-bpf-maps\") pod \"cilium-7gvpx\" (UID: \"1ed0e620-280e-4cf3-9684-f29b20b91168\") " pod="kube-system/cilium-7gvpx" May 10 00:53:11.635953 kubelet[2098]: I0510 00:53:11.634277 2098 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1ed0e620-280e-4cf3-9684-f29b20b91168-cni-path\") pod \"cilium-7gvpx\" 
(UID: \"1ed0e620-280e-4cf3-9684-f29b20b91168\") " pod="kube-system/cilium-7gvpx" May 10 00:53:11.635953 kubelet[2098]: I0510 00:53:11.634344 2098 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1ed0e620-280e-4cf3-9684-f29b20b91168-lib-modules\") pod \"cilium-7gvpx\" (UID: \"1ed0e620-280e-4cf3-9684-f29b20b91168\") " pod="kube-system/cilium-7gvpx" May 10 00:53:11.635953 kubelet[2098]: I0510 00:53:11.634375 2098 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1ed0e620-280e-4cf3-9684-f29b20b91168-xtables-lock\") pod \"cilium-7gvpx\" (UID: \"1ed0e620-280e-4cf3-9684-f29b20b91168\") " pod="kube-system/cilium-7gvpx" May 10 00:53:11.635953 kubelet[2098]: I0510 00:53:11.634438 2098 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1ed0e620-280e-4cf3-9684-f29b20b91168-cilium-ipsec-secrets\") pod \"cilium-7gvpx\" (UID: \"1ed0e620-280e-4cf3-9684-f29b20b91168\") " pod="kube-system/cilium-7gvpx" May 10 00:53:11.636444 kubelet[2098]: I0510 00:53:11.634468 2098 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1ed0e620-280e-4cf3-9684-f29b20b91168-hubble-tls\") pod \"cilium-7gvpx\" (UID: \"1ed0e620-280e-4cf3-9684-f29b20b91168\") " pod="kube-system/cilium-7gvpx" May 10 00:53:11.636444 kubelet[2098]: I0510 00:53:11.634537 2098 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1ed0e620-280e-4cf3-9684-f29b20b91168-cilium-cgroup\") pod \"cilium-7gvpx\" (UID: \"1ed0e620-280e-4cf3-9684-f29b20b91168\") " pod="kube-system/cilium-7gvpx" May 10 00:53:11.636444 kubelet[2098]: I0510 
00:53:11.634596 2098 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1ed0e620-280e-4cf3-9684-f29b20b91168-cilium-config-path\") pod \"cilium-7gvpx\" (UID: \"1ed0e620-280e-4cf3-9684-f29b20b91168\") " pod="kube-system/cilium-7gvpx" May 10 00:53:11.636444 kubelet[2098]: I0510 00:53:11.634623 2098 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1ed0e620-280e-4cf3-9684-f29b20b91168-hostproc\") pod \"cilium-7gvpx\" (UID: \"1ed0e620-280e-4cf3-9684-f29b20b91168\") " pod="kube-system/cilium-7gvpx" May 10 00:53:11.636444 kubelet[2098]: I0510 00:53:11.634691 2098 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1ed0e620-280e-4cf3-9684-f29b20b91168-host-proc-sys-net\") pod \"cilium-7gvpx\" (UID: \"1ed0e620-280e-4cf3-9684-f29b20b91168\") " pod="kube-system/cilium-7gvpx" May 10 00:53:11.636444 kubelet[2098]: I0510 00:53:11.634769 2098 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kz2mh\" (UniqueName: \"kubernetes.io/projected/1ed0e620-280e-4cf3-9684-f29b20b91168-kube-api-access-kz2mh\") pod \"cilium-7gvpx\" (UID: \"1ed0e620-280e-4cf3-9684-f29b20b91168\") " pod="kube-system/cilium-7gvpx" May 10 00:53:11.636747 kubelet[2098]: I0510 00:53:11.634831 2098 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1ed0e620-280e-4cf3-9684-f29b20b91168-etc-cni-netd\") pod \"cilium-7gvpx\" (UID: \"1ed0e620-280e-4cf3-9684-f29b20b91168\") " pod="kube-system/cilium-7gvpx" May 10 00:53:11.636747 kubelet[2098]: I0510 00:53:11.634855 2098 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1ed0e620-280e-4cf3-9684-f29b20b91168-clustermesh-secrets\") pod \"cilium-7gvpx\" (UID: \"1ed0e620-280e-4cf3-9684-f29b20b91168\") " pod="kube-system/cilium-7gvpx" May 10 00:53:11.636747 kubelet[2098]: I0510 00:53:11.634936 2098 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1ed0e620-280e-4cf3-9684-f29b20b91168-host-proc-sys-kernel\") pod \"cilium-7gvpx\" (UID: \"1ed0e620-280e-4cf3-9684-f29b20b91168\") " pod="kube-system/cilium-7gvpx" May 10 00:53:11.867497 env[1236]: time="2025-05-10T00:53:11.866730784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7gvpx,Uid:1ed0e620-280e-4cf3-9684-f29b20b91168,Namespace:kube-system,Attempt:0,}" May 10 00:53:11.897838 env[1236]: time="2025-05-10T00:53:11.897729173Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:53:11.898134 env[1236]: time="2025-05-10T00:53:11.897835980Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:53:11.898134 env[1236]: time="2025-05-10T00:53:11.897860841Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:53:11.898402 env[1236]: time="2025-05-10T00:53:11.898323146Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/feefc6c1c91be61dd42d085ccb572c81ddc50a25a10054032f777be176b940fb pid=3843 runtime=io.containerd.runc.v2 May 10 00:53:11.917799 systemd[1]: Started cri-containerd-feefc6c1c91be61dd42d085ccb572c81ddc50a25a10054032f777be176b940fb.scope. 
May 10 00:53:11.932169 sshd[3830]: Accepted publickey for core from 147.75.109.163 port 50496 ssh2: RSA SHA256:euqJNJC5P+Wq5j+dl78lhvyXKvYvUXQDTjxbGHC2Bdk May 10 00:53:11.933976 sshd[3830]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:53:11.942837 systemd[1]: Started session-23.scope. May 10 00:53:11.945010 systemd-logind[1219]: New session 23 of user core. May 10 00:53:11.974195 env[1236]: time="2025-05-10T00:53:11.974137500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7gvpx,Uid:1ed0e620-280e-4cf3-9684-f29b20b91168,Namespace:kube-system,Attempt:0,} returns sandbox id \"feefc6c1c91be61dd42d085ccb572c81ddc50a25a10054032f777be176b940fb\"" May 10 00:53:11.981996 env[1236]: time="2025-05-10T00:53:11.981937827Z" level=info msg="CreateContainer within sandbox \"feefc6c1c91be61dd42d085ccb572c81ddc50a25a10054032f777be176b940fb\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 10 00:53:12.007943 env[1236]: time="2025-05-10T00:53:12.005871769Z" level=info msg="CreateContainer within sandbox \"feefc6c1c91be61dd42d085ccb572c81ddc50a25a10054032f777be176b940fb\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"868808a5015e0ccff286fcc48f9ef15a30f87506c5218aab8e2c6268fcc6b4c6\"" May 10 00:53:12.008132 kubelet[2098]: I0510 00:53:12.006762 2098 setters.go:580] "Node became not ready" node="ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-10T00:53:12Z","lastTransitionTime":"2025-05-10T00:53:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} May 10 00:53:12.009185 env[1236]: time="2025-05-10T00:53:12.009124559Z" level=info msg="StartContainer for \"868808a5015e0ccff286fcc48f9ef15a30f87506c5218aab8e2c6268fcc6b4c6\"" May 10 00:53:12.067829 systemd[1]: Started 
cri-containerd-868808a5015e0ccff286fcc48f9ef15a30f87506c5218aab8e2c6268fcc6b4c6.scope. May 10 00:53:12.081310 systemd[1]: cri-containerd-868808a5015e0ccff286fcc48f9ef15a30f87506c5218aab8e2c6268fcc6b4c6.scope: Deactivated successfully. May 10 00:53:12.104726 env[1236]: time="2025-05-10T00:53:12.104640306Z" level=info msg="shim disconnected" id=868808a5015e0ccff286fcc48f9ef15a30f87506c5218aab8e2c6268fcc6b4c6 May 10 00:53:12.104726 env[1236]: time="2025-05-10T00:53:12.104726031Z" level=warning msg="cleaning up after shim disconnected" id=868808a5015e0ccff286fcc48f9ef15a30f87506c5218aab8e2c6268fcc6b4c6 namespace=k8s.io May 10 00:53:12.105357 env[1236]: time="2025-05-10T00:53:12.104740316Z" level=info msg="cleaning up dead shim" May 10 00:53:12.123079 env[1236]: time="2025-05-10T00:53:12.122895299Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:53:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3912 runtime=io.containerd.runc.v2\ntime=\"2025-05-10T00:53:12Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/868808a5015e0ccff286fcc48f9ef15a30f87506c5218aab8e2c6268fcc6b4c6/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" May 10 00:53:12.123803 env[1236]: time="2025-05-10T00:53:12.123646632Z" level=error msg="copy shim log" error="read /proc/self/fd/39: file already closed" May 10 00:53:12.126783 env[1236]: time="2025-05-10T00:53:12.126689761Z" level=error msg="Failed to pipe stdout of container \"868808a5015e0ccff286fcc48f9ef15a30f87506c5218aab8e2c6268fcc6b4c6\"" error="reading from a closed fifo" May 10 00:53:12.128606 env[1236]: time="2025-05-10T00:53:12.127949805Z" level=error msg="Failed to pipe stderr of container \"868808a5015e0ccff286fcc48f9ef15a30f87506c5218aab8e2c6268fcc6b4c6\"" error="reading from a closed fifo" May 10 00:53:12.135297 env[1236]: time="2025-05-10T00:53:12.131077371Z" level=error msg="StartContainer for 
\"868808a5015e0ccff286fcc48f9ef15a30f87506c5218aab8e2c6268fcc6b4c6\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" May 10 00:53:12.135507 kubelet[2098]: E0510 00:53:12.131424 2098 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="868808a5015e0ccff286fcc48f9ef15a30f87506c5218aab8e2c6268fcc6b4c6" May 10 00:53:12.135507 kubelet[2098]: E0510 00:53:12.131626 2098 kuberuntime_manager.go:1256] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; May 10 00:53:12.135507 kubelet[2098]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; May 10 00:53:12.135507 kubelet[2098]: rm /hostbin/cilium-mount May 10 00:53:12.135776 kubelet[2098]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kz2mh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-7gvpx_kube-system(1ed0e620-280e-4cf3-9684-f29b20b91168): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown May 10 00:53:12.135993 kubelet[2098]: E0510 00:53:12.131670 2098 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-7gvpx" podUID="1ed0e620-280e-4cf3-9684-f29b20b91168" May 10 00:53:12.173801 env[1236]: time="2025-05-10T00:53:12.173736023Z" level=info msg="CreateContainer within sandbox \"feefc6c1c91be61dd42d085ccb572c81ddc50a25a10054032f777be176b940fb\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" May 10 00:53:12.198503 env[1236]: time="2025-05-10T00:53:12.198435989Z" level=info msg="CreateContainer within sandbox \"feefc6c1c91be61dd42d085ccb572c81ddc50a25a10054032f777be176b940fb\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"ac142c06c6462206baf50f980a70f866978192e85db1d7c205fd1a49227224d3\"" May 10 00:53:12.200029 env[1236]: time="2025-05-10T00:53:12.199962958Z" level=info msg="StartContainer for \"ac142c06c6462206baf50f980a70f866978192e85db1d7c205fd1a49227224d3\"" May 10 00:53:12.260510 systemd[1]: Started cri-containerd-ac142c06c6462206baf50f980a70f866978192e85db1d7c205fd1a49227224d3.scope. May 10 00:53:12.282041 systemd[1]: cri-containerd-ac142c06c6462206baf50f980a70f866978192e85db1d7c205fd1a49227224d3.scope: Deactivated successfully. 
May 10 00:53:12.295100 env[1236]: time="2025-05-10T00:53:12.294969709Z" level=info msg="shim disconnected" id=ac142c06c6462206baf50f980a70f866978192e85db1d7c205fd1a49227224d3 May 10 00:53:12.295607 env[1236]: time="2025-05-10T00:53:12.295559903Z" level=warning msg="cleaning up after shim disconnected" id=ac142c06c6462206baf50f980a70f866978192e85db1d7c205fd1a49227224d3 namespace=k8s.io May 10 00:53:12.295804 env[1236]: time="2025-05-10T00:53:12.295777219Z" level=info msg="cleaning up dead shim" May 10 00:53:12.308413 sshd[3830]: pam_unix(sshd:session): session closed for user core May 10 00:53:12.313867 systemd-logind[1219]: Session 23 logged out. Waiting for processes to exit. May 10 00:53:12.314312 systemd[1]: sshd@22-10.128.0.57:22-147.75.109.163:50496.service: Deactivated successfully. May 10 00:53:12.315513 systemd[1]: session-23.scope: Deactivated successfully. May 10 00:53:12.316807 env[1236]: time="2025-05-10T00:53:12.316750018Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:53:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3948 runtime=io.containerd.runc.v2\ntime=\"2025-05-10T00:53:12Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/ac142c06c6462206baf50f980a70f866978192e85db1d7c205fd1a49227224d3/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" May 10 00:53:12.317518 env[1236]: time="2025-05-10T00:53:12.317438376Z" level=error msg="copy shim log" error="read /proc/self/fd/39: file already closed" May 10 00:53:12.319051 env[1236]: time="2025-05-10T00:53:12.318985448Z" level=error msg="Failed to pipe stdout of container \"ac142c06c6462206baf50f980a70f866978192e85db1d7c205fd1a49227224d3\"" error="reading from a closed fifo" May 10 00:53:12.319208 env[1236]: time="2025-05-10T00:53:12.318000854Z" level=error msg="Failed to pipe stderr of container \"ac142c06c6462206baf50f980a70f866978192e85db1d7c205fd1a49227224d3\"" error="reading from a 
closed fifo" May 10 00:53:12.319713 systemd-logind[1219]: Removed session 23. May 10 00:53:12.322166 env[1236]: time="2025-05-10T00:53:12.322092796Z" level=error msg="StartContainer for \"ac142c06c6462206baf50f980a70f866978192e85db1d7c205fd1a49227224d3\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" May 10 00:53:12.323591 kubelet[2098]: E0510 00:53:12.322513 2098 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="ac142c06c6462206baf50f980a70f866978192e85db1d7c205fd1a49227224d3" May 10 00:53:12.323591 kubelet[2098]: E0510 00:53:12.322748 2098 kuberuntime_manager.go:1256] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; May 10 00:53:12.323591 kubelet[2098]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; May 10 00:53:12.323591 kubelet[2098]: rm /hostbin/cilium-mount May 10 00:53:12.323946 kubelet[2098]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kz2mh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-7gvpx_kube-system(1ed0e620-280e-4cf3-9684-f29b20b91168): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown May 10 00:53:12.324170 kubelet[2098]: E0510 00:53:12.322863 2098 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-7gvpx" podUID="1ed0e620-280e-4cf3-9684-f29b20b91168" May 10 00:53:12.353521 systemd[1]: Started sshd@23-10.128.0.57:22-147.75.109.163:50500.service. May 10 00:53:12.644824 sshd[3964]: Accepted publickey for core from 147.75.109.163 port 50500 ssh2: RSA SHA256:euqJNJC5P+Wq5j+dl78lhvyXKvYvUXQDTjxbGHC2Bdk May 10 00:53:12.647153 sshd[3964]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:53:12.655190 systemd[1]: Started session-24.scope. May 10 00:53:12.656296 systemd-logind[1219]: New session 24 of user core. May 10 00:53:13.161188 kubelet[2098]: I0510 00:53:13.161141 2098 scope.go:117] "RemoveContainer" containerID="868808a5015e0ccff286fcc48f9ef15a30f87506c5218aab8e2c6268fcc6b4c6" May 10 00:53:13.162598 env[1236]: time="2025-05-10T00:53:13.162548416Z" level=info msg="StopPodSandbox for \"feefc6c1c91be61dd42d085ccb572c81ddc50a25a10054032f777be176b940fb\"" May 10 00:53:13.163258 env[1236]: time="2025-05-10T00:53:13.163201189Z" level=info msg="Container to stop \"868808a5015e0ccff286fcc48f9ef15a30f87506c5218aab8e2c6268fcc6b4c6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 10 00:53:13.163411 env[1236]: time="2025-05-10T00:53:13.163383128Z" level=info msg="Container to stop \"ac142c06c6462206baf50f980a70f866978192e85db1d7c205fd1a49227224d3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 10 00:53:13.167008 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-feefc6c1c91be61dd42d085ccb572c81ddc50a25a10054032f777be176b940fb-shm.mount: Deactivated successfully. 
May 10 00:53:13.184945 systemd[1]: cri-containerd-feefc6c1c91be61dd42d085ccb572c81ddc50a25a10054032f777be176b940fb.scope: Deactivated successfully. May 10 00:53:13.189226 env[1236]: time="2025-05-10T00:53:13.188772073Z" level=info msg="RemoveContainer for \"868808a5015e0ccff286fcc48f9ef15a30f87506c5218aab8e2c6268fcc6b4c6\"" May 10 00:53:13.195402 env[1236]: time="2025-05-10T00:53:13.195194699Z" level=info msg="RemoveContainer for \"868808a5015e0ccff286fcc48f9ef15a30f87506c5218aab8e2c6268fcc6b4c6\" returns successfully" May 10 00:53:13.224579 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-feefc6c1c91be61dd42d085ccb572c81ddc50a25a10054032f777be176b940fb-rootfs.mount: Deactivated successfully. May 10 00:53:13.241940 env[1236]: time="2025-05-10T00:53:13.241844113Z" level=info msg="shim disconnected" id=feefc6c1c91be61dd42d085ccb572c81ddc50a25a10054032f777be176b940fb May 10 00:53:13.241940 env[1236]: time="2025-05-10T00:53:13.241938180Z" level=warning msg="cleaning up after shim disconnected" id=feefc6c1c91be61dd42d085ccb572c81ddc50a25a10054032f777be176b940fb namespace=k8s.io May 10 00:53:13.242350 env[1236]: time="2025-05-10T00:53:13.241954763Z" level=info msg="cleaning up dead shim" May 10 00:53:13.255877 env[1236]: time="2025-05-10T00:53:13.255810054Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:53:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3990 runtime=io.containerd.runc.v2\n" May 10 00:53:13.256383 env[1236]: time="2025-05-10T00:53:13.256338356Z" level=info msg="TearDown network for sandbox \"feefc6c1c91be61dd42d085ccb572c81ddc50a25a10054032f777be176b940fb\" successfully" May 10 00:53:13.256383 env[1236]: time="2025-05-10T00:53:13.256382310Z" level=info msg="StopPodSandbox for \"feefc6c1c91be61dd42d085ccb572c81ddc50a25a10054032f777be176b940fb\" returns successfully" May 10 00:53:13.351584 kubelet[2098]: I0510 00:53:13.351514 2098 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume 
\"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1ed0e620-280e-4cf3-9684-f29b20b91168-host-proc-sys-kernel\") pod \"1ed0e620-280e-4cf3-9684-f29b20b91168\" (UID: \"1ed0e620-280e-4cf3-9684-f29b20b91168\") " May 10 00:53:13.351584 kubelet[2098]: I0510 00:53:13.351589 2098 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kz2mh\" (UniqueName: \"kubernetes.io/projected/1ed0e620-280e-4cf3-9684-f29b20b91168-kube-api-access-kz2mh\") pod \"1ed0e620-280e-4cf3-9684-f29b20b91168\" (UID: \"1ed0e620-280e-4cf3-9684-f29b20b91168\") " May 10 00:53:13.351949 kubelet[2098]: I0510 00:53:13.351621 2098 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1ed0e620-280e-4cf3-9684-f29b20b91168-clustermesh-secrets\") pod \"1ed0e620-280e-4cf3-9684-f29b20b91168\" (UID: \"1ed0e620-280e-4cf3-9684-f29b20b91168\") " May 10 00:53:13.351949 kubelet[2098]: I0510 00:53:13.351658 2098 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1ed0e620-280e-4cf3-9684-f29b20b91168-cilium-cgroup\") pod \"1ed0e620-280e-4cf3-9684-f29b20b91168\" (UID: \"1ed0e620-280e-4cf3-9684-f29b20b91168\") " May 10 00:53:13.351949 kubelet[2098]: I0510 00:53:13.351685 2098 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1ed0e620-280e-4cf3-9684-f29b20b91168-hostproc\") pod \"1ed0e620-280e-4cf3-9684-f29b20b91168\" (UID: \"1ed0e620-280e-4cf3-9684-f29b20b91168\") " May 10 00:53:13.351949 kubelet[2098]: I0510 00:53:13.351711 2098 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1ed0e620-280e-4cf3-9684-f29b20b91168-hubble-tls\") pod \"1ed0e620-280e-4cf3-9684-f29b20b91168\" (UID: \"1ed0e620-280e-4cf3-9684-f29b20b91168\") " May 10 00:53:13.351949 
kubelet[2098]: I0510 00:53:13.351733 2098 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1ed0e620-280e-4cf3-9684-f29b20b91168-bpf-maps\") pod \"1ed0e620-280e-4cf3-9684-f29b20b91168\" (UID: \"1ed0e620-280e-4cf3-9684-f29b20b91168\") " May 10 00:53:13.351949 kubelet[2098]: I0510 00:53:13.351755 2098 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1ed0e620-280e-4cf3-9684-f29b20b91168-cni-path\") pod \"1ed0e620-280e-4cf3-9684-f29b20b91168\" (UID: \"1ed0e620-280e-4cf3-9684-f29b20b91168\") " May 10 00:53:13.352413 kubelet[2098]: I0510 00:53:13.351778 2098 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1ed0e620-280e-4cf3-9684-f29b20b91168-cilium-run\") pod \"1ed0e620-280e-4cf3-9684-f29b20b91168\" (UID: \"1ed0e620-280e-4cf3-9684-f29b20b91168\") " May 10 00:53:13.352413 kubelet[2098]: I0510 00:53:13.351807 2098 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1ed0e620-280e-4cf3-9684-f29b20b91168-cilium-ipsec-secrets\") pod \"1ed0e620-280e-4cf3-9684-f29b20b91168\" (UID: \"1ed0e620-280e-4cf3-9684-f29b20b91168\") " May 10 00:53:13.352413 kubelet[2098]: I0510 00:53:13.351833 2098 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1ed0e620-280e-4cf3-9684-f29b20b91168-etc-cni-netd\") pod \"1ed0e620-280e-4cf3-9684-f29b20b91168\" (UID: \"1ed0e620-280e-4cf3-9684-f29b20b91168\") " May 10 00:53:13.352413 kubelet[2098]: I0510 00:53:13.351860 2098 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1ed0e620-280e-4cf3-9684-f29b20b91168-host-proc-sys-net\") pod 
\"1ed0e620-280e-4cf3-9684-f29b20b91168\" (UID: \"1ed0e620-280e-4cf3-9684-f29b20b91168\") " May 10 00:53:13.352413 kubelet[2098]: I0510 00:53:13.351891 2098 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1ed0e620-280e-4cf3-9684-f29b20b91168-cilium-config-path\") pod \"1ed0e620-280e-4cf3-9684-f29b20b91168\" (UID: \"1ed0e620-280e-4cf3-9684-f29b20b91168\") " May 10 00:53:13.352413 kubelet[2098]: I0510 00:53:13.351961 2098 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1ed0e620-280e-4cf3-9684-f29b20b91168-lib-modules\") pod \"1ed0e620-280e-4cf3-9684-f29b20b91168\" (UID: \"1ed0e620-280e-4cf3-9684-f29b20b91168\") " May 10 00:53:13.352733 kubelet[2098]: I0510 00:53:13.352000 2098 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1ed0e620-280e-4cf3-9684-f29b20b91168-xtables-lock\") pod \"1ed0e620-280e-4cf3-9684-f29b20b91168\" (UID: \"1ed0e620-280e-4cf3-9684-f29b20b91168\") " May 10 00:53:13.352733 kubelet[2098]: I0510 00:53:13.352108 2098 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1ed0e620-280e-4cf3-9684-f29b20b91168-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "1ed0e620-280e-4cf3-9684-f29b20b91168" (UID: "1ed0e620-280e-4cf3-9684-f29b20b91168"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:53:13.352733 kubelet[2098]: I0510 00:53:13.352152 2098 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1ed0e620-280e-4cf3-9684-f29b20b91168-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "1ed0e620-280e-4cf3-9684-f29b20b91168" (UID: "1ed0e620-280e-4cf3-9684-f29b20b91168"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:53:13.353479 kubelet[2098]: I0510 00:53:13.353064 2098 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1ed0e620-280e-4cf3-9684-f29b20b91168-cni-path" (OuterVolumeSpecName: "cni-path") pod "1ed0e620-280e-4cf3-9684-f29b20b91168" (UID: "1ed0e620-280e-4cf3-9684-f29b20b91168"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:53:13.353479 kubelet[2098]: I0510 00:53:13.353076 2098 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1ed0e620-280e-4cf3-9684-f29b20b91168-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "1ed0e620-280e-4cf3-9684-f29b20b91168" (UID: "1ed0e620-280e-4cf3-9684-f29b20b91168"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:53:13.354108 kubelet[2098]: I0510 00:53:13.354059 2098 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1ed0e620-280e-4cf3-9684-f29b20b91168-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "1ed0e620-280e-4cf3-9684-f29b20b91168" (UID: "1ed0e620-280e-4cf3-9684-f29b20b91168"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:53:13.354233 kubelet[2098]: I0510 00:53:13.354124 2098 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1ed0e620-280e-4cf3-9684-f29b20b91168-hostproc" (OuterVolumeSpecName: "hostproc") pod "1ed0e620-280e-4cf3-9684-f29b20b91168" (UID: "1ed0e620-280e-4cf3-9684-f29b20b91168"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:53:13.357307 kubelet[2098]: I0510 00:53:13.357258 2098 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1ed0e620-280e-4cf3-9684-f29b20b91168-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "1ed0e620-280e-4cf3-9684-f29b20b91168" (UID: "1ed0e620-280e-4cf3-9684-f29b20b91168"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:53:13.357535 kubelet[2098]: I0510 00:53:13.357496 2098 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1ed0e620-280e-4cf3-9684-f29b20b91168-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "1ed0e620-280e-4cf3-9684-f29b20b91168" (UID: "1ed0e620-280e-4cf3-9684-f29b20b91168"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:53:13.368599 kubelet[2098]: I0510 00:53:13.359957 2098 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1ed0e620-280e-4cf3-9684-f29b20b91168-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "1ed0e620-280e-4cf3-9684-f29b20b91168" (UID: "1ed0e620-280e-4cf3-9684-f29b20b91168"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:53:13.362116 systemd[1]: var-lib-kubelet-pods-1ed0e620\x2d280e\x2d4cf3\x2d9684\x2df29b20b91168-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkz2mh.mount: Deactivated successfully. May 10 00:53:13.369216 kubelet[2098]: I0510 00:53:13.369172 2098 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ed0e620-280e-4cf3-9684-f29b20b91168-kube-api-access-kz2mh" (OuterVolumeSpecName: "kube-api-access-kz2mh") pod "1ed0e620-280e-4cf3-9684-f29b20b91168" (UID: "1ed0e620-280e-4cf3-9684-f29b20b91168"). InnerVolumeSpecName "kube-api-access-kz2mh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" May 10 00:53:13.369530 systemd[1]: var-lib-kubelet-pods-1ed0e620\x2d280e\x2d4cf3\x2d9684\x2df29b20b91168-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. May 10 00:53:13.369695 systemd[1]: var-lib-kubelet-pods-1ed0e620\x2d280e\x2d4cf3\x2d9684\x2df29b20b91168-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 10 00:53:13.371800 kubelet[2098]: I0510 00:53:13.371753 2098 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ed0e620-280e-4cf3-9684-f29b20b91168-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1ed0e620-280e-4cf3-9684-f29b20b91168" (UID: "1ed0e620-280e-4cf3-9684-f29b20b91168"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 10 00:53:13.372094 kubelet[2098]: I0510 00:53:13.372064 2098 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1ed0e620-280e-4cf3-9684-f29b20b91168-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "1ed0e620-280e-4cf3-9684-f29b20b91168" (UID: "1ed0e620-280e-4cf3-9684-f29b20b91168"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:53:13.373611 kubelet[2098]: I0510 00:53:13.373556 2098 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ed0e620-280e-4cf3-9684-f29b20b91168-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "1ed0e620-280e-4cf3-9684-f29b20b91168" (UID: "1ed0e620-280e-4cf3-9684-f29b20b91168"). InnerVolumeSpecName "cilium-ipsec-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" May 10 00:53:13.373727 kubelet[2098]: I0510 00:53:13.373684 2098 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ed0e620-280e-4cf3-9684-f29b20b91168-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "1ed0e620-280e-4cf3-9684-f29b20b91168" (UID: "1ed0e620-280e-4cf3-9684-f29b20b91168"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 10 00:53:13.377441 kubelet[2098]: I0510 00:53:13.377396 2098 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ed0e620-280e-4cf3-9684-f29b20b91168-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "1ed0e620-280e-4cf3-9684-f29b20b91168" (UID: "1ed0e620-280e-4cf3-9684-f29b20b91168"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" May 10 00:53:13.453391 kubelet[2098]: I0510 00:53:13.453001 2098 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1ed0e620-280e-4cf3-9684-f29b20b91168-xtables-lock\") on node \"ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388\" DevicePath \"\"" May 10 00:53:13.453391 kubelet[2098]: I0510 00:53:13.453050 2098 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1ed0e620-280e-4cf3-9684-f29b20b91168-host-proc-sys-net\") on node \"ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388\" DevicePath \"\"" May 10 00:53:13.453391 kubelet[2098]: I0510 00:53:13.453072 2098 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1ed0e620-280e-4cf3-9684-f29b20b91168-cilium-config-path\") on node \"ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388\" DevicePath \"\"" May 10 00:53:13.453391 kubelet[2098]: I0510 00:53:13.453091 2098 reconciler_common.go:289] "Volume detached for 
volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1ed0e620-280e-4cf3-9684-f29b20b91168-lib-modules\") on node \"ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388\" DevicePath \"\"" May 10 00:53:13.453391 kubelet[2098]: I0510 00:53:13.453106 2098 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1ed0e620-280e-4cf3-9684-f29b20b91168-clustermesh-secrets\") on node \"ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388\" DevicePath \"\"" May 10 00:53:13.453391 kubelet[2098]: I0510 00:53:13.453157 2098 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1ed0e620-280e-4cf3-9684-f29b20b91168-host-proc-sys-kernel\") on node \"ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388\" DevicePath \"\"" May 10 00:53:13.453391 kubelet[2098]: I0510 00:53:13.453175 2098 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-kz2mh\" (UniqueName: \"kubernetes.io/projected/1ed0e620-280e-4cf3-9684-f29b20b91168-kube-api-access-kz2mh\") on node \"ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388\" DevicePath \"\"" May 10 00:53:13.454658 kubelet[2098]: I0510 00:53:13.453198 2098 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1ed0e620-280e-4cf3-9684-f29b20b91168-hubble-tls\") on node \"ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388\" DevicePath \"\"" May 10 00:53:13.454658 kubelet[2098]: I0510 00:53:13.453217 2098 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1ed0e620-280e-4cf3-9684-f29b20b91168-cilium-cgroup\") on node \"ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388\" DevicePath \"\"" May 10 00:53:13.454658 kubelet[2098]: I0510 00:53:13.453242 2098 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/1ed0e620-280e-4cf3-9684-f29b20b91168-hostproc\") on node \"ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388\" DevicePath \"\"" May 10 00:53:13.454658 kubelet[2098]: I0510 00:53:13.453257 2098 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1ed0e620-280e-4cf3-9684-f29b20b91168-bpf-maps\") on node \"ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388\" DevicePath \"\"" May 10 00:53:13.454658 kubelet[2098]: I0510 00:53:13.453271 2098 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1ed0e620-280e-4cf3-9684-f29b20b91168-cni-path\") on node \"ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388\" DevicePath \"\"" May 10 00:53:13.454658 kubelet[2098]: I0510 00:53:13.453290 2098 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1ed0e620-280e-4cf3-9684-f29b20b91168-cilium-run\") on node \"ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388\" DevicePath \"\"" May 10 00:53:13.454658 kubelet[2098]: I0510 00:53:13.453306 2098 reconciler_common.go:289] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1ed0e620-280e-4cf3-9684-f29b20b91168-cilium-ipsec-secrets\") on node \"ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388\" DevicePath \"\"" May 10 00:53:13.455502 kubelet[2098]: I0510 00:53:13.453324 2098 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1ed0e620-280e-4cf3-9684-f29b20b91168-etc-cni-netd\") on node \"ci-3510-3-7-nightly-20250509-2100-23376fd288632b292388\" DevicePath \"\"" May 10 00:53:13.744460 systemd[1]: var-lib-kubelet-pods-1ed0e620\x2d280e\x2d4cf3\x2d9684\x2df29b20b91168-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
May 10 00:53:13.767840 update_engine[1222]: I0510 00:53:13.767767 1222 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 10 00:53:13.768419 update_engine[1222]: I0510 00:53:13.768204 1222 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 10 00:53:13.768491 update_engine[1222]: I0510 00:53:13.768455 1222 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 10 00:53:13.778595 update_engine[1222]: E0510 00:53:13.778389 1222 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 10 00:53:13.778595 update_engine[1222]: I0510 00:53:13.778551 1222 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 May 10 00:53:13.948346 kubelet[2098]: E0510 00:53:13.948278 2098 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 10 00:53:14.165485 kubelet[2098]: I0510 00:53:14.165446 2098 scope.go:117] "RemoveContainer" containerID="ac142c06c6462206baf50f980a70f866978192e85db1d7c205fd1a49227224d3" May 10 00:53:14.168983 env[1236]: time="2025-05-10T00:53:14.168930825Z" level=info msg="RemoveContainer for \"ac142c06c6462206baf50f980a70f866978192e85db1d7c205fd1a49227224d3\"" May 10 00:53:14.172556 systemd[1]: Removed slice kubepods-burstable-pod1ed0e620_280e_4cf3_9684_f29b20b91168.slice. 
May 10 00:53:14.178938 env[1236]: time="2025-05-10T00:53:14.178512054Z" level=info msg="RemoveContainer for \"ac142c06c6462206baf50f980a70f866978192e85db1d7c205fd1a49227224d3\" returns successfully" May 10 00:53:14.248808 kubelet[2098]: I0510 00:53:14.248744 2098 topology_manager.go:215] "Topology Admit Handler" podUID="62db6ec2-0d98-4280-9ead-23e93cd7e542" podNamespace="kube-system" podName="cilium-bk5m8" May 10 00:53:14.249079 kubelet[2098]: E0510 00:53:14.248835 2098 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1ed0e620-280e-4cf3-9684-f29b20b91168" containerName="mount-cgroup" May 10 00:53:14.249079 kubelet[2098]: E0510 00:53:14.248851 2098 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1ed0e620-280e-4cf3-9684-f29b20b91168" containerName="mount-cgroup" May 10 00:53:14.249079 kubelet[2098]: I0510 00:53:14.248883 2098 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ed0e620-280e-4cf3-9684-f29b20b91168" containerName="mount-cgroup" May 10 00:53:14.249079 kubelet[2098]: I0510 00:53:14.248894 2098 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ed0e620-280e-4cf3-9684-f29b20b91168" containerName="mount-cgroup" May 10 00:53:14.257338 systemd[1]: Created slice kubepods-burstable-pod62db6ec2_0d98_4280_9ead_23e93cd7e542.slice. 
May 10 00:53:14.360605 kubelet[2098]: I0510 00:53:14.360558 2098 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/62db6ec2-0d98-4280-9ead-23e93cd7e542-etc-cni-netd\") pod \"cilium-bk5m8\" (UID: \"62db6ec2-0d98-4280-9ead-23e93cd7e542\") " pod="kube-system/cilium-bk5m8" May 10 00:53:14.361077 kubelet[2098]: I0510 00:53:14.360959 2098 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/62db6ec2-0d98-4280-9ead-23e93cd7e542-cilium-config-path\") pod \"cilium-bk5m8\" (UID: \"62db6ec2-0d98-4280-9ead-23e93cd7e542\") " pod="kube-system/cilium-bk5m8" May 10 00:53:14.361362 kubelet[2098]: I0510 00:53:14.361312 2098 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/62db6ec2-0d98-4280-9ead-23e93cd7e542-host-proc-sys-net\") pod \"cilium-bk5m8\" (UID: \"62db6ec2-0d98-4280-9ead-23e93cd7e542\") " pod="kube-system/cilium-bk5m8" May 10 00:53:14.361582 kubelet[2098]: I0510 00:53:14.361546 2098 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/62db6ec2-0d98-4280-9ead-23e93cd7e542-hubble-tls\") pod \"cilium-bk5m8\" (UID: \"62db6ec2-0d98-4280-9ead-23e93cd7e542\") " pod="kube-system/cilium-bk5m8" May 10 00:53:14.361773 kubelet[2098]: I0510 00:53:14.361734 2098 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/62db6ec2-0d98-4280-9ead-23e93cd7e542-hostproc\") pod \"cilium-bk5m8\" (UID: \"62db6ec2-0d98-4280-9ead-23e93cd7e542\") " pod="kube-system/cilium-bk5m8" May 10 00:53:14.361986 kubelet[2098]: I0510 00:53:14.361956 2098 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjc9p\" (UniqueName: \"kubernetes.io/projected/62db6ec2-0d98-4280-9ead-23e93cd7e542-kube-api-access-tjc9p\") pod \"cilium-bk5m8\" (UID: \"62db6ec2-0d98-4280-9ead-23e93cd7e542\") " pod="kube-system/cilium-bk5m8" May 10 00:53:14.362219 kubelet[2098]: I0510 00:53:14.362180 2098 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/62db6ec2-0d98-4280-9ead-23e93cd7e542-cilium-run\") pod \"cilium-bk5m8\" (UID: \"62db6ec2-0d98-4280-9ead-23e93cd7e542\") " pod="kube-system/cilium-bk5m8" May 10 00:53:14.362476 kubelet[2098]: I0510 00:53:14.362449 2098 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/62db6ec2-0d98-4280-9ead-23e93cd7e542-clustermesh-secrets\") pod \"cilium-bk5m8\" (UID: \"62db6ec2-0d98-4280-9ead-23e93cd7e542\") " pod="kube-system/cilium-bk5m8" May 10 00:53:14.362673 kubelet[2098]: I0510 00:53:14.362635 2098 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/62db6ec2-0d98-4280-9ead-23e93cd7e542-bpf-maps\") pod \"cilium-bk5m8\" (UID: \"62db6ec2-0d98-4280-9ead-23e93cd7e542\") " pod="kube-system/cilium-bk5m8" May 10 00:53:14.362841 kubelet[2098]: I0510 00:53:14.362817 2098 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/62db6ec2-0d98-4280-9ead-23e93cd7e542-lib-modules\") pod \"cilium-bk5m8\" (UID: \"62db6ec2-0d98-4280-9ead-23e93cd7e542\") " pod="kube-system/cilium-bk5m8" May 10 00:53:14.363062 kubelet[2098]: I0510 00:53:14.363026 2098 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/62db6ec2-0d98-4280-9ead-23e93cd7e542-cilium-cgroup\") pod \"cilium-bk5m8\" (UID: \"62db6ec2-0d98-4280-9ead-23e93cd7e542\") " pod="kube-system/cilium-bk5m8" May 10 00:53:14.363231 kubelet[2098]: I0510 00:53:14.363208 2098 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/62db6ec2-0d98-4280-9ead-23e93cd7e542-cni-path\") pod \"cilium-bk5m8\" (UID: \"62db6ec2-0d98-4280-9ead-23e93cd7e542\") " pod="kube-system/cilium-bk5m8" May 10 00:53:14.363420 kubelet[2098]: I0510 00:53:14.363396 2098 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/62db6ec2-0d98-4280-9ead-23e93cd7e542-xtables-lock\") pod \"cilium-bk5m8\" (UID: \"62db6ec2-0d98-4280-9ead-23e93cd7e542\") " pod="kube-system/cilium-bk5m8" May 10 00:53:14.363605 kubelet[2098]: I0510 00:53:14.363575 2098 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/62db6ec2-0d98-4280-9ead-23e93cd7e542-host-proc-sys-kernel\") pod \"cilium-bk5m8\" (UID: \"62db6ec2-0d98-4280-9ead-23e93cd7e542\") " pod="kube-system/cilium-bk5m8" May 10 00:53:14.363820 kubelet[2098]: I0510 00:53:14.363756 2098 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/62db6ec2-0d98-4280-9ead-23e93cd7e542-cilium-ipsec-secrets\") pod \"cilium-bk5m8\" (UID: \"62db6ec2-0d98-4280-9ead-23e93cd7e542\") " pod="kube-system/cilium-bk5m8" May 10 00:53:14.562180 env[1236]: time="2025-05-10T00:53:14.562001567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bk5m8,Uid:62db6ec2-0d98-4280-9ead-23e93cd7e542,Namespace:kube-system,Attempt:0,}" May 10 00:53:14.590701 env[1236]: time="2025-05-10T00:53:14.590572615Z" level=info 
msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:53:14.590701 env[1236]: time="2025-05-10T00:53:14.590632998Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:53:14.591141 env[1236]: time="2025-05-10T00:53:14.590652449Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:53:14.591304 env[1236]: time="2025-05-10T00:53:14.591080132Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b8e946fe67ab907a3bf6cbb48dfc7b6aeb02c7530167dc77bfb44fc6726bd3d4 pid=4018 runtime=io.containerd.runc.v2 May 10 00:53:14.610330 systemd[1]: Started cri-containerd-b8e946fe67ab907a3bf6cbb48dfc7b6aeb02c7530167dc77bfb44fc6726bd3d4.scope. May 10 00:53:14.646817 env[1236]: time="2025-05-10T00:53:14.646122801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bk5m8,Uid:62db6ec2-0d98-4280-9ead-23e93cd7e542,Namespace:kube-system,Attempt:0,} returns sandbox id \"b8e946fe67ab907a3bf6cbb48dfc7b6aeb02c7530167dc77bfb44fc6726bd3d4\"" May 10 00:53:14.650565 env[1236]: time="2025-05-10T00:53:14.650400276Z" level=info msg="CreateContainer within sandbox \"b8e946fe67ab907a3bf6cbb48dfc7b6aeb02c7530167dc77bfb44fc6726bd3d4\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 10 00:53:14.668006 env[1236]: time="2025-05-10T00:53:14.667933785Z" level=info msg="CreateContainer within sandbox \"b8e946fe67ab907a3bf6cbb48dfc7b6aeb02c7530167dc77bfb44fc6726bd3d4\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f1a01c43fd8534dcdc7e1c108ce99e7664042d6b8fff932083fc7c93706d1973\"" May 10 00:53:14.669247 env[1236]: time="2025-05-10T00:53:14.669197685Z" level=info msg="StartContainer for 
\"f1a01c43fd8534dcdc7e1c108ce99e7664042d6b8fff932083fc7c93706d1973\"" May 10 00:53:14.693899 systemd[1]: Started cri-containerd-f1a01c43fd8534dcdc7e1c108ce99e7664042d6b8fff932083fc7c93706d1973.scope. May 10 00:53:14.737589 env[1236]: time="2025-05-10T00:53:14.737526390Z" level=info msg="StartContainer for \"f1a01c43fd8534dcdc7e1c108ce99e7664042d6b8fff932083fc7c93706d1973\" returns successfully" May 10 00:53:14.760710 kubelet[2098]: I0510 00:53:14.758979 2098 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ed0e620-280e-4cf3-9684-f29b20b91168" path="/var/lib/kubelet/pods/1ed0e620-280e-4cf3-9684-f29b20b91168/volumes" May 10 00:53:14.759703 systemd[1]: cri-containerd-f1a01c43fd8534dcdc7e1c108ce99e7664042d6b8fff932083fc7c93706d1973.scope: Deactivated successfully. May 10 00:53:14.799722 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f1a01c43fd8534dcdc7e1c108ce99e7664042d6b8fff932083fc7c93706d1973-rootfs.mount: Deactivated successfully. May 10 00:53:14.812091 env[1236]: time="2025-05-10T00:53:14.812023882Z" level=info msg="shim disconnected" id=f1a01c43fd8534dcdc7e1c108ce99e7664042d6b8fff932083fc7c93706d1973 May 10 00:53:14.813044 env[1236]: time="2025-05-10T00:53:14.812385840Z" level=warning msg="cleaning up after shim disconnected" id=f1a01c43fd8534dcdc7e1c108ce99e7664042d6b8fff932083fc7c93706d1973 namespace=k8s.io May 10 00:53:14.813238 env[1236]: time="2025-05-10T00:53:14.813203801Z" level=info msg="cleaning up dead shim" May 10 00:53:14.832761 env[1236]: time="2025-05-10T00:53:14.832644785Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:53:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4105 runtime=io.containerd.runc.v2\n" May 10 00:53:15.173798 env[1236]: time="2025-05-10T00:53:15.173738646Z" level=info msg="CreateContainer within sandbox \"b8e946fe67ab907a3bf6cbb48dfc7b6aeb02c7530167dc77bfb44fc6726bd3d4\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 10 
00:53:15.198856 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1203465463.mount: Deactivated successfully. May 10 00:53:15.205868 env[1236]: time="2025-05-10T00:53:15.205809183Z" level=info msg="CreateContainer within sandbox \"b8e946fe67ab907a3bf6cbb48dfc7b6aeb02c7530167dc77bfb44fc6726bd3d4\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ca42813fecde1f349d97a0f76c10a903c5f7ffeb7f9ffb40e259497ff94f3e0c\"" May 10 00:53:15.207249 env[1236]: time="2025-05-10T00:53:15.207209976Z" level=info msg="StartContainer for \"ca42813fecde1f349d97a0f76c10a903c5f7ffeb7f9ffb40e259497ff94f3e0c\"" May 10 00:53:15.220023 kubelet[2098]: W0510 00:53:15.219838 2098 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ed0e620_280e_4cf3_9684_f29b20b91168.slice/cri-containerd-868808a5015e0ccff286fcc48f9ef15a30f87506c5218aab8e2c6268fcc6b4c6.scope WatchSource:0}: container "868808a5015e0ccff286fcc48f9ef15a30f87506c5218aab8e2c6268fcc6b4c6" in namespace "k8s.io": not found May 10 00:53:15.241000 systemd[1]: Started cri-containerd-ca42813fecde1f349d97a0f76c10a903c5f7ffeb7f9ffb40e259497ff94f3e0c.scope. May 10 00:53:15.287664 env[1236]: time="2025-05-10T00:53:15.287593770Z" level=info msg="StartContainer for \"ca42813fecde1f349d97a0f76c10a903c5f7ffeb7f9ffb40e259497ff94f3e0c\" returns successfully" May 10 00:53:15.297058 systemd[1]: cri-containerd-ca42813fecde1f349d97a0f76c10a903c5f7ffeb7f9ffb40e259497ff94f3e0c.scope: Deactivated successfully. 
May 10 00:53:15.328843 env[1236]: time="2025-05-10T00:53:15.328755578Z" level=info msg="shim disconnected" id=ca42813fecde1f349d97a0f76c10a903c5f7ffeb7f9ffb40e259497ff94f3e0c May 10 00:53:15.328843 env[1236]: time="2025-05-10T00:53:15.328818633Z" level=warning msg="cleaning up after shim disconnected" id=ca42813fecde1f349d97a0f76c10a903c5f7ffeb7f9ffb40e259497ff94f3e0c namespace=k8s.io May 10 00:53:15.328843 env[1236]: time="2025-05-10T00:53:15.328837319Z" level=info msg="cleaning up dead shim" May 10 00:53:15.340398 env[1236]: time="2025-05-10T00:53:15.340313780Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:53:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4169 runtime=io.containerd.runc.v2\n" May 10 00:53:16.179076 env[1236]: time="2025-05-10T00:53:16.178980405Z" level=info msg="CreateContainer within sandbox \"b8e946fe67ab907a3bf6cbb48dfc7b6aeb02c7530167dc77bfb44fc6726bd3d4\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 10 00:53:16.204708 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1934607328.mount: Deactivated successfully. May 10 00:53:16.217550 env[1236]: time="2025-05-10T00:53:16.217479280Z" level=info msg="CreateContainer within sandbox \"b8e946fe67ab907a3bf6cbb48dfc7b6aeb02c7530167dc77bfb44fc6726bd3d4\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"5eab61af8cec91b2f16c800decda233dccb7e8b25e9768c203eba2297fe70615\"" May 10 00:53:16.219253 env[1236]: time="2025-05-10T00:53:16.219104255Z" level=info msg="StartContainer for \"5eab61af8cec91b2f16c800decda233dccb7e8b25e9768c203eba2297fe70615\"" May 10 00:53:16.254527 systemd[1]: Started cri-containerd-5eab61af8cec91b2f16c800decda233dccb7e8b25e9768c203eba2297fe70615.scope. May 10 00:53:16.306354 systemd[1]: cri-containerd-5eab61af8cec91b2f16c800decda233dccb7e8b25e9768c203eba2297fe70615.scope: Deactivated successfully. 
May 10 00:53:16.311232 env[1236]: time="2025-05-10T00:53:16.311130816Z" level=info msg="StartContainer for \"5eab61af8cec91b2f16c800decda233dccb7e8b25e9768c203eba2297fe70615\" returns successfully" May 10 00:53:16.352890 env[1236]: time="2025-05-10T00:53:16.352799070Z" level=info msg="shim disconnected" id=5eab61af8cec91b2f16c800decda233dccb7e8b25e9768c203eba2297fe70615 May 10 00:53:16.352890 env[1236]: time="2025-05-10T00:53:16.352868356Z" level=warning msg="cleaning up after shim disconnected" id=5eab61af8cec91b2f16c800decda233dccb7e8b25e9768c203eba2297fe70615 namespace=k8s.io May 10 00:53:16.352890 env[1236]: time="2025-05-10T00:53:16.352888542Z" level=info msg="cleaning up dead shim" May 10 00:53:16.365309 env[1236]: time="2025-05-10T00:53:16.365214423Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:53:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4227 runtime=io.containerd.runc.v2\n" May 10 00:53:16.744789 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5eab61af8cec91b2f16c800decda233dccb7e8b25e9768c203eba2297fe70615-rootfs.mount: Deactivated successfully. May 10 00:53:17.193376 env[1236]: time="2025-05-10T00:53:17.193308961Z" level=info msg="CreateContainer within sandbox \"b8e946fe67ab907a3bf6cbb48dfc7b6aeb02c7530167dc77bfb44fc6726bd3d4\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 10 00:53:17.224492 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2703979477.mount: Deactivated successfully. 
May 10 00:53:17.232839 env[1236]: time="2025-05-10T00:53:17.232776163Z" level=info msg="CreateContainer within sandbox \"b8e946fe67ab907a3bf6cbb48dfc7b6aeb02c7530167dc77bfb44fc6726bd3d4\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f5af2f6e781cce9e988ada9bc51ee12ed6ffea864bfa7a67a5f39869226351b0\""
May 10 00:53:17.235290 env[1236]: time="2025-05-10T00:53:17.233771580Z" level=info msg="StartContainer for \"f5af2f6e781cce9e988ada9bc51ee12ed6ffea864bfa7a67a5f39869226351b0\""
May 10 00:53:17.268581 systemd[1]: Started cri-containerd-f5af2f6e781cce9e988ada9bc51ee12ed6ffea864bfa7a67a5f39869226351b0.scope.
May 10 00:53:17.314743 systemd[1]: cri-containerd-f5af2f6e781cce9e988ada9bc51ee12ed6ffea864bfa7a67a5f39869226351b0.scope: Deactivated successfully.
May 10 00:53:17.318071 env[1236]: time="2025-05-10T00:53:17.317874751Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod62db6ec2_0d98_4280_9ead_23e93cd7e542.slice/cri-containerd-f5af2f6e781cce9e988ada9bc51ee12ed6ffea864bfa7a67a5f39869226351b0.scope/memory.events\": no such file or directory"
May 10 00:53:17.320258 env[1236]: time="2025-05-10T00:53:17.320193478Z" level=info msg="StartContainer for \"f5af2f6e781cce9e988ada9bc51ee12ed6ffea864bfa7a67a5f39869226351b0\" returns successfully"
May 10 00:53:17.354594 env[1236]: time="2025-05-10T00:53:17.354528625Z" level=info msg="shim disconnected" id=f5af2f6e781cce9e988ada9bc51ee12ed6ffea864bfa7a67a5f39869226351b0
May 10 00:53:17.355157 env[1236]: time="2025-05-10T00:53:17.355117829Z" level=warning msg="cleaning up after shim disconnected" id=f5af2f6e781cce9e988ada9bc51ee12ed6ffea864bfa7a67a5f39869226351b0 namespace=k8s.io
May 10 00:53:17.355157 env[1236]: time="2025-05-10T00:53:17.355152270Z" level=info msg="cleaning up dead shim"
May 10 00:53:17.367830 env[1236]: time="2025-05-10T00:53:17.367750414Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:53:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4282 runtime=io.containerd.runc.v2\n"
May 10 00:53:17.744922 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f5af2f6e781cce9e988ada9bc51ee12ed6ffea864bfa7a67a5f39869226351b0-rootfs.mount: Deactivated successfully.
May 10 00:53:17.754821 kubelet[2098]: E0510 00:53:17.754739 2098 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-fkvbf" podUID="e5258db3-1a1a-43dd-b0f7-78af2a1393ab"
May 10 00:53:18.198558 env[1236]: time="2025-05-10T00:53:18.198500726Z" level=info msg="CreateContainer within sandbox \"b8e946fe67ab907a3bf6cbb48dfc7b6aeb02c7530167dc77bfb44fc6726bd3d4\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 10 00:53:18.226643 env[1236]: time="2025-05-10T00:53:18.226578867Z" level=info msg="CreateContainer within sandbox \"b8e946fe67ab907a3bf6cbb48dfc7b6aeb02c7530167dc77bfb44fc6726bd3d4\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3008bf5b15d6a795f890bbd0a452074c8243455498a38685b05d3072ff841017\""
May 10 00:53:18.228154 env[1236]: time="2025-05-10T00:53:18.228109417Z" level=info msg="StartContainer for \"3008bf5b15d6a795f890bbd0a452074c8243455498a38685b05d3072ff841017\""
May 10 00:53:18.265319 systemd[1]: Started cri-containerd-3008bf5b15d6a795f890bbd0a452074c8243455498a38685b05d3072ff841017.scope.
May 10 00:53:18.330849 env[1236]: time="2025-05-10T00:53:18.330780534Z" level=info msg="StartContainer for \"3008bf5b15d6a795f890bbd0a452074c8243455498a38685b05d3072ff841017\" returns successfully"
May 10 00:53:18.385939 kubelet[2098]: W0510 00:53:18.385829 2098 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod62db6ec2_0d98_4280_9ead_23e93cd7e542.slice/cri-containerd-f1a01c43fd8534dcdc7e1c108ce99e7664042d6b8fff932083fc7c93706d1973.scope WatchSource:0}: task f1a01c43fd8534dcdc7e1c108ce99e7664042d6b8fff932083fc7c93706d1973 not found: not found
May 10 00:53:18.745031 systemd[1]: run-containerd-runc-k8s.io-3008bf5b15d6a795f890bbd0a452074c8243455498a38685b05d3072ff841017-runc.9398Av.mount: Deactivated successfully.
May 10 00:53:18.794960 env[1236]: time="2025-05-10T00:53:18.794624967Z" level=info msg="StopPodSandbox for \"561443205474ebf842e1e9597ff36819fe5e66bb713650e99ec5f810470f844c\""
May 10 00:53:18.794960 env[1236]: time="2025-05-10T00:53:18.794763729Z" level=info msg="TearDown network for sandbox \"561443205474ebf842e1e9597ff36819fe5e66bb713650e99ec5f810470f844c\" successfully"
May 10 00:53:18.794960 env[1236]: time="2025-05-10T00:53:18.794817770Z" level=info msg="StopPodSandbox for \"561443205474ebf842e1e9597ff36819fe5e66bb713650e99ec5f810470f844c\" returns successfully"
May 10 00:53:18.797967 env[1236]: time="2025-05-10T00:53:18.795892084Z" level=info msg="RemovePodSandbox for \"561443205474ebf842e1e9597ff36819fe5e66bb713650e99ec5f810470f844c\""
May 10 00:53:18.797967 env[1236]: time="2025-05-10T00:53:18.795956034Z" level=info msg="Forcibly stopping sandbox \"561443205474ebf842e1e9597ff36819fe5e66bb713650e99ec5f810470f844c\""
May 10 00:53:18.797967 env[1236]: time="2025-05-10T00:53:18.796077691Z" level=info msg="TearDown network for sandbox \"561443205474ebf842e1e9597ff36819fe5e66bb713650e99ec5f810470f844c\" successfully"
May 10 00:53:18.804383 env[1236]: time="2025-05-10T00:53:18.804315065Z" level=info msg="RemovePodSandbox \"561443205474ebf842e1e9597ff36819fe5e66bb713650e99ec5f810470f844c\" returns successfully"
May 10 00:53:18.805635 env[1236]: time="2025-05-10T00:53:18.805582323Z" level=info msg="StopPodSandbox for \"7b4742828bf0914845fdb8c7472f1fa98c085d739904da23f11585b8bb0f2b54\""
May 10 00:53:18.806036 env[1236]: time="2025-05-10T00:53:18.805952026Z" level=info msg="TearDown network for sandbox \"7b4742828bf0914845fdb8c7472f1fa98c085d739904da23f11585b8bb0f2b54\" successfully"
May 10 00:53:18.806204 env[1236]: time="2025-05-10T00:53:18.806174994Z" level=info msg="StopPodSandbox for \"7b4742828bf0914845fdb8c7472f1fa98c085d739904da23f11585b8bb0f2b54\" returns successfully"
May 10 00:53:18.806988 env[1236]: time="2025-05-10T00:53:18.806955662Z" level=info msg="RemovePodSandbox for \"7b4742828bf0914845fdb8c7472f1fa98c085d739904da23f11585b8bb0f2b54\""
May 10 00:53:18.807184 env[1236]: time="2025-05-10T00:53:18.807130822Z" level=info msg="Forcibly stopping sandbox \"7b4742828bf0914845fdb8c7472f1fa98c085d739904da23f11585b8bb0f2b54\""
May 10 00:53:18.807381 env[1236]: time="2025-05-10T00:53:18.807353209Z" level=info msg="TearDown network for sandbox \"7b4742828bf0914845fdb8c7472f1fa98c085d739904da23f11585b8bb0f2b54\" successfully"
May 10 00:53:18.819941 env[1236]: time="2025-05-10T00:53:18.819021628Z" level=info msg="RemovePodSandbox \"7b4742828bf0914845fdb8c7472f1fa98c085d739904da23f11585b8bb0f2b54\" returns successfully"
May 10 00:53:18.819941 env[1236]: time="2025-05-10T00:53:18.819744139Z" level=info msg="StopPodSandbox for \"feefc6c1c91be61dd42d085ccb572c81ddc50a25a10054032f777be176b940fb\""
May 10 00:53:18.820217 env[1236]: time="2025-05-10T00:53:18.819935014Z" level=info msg="TearDown network for sandbox \"feefc6c1c91be61dd42d085ccb572c81ddc50a25a10054032f777be176b940fb\" successfully"
May 10 00:53:18.820217 env[1236]: time="2025-05-10T00:53:18.820021600Z" level=info msg="StopPodSandbox for \"feefc6c1c91be61dd42d085ccb572c81ddc50a25a10054032f777be176b940fb\" returns successfully"
May 10 00:53:18.820558 env[1236]: time="2025-05-10T00:53:18.820520507Z" level=info msg="RemovePodSandbox for \"feefc6c1c91be61dd42d085ccb572c81ddc50a25a10054032f777be176b940fb\""
May 10 00:53:18.820662 env[1236]: time="2025-05-10T00:53:18.820566007Z" level=info msg="Forcibly stopping sandbox \"feefc6c1c91be61dd42d085ccb572c81ddc50a25a10054032f777be176b940fb\""
May 10 00:53:18.820724 env[1236]: time="2025-05-10T00:53:18.820674256Z" level=info msg="TearDown network for sandbox \"feefc6c1c91be61dd42d085ccb572c81ddc50a25a10054032f777be176b940fb\" successfully"
May 10 00:53:18.826041 env[1236]: time="2025-05-10T00:53:18.825983357Z" level=info msg="RemovePodSandbox \"feefc6c1c91be61dd42d085ccb572c81ddc50a25a10054032f777be176b940fb\" returns successfully"
May 10 00:53:18.841016 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
May 10 00:53:21.340966 systemd[1]: run-containerd-runc-k8s.io-3008bf5b15d6a795f890bbd0a452074c8243455498a38685b05d3072ff841017-runc.KEuQ4P.mount: Deactivated successfully.
May 10 00:53:21.498954 kubelet[2098]: W0510 00:53:21.496323 2098 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod62db6ec2_0d98_4280_9ead_23e93cd7e542.slice/cri-containerd-ca42813fecde1f349d97a0f76c10a903c5f7ffeb7f9ffb40e259497ff94f3e0c.scope WatchSource:0}: task ca42813fecde1f349d97a0f76c10a903c5f7ffeb7f9ffb40e259497ff94f3e0c not found: not found
May 10 00:53:22.205191 systemd-networkd[1029]: lxc_health: Link UP
May 10 00:53:22.248035 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
May 10 00:53:22.248565 systemd-networkd[1029]: lxc_health: Gained carrier
May 10 00:53:22.607711 kubelet[2098]: I0510 00:53:22.607516 2098 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-bk5m8" podStartSLOduration=8.607489678 podStartE2EDuration="8.607489678s" podCreationTimestamp="2025-05-10 00:53:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:53:19.222477788 +0000 UTC m=+120.609973613" watchObservedRunningTime="2025-05-10 00:53:22.607489678 +0000 UTC m=+123.994985514"
May 10 00:53:23.773638 update_engine[1222]: I0510 00:53:23.772976 1222 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 10 00:53:23.773638 update_engine[1222]: I0510 00:53:23.773329 1222 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 10 00:53:23.773638 update_engine[1222]: I0510 00:53:23.773581 1222 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 10 00:53:23.849158 update_engine[1222]: E0510 00:53:23.847980 1222 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 10 00:53:23.849158 update_engine[1222]: I0510 00:53:23.848136 1222 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
May 10 00:53:23.849158 update_engine[1222]: I0510 00:53:23.848151 1222 omaha_request_action.cc:621] Omaha request response:
May 10 00:53:23.849158 update_engine[1222]: E0510 00:53:23.848289 1222 omaha_request_action.cc:640] Omaha request network transfer failed.
May 10 00:53:23.849158 update_engine[1222]: I0510 00:53:23.848316 1222 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
May 10 00:53:23.849158 update_engine[1222]: I0510 00:53:23.848325 1222 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
May 10 00:53:23.849158 update_engine[1222]: I0510 00:53:23.848330 1222 update_attempter.cc:306] Processing Done.
May 10 00:53:23.849158 update_engine[1222]: E0510 00:53:23.848385 1222 update_attempter.cc:619] Update failed.
May 10 00:53:23.849158 update_engine[1222]: I0510 00:53:23.848397 1222 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
May 10 00:53:23.849158 update_engine[1222]: I0510 00:53:23.848403 1222 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
May 10 00:53:23.849158 update_engine[1222]: I0510 00:53:23.848412 1222 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
May 10 00:53:23.849158 update_engine[1222]: I0510 00:53:23.848516 1222 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
May 10 00:53:23.849158 update_engine[1222]: I0510 00:53:23.848554 1222 omaha_request_action.cc:270] Posting an Omaha request to disabled
May 10 00:53:23.849158 update_engine[1222]: I0510 00:53:23.848563 1222 omaha_request_action.cc:271] Request:
May 10 00:53:23.849158 update_engine[1222]:
May 10 00:53:23.849158 update_engine[1222]:
May 10 00:53:23.850137 update_engine[1222]:
May 10 00:53:23.850137 update_engine[1222]:
May 10 00:53:23.850137 update_engine[1222]:
May 10 00:53:23.850137 update_engine[1222]:
May 10 00:53:23.850137 update_engine[1222]: I0510 00:53:23.848570 1222 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 10 00:53:23.850137 update_engine[1222]: I0510 00:53:23.848855 1222 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 10 00:53:23.850137 update_engine[1222]: I0510 00:53:23.849102 1222 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 10 00:53:23.851067 locksmithd[1265]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
May 10 00:53:23.857696 update_engine[1222]: E0510 00:53:23.857372 1222 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 10 00:53:23.857696 update_engine[1222]: I0510 00:53:23.857514 1222 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
May 10 00:53:23.857696 update_engine[1222]: I0510 00:53:23.857528 1222 omaha_request_action.cc:621] Omaha request response:
May 10 00:53:23.857696 update_engine[1222]: I0510 00:53:23.857539 1222 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
May 10 00:53:23.857696 update_engine[1222]: I0510 00:53:23.857546 1222 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
May 10 00:53:23.857696 update_engine[1222]: I0510 00:53:23.857553 1222 update_attempter.cc:306] Processing Done.
May 10 00:53:23.857696 update_engine[1222]: I0510 00:53:23.857560 1222 update_attempter.cc:310] Error event sent.
May 10 00:53:23.857696 update_engine[1222]: I0510 00:53:23.857573 1222 update_check_scheduler.cc:74] Next update check in 46m0s
May 10 00:53:23.858838 locksmithd[1265]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
May 10 00:53:24.218760 systemd-networkd[1029]: lxc_health: Gained IPv6LL
May 10 00:53:24.612617 kubelet[2098]: W0510 00:53:24.612453 2098 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod62db6ec2_0d98_4280_9ead_23e93cd7e542.slice/cri-containerd-5eab61af8cec91b2f16c800decda233dccb7e8b25e9768c203eba2297fe70615.scope WatchSource:0}: task 5eab61af8cec91b2f16c800decda233dccb7e8b25e9768c203eba2297fe70615 not found: not found
May 10 00:53:25.955297 systemd[1]: run-containerd-runc-k8s.io-3008bf5b15d6a795f890bbd0a452074c8243455498a38685b05d3072ff841017-runc.Rtg9wq.mount: Deactivated successfully.
May 10 00:53:27.726996 kubelet[2098]: W0510 00:53:27.726936 2098 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod62db6ec2_0d98_4280_9ead_23e93cd7e542.slice/cri-containerd-f5af2f6e781cce9e988ada9bc51ee12ed6ffea864bfa7a67a5f39869226351b0.scope WatchSource:0}: task f5af2f6e781cce9e988ada9bc51ee12ed6ffea864bfa7a67a5f39869226351b0 not found: not found
May 10 00:53:28.199970 systemd[1]: run-containerd-runc-k8s.io-3008bf5b15d6a795f890bbd0a452074c8243455498a38685b05d3072ff841017-runc.aIQjU0.mount: Deactivated successfully.
May 10 00:53:28.402271 sshd[3964]: pam_unix(sshd:session): session closed for user core
May 10 00:53:28.407887 systemd[1]: sshd@23-10.128.0.57:22-147.75.109.163:50500.service: Deactivated successfully.
May 10 00:53:28.409275 systemd[1]: session-24.scope: Deactivated successfully.
May 10 00:53:28.410342 systemd-logind[1219]: Session 24 logged out. Waiting for processes to exit.
May 10 00:53:28.412430 systemd-logind[1219]: Removed session 24.