May 10 00:46:06.139417 kernel: Linux version 5.15.181-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri May 9 23:12:23 -00 2025 May 10 00:46:06.139476 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=39569409b30be1967efab22b453b92a780dcf0fe8e1448a18bf235b5cf33e54a May 10 00:46:06.139496 kernel: BIOS-provided physical RAM map: May 10 00:46:06.139510 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved May 10 00:46:06.139524 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable May 10 00:46:06.139537 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved May 10 00:46:06.139556 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable May 10 00:46:06.139571 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved May 10 00:46:06.139584 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bd277fff] usable May 10 00:46:06.139597 kernel: BIOS-e820: [mem 0x00000000bd278000-0x00000000bd281fff] ACPI data May 10 00:46:06.139611 kernel: BIOS-e820: [mem 0x00000000bd282000-0x00000000bf8ecfff] usable May 10 00:46:06.139624 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bfb6cfff] reserved May 10 00:46:06.139637 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data May 10 00:46:06.139651 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS May 10 00:46:06.139686 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable May 10 00:46:06.139704 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved May 10 00:46:06.139719 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable May 10 00:46:06.139734 kernel: NX (Execute Disable) protection: active May 10 00:46:06.139748 kernel: efi: EFI v2.70 by EDK II May 10 00:46:06.139763 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000 RNG=0xbfb73018 TPMEventLog=0xbd278018 May 10 00:46:06.139778 kernel: random: crng init done May 10 00:46:06.139793 kernel: SMBIOS 2.4 present. 
May 10 00:46:06.139812 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/12/2025 May 10 00:46:06.139826 kernel: Hypervisor detected: KVM May 10 00:46:06.139841 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 May 10 00:46:06.139856 kernel: kvm-clock: cpu 0, msr 188196001, primary cpu clock May 10 00:46:06.139870 kernel: kvm-clock: using sched offset of 13288029369 cycles May 10 00:46:06.139900 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns May 10 00:46:06.139918 kernel: tsc: Detected 2299.998 MHz processor May 10 00:46:06.139934 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved May 10 00:46:06.139950 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable May 10 00:46:06.139965 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000 May 10 00:46:06.139984 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT May 10 00:46:06.140000 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000 May 10 00:46:06.140015 kernel: Using GB pages for direct mapping May 10 00:46:06.140029 kernel: Secure boot disabled May 10 00:46:06.140044 kernel: ACPI: Early table checksum verification disabled May 10 00:46:06.140073 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google) May 10 00:46:06.140089 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013) May 10 00:46:06.140119 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001) May 10 00:46:06.140147 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001) May 10 00:46:06.140164 kernel: ACPI: FACS 0x00000000BFBF2000 000040 May 10 00:46:06.140181 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20240322) May 10 00:46:06.140196 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001) May 10 00:46:06.140213 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001) May 10 00:46:06.140259 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001) May 10 00:46:06.140293 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001) May 10 00:46:06.140310 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001) May 10 00:46:06.140326 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3] May 10 00:46:06.140343 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63] May 10 00:46:06.140359 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f] May 10 00:46:06.140376 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315] May 10 00:46:06.140393 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033] May 10 00:46:06.140409 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7] May 10 00:46:06.140426 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075] May 10 00:46:06.140448 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f] May 10 00:46:06.140464 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027] May 10 00:46:06.140481 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 May 10 00:46:06.140497 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 May 10 00:46:06.140514 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] May 10 00:46:06.140531 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff] May 10 
00:46:06.140548 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff] May 10 00:46:06.140565 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff] May 10 00:46:06.140582 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff] May 10 00:46:06.140602 kernel: NODE_DATA(0) allocated [mem 0x21fffa000-0x21fffffff] May 10 00:46:06.140619 kernel: Zone ranges: May 10 00:46:06.140636 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] May 10 00:46:06.140652 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] May 10 00:46:06.140669 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff] May 10 00:46:06.140686 kernel: Movable zone start for each node May 10 00:46:06.140702 kernel: Early memory node ranges May 10 00:46:06.140719 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff] May 10 00:46:06.140735 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff] May 10 00:46:06.140757 kernel: node 0: [mem 0x0000000000100000-0x00000000bd277fff] May 10 00:46:06.140774 kernel: node 0: [mem 0x00000000bd282000-0x00000000bf8ecfff] May 10 00:46:06.140790 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff] May 10 00:46:06.140807 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff] May 10 00:46:06.140824 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff] May 10 00:46:06.140840 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 10 00:46:06.140857 kernel: On node 0, zone DMA: 11 pages in unavailable ranges May 10 00:46:06.140873 kernel: On node 0, zone DMA: 104 pages in unavailable ranges May 10 00:46:06.140890 kernel: On node 0, zone DMA32: 10 pages in unavailable ranges May 10 00:46:06.140910 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges May 10 00:46:06.140927 kernel: On node 0, zone Normal: 32 pages in unavailable ranges May 10 00:46:06.140943 kernel: ACPI: PM-Timer IO Port: 0xb008 May 10 00:46:06.140960 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) May 10 00:46:06.140977 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 May 10 00:46:06.140993 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) May 10 00:46:06.141009 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) May 10 00:46:06.141025 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) May 10 00:46:06.141042 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) May 10 00:46:06.141089 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 10 00:46:06.141106 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs May 10 00:46:06.141123 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices May 10 00:46:06.141139 kernel: Booting paravirtualized kernel on KVM May 10 00:46:06.141155 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 10 00:46:06.141173 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1 May 10 00:46:06.141189 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576 May 10 00:46:06.141206 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152 May 10 00:46:06.141222 kernel: pcpu-alloc: [0] 0 1 May 10 00:46:06.141243 kernel: kvm-guest: PV spinlocks enabled May 10 00:46:06.141260 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) May 10 00:46:06.141291 kernel: Built 1 zonelists, mobility 
grouping on. Total pages: 1932270 May 10 00:46:06.141307 kernel: Policy zone: Normal May 10 00:46:06.141326 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=39569409b30be1967efab22b453b92a780dcf0fe8e1448a18bf235b5cf33e54a May 10 00:46:06.141343 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 10 00:46:06.141358 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) May 10 00:46:06.141375 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 10 00:46:06.141392 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 10 00:46:06.141414 kernel: Memory: 7515412K/7860544K available (12294K kernel code, 2276K rwdata, 13724K rodata, 47456K init, 4124K bss, 344872K reserved, 0K cma-reserved) May 10 00:46:06.141431 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 May 10 00:46:06.141447 kernel: Kernel/User page tables isolation: enabled May 10 00:46:06.141463 kernel: ftrace: allocating 34584 entries in 136 pages May 10 00:46:06.141480 kernel: ftrace: allocated 136 pages with 2 groups May 10 00:46:06.141496 kernel: rcu: Hierarchical RCU implementation. May 10 00:46:06.141513 kernel: rcu: RCU event tracing is enabled. May 10 00:46:06.141530 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. May 10 00:46:06.141552 kernel: Rude variant of Tasks RCU enabled. May 10 00:46:06.141581 kernel: Tracing variant of Tasks RCU enabled. May 10 00:46:06.141599 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. May 10 00:46:06.141619 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 May 10 00:46:06.141637 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 May 10 00:46:06.141654 kernel: Console: colour dummy device 80x25 May 10 00:46:06.141670 kernel: printk: console [ttyS0] enabled May 10 00:46:06.141686 kernel: ACPI: Core revision 20210730 May 10 00:46:06.141702 kernel: APIC: Switch to symmetric I/O mode setup May 10 00:46:06.141719 kernel: x2apic enabled May 10 00:46:06.141739 kernel: Switched APIC routing to physical x2apic. May 10 00:46:06.141756 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1 May 10 00:46:06.141774 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns May 10 00:46:06.141791 kernel: Calibrating delay loop (skipped) preset value.. 
4599.99 BogoMIPS (lpj=2299998) May 10 00:46:06.141808 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024 May 10 00:46:06.141825 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4 May 10 00:46:06.141843 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 10 00:46:06.141864 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit May 10 00:46:06.141882 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall May 10 00:46:06.141899 kernel: Spectre V2 : Mitigation: IBRS May 10 00:46:06.141917 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT May 10 00:46:06.141935 kernel: RETBleed: Mitigation: IBRS May 10 00:46:06.141952 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier May 10 00:46:06.141970 kernel: Spectre V2 : User space: Mitigation: STIBP via seccomp and prctl May 10 00:46:06.141988 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp May 10 00:46:06.142006 kernel: MDS: Mitigation: Clear CPU buffers May 10 00:46:06.142027 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode May 10 00:46:06.142045 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' May 10 00:46:06.142094 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' May 10 00:46:06.142110 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' May 10 00:46:06.142125 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 May 10 00:46:06.142141 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. May 10 00:46:06.142156 kernel: Freeing SMP alternatives memory: 32K May 10 00:46:06.142173 kernel: pid_max: default: 32768 minimum: 301 May 10 00:46:06.142189 kernel: LSM: Security Framework initializing May 10 00:46:06.142211 kernel: SELinux: Initializing. May 10 00:46:06.142228 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) May 10 00:46:06.142246 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) May 10 00:46:06.142265 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0) May 10 00:46:06.142291 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. May 10 00:46:06.142309 kernel: signal: max sigframe size: 1776 May 10 00:46:06.142326 kernel: rcu: Hierarchical SRCU implementation. May 10 00:46:06.142346 kernel: NMI watchdog: Perf NMI watchdog permanently disabled May 10 00:46:06.142364 kernel: smp: Bringing up secondary CPUs ... May 10 00:46:06.142385 kernel: x86: Booting SMP configuration: May 10 00:46:06.142403 kernel: .... node #0, CPUs: #1 May 10 00:46:06.142421 kernel: kvm-clock: cpu 1, msr 188196041, secondary cpu clock May 10 00:46:06.142439 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. May 10 00:46:06.142459 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
May 10 00:46:06.142478 kernel: smp: Brought up 1 node, 2 CPUs May 10 00:46:06.142496 kernel: smpboot: Max logical packages: 1 May 10 00:46:06.142526 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) May 10 00:46:06.142548 kernel: devtmpfs: initialized May 10 00:46:06.142566 kernel: x86/mm: Memory block size: 128MB May 10 00:46:06.142583 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes) May 10 00:46:06.142601 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 10 00:46:06.142619 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) May 10 00:46:06.142637 kernel: pinctrl core: initialized pinctrl subsystem May 10 00:46:06.142656 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 10 00:46:06.142674 kernel: audit: initializing netlink subsys (disabled) May 10 00:46:06.142691 kernel: audit: type=2000 audit(1746837965.309:1): state=initialized audit_enabled=0 res=1 May 10 00:46:06.142711 kernel: thermal_sys: Registered thermal governor 'step_wise' May 10 00:46:06.142727 kernel: thermal_sys: Registered thermal governor 'user_space' May 10 00:46:06.142744 kernel: cpuidle: using governor menu May 10 00:46:06.142762 kernel: ACPI: bus type PCI registered May 10 00:46:06.142780 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 10 00:46:06.142798 kernel: dca service started, version 1.12.1 May 10 00:46:06.142816 kernel: PCI: Using configuration type 1 for base access May 10 00:46:06.142834 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. May 10 00:46:06.142852 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages May 10 00:46:06.142874 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages May 10 00:46:06.142891 kernel: ACPI: Added _OSI(Module Device) May 10 00:46:06.142909 kernel: ACPI: Added _OSI(Processor Device) May 10 00:46:06.142927 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 10 00:46:06.142945 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 10 00:46:06.142963 kernel: ACPI: Added _OSI(Linux-Dell-Video) May 10 00:46:06.142981 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) May 10 00:46:06.142999 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) May 10 00:46:06.143017 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded May 10 00:46:06.143036 kernel: ACPI: Interpreter enabled May 10 00:46:06.145021 kernel: ACPI: PM: (supports S0 S3 S5) May 10 00:46:06.145073 kernel: ACPI: Using IOAPIC for interrupt routing May 10 00:46:06.145090 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 10 00:46:06.145106 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F May 10 00:46:06.145123 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 10 00:46:06.145408 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] May 10 00:46:06.145589 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge. 
May 10 00:46:06.145632 kernel: PCI host bridge to bus 0000:00 May 10 00:46:06.145821 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 10 00:46:06.146011 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] May 10 00:46:06.146192 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 10 00:46:06.146357 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window] May 10 00:46:06.146511 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 10 00:46:06.146707 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 May 10 00:46:06.146896 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 May 10 00:46:06.147105 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 May 10 00:46:06.147289 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI May 10 00:46:06.147481 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 May 10 00:46:06.147655 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f] May 10 00:46:06.147831 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f] May 10 00:46:06.148030 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 May 10 00:46:06.154426 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f] May 10 00:46:06.154631 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f] May 10 00:46:06.154855 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 May 10 00:46:06.155032 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f] May 10 00:46:06.155226 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f] May 10 00:46:06.155250 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 May 10 00:46:06.155285 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 May 10 00:46:06.155303 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 May 10 00:46:06.155322 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 May 10 00:46:06.155340 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 May 10 00:46:06.155358 kernel: iommu: Default domain type: Translated May 10 00:46:06.155376 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 10 00:46:06.155393 kernel: vgaarb: loaded May 10 00:46:06.155411 kernel: pps_core: LinuxPPS API ver. 1 registered May 10 00:46:06.155429 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti May 10 00:46:06.155451 kernel: PTP clock support registered May 10 00:46:06.155469 kernel: Registered efivars operations May 10 00:46:06.155486 kernel: PCI: Using ACPI for IRQ routing May 10 00:46:06.155504 kernel: PCI: pci_cache_line_size set to 64 bytes May 10 00:46:06.155521 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff] May 10 00:46:06.155539 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff] May 10 00:46:06.155556 kernel: e820: reserve RAM buffer [mem 0xbd278000-0xbfffffff] May 10 00:46:06.155573 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff] May 10 00:46:06.155590 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff] May 10 00:46:06.155610 kernel: clocksource: Switched to clocksource kvm-clock May 10 00:46:06.155628 kernel: VFS: Disk quotas dquot_6.6.0 May 10 00:46:06.155646 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 10 00:46:06.155663 kernel: pnp: PnP ACPI init May 10 00:46:06.155681 kernel: pnp: PnP ACPI: found 7 devices May 10 00:46:06.155698 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 10 00:46:06.155716 kernel: NET: Registered PF_INET protocol family May 10 00:46:06.155734 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) May 10 00:46:06.155752 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) May 10 00:46:06.155773 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 10 00:46:06.155791 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) May 10 00:46:06.155809 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear) May 10 00:46:06.155826 kernel: TCP: Hash tables configured (established 65536 bind 65536) May 10 00:46:06.155844 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) May 10 00:46:06.155862 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) May 10 00:46:06.155879 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 10 00:46:06.155896 kernel: NET: Registered PF_XDP protocol family May 10 00:46:06.156091 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] May 10 00:46:06.156249 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] May 10 00:46:06.156406 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] May 10 00:46:06.156553 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window] May 10 00:46:06.156723 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers May 10 00:46:06.156747 kernel: PCI: CLS 0 bytes, default 64 May 10 00:46:06.156764 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) May 10 00:46:06.156782 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB) May 10 00:46:06.156806 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer May 10 00:46:06.156823 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns May 10 00:46:06.156841 kernel: clocksource: Switched to clocksource tsc May 10 00:46:06.156858 kernel: Initialise system trusted keyrings May 10 00:46:06.156876 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 May 10 00:46:06.156894 kernel: Key type asymmetric registered May 10 00:46:06.156911 kernel: Asymmetric key parser 'x509' registered May 10 00:46:06.156928 kernel: Block layer SCSI 
generic (bsg) driver version 0.4 loaded (major 249) May 10 00:46:06.156949 kernel: io scheduler mq-deadline registered May 10 00:46:06.156965 kernel: io scheduler kyber registered May 10 00:46:06.156982 kernel: io scheduler bfq registered May 10 00:46:06.156999 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 10 00:46:06.157018 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 May 10 00:46:06.164562 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver May 10 00:46:06.164602 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 May 10 00:46:06.164783 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver May 10 00:46:06.164808 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 May 10 00:46:06.164982 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver May 10 00:46:06.165006 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 10 00:46:06.165023 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 10 00:46:06.165039 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A May 10 00:46:06.165069 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A May 10 00:46:06.165094 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A May 10 00:46:06.179569 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0) May 10 00:46:06.179620 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 May 10 00:46:06.179647 kernel: i8042: Warning: Keylock active May 10 00:46:06.179665 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 May 10 00:46:06.179683 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 10 00:46:06.179862 kernel: rtc_cmos 00:00: RTC can wake from S4 May 10 00:46:06.180016 kernel: rtc_cmos 00:00: registered as rtc0 May 10 00:46:06.182625 kernel: rtc_cmos 00:00: setting system clock to 2025-05-10T00:46:05 UTC (1746837965) May 10 00:46:06.182818 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram May 10 00:46:06.182843 kernel: intel_pstate: CPU model not supported May 10 00:46:06.182869 kernel: pstore: Registered efi as persistent store backend May 10 00:46:06.182887 kernel: NET: Registered PF_INET6 protocol family May 10 00:46:06.182905 kernel: Segment Routing with IPv6 May 10 00:46:06.182923 kernel: In-situ OAM (IOAM) with IPv6 May 10 00:46:06.182939 kernel: NET: Registered PF_PACKET protocol family May 10 00:46:06.182957 kernel: Key type dns_resolver registered May 10 00:46:06.182975 kernel: IPI shorthand broadcast: enabled May 10 00:46:06.182993 kernel: sched_clock: Marking stable (758631136, 126816446)->(901822820, -16375238) May 10 00:46:06.183011 kernel: registered taskstats version 1 May 10 00:46:06.183032 kernel: Loading compiled-in X.509 certificates May 10 00:46:06.183050 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 May 10 00:46:06.183085 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.181-flatcar: 0c62a22cd9157131d2e97d5a2e1bd9023e187117' May 10 00:46:06.183103 kernel: Key type .fscrypt registered May 10 00:46:06.183120 kernel: Key type fscrypt-provisioning registered May 10 00:46:06.183139 kernel: pstore: Using crash dump compression: deflate May 10 00:46:06.183157 kernel: ima: Allocated hash algorithm: sha1 May 10 00:46:06.183176 kernel: ima: No architecture policies found May 10 00:46:06.183193 kernel: clk: Disabling unused clocks May 10 00:46:06.183215 kernel: Freeing unused kernel image (initmem) memory: 47456K May 10 
00:46:06.183233 kernel: Write protecting the kernel read-only data: 28672k May 10 00:46:06.183250 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K May 10 00:46:06.183275 kernel: Freeing unused kernel image (rodata/data gap) memory: 612K May 10 00:46:06.183294 kernel: Run /init as init process May 10 00:46:06.183312 kernel: with arguments: May 10 00:46:06.183335 kernel: /init May 10 00:46:06.183353 kernel: with environment: May 10 00:46:06.183370 kernel: HOME=/ May 10 00:46:06.183392 kernel: TERM=linux May 10 00:46:06.183408 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 10 00:46:06.183432 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 10 00:46:06.183455 systemd[1]: Detected virtualization kvm. May 10 00:46:06.183474 systemd[1]: Detected architecture x86-64. May 10 00:46:06.183493 systemd[1]: Running in initrd. May 10 00:46:06.183511 systemd[1]: No hostname configured, using default hostname. May 10 00:46:06.183533 systemd[1]: Hostname set to <localhost>. May 10 00:46:06.183553 systemd[1]: Initializing machine ID from VM UUID. May 10 00:46:06.183571 systemd[1]: Queued start job for default target initrd.target. May 10 00:46:06.183590 systemd[1]: Started systemd-ask-password-console.path. May 10 00:46:06.183608 systemd[1]: Reached target cryptsetup.target. May 10 00:46:06.183627 systemd[1]: Reached target paths.target. May 10 00:46:06.183645 systemd[1]: Reached target slices.target. May 10 00:46:06.183664 systemd[1]: Reached target swap.target. May 10 00:46:06.183682 systemd[1]: Reached target timers.target. May 10 00:46:06.183706 systemd[1]: Listening on iscsid.socket. May 10 00:46:06.183724 systemd[1]: Listening on iscsiuio.socket. May 10 00:46:06.183743 systemd[1]: Listening on systemd-journald-audit.socket. May 10 00:46:06.183762 systemd[1]: Listening on systemd-journald-dev-log.socket. May 10 00:46:06.183781 systemd[1]: Listening on systemd-journald.socket. May 10 00:46:06.183800 systemd[1]: Listening on systemd-networkd.socket. May 10 00:46:06.183818 systemd[1]: Listening on systemd-udevd-control.socket. May 10 00:46:06.183841 systemd[1]: Listening on systemd-udevd-kernel.socket. May 10 00:46:06.183860 systemd[1]: Reached target sockets.target. May 10 00:46:06.183899 systemd[1]: Starting kmod-static-nodes.service... May 10 00:46:06.183921 systemd[1]: Finished network-cleanup.service. May 10 00:46:06.183941 systemd[1]: Starting systemd-fsck-usr.service... May 10 00:46:06.183981 systemd[1]: Starting systemd-journald.service... May 10 00:46:06.184001 systemd[1]: Starting systemd-modules-load.service... May 10 00:46:06.184024 systemd[1]: Starting systemd-resolved.service... May 10 00:46:06.184044 systemd[1]: Starting systemd-vconsole-setup.service... May 10 00:46:06.189764 systemd[1]: Finished kmod-static-nodes.service. May 10 00:46:06.189797 systemd[1]: Finished systemd-fsck-usr.service. May 10 00:46:06.189819 kernel: audit: type=1130 audit(1746837966.147:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:06.189840 systemd[1]: Starting systemd-tmpfiles-setup-dev.service. 
May 10 00:46:06.189858 kernel: audit: type=1130 audit(1746837966.156:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:06.189877 systemd[1]: Finished systemd-vconsole-setup.service. May 10 00:46:06.189903 kernel: audit: type=1130 audit(1746837966.172:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:06.189921 systemd[1]: Starting dracut-cmdline-ask.service... May 10 00:46:06.189938 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. May 10 00:46:06.189962 systemd-journald[190]: Journal started May 10 00:46:06.190174 systemd-journald[190]: Runtime Journal (/run/log/journal/10869f995603d199b34b98fd129ca41f) is 8.0M, max 148.8M, 140.8M free. May 10 00:46:06.147000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:06.156000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:06.172000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:06.177117 systemd-modules-load[191]: Inserted module 'overlay' May 10 00:46:06.197330 systemd[1]: Started systemd-journald.service. May 10 00:46:06.197406 kernel: audit: type=1130 audit(1746837966.191:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:06.191000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:06.203715 kernel: audit: type=1130 audit(1746837966.196:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:06.196000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:06.218216 systemd[1]: Finished dracut-cmdline-ask.service. May 10 00:46:06.229720 kernel: audit: type=1130 audit(1746837966.221:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:06.221000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:06.223745 systemd[1]: Starting dracut-cmdline.service... May 10 00:46:06.256963 systemd-resolved[192]: Positive Trust Anchors: May 10 00:46:06.262246 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
May 10 00:46:06.260108 systemd-resolved[192]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 10 00:46:06.260176 systemd-resolved[192]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 10 00:46:06.274197 kernel: Bridge firewalling registered May 10 00:46:06.274246 dracut-cmdline[206]: dracut-dracut-053 May 10 00:46:06.274246 dracut-cmdline[206]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=39569409b30be1967efab22b453b92a780dcf0fe8e1448a18bf235b5cf33e54a May 10 00:46:06.271881 systemd-modules-load[191]: Inserted module 'br_netfilter' May 10 00:46:06.274955 systemd-resolved[192]: Defaulting to hostname 'linux'. May 10 00:46:06.276912 systemd[1]: Started systemd-resolved.service. May 10 00:46:06.306000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:06.307215 systemd[1]: Reached target nss-lookup.target. May 10 00:46:06.315178 kernel: audit: type=1130 audit(1746837966.306:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:06.315217 kernel: SCSI subsystem initialized May 10 00:46:06.333676 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 10 00:46:06.333753 kernel: device-mapper: uevent: version 1.0.3 May 10 00:46:06.335596 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com May 10 00:46:06.340438 systemd-modules-load[191]: Inserted module 'dm_multipath' May 10 00:46:06.341632 systemd[1]: Finished systemd-modules-load.service. May 10 00:46:06.353000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:06.355179 systemd[1]: Starting systemd-sysctl.service... May 10 00:46:06.364225 kernel: audit: type=1130 audit(1746837966.353:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:06.370287 systemd[1]: Finished systemd-sysctl.service. May 10 00:46:06.382208 kernel: audit: type=1130 audit(1746837966.373:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:06.382249 kernel: Loading iSCSI transport class v2.0-870. 
May 10 00:46:06.373000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:06.403164 kernel: iscsi: registered transport (tcp) May 10 00:46:06.431101 kernel: iscsi: registered transport (qla4xxx) May 10 00:46:06.431188 kernel: QLogic iSCSI HBA Driver May 10 00:46:06.475976 systemd[1]: Finished dracut-cmdline.service. May 10 00:46:06.475000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:06.478222 systemd[1]: Starting dracut-pre-udev.service... May 10 00:46:06.538104 kernel: raid6: avx2x4 gen() 18096 MB/s May 10 00:46:06.552098 kernel: raid6: avx2x4 xor() 7903 MB/s May 10 00:46:06.569100 kernel: raid6: avx2x2 gen() 18038 MB/s May 10 00:46:06.586105 kernel: raid6: avx2x2 xor() 18534 MB/s May 10 00:46:06.603100 kernel: raid6: avx2x1 gen() 13491 MB/s May 10 00:46:06.620098 kernel: raid6: avx2x1 xor() 15982 MB/s May 10 00:46:06.637138 kernel: raid6: sse2x4 gen() 10809 MB/s May 10 00:46:06.654152 kernel: raid6: sse2x4 xor() 6208 MB/s May 10 00:46:06.671124 kernel: raid6: sse2x2 gen() 10882 MB/s May 10 00:46:06.688105 kernel: raid6: sse2x2 xor() 7367 MB/s May 10 00:46:06.705126 kernel: raid6: sse2x1 gen() 9650 MB/s May 10 00:46:06.722470 kernel: raid6: sse2x1 xor() 5135 MB/s May 10 00:46:06.722539 kernel: raid6: using algorithm avx2x4 gen() 18096 MB/s May 10 00:46:06.722564 kernel: raid6: .... xor() 7903 MB/s, rmw enabled May 10 00:46:06.723164 kernel: raid6: using avx2x2 recovery algorithm May 10 00:46:06.739100 kernel: xor: automatically using best checksumming function avx May 10 00:46:06.849103 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no May 10 00:46:06.861767 systemd[1]: Finished dracut-pre-udev.service. May 10 00:46:06.861000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:06.862000 audit: BPF prog-id=7 op=LOAD May 10 00:46:06.862000 audit: BPF prog-id=8 op=LOAD May 10 00:46:06.864210 systemd[1]: Starting systemd-udevd.service... May 10 00:46:06.881848 systemd-udevd[389]: Using default interface naming scheme 'v252'. May 10 00:46:06.889224 systemd[1]: Started systemd-udevd.service. May 10 00:46:06.907000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:06.909931 systemd[1]: Starting dracut-pre-trigger.service... May 10 00:46:06.926858 dracut-pre-trigger[400]: rd.md=0: removing MD RAID activation May 10 00:46:06.972996 systemd[1]: Finished dracut-pre-trigger.service. May 10 00:46:06.972000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:06.974439 systemd[1]: Starting systemd-udev-trigger.service... May 10 00:46:07.047708 systemd[1]: Finished systemd-udev-trigger.service. May 10 00:46:07.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 10 00:46:07.127101 kernel: cryptd: max_cpu_qlen set to 1000 May 10 00:46:07.163111 kernel: AVX2 version of gcm_enc/dec engaged. May 10 00:46:07.185778 kernel: scsi host0: Virtio SCSI HBA May 10 00:46:07.185973 kernel: AES CTR mode by8 optimization enabled May 10 00:46:07.228088 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 May 10 00:46:07.311891 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB) May 10 00:46:07.368763 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks May 10 00:46:07.368991 kernel: sd 0:0:1:0: [sda] Write Protect is off May 10 00:46:07.369257 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 May 10 00:46:07.369437 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA May 10 00:46:07.369599 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 10 00:46:07.369616 kernel: GPT:17805311 != 25165823 May 10 00:46:07.369631 kernel: GPT:Alternate GPT header not at the end of the disk. May 10 00:46:07.369646 kernel: GPT:17805311 != 25165823 May 10 00:46:07.369671 kernel: GPT: Use GNU Parted to correct GPT errors. May 10 00:46:07.369692 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 10 00:46:07.369709 kernel: sd 0:0:1:0: [sda] Attached SCSI disk May 10 00:46:07.422090 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (442) May 10 00:46:07.434257 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. May 10 00:46:07.459277 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. May 10 00:46:07.477722 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. May 10 00:46:07.498306 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. May 10 00:46:07.517417 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 10 00:46:07.529580 systemd[1]: Starting disk-uuid.service... May 10 00:46:07.549415 disk-uuid[510]: Primary Header is updated. May 10 00:46:07.549415 disk-uuid[510]: Secondary Entries is updated. May 10 00:46:07.549415 disk-uuid[510]: Secondary Header is updated. May 10 00:46:07.575236 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 10 00:46:07.588110 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 10 00:46:07.601163 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 10 00:46:08.599987 disk-uuid[511]: The operation has completed successfully. May 10 00:46:08.609225 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 10 00:46:08.665541 systemd[1]: disk-uuid.service: Deactivated successfully. May 10 00:46:08.672000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:08.672000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:08.665679 systemd[1]: Finished disk-uuid.service. May 10 00:46:08.684790 systemd[1]: Starting verity-setup.service... May 10 00:46:08.711196 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" May 10 00:46:08.786718 systemd[1]: Found device dev-mapper-usr.device. May 10 00:46:08.788492 systemd[1]: Mounting sysusr-usr.mount... May 10 00:46:08.809657 systemd[1]: Finished verity-setup.service. 
May 10 00:46:08.830000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:08.889336 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. May 10 00:46:08.889864 systemd[1]: Mounted sysusr-usr.mount. May 10 00:46:08.890288 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. May 10 00:46:08.891243 systemd[1]: Starting ignition-setup.service... May 10 00:46:08.952239 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 10 00:46:08.952281 kernel: BTRFS info (device sda6): using free space tree May 10 00:46:08.952307 kernel: BTRFS info (device sda6): has skinny extents May 10 00:46:08.952329 kernel: BTRFS info (device sda6): enabling ssd optimizations May 10 00:46:08.946075 systemd[1]: Starting parse-ip-for-networkd.service... May 10 00:46:08.979644 systemd[1]: Finished ignition-setup.service. May 10 00:46:08.978000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:08.981035 systemd[1]: Starting ignition-fetch-offline.service... May 10 00:46:09.059405 systemd[1]: Finished parse-ip-for-networkd.service. May 10 00:46:09.058000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:09.059000 audit: BPF prog-id=9 op=LOAD May 10 00:46:09.061523 systemd[1]: Starting systemd-networkd.service... May 10 00:46:09.096507 systemd-networkd[685]: lo: Link UP May 10 00:46:09.096523 systemd-networkd[685]: lo: Gained carrier May 10 00:46:09.103000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:09.097469 systemd-networkd[685]: Enumeration completed May 10 00:46:09.097603 systemd[1]: Started systemd-networkd.service. May 10 00:46:09.098099 systemd-networkd[685]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 10 00:46:09.152000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:09.100697 systemd-networkd[685]: eth0: Link UP May 10 00:46:09.178000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:09.193321 iscsid[694]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi May 10 00:46:09.193321 iscsid[694]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log May 10 00:46:09.193321 iscsid[694]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. May 10 00:46:09.193321 iscsid[694]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. 
May 10 00:46:09.193321 iscsid[694]: If using hardware iscsi like qla4xxx this message can be ignored. May 10 00:46:09.193321 iscsid[694]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi May 10 00:46:09.193321 iscsid[694]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf May 10 00:46:09.232000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:09.100705 systemd-networkd[685]: eth0: Gained carrier May 10 00:46:09.253191 ignition[601]: Ignition 2.14.0 May 10 00:46:09.342000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:09.104524 systemd[1]: Reached target network.target. May 10 00:46:09.356000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:09.253209 ignition[601]: Stage: fetch-offline May 10 00:46:09.112171 systemd-networkd[685]: eth0: Overlong DHCP hostname received, shortened from 'ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403.c.flatcar-212911.internal' to 'ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403' May 10 00:46:09.253299 ignition[601]: reading system config file "/usr/lib/ignition/base.d/base.ign" May 10 00:46:09.112191 systemd-networkd[685]: eth0: DHCPv4 address 10.128.0.77/32, gateway 10.128.0.1 acquired from 169.254.169.254 May 10 00:46:09.253342 ignition[601]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 May 10 00:46:09.127289 systemd[1]: Starting iscsiuio.service... May 10 00:46:09.427000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:09.273167 ignition[601]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" May 10 00:46:09.136654 systemd[1]: Started iscsiuio.service. May 10 00:46:09.273374 ignition[601]: parsed url from cmdline: "" May 10 00:46:09.154523 systemd[1]: Starting iscsid.service... May 10 00:46:09.273383 ignition[601]: no config URL provided May 10 00:46:09.172297 systemd[1]: Started iscsid.service. May 10 00:46:09.472000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:09.273394 ignition[601]: reading system config file "/usr/lib/ignition/user.ign" May 10 00:46:09.180555 systemd[1]: Starting dracut-initqueue.service... May 10 00:46:09.273405 ignition[601]: no config at "/usr/lib/ignition/user.ign" May 10 00:46:09.200949 systemd[1]: Finished dracut-initqueue.service. May 10 00:46:09.526000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:09.273416 ignition[601]: failed to fetch config: resource requires networking May 10 00:46:09.233656 systemd[1]: Reached target remote-fs-pre.target. 
May 10 00:46:09.273573 ignition[601]: Ignition finished successfully May 10 00:46:09.276353 systemd[1]: Reached target remote-cryptsetup.target. May 10 00:46:09.369849 ignition[709]: Ignition 2.14.0 May 10 00:46:09.286401 systemd[1]: Reached target remote-fs.target. May 10 00:46:09.369860 ignition[709]: Stage: fetch May 10 00:46:09.305476 systemd[1]: Starting dracut-pre-mount.service... May 10 00:46:09.370033 ignition[709]: reading system config file "/usr/lib/ignition/base.d/base.ign" May 10 00:46:09.328679 systemd[1]: Finished ignition-fetch-offline.service. May 10 00:46:09.370094 ignition[709]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 May 10 00:46:09.343608 systemd[1]: Finished dracut-pre-mount.service. May 10 00:46:09.377671 ignition[709]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" May 10 00:46:09.358577 systemd[1]: Starting ignition-fetch.service... May 10 00:46:09.377863 ignition[709]: parsed url from cmdline: "" May 10 00:46:09.387697 unknown[709]: fetched base config from "system" May 10 00:46:09.377871 ignition[709]: no config URL provided May 10 00:46:09.387712 unknown[709]: fetched base config from "system" May 10 00:46:09.377879 ignition[709]: reading system config file "/usr/lib/ignition/user.ign" May 10 00:46:09.387730 unknown[709]: fetched user config from "gcp" May 10 00:46:09.377890 ignition[709]: no config at "/usr/lib/ignition/user.ign" May 10 00:46:09.400757 systemd[1]: Finished ignition-fetch.service. May 10 00:46:09.377928 ignition[709]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 May 10 00:46:09.430084 systemd[1]: Starting ignition-kargs.service... May 10 00:46:09.382879 ignition[709]: GET result: OK May 10 00:46:09.466706 systemd[1]: Finished ignition-kargs.service. May 10 00:46:09.382983 ignition[709]: parsing config with SHA512: 723c027425b510fac88dca82a89fba92f2d4ee92b33bb630a1de0d7f0454b675e24b68779c852f7dc10c4a70ec0e84227607e77899f3425b3515dbc90f916832 May 10 00:46:09.474555 systemd[1]: Starting ignition-disks.service... May 10 00:46:09.390106 ignition[709]: fetch: fetch complete May 10 00:46:09.511541 systemd[1]: Finished ignition-disks.service. May 10 00:46:09.390122 ignition[709]: fetch: fetch passed May 10 00:46:09.527447 systemd[1]: Reached target initrd-root-device.target. May 10 00:46:09.390217 ignition[709]: Ignition finished successfully May 10 00:46:09.542332 systemd[1]: Reached target local-fs-pre.target. May 10 00:46:09.443158 ignition[715]: Ignition 2.14.0 May 10 00:46:09.560236 systemd[1]: Reached target local-fs.target. May 10 00:46:09.443170 ignition[715]: Stage: kargs May 10 00:46:09.574248 systemd[1]: Reached target sysinit.target. May 10 00:46:09.443315 ignition[715]: reading system config file "/usr/lib/ignition/base.d/base.ign" May 10 00:46:09.587271 systemd[1]: Reached target basic.target. May 10 00:46:09.443351 ignition[715]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 May 10 00:46:09.603135 systemd[1]: Starting systemd-fsck-root.service... 
May 10 00:46:09.451091 ignition[715]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" May 10 00:46:09.452725 ignition[715]: kargs: kargs passed May 10 00:46:09.452789 ignition[715]: Ignition finished successfully May 10 00:46:09.485663 ignition[721]: Ignition 2.14.0 May 10 00:46:09.485672 ignition[721]: Stage: disks May 10 00:46:09.485798 ignition[721]: reading system config file "/usr/lib/ignition/base.d/base.ign" May 10 00:46:09.485824 ignition[721]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 May 10 00:46:09.491749 ignition[721]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" May 10 00:46:09.495541 ignition[721]: disks: disks passed May 10 00:46:09.495602 ignition[721]: Ignition finished successfully May 10 00:46:09.648173 systemd-fsck[729]: ROOT: clean, 623/1628000 files, 124060/1617920 blocks May 10 00:46:09.865281 systemd[1]: Finished systemd-fsck-root.service. May 10 00:46:09.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:09.875660 systemd[1]: Mounting sysroot.mount... May 10 00:46:09.905314 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. May 10 00:46:09.903569 systemd[1]: Mounted sysroot.mount. May 10 00:46:09.912559 systemd[1]: Reached target initrd-root-fs.target. May 10 00:46:09.929700 systemd[1]: Mounting sysroot-usr.mount... May 10 00:46:09.942922 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. May 10 00:46:09.942987 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 10 00:46:09.943033 systemd[1]: Reached target ignition-diskful.target. May 10 00:46:10.037300 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (735) May 10 00:46:10.037354 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 10 00:46:10.037391 kernel: BTRFS info (device sda6): using free space tree May 10 00:46:10.037417 kernel: BTRFS info (device sda6): has skinny extents May 10 00:46:09.959050 systemd[1]: Mounted sysroot-usr.mount. May 10 00:46:09.983807 systemd[1]: Mounting sysroot-usr-share-oem.mount... May 10 00:46:10.045690 initrd-setup-root[740]: cut: /sysroot/etc/passwd: No such file or directory May 10 00:46:10.075347 kernel: BTRFS info (device sda6): enabling ssd optimizations May 10 00:46:10.011848 systemd[1]: Starting initrd-setup-root.service... May 10 00:46:10.091345 initrd-setup-root[764]: cut: /sysroot/etc/group: No such file or directory May 10 00:46:10.069044 systemd[1]: Mounted sysroot-usr-share-oem.mount. May 10 00:46:10.111291 initrd-setup-root[774]: cut: /sysroot/etc/shadow: No such file or directory May 10 00:46:10.121282 initrd-setup-root[782]: cut: /sysroot/etc/gshadow: No such file or directory May 10 00:46:10.141626 systemd[1]: Finished initrd-setup-root.service. May 10 00:46:10.181398 kernel: kauditd_printk_skb: 23 callbacks suppressed May 10 00:46:10.181437 kernel: audit: type=1130 audit(1746837970.140:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 10 00:46:10.140000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:10.143337 systemd[1]: Starting ignition-mount.service... May 10 00:46:10.189576 systemd[1]: Starting sysroot-boot.service... May 10 00:46:10.203340 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. May 10 00:46:10.203466 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. May 10 00:46:10.234408 ignition[800]: INFO : Ignition 2.14.0 May 10 00:46:10.234408 ignition[800]: INFO : Stage: mount May 10 00:46:10.234408 ignition[800]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" May 10 00:46:10.234408 ignition[800]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 May 10 00:46:10.314422 kernel: audit: type=1130 audit(1746837970.249:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:10.314461 kernel: audit: type=1130 audit(1746837970.276:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:10.249000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:10.276000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:10.237580 systemd[1]: Finished sysroot-boot.service. May 10 00:46:10.347278 ignition[800]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" May 10 00:46:10.347278 ignition[800]: INFO : mount: mount passed May 10 00:46:10.347278 ignition[800]: INFO : Ignition finished successfully May 10 00:46:10.392404 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (810) May 10 00:46:10.392447 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 10 00:46:10.392463 kernel: BTRFS info (device sda6): using free space tree May 10 00:46:10.392478 kernel: BTRFS info (device sda6): has skinny extents May 10 00:46:10.250648 systemd[1]: Finished ignition-mount.service. May 10 00:46:10.420296 kernel: BTRFS info (device sda6): enabling ssd optimizations May 10 00:46:10.271602 systemd-networkd[685]: eth0: Gained IPv6LL May 10 00:46:10.279194 systemd[1]: Starting ignition-files.service... May 10 00:46:10.344194 systemd[1]: Mounting sysroot-usr-share-oem.mount... May 10 00:46:10.452272 ignition[829]: INFO : Ignition 2.14.0 May 10 00:46:10.452272 ignition[829]: INFO : Stage: files May 10 00:46:10.452272 ignition[829]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" May 10 00:46:10.452272 ignition[829]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 May 10 00:46:10.415679 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
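Every stage above also reports where its base config came from: it reads /usr/lib/ignition/base.d/base.ign and then notes 'no config dir at "/usr/lib/ignition/base.platform.d/gcp"' because the optional per-platform directory is absent on this image. A rough Python sketch of that lookup, assuming only the two directory paths printed in the log:

    import glob
    import os

    BASE_DIR = "/usr/lib/ignition/base.d"
    PLATFORM_DIR = "/usr/lib/ignition/base.platform.d/gcp"  # platform id from the log

    def load_base_configs():
        # base.d is always read; the per-platform directory is optional and
        # its absence produces the 'no config dir at ...' message above.
        configs = []
        for path in sorted(glob.glob(os.path.join(BASE_DIR, "*.ign"))):
            with open(path, "rb") as f:
                configs.append((path, f.read()))
        if os.path.isdir(PLATFORM_DIR):
            for path in sorted(glob.glob(os.path.join(PLATFORM_DIR, "*.ign"))):
                with open(path, "rb") as f:
                    configs.append((path, f.read()))
        else:
            print(f'no config dir at "{PLATFORM_DIR}"')
        return configs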
May 10 00:46:10.507227 ignition[829]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" May 10 00:46:10.507227 ignition[829]: DEBUG : files: compiled without relabeling support, skipping May 10 00:46:10.507227 ignition[829]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 10 00:46:10.507227 ignition[829]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 10 00:46:10.507227 ignition[829]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 10 00:46:10.507227 ignition[829]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 10 00:46:10.507227 ignition[829]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 10 00:46:10.507227 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" May 10 00:46:10.507227 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" May 10 00:46:10.507227 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 10 00:46:10.507227 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 May 10 00:46:10.475996 unknown[829]: wrote ssh authorized keys file for user: core May 10 00:46:10.663255 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 10 00:46:10.882051 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 10 00:46:10.899225 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/hosts" May 10 00:46:10.899225 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(5): oem config not found in "/usr/share/oem", looking on oem partition May 10 00:46:10.899225 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(5): op(6): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1603493292" May 10 00:46:10.899225 ignition[829]: CRITICAL : files: createFilesystemsFiles: createFiles: op(5): op(6): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1603493292": device or resource busy May 10 00:46:10.899225 ignition[829]: ERROR : files: createFilesystemsFiles: createFiles: op(5): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1603493292", trying btrfs: device or resource busy May 10 00:46:10.899225 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(5): op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1603493292" May 10 00:46:10.899225 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(5): op(7): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1603493292" May 10 00:46:10.899225 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(5): op(8): [started] unmounting "/mnt/oem1603493292" May 10 00:46:10.899225 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(5): op(8): [finished] unmounting "/mnt/oem1603493292" May 10 00:46:10.899225 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/hosts" May 10 00:46:10.899225 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing 
file "/sysroot/home/core/nginx.yaml" May 10 00:46:10.899225 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nginx.yaml" May 10 00:46:10.899225 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/profile.d/google-cloud-sdk.sh" May 10 00:46:10.899225 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(a): oem config not found in "/usr/share/oem", looking on oem partition May 10 00:46:10.899225 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1129414330" May 10 00:46:11.141291 ignition[829]: CRITICAL : files: createFilesystemsFiles: createFiles: op(a): op(b): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1129414330": device or resource busy May 10 00:46:11.141291 ignition[829]: ERROR : files: createFilesystemsFiles: createFiles: op(a): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1129414330", trying btrfs: device or resource busy May 10 00:46:11.141291 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1129414330" May 10 00:46:11.141291 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(c): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1129414330" May 10 00:46:11.141291 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(d): [started] unmounting "/mnt/oem1129414330" May 10 00:46:11.141291 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(d): [finished] unmounting "/mnt/oem1129414330" May 10 00:46:11.141291 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/profile.d/google-cloud-sdk.sh" May 10 00:46:11.141291 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" May 10 00:46:11.141291 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(e): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 May 10 00:46:11.295280 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(e): GET result: OK May 10 00:46:11.358513 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" May 10 00:46:11.374229 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/home/core/install.sh" May 10 00:46:11.374229 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/home/core/install.sh" May 10 00:46:11.374229 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 10 00:46:11.374229 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 10 00:46:11.374229 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(11): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 10 00:46:11.374229 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(11): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 10 00:46:11.374229 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(12): [started] writing 
file "/sysroot/etc/flatcar/update.conf" May 10 00:46:11.374229 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(12): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 10 00:46:11.374229 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(13): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 10 00:46:11.374229 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(13): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 10 00:46:11.374229 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(14): [started] writing file "/sysroot/etc/systemd/system/oem-gce-enable-oslogin.service" May 10 00:46:11.374229 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(14): oem config not found in "/usr/share/oem", looking on oem partition May 10 00:46:11.374229 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(15): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2202646352" May 10 00:46:11.374229 ignition[829]: CRITICAL : files: createFilesystemsFiles: createFiles: op(14): op(15): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2202646352": device or resource busy May 10 00:46:11.609280 ignition[829]: ERROR : files: createFilesystemsFiles: createFiles: op(14): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2202646352", trying btrfs: device or resource busy May 10 00:46:11.609280 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(16): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2202646352" May 10 00:46:11.609280 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(16): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2202646352" May 10 00:46:11.609280 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(17): [started] unmounting "/mnt/oem2202646352" May 10 00:46:11.609280 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(17): [finished] unmounting "/mnt/oem2202646352" May 10 00:46:11.609280 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(14): [finished] writing file "/sysroot/etc/systemd/system/oem-gce-enable-oslogin.service" May 10 00:46:11.609280 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(18): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 10 00:46:11.609280 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(18): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 May 10 00:46:11.609280 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(18): GET result: OK May 10 00:46:11.377999 systemd[1]: mnt-oem2202646352.mount: Deactivated successfully. 
May 10 00:46:11.944262 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(18): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 10 00:46:11.944262 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(19): [started] writing file "/sysroot/etc/systemd/system/oem-gce.service" May 10 00:46:11.979233 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(19): oem config not found in "/usr/share/oem", looking on oem partition May 10 00:46:11.979233 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(19): op(1a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1697584419" May 10 00:46:11.979233 ignition[829]: CRITICAL : files: createFilesystemsFiles: createFiles: op(19): op(1a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1697584419": device or resource busy May 10 00:46:11.979233 ignition[829]: ERROR : files: createFilesystemsFiles: createFiles: op(19): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1697584419", trying btrfs: device or resource busy May 10 00:46:11.979233 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(19): op(1b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1697584419" May 10 00:46:11.979233 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(19): op(1b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1697584419" May 10 00:46:11.979233 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(19): op(1c): [started] unmounting "/mnt/oem1697584419" May 10 00:46:11.979233 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(19): op(1c): [finished] unmounting "/mnt/oem1697584419" May 10 00:46:11.979233 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(19): [finished] writing file "/sysroot/etc/systemd/system/oem-gce.service" May 10 00:46:11.979233 ignition[829]: INFO : files: op(1d): [started] processing unit "oem-gce-enable-oslogin.service" May 10 00:46:11.979233 ignition[829]: INFO : files: op(1d): [finished] processing unit "oem-gce-enable-oslogin.service" May 10 00:46:11.979233 ignition[829]: INFO : files: op(1e): [started] processing unit "coreos-metadata-sshkeys@.service" May 10 00:46:11.979233 ignition[829]: INFO : files: op(1e): [finished] processing unit "coreos-metadata-sshkeys@.service" May 10 00:46:11.979233 ignition[829]: INFO : files: op(1f): [started] processing unit "oem-gce.service" May 10 00:46:11.979233 ignition[829]: INFO : files: op(1f): [finished] processing unit "oem-gce.service" May 10 00:46:11.979233 ignition[829]: INFO : files: op(20): [started] processing unit "containerd.service" May 10 00:46:11.979233 ignition[829]: INFO : files: op(20): op(21): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" May 10 00:46:12.460264 kernel: audit: type=1130 audit(1746837971.978:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:12.460445 kernel: audit: type=1130 audit(1746837972.080:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 10 00:46:12.460473 kernel: audit: type=1130 audit(1746837972.130:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:12.460489 kernel: audit: type=1131 audit(1746837972.130:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:12.460504 kernel: audit: type=1130 audit(1746837972.233:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:12.460518 kernel: audit: type=1131 audit(1746837972.233:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:12.460537 kernel: audit: type=1130 audit(1746837972.419:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:11.978000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:12.080000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:12.130000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:12.130000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:12.233000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:12.233000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:12.419000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:11.963926 systemd[1]: mnt-oem1697584419.mount: Deactivated successfully. 
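A little further up, op(18) downloads the Kubernetes sysext image from the sysext-bakery release URL and op(13) links it under /etc/extensions so systemd-sysext can merge it after the switch to the real root. A sketch of what those two logged operations amount to; the URL and both paths are taken from the log, while the function and the /sysroot prefix handling are illustrative:

    import os
    import urllib.request

    RAW_URL = ("https://github.com/flatcar/sysext-bakery/releases/download/"
               "latest/kubernetes-v1.30.1-x86-64.raw")
    TARGET = "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
    LINK = "/etc/extensions/kubernetes.raw"

    def install_kubernetes_sysext(root: str = "/sysroot") -> None:
        target = root + TARGET
        link = root + LINK
        os.makedirs(os.path.dirname(target), exist_ok=True)
        urllib.request.urlretrieve(RAW_URL, target)   # op(18)
        os.makedirs(os.path.dirname(link), exist_ok=True)
        if os.path.lexists(link):
            os.remove(link)
        os.symlink(TARGET, link)                      # op(13)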
May 10 00:46:12.475250 ignition[829]: INFO : files: op(20): op(21): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" May 10 00:46:12.475250 ignition[829]: INFO : files: op(20): [finished] processing unit "containerd.service" May 10 00:46:12.475250 ignition[829]: INFO : files: op(22): [started] processing unit "prepare-helm.service" May 10 00:46:12.475250 ignition[829]: INFO : files: op(22): op(23): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 10 00:46:12.475250 ignition[829]: INFO : files: op(22): op(23): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 10 00:46:12.475250 ignition[829]: INFO : files: op(22): [finished] processing unit "prepare-helm.service" May 10 00:46:12.475250 ignition[829]: INFO : files: op(24): [started] setting preset to enabled for "oem-gce-enable-oslogin.service" May 10 00:46:12.475250 ignition[829]: INFO : files: op(24): [finished] setting preset to enabled for "oem-gce-enable-oslogin.service" May 10 00:46:12.475250 ignition[829]: INFO : files: op(25): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " May 10 00:46:12.475250 ignition[829]: INFO : files: op(25): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " May 10 00:46:12.475250 ignition[829]: INFO : files: op(26): [started] setting preset to enabled for "oem-gce.service" May 10 00:46:12.475250 ignition[829]: INFO : files: op(26): [finished] setting preset to enabled for "oem-gce.service" May 10 00:46:12.475250 ignition[829]: INFO : files: op(27): [started] setting preset to enabled for "prepare-helm.service" May 10 00:46:12.475250 ignition[829]: INFO : files: op(27): [finished] setting preset to enabled for "prepare-helm.service" May 10 00:46:12.475250 ignition[829]: INFO : files: createResultFile: createFiles: op(28): [started] writing file "/sysroot/etc/.ignition-result.json" May 10 00:46:12.475250 ignition[829]: INFO : files: createResultFile: createFiles: op(28): [finished] writing file "/sysroot/etc/.ignition-result.json" May 10 00:46:12.475250 ignition[829]: INFO : files: files passed May 10 00:46:12.475250 ignition[829]: INFO : Ignition finished successfully May 10 00:46:12.565000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:12.812000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:11.978830 systemd[1]: Finished ignition-files.service. May 10 00:46:11.990069 systemd[1]: Starting initrd-setup-root-after-ignition.service... May 10 00:46:12.855000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:12.866403 initrd-setup-root-after-ignition[852]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 10 00:46:12.875000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 10 00:46:12.021494 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). May 10 00:46:12.897000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:12.022676 systemd[1]: Starting ignition-quench.service... May 10 00:46:12.052616 systemd[1]: Finished initrd-setup-root-after-ignition.service. May 10 00:46:12.081643 systemd[1]: ignition-quench.service: Deactivated successfully. May 10 00:46:12.081797 systemd[1]: Finished ignition-quench.service. May 10 00:46:12.954000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:12.131467 systemd[1]: Reached target ignition-complete.target. May 10 00:46:12.189618 systemd[1]: Starting initrd-parse-etc.service... May 10 00:46:12.988312 ignition[867]: INFO : Ignition 2.14.0 May 10 00:46:12.988312 ignition[867]: INFO : Stage: umount May 10 00:46:12.988312 ignition[867]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" May 10 00:46:12.988312 ignition[867]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 May 10 00:46:12.988312 ignition[867]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" May 10 00:46:12.988312 ignition[867]: INFO : umount: umount passed May 10 00:46:12.988312 ignition[867]: INFO : Ignition finished successfully May 10 00:46:12.995000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:13.002000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:13.016000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:13.033000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:13.066000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:13.085000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:13.100000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:12.233421 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 10 00:46:12.233552 systemd[1]: Finished initrd-parse-etc.service. 
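The 'setting preset to enabled' operations in the files stage correspond to systemd presets: a preset file is plain text with one 'enable <unit>' or 'disable <unit>' line per unit, and systemctl preset-all applies it on the booted system. A minimal sketch that writes such a file for the four units named in the log (the 20-ignition.preset file name is an assumption used for illustration):

    import os

    PRESET_PATH = "/sysroot/etc/systemd/system-preset/20-ignition.preset"

    ENABLED_UNITS = [
        "oem-gce-enable-oslogin.service",
        "coreos-metadata-sshkeys@.service",
        "oem-gce.service",
        "prepare-helm.service",
    ]

    def write_preset_file() -> None:
        os.makedirs(os.path.dirname(PRESET_PATH), exist_ok=True)
        with open(PRESET_PATH, "w") as f:
            for unit in ENABLED_UNITS:
                f.write(f"enable {unit}\n")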
May 10 00:46:13.132000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:12.234555 systemd[1]: Reached target initrd-fs.target. May 10 00:46:12.317379 systemd[1]: Reached target initrd.target. May 10 00:46:12.348467 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. May 10 00:46:12.349822 systemd[1]: Starting dracut-pre-pivot.service... May 10 00:46:12.392700 systemd[1]: Finished dracut-pre-pivot.service. May 10 00:46:12.422048 systemd[1]: Starting initrd-cleanup.service... May 10 00:46:12.476079 systemd[1]: Stopped target nss-lookup.target. May 10 00:46:13.235000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:12.503549 systemd[1]: Stopped target remote-cryptsetup.target. May 10 00:46:13.250000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:12.523640 systemd[1]: Stopped target timers.target. May 10 00:46:12.541540 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 10 00:46:12.541758 systemd[1]: Stopped dracut-pre-pivot.service. May 10 00:46:13.295000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:12.566683 systemd[1]: Stopped target initrd.target. May 10 00:46:13.310000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:12.590470 systemd[1]: Stopped target basic.target. May 10 00:46:13.325000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:13.325000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:13.326000 audit: BPF prog-id=6 op=UNLOAD May 10 00:46:12.608541 systemd[1]: Stopped target ignition-complete.target. May 10 00:46:12.631542 systemd[1]: Stopped target ignition-diskful.target. May 10 00:46:12.645719 systemd[1]: Stopped target initrd-root-device.target. May 10 00:46:13.372000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:12.673520 systemd[1]: Stopped target remote-fs.target. May 10 00:46:13.388000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:12.687765 systemd[1]: Stopped target remote-fs-pre.target. May 10 00:46:13.403000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 10 00:46:12.713548 systemd[1]: Stopped target sysinit.target. May 10 00:46:12.734534 systemd[1]: Stopped target local-fs.target. May 10 00:46:13.434000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:12.755500 systemd[1]: Stopped target local-fs-pre.target. May 10 00:46:12.776454 systemd[1]: Stopped target swap.target. May 10 00:46:12.798464 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 10 00:46:13.479000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:12.798694 systemd[1]: Stopped dracut-pre-mount.service. May 10 00:46:13.495000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:12.813849 systemd[1]: Stopped target cryptsetup.target. May 10 00:46:13.510000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:12.841539 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 10 00:46:12.841765 systemd[1]: Stopped dracut-initqueue.service. May 10 00:46:12.856771 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 10 00:46:13.549000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:12.856995 systemd[1]: Stopped initrd-setup-root-after-ignition.service. May 10 00:46:13.567000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:12.876677 systemd[1]: ignition-files.service: Deactivated successfully. May 10 00:46:13.583000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:13.583000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:12.876901 systemd[1]: Stopped ignition-files.service. May 10 00:46:12.900717 systemd[1]: Stopping ignition-mount.service... May 10 00:46:12.939316 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 10 00:46:13.627000 audit: BPF prog-id=8 op=UNLOAD May 10 00:46:13.627000 audit: BPF prog-id=7 op=UNLOAD May 10 00:46:13.629000 audit: BPF prog-id=5 op=UNLOAD May 10 00:46:13.629000 audit: BPF prog-id=4 op=UNLOAD May 10 00:46:13.629000 audit: BPF prog-id=3 op=UNLOAD May 10 00:46:12.939681 systemd[1]: Stopped kmod-static-nodes.service. May 10 00:46:12.957425 systemd[1]: Stopping sysroot-boot.service... May 10 00:46:12.980367 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 10 00:46:13.664923 systemd-journald[190]: Received SIGTERM from PID 1 (n/a). 
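The audit records interleaved through this part of the capture (SERVICE_START/SERVICE_STOP with a unit= field) track every unit the initrd starts and tears down before switching root. A small Python helper that extracts just the event type and unit name from lines in this format, which makes the teardown order easier to follow:

    import re

    AUDIT_RE = re.compile(
        r"audit\[\d+\]: (SERVICE_START|SERVICE_STOP) .*?unit=([\w@\\.-]+)")

    def summarize(journal_text: str):
        # Yields (event, unit) pairs for every matching audit record.
        for line in journal_text.splitlines():
            m = AUDIT_RE.search(line)
            if m:
                yield m.group(1), m.group(2)

    if __name__ == "__main__":
        sample = ("May 10 00:46:13.016000 audit[1]: SERVICE_STOP pid=1 uid=0 "
                  "msg='unit=ignition-mount comm=\"systemd\"'")
        print(list(summarize(sample)))  # [('SERVICE_STOP', 'ignition-mount')]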
May 10 00:46:12.980669 systemd[1]: Stopped systemd-udev-trigger.service. May 10 00:46:13.673264 iscsid[694]: iscsid shutting down. May 10 00:46:12.996685 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 10 00:46:12.996897 systemd[1]: Stopped dracut-pre-trigger.service. May 10 00:46:13.007982 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 10 00:46:13.009313 systemd[1]: ignition-mount.service: Deactivated successfully. May 10 00:46:13.009438 systemd[1]: Stopped ignition-mount.service. May 10 00:46:13.017946 systemd[1]: sysroot-boot.service: Deactivated successfully. May 10 00:46:13.018080 systemd[1]: Stopped sysroot-boot.service. May 10 00:46:13.035395 systemd[1]: ignition-disks.service: Deactivated successfully. May 10 00:46:13.035615 systemd[1]: Stopped ignition-disks.service. May 10 00:46:13.067389 systemd[1]: ignition-kargs.service: Deactivated successfully. May 10 00:46:13.067497 systemd[1]: Stopped ignition-kargs.service. May 10 00:46:13.086394 systemd[1]: ignition-fetch.service: Deactivated successfully. May 10 00:46:13.086487 systemd[1]: Stopped ignition-fetch.service. May 10 00:46:13.101376 systemd[1]: Stopped target network.target. May 10 00:46:13.117245 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 10 00:46:13.117371 systemd[1]: Stopped ignition-fetch-offline.service. May 10 00:46:13.133373 systemd[1]: Stopped target paths.target. May 10 00:46:13.147234 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 10 00:46:13.151180 systemd[1]: Stopped systemd-ask-password-console.path. May 10 00:46:13.162275 systemd[1]: Stopped target slices.target. May 10 00:46:13.175285 systemd[1]: Stopped target sockets.target. May 10 00:46:13.192362 systemd[1]: iscsid.socket: Deactivated successfully. May 10 00:46:13.192432 systemd[1]: Closed iscsid.socket. May 10 00:46:13.207386 systemd[1]: iscsiuio.socket: Deactivated successfully. May 10 00:46:13.207489 systemd[1]: Closed iscsiuio.socket. May 10 00:46:13.221337 systemd[1]: ignition-setup.service: Deactivated successfully. May 10 00:46:13.221476 systemd[1]: Stopped ignition-setup.service. May 10 00:46:13.236373 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 10 00:46:13.236489 systemd[1]: Stopped initrd-setup-root.service. May 10 00:46:13.251601 systemd[1]: Stopping systemd-networkd.service... May 10 00:46:13.255172 systemd-networkd[685]: eth0: DHCPv6 lease lost May 10 00:46:13.267591 systemd[1]: Stopping systemd-resolved.service... May 10 00:46:13.274985 systemd[1]: systemd-networkd.service: Deactivated successfully. May 10 00:46:13.275154 systemd[1]: Stopped systemd-networkd.service. May 10 00:46:13.297254 systemd[1]: systemd-resolved.service: Deactivated successfully. May 10 00:46:13.297394 systemd[1]: Stopped systemd-resolved.service. May 10 00:46:13.312431 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 10 00:46:13.312578 systemd[1]: Finished initrd-cleanup.service. May 10 00:46:13.327995 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 10 00:46:13.328049 systemd[1]: Closed systemd-networkd.socket. May 10 00:46:13.343614 systemd[1]: Stopping network-cleanup.service... May 10 00:46:13.357199 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 10 00:46:13.357326 systemd[1]: Stopped parse-ip-for-networkd.service. May 10 00:46:13.373433 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 10 00:46:13.373529 systemd[1]: Stopped systemd-sysctl.service. 
May 10 00:46:13.389458 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 10 00:46:13.389538 systemd[1]: Stopped systemd-modules-load.service. May 10 00:46:13.404548 systemd[1]: Stopping systemd-udevd.service... May 10 00:46:13.422198 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 10 00:46:13.422981 systemd[1]: systemd-udevd.service: Deactivated successfully. May 10 00:46:13.423285 systemd[1]: Stopped systemd-udevd.service. May 10 00:46:13.437113 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 10 00:46:13.437215 systemd[1]: Closed systemd-udevd-control.socket. May 10 00:46:13.450276 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 10 00:46:13.450342 systemd[1]: Closed systemd-udevd-kernel.socket. May 10 00:46:13.465380 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 10 00:46:13.465470 systemd[1]: Stopped dracut-pre-udev.service. May 10 00:46:13.480447 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 10 00:46:13.480541 systemd[1]: Stopped dracut-cmdline.service. May 10 00:46:13.496440 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 10 00:46:13.496527 systemd[1]: Stopped dracut-cmdline-ask.service. May 10 00:46:13.512809 systemd[1]: Starting initrd-udevadm-cleanup-db.service... May 10 00:46:13.535283 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 10 00:46:13.535407 systemd[1]: Stopped systemd-vconsole-setup.service. May 10 00:46:13.550867 systemd[1]: network-cleanup.service: Deactivated successfully. May 10 00:46:13.551003 systemd[1]: Stopped network-cleanup.service. May 10 00:46:13.568659 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 10 00:46:13.568787 systemd[1]: Finished initrd-udevadm-cleanup-db.service. May 10 00:46:13.584544 systemd[1]: Reached target initrd-switch-root.target. May 10 00:46:13.601656 systemd[1]: Starting initrd-switch-root.service... May 10 00:46:13.623818 systemd[1]: Switching root. May 10 00:46:13.676816 systemd-journald[190]: Journal stopped May 10 00:46:18.412644 kernel: SELinux: Class mctp_socket not defined in policy. May 10 00:46:18.412809 kernel: SELinux: Class anon_inode not defined in policy. May 10 00:46:18.412838 kernel: SELinux: the above unknown classes and permissions will be allowed May 10 00:46:18.412879 kernel: SELinux: policy capability network_peer_controls=1 May 10 00:46:18.412911 kernel: SELinux: policy capability open_perms=1 May 10 00:46:18.412934 kernel: SELinux: policy capability extended_socket_class=1 May 10 00:46:18.412958 kernel: SELinux: policy capability always_check_network=0 May 10 00:46:18.412989 kernel: SELinux: policy capability cgroup_seclabel=1 May 10 00:46:18.413012 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 10 00:46:18.413035 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 10 00:46:18.413076 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 10 00:46:18.413106 systemd[1]: Successfully loaded SELinux policy in 114.977ms. May 10 00:46:18.413163 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.558ms. 
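The kernel messages just above show the SELinux policy being loaded (with a few unknown classes left allowed) as the system switches to the real root. A small sketch that reads the resulting state back from the standard selinuxfs files; missing files are treated as SELinux being unavailable:

    def selinux_status() -> dict:
        status = {}
        try:
            with open("/sys/fs/selinux/enforce") as f:
                # "1" means enforcing, "0" means permissive.
                status["enforcing"] = f.read().strip() == "1"
            with open("/sys/fs/selinux/policyvers") as f:
                status["policy_version"] = int(f.read().strip())
        except FileNotFoundError:
            status["enabled"] = False
        else:
            status["enabled"] = True
        return status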
May 10 00:46:18.413193 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 10 00:46:18.413222 systemd[1]: Detected virtualization kvm. May 10 00:46:18.413248 systemd[1]: Detected architecture x86-64. May 10 00:46:18.413273 systemd[1]: Detected first boot. May 10 00:46:18.413298 systemd[1]: Initializing machine ID from VM UUID. May 10 00:46:18.413321 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). May 10 00:46:18.413350 systemd[1]: Populated /etc with preset unit settings. May 10 00:46:18.413377 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 10 00:46:18.413415 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 10 00:46:18.413443 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 10 00:46:18.413478 systemd[1]: Queued start job for default target multi-user.target. May 10 00:46:18.413502 systemd[1]: Unnecessary job was removed for dev-sda6.device. May 10 00:46:18.413530 systemd[1]: Created slice system-addon\x2dconfig.slice. May 10 00:46:18.413554 systemd[1]: Created slice system-addon\x2drun.slice. May 10 00:46:18.413584 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. May 10 00:46:18.413609 systemd[1]: Created slice system-getty.slice. May 10 00:46:18.413634 systemd[1]: Created slice system-modprobe.slice. May 10 00:46:18.413659 systemd[1]: Created slice system-serial\x2dgetty.slice. May 10 00:46:18.413682 systemd[1]: Created slice system-system\x2dcloudinit.slice. May 10 00:46:18.413706 systemd[1]: Created slice system-systemd\x2dfsck.slice. May 10 00:46:18.413729 systemd[1]: Created slice user.slice. May 10 00:46:18.413756 systemd[1]: Started systemd-ask-password-console.path. May 10 00:46:18.413782 systemd[1]: Started systemd-ask-password-wall.path. May 10 00:46:18.413812 systemd[1]: Set up automount boot.automount. May 10 00:46:18.413837 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. May 10 00:46:18.413870 systemd[1]: Reached target integritysetup.target. May 10 00:46:18.413894 systemd[1]: Reached target remote-cryptsetup.target. May 10 00:46:18.413919 systemd[1]: Reached target remote-fs.target. May 10 00:46:18.413942 systemd[1]: Reached target slices.target. May 10 00:46:18.413965 systemd[1]: Reached target swap.target. May 10 00:46:18.413988 systemd[1]: Reached target torcx.target. May 10 00:46:18.414019 systemd[1]: Reached target veritysetup.target. May 10 00:46:18.414045 systemd[1]: Listening on systemd-coredump.socket. May 10 00:46:18.414111 systemd[1]: Listening on systemd-initctl.socket. 
May 10 00:46:18.414138 kernel: kauditd_printk_skb: 49 callbacks suppressed May 10 00:46:18.414165 kernel: audit: type=1400 audit(1746837977.923:86): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 10 00:46:18.414190 systemd[1]: Listening on systemd-journald-audit.socket. May 10 00:46:18.414215 kernel: audit: type=1335 audit(1746837977.923:87): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 May 10 00:46:18.414239 systemd[1]: Listening on systemd-journald-dev-log.socket. May 10 00:46:18.414265 systemd[1]: Listening on systemd-journald.socket. May 10 00:46:18.414295 systemd[1]: Listening on systemd-networkd.socket. May 10 00:46:18.414321 systemd[1]: Listening on systemd-udevd-control.socket. May 10 00:46:18.414346 systemd[1]: Listening on systemd-udevd-kernel.socket. May 10 00:46:18.414373 systemd[1]: Listening on systemd-userdbd.socket. May 10 00:46:18.414400 systemd[1]: Mounting dev-hugepages.mount... May 10 00:46:18.414426 systemd[1]: Mounting dev-mqueue.mount... May 10 00:46:18.414458 systemd[1]: Mounting media.mount... May 10 00:46:18.414484 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 10 00:46:18.414509 systemd[1]: Mounting sys-kernel-debug.mount... May 10 00:46:18.414540 systemd[1]: Mounting sys-kernel-tracing.mount... May 10 00:46:18.414566 systemd[1]: Mounting tmp.mount... May 10 00:46:18.414592 systemd[1]: Starting flatcar-tmpfiles.service... May 10 00:46:18.414617 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 10 00:46:18.414644 systemd[1]: Starting kmod-static-nodes.service... May 10 00:46:18.414669 systemd[1]: Starting modprobe@configfs.service... May 10 00:46:18.414695 systemd[1]: Starting modprobe@dm_mod.service... May 10 00:46:18.414720 systemd[1]: Starting modprobe@drm.service... May 10 00:46:18.414745 systemd[1]: Starting modprobe@efi_pstore.service... May 10 00:46:18.414774 systemd[1]: Starting modprobe@fuse.service... May 10 00:46:18.414800 systemd[1]: Starting modprobe@loop.service... May 10 00:46:18.414825 kernel: fuse: init (API version 7.34) May 10 00:46:18.414850 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 10 00:46:18.414892 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. May 10 00:46:18.414917 kernel: loop: module loaded May 10 00:46:18.414942 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) May 10 00:46:18.414968 systemd[1]: Starting systemd-journald.service... May 10 00:46:18.414993 systemd[1]: Starting systemd-modules-load.service... May 10 00:46:18.415023 systemd[1]: Starting systemd-network-generator.service... 
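With the journal sockets listening and systemd-journald.service starting above, the rest of this capture can be read back out of the journal after boot. A sketch, assuming journalctl is on PATH, that replays one unit's records for the current boot as JSON:

    import json
    import subprocess

    def read_boot_journal(unit: str = "systemd-journald.service"):
        # `journalctl -b -u <unit> -o json` prints one JSON object per record.
        proc = subprocess.run(
            ["journalctl", "-b", "-u", unit, "-o", "json", "--no-pager"],
            capture_output=True, text=True, check=True)
        for line in proc.stdout.splitlines():
            entry = json.loads(line)
            yield entry.get("__REALTIME_TIMESTAMP"), entry.get("MESSAGE")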
May 10 00:46:18.415048 kernel: audit: type=1305 audit(1746837978.396:88): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 May 10 00:46:18.415090 kernel: audit: type=1300 audit(1746837978.396:88): arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7ffe8cd566c0 a2=4000 a3=7ffe8cd5675c items=0 ppid=1 pid=1031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 00:46:18.415125 systemd-journald[1031]: Journal started May 10 00:46:18.415240 systemd-journald[1031]: Runtime Journal (/run/log/journal/10869f995603d199b34b98fd129ca41f) is 8.0M, max 148.8M, 140.8M free. May 10 00:46:17.923000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 10 00:46:17.923000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 May 10 00:46:18.396000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 May 10 00:46:18.396000 audit[1031]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7ffe8cd566c0 a2=4000 a3=7ffe8cd5675c items=0 ppid=1 pid=1031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 00:46:18.453166 systemd[1]: Starting systemd-remount-fs.service... May 10 00:46:18.396000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" May 10 00:46:18.471089 kernel: audit: type=1327 audit(1746837978.396:88): proctitle="/usr/lib/systemd/systemd-journald" May 10 00:46:18.484094 systemd[1]: Starting systemd-udev-trigger.service... May 10 00:46:18.504083 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 10 00:46:18.513093 systemd[1]: Started systemd-journald.service. May 10 00:46:18.520000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:18.547216 kernel: audit: type=1130 audit(1746837978.520:89): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:18.523365 systemd[1]: Mounted dev-hugepages.mount. May 10 00:46:18.553411 systemd[1]: Mounted dev-mqueue.mount. May 10 00:46:18.560406 systemd[1]: Mounted media.mount. May 10 00:46:18.567408 systemd[1]: Mounted sys-kernel-debug.mount. May 10 00:46:18.576424 systemd[1]: Mounted sys-kernel-tracing.mount. May 10 00:46:18.585390 systemd[1]: Mounted tmp.mount. May 10 00:46:18.592602 systemd[1]: Finished flatcar-tmpfiles.service. May 10 00:46:18.600000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:18.601852 systemd[1]: Finished kmod-static-nodes.service. 
May 10 00:46:18.624131 kernel: audit: type=1130 audit(1746837978.600:90): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:18.631000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:18.632727 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 10 00:46:18.633018 systemd[1]: Finished modprobe@configfs.service. May 10 00:46:18.655112 kernel: audit: type=1130 audit(1746837978.631:91): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:18.662000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:18.663770 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 10 00:46:18.664193 systemd[1]: Finished modprobe@dm_mod.service. May 10 00:46:18.707527 kernel: audit: type=1130 audit(1746837978.662:92): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:18.707759 kernel: audit: type=1131 audit(1746837978.662:93): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:18.662000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:18.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:18.715000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:18.716671 systemd[1]: modprobe@drm.service: Deactivated successfully. May 10 00:46:18.716927 systemd[1]: Finished modprobe@drm.service. May 10 00:46:18.724000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:18.724000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:18.725587 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 10 00:46:18.725818 systemd[1]: Finished modprobe@efi_pstore.service. 
May 10 00:46:18.733000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:18.733000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:18.734589 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 10 00:46:18.734816 systemd[1]: Finished modprobe@fuse.service. May 10 00:46:18.742000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:18.742000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:18.743555 systemd[1]: modprobe@loop.service: Deactivated successfully. May 10 00:46:18.743834 systemd[1]: Finished modprobe@loop.service. May 10 00:46:18.751000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:18.751000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:18.752671 systemd[1]: Finished systemd-modules-load.service. May 10 00:46:18.761000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:18.762561 systemd[1]: Finished systemd-network-generator.service. May 10 00:46:18.771000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:18.772556 systemd[1]: Finished systemd-remount-fs.service. May 10 00:46:18.780000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:18.781529 systemd[1]: Finished systemd-udev-trigger.service. May 10 00:46:18.789000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:18.790660 systemd[1]: Reached target network-pre.target. May 10 00:46:18.800669 systemd[1]: Mounting sys-fs-fuse-connections.mount... May 10 00:46:18.811495 systemd[1]: Mounting sys-kernel-config.mount... May 10 00:46:18.818211 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 10 00:46:18.821198 systemd[1]: Starting systemd-hwdb-update.service... May 10 00:46:18.829702 systemd[1]: Starting systemd-journal-flush.service... 
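Each modprobe@<module>.service instance finished above loads a single kernel module named by its template instance; the six instances in this boot are configfs, dm_mod, drm, efi_pstore, fuse and loop. A trivial sketch of the equivalent manual step (a module built into the kernel simply makes modprobe a no-op):

    import subprocess

    MODULES = ["configfs", "dm_mod", "drm", "efi_pstore", "fuse", "loop"]

    def load_modules() -> None:
        for name in MODULES:
            subprocess.run(["modprobe", name], check=False)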
May 10 00:46:18.835809 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 10 00:46:18.838389 systemd[1]: Starting systemd-random-seed.service... May 10 00:46:18.846241 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 10 00:46:18.848037 systemd[1]: Starting systemd-sysctl.service... May 10 00:46:18.857048 systemd[1]: Starting systemd-sysusers.service... May 10 00:46:18.858912 systemd-journald[1031]: Time spent on flushing to /var/log/journal/10869f995603d199b34b98fd129ca41f is 56.087ms for 1093 entries. May 10 00:46:18.858912 systemd-journald[1031]: System Journal (/var/log/journal/10869f995603d199b34b98fd129ca41f) is 8.0M, max 584.8M, 576.8M free. May 10 00:46:18.943415 systemd-journald[1031]: Received client request to flush runtime journal. May 10 00:46:18.910000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:18.919000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:18.873972 systemd[1]: Starting systemd-udev-settle.service... May 10 00:46:18.885148 systemd[1]: Mounted sys-fs-fuse-connections.mount. May 10 00:46:18.944803 udevadm[1054]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. May 10 00:46:18.893359 systemd[1]: Mounted sys-kernel-config.mount. May 10 00:46:18.902614 systemd[1]: Finished systemd-random-seed.service. May 10 00:46:18.911807 systemd[1]: Finished systemd-sysctl.service. May 10 00:46:18.923884 systemd[1]: Reached target first-boot-complete.target. May 10 00:46:18.937656 systemd[1]: Finished systemd-sysusers.service. May 10 00:46:18.946000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:18.947781 systemd[1]: Finished systemd-journal-flush.service. May 10 00:46:18.956000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:18.959516 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... May 10 00:46:19.015983 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. May 10 00:46:19.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:19.557883 systemd[1]: Finished systemd-hwdb-update.service. May 10 00:46:19.565000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:19.568032 systemd[1]: Starting systemd-udevd.service... May 10 00:46:19.591995 systemd-udevd[1063]: Using default interface naming scheme 'v252'. May 10 00:46:19.638264 systemd[1]: Started systemd-udevd.service. 
May 10 00:46:19.645000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:19.649395 systemd[1]: Starting systemd-networkd.service... May 10 00:46:19.666025 systemd[1]: Starting systemd-userdbd.service... May 10 00:46:19.750870 systemd[1]: Started systemd-userdbd.service. May 10 00:46:19.759000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:19.763467 systemd[1]: Found device dev-ttyS0.device. May 10 00:46:19.845847 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 May 10 00:46:19.893567 systemd-networkd[1075]: lo: Link UP May 10 00:46:19.893582 systemd-networkd[1075]: lo: Gained carrier May 10 00:46:19.894920 systemd-networkd[1075]: Enumeration completed May 10 00:46:19.895172 systemd[1]: Started systemd-networkd.service. May 10 00:46:19.895646 systemd-networkd[1075]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 10 00:46:19.897877 systemd-networkd[1075]: eth0: Link UP May 10 00:46:19.897891 systemd-networkd[1075]: eth0: Gained carrier May 10 00:46:19.902000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:19.867000 audit[1070]: AVC avc: denied { confidentiality } for pid=1070 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 May 10 00:46:19.909254 systemd-networkd[1075]: eth0: Overlong DHCP hostname received, shortened from 'ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403.c.flatcar-212911.internal' to 'ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403' May 10 00:46:19.909280 systemd-networkd[1075]: eth0: DHCPv4 address 10.128.0.77/32, gateway 10.128.0.1 acquired from 169.254.169.254 May 10 00:46:19.867000 audit[1070]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=563030bd4e80 a1=338ac a2=7f2e98a8bbc5 a3=5 items=110 ppid=1063 pid=1070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 00:46:19.867000 audit: CWD cwd="/" May 10 00:46:19.867000 audit: PATH item=0 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PATH item=1 name=(null) inode=14504 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PATH item=2 name=(null) inode=14504 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PATH item=3 name=(null) inode=14505 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PATH item=4 name=(null) inode=14504 
dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PATH item=5 name=(null) inode=14506 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PATH item=6 name=(null) inode=14504 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PATH item=7 name=(null) inode=14507 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PATH item=8 name=(null) inode=14507 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PATH item=9 name=(null) inode=14508 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PATH item=10 name=(null) inode=14507 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PATH item=11 name=(null) inode=14509 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PATH item=12 name=(null) inode=14507 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PATH item=13 name=(null) inode=14510 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PATH item=14 name=(null) inode=14507 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PATH item=15 name=(null) inode=14511 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PATH item=16 name=(null) inode=14507 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PATH item=17 name=(null) inode=14512 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PATH item=18 name=(null) inode=14504 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PATH item=19 name=(null) inode=14513 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PATH item=20 name=(null) inode=14513 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PATH item=21 name=(null) inode=14514 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PATH item=22 name=(null) inode=14513 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PATH item=23 name=(null) inode=14515 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PATH item=24 name=(null) inode=14513 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PATH item=25 name=(null) inode=14516 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PATH item=26 name=(null) inode=14513 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PATH item=27 name=(null) inode=14517 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PATH item=28 name=(null) inode=14513 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PATH item=29 name=(null) inode=14518 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PATH item=30 name=(null) inode=14504 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PATH item=31 name=(null) inode=14519 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PATH item=32 name=(null) inode=14519 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PATH item=33 name=(null) inode=14520 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PATH item=34 name=(null) inode=14519 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PATH item=35 name=(null) inode=14521 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PATH item=36 name=(null) inode=14519 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 
00:46:19.867000 audit: PATH item=37 name=(null) inode=14522 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PATH item=38 name=(null) inode=14519 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PATH item=39 name=(null) inode=14523 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PATH item=40 name=(null) inode=14519 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PATH item=41 name=(null) inode=14524 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PATH item=42 name=(null) inode=14504 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PATH item=43 name=(null) inode=14525 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PATH item=44 name=(null) inode=14525 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PATH item=45 name=(null) inode=14526 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PATH item=46 name=(null) inode=14525 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PATH item=47 name=(null) inode=14527 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PATH item=48 name=(null) inode=14525 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PATH item=49 name=(null) inode=14528 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PATH item=50 name=(null) inode=14525 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PATH item=51 name=(null) inode=14529 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PATH item=52 name=(null) inode=14525 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PATH item=53 name=(null) inode=14530 dev=00:0b 
mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PATH item=54 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PATH item=55 name=(null) inode=14531 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PATH item=56 name=(null) inode=14531 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PATH item=57 name=(null) inode=14532 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PATH item=58 name=(null) inode=14531 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PATH item=59 name=(null) inode=14533 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PATH item=60 name=(null) inode=14531 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PATH item=61 name=(null) inode=14534 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PATH item=62 name=(null) inode=14534 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PATH item=63 name=(null) inode=14535 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PATH item=64 name=(null) inode=14534 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PATH item=65 name=(null) inode=14536 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PATH item=66 name=(null) inode=14534 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PATH item=67 name=(null) inode=14537 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PATH item=68 name=(null) inode=14534 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PATH item=69 name=(null) inode=14538 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PATH item=70 name=(null) inode=14534 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PATH item=71 name=(null) inode=14539 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PATH item=72 name=(null) inode=14531 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PATH item=73 name=(null) inode=14540 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PATH item=74 name=(null) inode=14540 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PATH item=75 name=(null) inode=14541 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PATH item=76 name=(null) inode=14540 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PATH item=77 name=(null) inode=14542 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PATH item=78 name=(null) inode=14540 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PATH item=79 name=(null) inode=14543 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PATH item=80 name=(null) inode=14540 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PATH item=81 name=(null) inode=14544 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PATH item=82 name=(null) inode=14540 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PATH item=83 name=(null) inode=14545 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PATH item=84 name=(null) inode=14531 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PATH item=85 name=(null) inode=14546 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 
00:46:19.867000 audit: PATH item=86 name=(null) inode=14546 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PATH item=87 name=(null) inode=14547 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PATH item=88 name=(null) inode=14546 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PATH item=89 name=(null) inode=14548 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PATH item=90 name=(null) inode=14546 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PATH item=91 name=(null) inode=14549 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PATH item=92 name=(null) inode=14546 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PATH item=93 name=(null) inode=14550 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PATH item=94 name=(null) inode=14546 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PATH item=95 name=(null) inode=14551 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PATH item=96 name=(null) inode=14531 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PATH item=97 name=(null) inode=14552 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PATH item=98 name=(null) inode=14552 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PATH item=99 name=(null) inode=14553 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PATH item=100 name=(null) inode=14552 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PATH item=101 name=(null) inode=14554 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PATH item=102 name=(null) inode=14552 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PATH item=103 name=(null) inode=14555 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PATH item=104 name=(null) inode=14552 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PATH item=105 name=(null) inode=14556 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PATH item=106 name=(null) inode=14552 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PATH item=107 name=(null) inode=14557 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PATH item=109 name=(null) inode=14563 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:19.867000 audit: PROCTITLE proctitle="(udev-worker)" May 10 00:46:19.940134 kernel: ACPI: button: Power Button [PWRF] May 10 00:46:19.945081 kernel: EDAC MC: Ver: 3.0.0 May 10 00:46:19.953081 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3 May 10 00:46:19.959106 kernel: ACPI: button: Sleep Button [SLPF] May 10 00:46:19.988088 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 May 10 00:46:20.005083 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr May 10 00:46:20.022111 kernel: mousedev: PS/2 mouse device common for all mice May 10 00:46:20.094416 systemd[1]: Finished systemd-udev-settle.service. May 10 00:46:20.103000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:20.109179 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 10 00:46:20.118756 systemd[1]: Starting lvm2-activation-early.service... May 10 00:46:20.146276 lvm[1101]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 10 00:46:20.177632 systemd[1]: Finished lvm2-activation-early.service. May 10 00:46:20.185000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:20.186485 systemd[1]: Reached target cryptsetup.target. May 10 00:46:20.196671 systemd[1]: Starting lvm2-activation.service... May 10 00:46:20.203192 lvm[1103]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 10 00:46:20.227555 systemd[1]: Finished lvm2-activation.service. 
May 10 00:46:20.235000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:20.236506 systemd[1]: Reached target local-fs-pre.target. May 10 00:46:20.245182 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 10 00:46:20.245235 systemd[1]: Reached target local-fs.target. May 10 00:46:20.253161 systemd[1]: Reached target machines.target. May 10 00:46:20.262750 systemd[1]: Starting ldconfig.service... May 10 00:46:20.271099 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 10 00:46:20.271193 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 10 00:46:20.272901 systemd[1]: Starting systemd-boot-update.service... May 10 00:46:20.281744 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... May 10 00:46:20.293071 systemd[1]: Starting systemd-machine-id-commit.service... May 10 00:46:20.295799 systemd[1]: Starting systemd-sysext.service... May 10 00:46:20.296609 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1106 (bootctl) May 10 00:46:20.299603 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... May 10 00:46:20.320145 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. May 10 00:46:20.319000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:20.329289 systemd[1]: Unmounting usr-share-oem.mount... May 10 00:46:20.336493 systemd[1]: usr-share-oem.mount: Deactivated successfully. May 10 00:46:20.336864 systemd[1]: Unmounted usr-share-oem.mount. May 10 00:46:20.363028 kernel: loop0: detected capacity change from 0 to 210664 May 10 00:46:20.455824 systemd-fsck[1118]: fsck.fat 4.2 (2021-01-31) May 10 00:46:20.455824 systemd-fsck[1118]: /dev/sda1: 790 files, 120688/258078 clusters May 10 00:46:20.458619 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. May 10 00:46:20.467000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:20.470711 systemd[1]: Mounting boot.mount... May 10 00:46:20.526751 systemd[1]: Mounted boot.mount. May 10 00:46:20.547490 systemd[1]: Finished systemd-boot-update.service. May 10 00:46:20.555000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:20.674646 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 10 00:46:20.675800 systemd[1]: Finished systemd-machine-id-commit.service. May 10 00:46:20.683000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 10 00:46:20.701282 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 10 00:46:20.733098 kernel: loop1: detected capacity change from 0 to 210664 May 10 00:46:20.757263 (sd-sysext)[1129]: Using extensions 'kubernetes'. May 10 00:46:20.757893 (sd-sysext)[1129]: Merged extensions into '/usr'. May 10 00:46:20.786408 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 10 00:46:20.788484 systemd[1]: Mounting usr-share-oem.mount... May 10 00:46:20.796089 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 10 00:46:20.800776 systemd[1]: Starting modprobe@dm_mod.service... May 10 00:46:20.810148 systemd[1]: Starting modprobe@efi_pstore.service... May 10 00:46:20.820188 systemd[1]: Starting modprobe@loop.service... May 10 00:46:20.827331 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 10 00:46:20.827542 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 10 00:46:20.827752 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 10 00:46:20.832162 systemd[1]: Mounted usr-share-oem.mount. May 10 00:46:20.839726 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 10 00:46:20.840003 systemd[1]: Finished modprobe@dm_mod.service. May 10 00:46:20.848000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:20.848000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:20.850301 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 10 00:46:20.850933 systemd[1]: Finished modprobe@efi_pstore.service. May 10 00:46:20.858000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:20.858000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:20.860010 systemd[1]: modprobe@loop.service: Deactivated successfully. May 10 00:46:20.860282 systemd[1]: Finished modprobe@loop.service. May 10 00:46:20.867000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:20.867000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:20.868898 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
May 10 00:46:20.869111 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 10 00:46:20.870686 systemd[1]: Finished systemd-sysext.service. May 10 00:46:20.878000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:20.882165 systemd[1]: Starting ensure-sysext.service... May 10 00:46:20.891966 systemd[1]: Starting systemd-tmpfiles-setup.service... May 10 00:46:20.904437 systemd[1]: Reloading. May 10 00:46:20.914481 systemd-tmpfiles[1143]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. May 10 00:46:20.920184 ldconfig[1105]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 10 00:46:20.927696 systemd-tmpfiles[1143]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 10 00:46:20.932927 systemd-tmpfiles[1143]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 10 00:46:21.032566 /usr/lib/systemd/system-generators/torcx-generator[1164]: time="2025-05-10T00:46:21Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 10 00:46:21.032622 /usr/lib/systemd/system-generators/torcx-generator[1164]: time="2025-05-10T00:46:21Z" level=info msg="torcx already run" May 10 00:46:21.197446 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 10 00:46:21.197783 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 10 00:46:21.229559 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 10 00:46:21.317619 systemd[1]: Finished ldconfig.service. May 10 00:46:21.324206 systemd-networkd[1075]: eth0: Gained IPv6LL May 10 00:46:21.323000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:21.326676 systemd[1]: Finished systemd-tmpfiles-setup.service. May 10 00:46:21.334000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:21.343670 systemd[1]: Starting audit-rules.service... May 10 00:46:21.352220 systemd[1]: Starting clean-ca-certificates.service... May 10 00:46:21.362692 systemd[1]: Starting oem-gce-enable-oslogin.service... May 10 00:46:21.373837 systemd[1]: Starting systemd-journal-catalog-update.service... May 10 00:46:21.385444 systemd[1]: Starting systemd-resolved.service... May 10 00:46:21.396812 systemd[1]: Starting systemd-timesyncd.service... May 10 00:46:21.406767 systemd[1]: Starting systemd-update-utmp.service... May 10 00:46:21.417026 systemd[1]: Finished clean-ca-certificates.service. 
May 10 00:46:21.419000 audit[1241]: SYSTEM_BOOT pid=1241 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' May 10 00:46:21.424000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:21.426134 systemd[1]: oem-gce-enable-oslogin.service: Deactivated successfully. May 10 00:46:21.426697 systemd[1]: Finished oem-gce-enable-oslogin.service. May 10 00:46:21.435000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=oem-gce-enable-oslogin comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:21.435000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=oem-gce-enable-oslogin comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:21.444070 systemd[1]: Finished systemd-journal-catalog-update.service. May 10 00:46:21.445000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 May 10 00:46:21.445000 audit[1247]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffeda34ecc0 a2=420 a3=0 items=0 ppid=1215 pid=1247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 00:46:21.445000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 May 10 00:46:21.446815 augenrules[1247]: No rules May 10 00:46:21.455010 systemd[1]: Finished audit-rules.service. May 10 00:46:21.463296 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 10 00:46:21.463951 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 10 00:46:21.467102 systemd[1]: Starting modprobe@dm_mod.service... May 10 00:46:21.476462 systemd[1]: Starting modprobe@efi_pstore.service... May 10 00:46:21.485655 systemd[1]: Starting modprobe@loop.service... May 10 00:46:21.496117 systemd[1]: Starting oem-gce-enable-oslogin.service... May 10 00:46:21.504280 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 10 00:46:21.504670 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 10 00:46:21.507733 systemd[1]: Starting systemd-update-done.service... May 10 00:46:21.515188 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 10 00:46:21.520412 enable-oslogin[1261]: /etc/pam.d/sshd already exists. Not enabling OS Login May 10 00:46:21.515507 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 10 00:46:21.519583 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 10 00:46:21.519911 systemd[1]: Finished modprobe@dm_mod.service. 
May 10 00:46:21.529281 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 10 00:46:21.529593 systemd[1]: Finished modprobe@efi_pstore.service. May 10 00:46:21.539385 systemd[1]: modprobe@loop.service: Deactivated successfully. May 10 00:46:21.539688 systemd[1]: Finished modprobe@loop.service. May 10 00:46:21.549228 systemd[1]: oem-gce-enable-oslogin.service: Deactivated successfully. May 10 00:46:21.549625 systemd[1]: Finished oem-gce-enable-oslogin.service. May 10 00:46:21.559429 systemd[1]: Finished systemd-update-done.service. May 10 00:46:21.569339 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 10 00:46:21.569715 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 10 00:46:21.575797 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 10 00:46:21.578395 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 10 00:46:21.583016 systemd[1]: Starting modprobe@dm_mod.service... May 10 00:46:21.592492 systemd[1]: Starting modprobe@efi_pstore.service... May 10 00:46:21.601481 systemd[1]: Starting modprobe@loop.service... May 10 00:46:21.611139 systemd[1]: Starting oem-gce-enable-oslogin.service... May 10 00:46:21.619268 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 10 00:46:21.619537 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 10 00:46:21.619743 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 10 00:46:21.619894 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 10 00:46:21.622386 systemd[1]: Finished systemd-update-utmp.service. May 10 00:46:21.632005 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 10 00:46:21.632322 systemd[1]: Finished modprobe@dm_mod.service. May 10 00:46:21.634329 enable-oslogin[1273]: /etc/pam.d/sshd already exists. Not enabling OS Login May 10 00:46:21.642136 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 10 00:46:21.642469 systemd[1]: Finished modprobe@efi_pstore.service. May 10 00:46:21.652023 systemd[1]: modprobe@loop.service: Deactivated successfully. May 10 00:46:21.652340 systemd[1]: Finished modprobe@loop.service. May 10 00:46:21.662013 systemd[1]: oem-gce-enable-oslogin.service: Deactivated successfully. May 10 00:46:21.662432 systemd[1]: Finished oem-gce-enable-oslogin.service. May 10 00:46:21.672168 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 10 00:46:21.672381 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 10 00:46:21.679382 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 10 00:46:21.679992 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 10 00:46:21.682681 systemd[1]: Starting modprobe@dm_mod.service... May 10 00:46:21.688033 systemd-resolved[1230]: Positive Trust Anchors: May 10 00:46:21.688584 systemd-resolved[1230]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 10 00:46:21.688772 systemd-resolved[1230]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 10 00:46:21.692847 systemd[1]: Starting modprobe@drm.service... May 10 00:46:21.697502 systemd-timesyncd[1235]: Contacted time server 169.254.169.254:123 (169.254.169.254). May 10 00:46:21.697584 systemd-timesyncd[1235]: Initial clock synchronization to Sat 2025-05-10 00:46:22.063514 UTC. May 10 00:46:21.703390 systemd[1]: Starting modprobe@efi_pstore.service... May 10 00:46:21.712404 systemd[1]: Starting modprobe@loop.service... May 10 00:46:21.721677 systemd[1]: Starting oem-gce-enable-oslogin.service... May 10 00:46:21.727446 enable-oslogin[1285]: /etc/pam.d/sshd already exists. Not enabling OS Login May 10 00:46:21.730373 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 10 00:46:21.730694 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 10 00:46:21.733853 systemd[1]: Starting systemd-networkd-wait-online.service... May 10 00:46:21.742292 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 10 00:46:21.742572 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 10 00:46:21.744925 systemd[1]: Started systemd-timesyncd.service. May 10 00:46:21.751096 systemd-resolved[1230]: Defaulting to hostname 'linux'. May 10 00:46:21.755354 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 10 00:46:21.755675 systemd[1]: Finished modprobe@dm_mod.service. May 10 00:46:21.765743 systemd[1]: Started systemd-resolved.service. May 10 00:46:21.774876 systemd[1]: modprobe@drm.service: Deactivated successfully. May 10 00:46:21.775204 systemd[1]: Finished modprobe@drm.service. May 10 00:46:21.784859 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 10 00:46:21.785169 systemd[1]: Finished modprobe@efi_pstore.service. May 10 00:46:21.793854 systemd[1]: modprobe@loop.service: Deactivated successfully. May 10 00:46:21.794156 systemd[1]: Finished modprobe@loop.service. May 10 00:46:21.802839 systemd[1]: oem-gce-enable-oslogin.service: Deactivated successfully. May 10 00:46:21.803215 systemd[1]: Finished oem-gce-enable-oslogin.service. May 10 00:46:21.812018 systemd[1]: Finished systemd-networkd-wait-online.service. May 10 00:46:21.823384 systemd[1]: Reached target network.target. May 10 00:46:21.832314 systemd[1]: Reached target network-online.target. May 10 00:46:21.841225 systemd[1]: Reached target nss-lookup.target. May 10 00:46:21.849236 systemd[1]: Reached target time-set.target. May 10 00:46:21.857311 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 10 00:46:21.857394 systemd[1]: Reached target sysinit.target. 
May 10 00:46:21.866394 systemd[1]: Started motdgen.path. May 10 00:46:21.873335 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. May 10 00:46:21.883558 systemd[1]: Started logrotate.timer. May 10 00:46:21.890427 systemd[1]: Started mdadm.timer. May 10 00:46:21.897301 systemd[1]: Started systemd-tmpfiles-clean.timer. May 10 00:46:21.906271 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 10 00:46:21.906340 systemd[1]: Reached target paths.target. May 10 00:46:21.913211 systemd[1]: Reached target timers.target. May 10 00:46:21.921193 systemd[1]: Listening on dbus.socket. May 10 00:46:21.930302 systemd[1]: Starting docker.socket... May 10 00:46:21.940233 systemd[1]: Listening on sshd.socket. May 10 00:46:21.947372 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 10 00:46:21.947484 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 10 00:46:21.948642 systemd[1]: Finished ensure-sysext.service. May 10 00:46:21.957482 systemd[1]: Listening on docker.socket. May 10 00:46:21.965446 systemd[1]: Reached target sockets.target. May 10 00:46:21.974229 systemd[1]: Reached target basic.target. May 10 00:46:21.981543 systemd[1]: System is tainted: cgroupsv1 May 10 00:46:21.981672 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. May 10 00:46:21.981715 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. May 10 00:46:21.983733 systemd[1]: Starting containerd.service... May 10 00:46:21.993268 systemd[1]: Starting coreos-metadata-sshkeys@core.service... May 10 00:46:22.007609 systemd[1]: Starting dbus.service... May 10 00:46:22.015653 systemd[1]: Starting enable-oem-cloudinit.service... May 10 00:46:22.025628 systemd[1]: Starting extend-filesystems.service... May 10 00:46:22.033290 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). May 10 00:46:22.036113 systemd[1]: Starting kubelet.service... May 10 00:46:22.039812 jq[1297]: false May 10 00:46:22.045541 systemd[1]: Starting motdgen.service... May 10 00:46:22.055541 systemd[1]: Starting oem-gce.service... May 10 00:46:22.065448 systemd[1]: Starting prepare-helm.service... May 10 00:46:22.074741 systemd[1]: Starting ssh-key-proc-cmdline.service... May 10 00:46:22.084717 systemd[1]: Starting sshd-keygen.service... May 10 00:46:22.098232 systemd[1]: Starting systemd-logind.service... May 10 00:46:22.105266 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 10 00:46:22.105408 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionSecurity=!tpm2). May 10 00:46:22.110258 systemd[1]: Starting update-engine.service... May 10 00:46:22.120279 systemd[1]: Starting update-ssh-keys-after-ignition.service... May 10 00:46:22.134209 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 10 00:46:22.134686 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. 
May 10 00:46:22.183493 extend-filesystems[1298]: Found loop1 May 10 00:46:22.183493 extend-filesystems[1298]: Found sda May 10 00:46:22.183493 extend-filesystems[1298]: Found sda1 May 10 00:46:22.183493 extend-filesystems[1298]: Found sda2 May 10 00:46:22.183493 extend-filesystems[1298]: Found sda3 May 10 00:46:22.183493 extend-filesystems[1298]: Found usr May 10 00:46:22.183493 extend-filesystems[1298]: Found sda4 May 10 00:46:22.183493 extend-filesystems[1298]: Found sda6 May 10 00:46:22.183493 extend-filesystems[1298]: Found sda7 May 10 00:46:22.183493 extend-filesystems[1298]: Found sda9 May 10 00:46:22.183493 extend-filesystems[1298]: Checking size of /dev/sda9 May 10 00:46:22.145713 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 10 00:46:22.288764 jq[1322]: true May 10 00:46:22.146168 systemd[1]: Finished ssh-key-proc-cmdline.service. May 10 00:46:22.240115 systemd[1]: motdgen.service: Deactivated successfully. May 10 00:46:22.240563 systemd[1]: Finished motdgen.service. May 10 00:46:22.290040 mkfs.ext4[1334]: mke2fs 1.46.5 (30-Dec-2021) May 10 00:46:22.290040 mkfs.ext4[1334]: Discarding device blocks: done May 10 00:46:22.290040 mkfs.ext4[1334]: Creating filesystem with 262144 4k blocks and 65536 inodes May 10 00:46:22.290040 mkfs.ext4[1334]: Filesystem UUID: a1a3969d-fe1f-487b-8420-b318a1692c33 May 10 00:46:22.290040 mkfs.ext4[1334]: Superblock backups stored on blocks: May 10 00:46:22.290040 mkfs.ext4[1334]: 32768, 98304, 163840, 229376 May 10 00:46:22.290040 mkfs.ext4[1334]: Allocating group tables: done May 10 00:46:22.290040 mkfs.ext4[1334]: Writing inode tables: done May 10 00:46:22.290040 mkfs.ext4[1334]: Creating journal (8192 blocks): done May 10 00:46:22.290040 mkfs.ext4[1334]: Writing superblocks and filesystem accounting information: done May 10 00:46:22.290717 jq[1338]: true May 10 00:46:22.296455 extend-filesystems[1298]: Resized partition /dev/sda9 May 10 00:46:22.314955 extend-filesystems[1355]: resize2fs 1.46.5 (30-Dec-2021) May 10 00:46:22.331566 umount[1359]: umount: /var/lib/flatcar-oem-gce.img: not mounted. May 10 00:46:22.332111 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks May 10 00:46:22.370370 dbus-daemon[1296]: [system] SELinux support is enabled May 10 00:46:22.370739 systemd[1]: Started dbus.service. May 10 00:46:22.374514 dbus-daemon[1296]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1075 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") May 10 00:46:22.383395 kernel: loop2: detected capacity change from 0 to 2097152 May 10 00:46:22.386711 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 10 00:46:22.386790 systemd[1]: Reached target system-config.target. May 10 00:46:22.394555 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 10 00:46:22.394612 systemd[1]: Reached target user-config.target.
May 10 00:46:22.405341 dbus-daemon[1296]: [system] Successfully activated service 'org.freedesktop.systemd1' May 10 00:46:22.412387 systemd[1]: Starting systemd-hostnamed.service... May 10 00:46:22.424540 kernel: EXT4-fs (sda9): resized filesystem to 2538491 May 10 00:46:22.429410 tar[1330]: linux-amd64/helm May 10 00:46:22.460287 extend-filesystems[1355]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required May 10 00:46:22.460287 extend-filesystems[1355]: old_desc_blocks = 1, new_desc_blocks = 2 May 10 00:46:22.460287 extend-filesystems[1355]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long. May 10 00:46:22.507308 kernel: EXT4-fs (loop2): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. May 10 00:46:22.463851 systemd[1]: extend-filesystems.service: Deactivated successfully. May 10 00:46:22.507605 extend-filesystems[1298]: Resized filesystem in /dev/sda9 May 10 00:46:22.464384 systemd[1]: Finished extend-filesystems.service. May 10 00:46:22.531624 env[1332]: time="2025-05-10T00:46:22.529009696Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 May 10 00:46:22.547159 update_engine[1319]: I0510 00:46:22.547063 1319 main.cc:92] Flatcar Update Engine starting May 10 00:46:22.556196 systemd[1]: Started update-engine.service. May 10 00:46:22.569279 systemd[1]: Started locksmithd.service. May 10 00:46:22.572094 update_engine[1319]: I0510 00:46:22.572033 1319 update_check_scheduler.cc:74] Next update check in 5m17s May 10 00:46:22.582548 bash[1374]: Updated "/home/core/.ssh/authorized_keys" May 10 00:46:22.584953 systemd[1]: Finished update-ssh-keys-after-ignition.service. May 10 00:46:22.667112 coreos-metadata[1295]: May 10 00:46:22.666 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 May 10 00:46:22.671849 coreos-metadata[1295]: May 10 00:46:22.671 INFO Fetch failed with 404: resource not found May 10 00:46:22.671849 coreos-metadata[1295]: May 10 00:46:22.671 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 May 10 00:46:22.672452 coreos-metadata[1295]: May 10 00:46:22.672 INFO Fetch successful May 10 00:46:22.672452 coreos-metadata[1295]: May 10 00:46:22.672 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 May 10 00:46:22.672856 coreos-metadata[1295]: May 10 00:46:22.672 INFO Fetch failed with 404: resource not found May 10 00:46:22.672856 coreos-metadata[1295]: May 10 00:46:22.672 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 May 10 00:46:22.673253 coreos-metadata[1295]: May 10 00:46:22.673 INFO Fetch failed with 404: resource not found May 10 00:46:22.673253 coreos-metadata[1295]: May 10 00:46:22.673 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 May 10 00:46:22.674131 coreos-metadata[1295]: May 10 00:46:22.673 INFO Fetch successful May 10 00:46:22.676056 unknown[1295]: wrote ssh authorized keys file for user: core May 10 00:46:22.702830 update-ssh-keys[1388]: Updated "/home/core/.ssh/authorized_keys" May 10 00:46:22.703786 systemd[1]: Finished coreos-metadata-sshkeys@core.service. 
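coreos-metadata walks the GCE metadata endpoints above, falling back from instance-level to project-level SSH keys. The same fetches can be reproduced from the node with curl; the Metadata-Flavor header is required by the metadata server, and the URLs are copied from the log, not invented:
  # instance-level keys (404 in this log: none are set on the instance)
  curl -s -H "Metadata-Flavor: Google" \
    http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys
  # project-level keys (the fetch that succeeded above)
  curl -s -H "Metadata-Flavor: Google" \
    http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys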
May 10 00:46:22.844609 systemd-logind[1318]: Watching system buttons on /dev/input/event1 (Power Button) May 10 00:46:22.847194 systemd-logind[1318]: Watching system buttons on /dev/input/event2 (Sleep Button) May 10 00:46:22.847419 systemd-logind[1318]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 10 00:46:22.847829 systemd-logind[1318]: New seat seat0. May 10 00:46:22.852530 systemd[1]: Started systemd-logind.service. May 10 00:46:22.893828 env[1332]: time="2025-05-10T00:46:22.893718036Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 10 00:46:22.895280 env[1332]: time="2025-05-10T00:46:22.895237929Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 10 00:46:22.898569 env[1332]: time="2025-05-10T00:46:22.898513764Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.181-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 10 00:46:22.899728 env[1332]: time="2025-05-10T00:46:22.899691221Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 10 00:46:22.900315 env[1332]: time="2025-05-10T00:46:22.900276736Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 10 00:46:22.906964 env[1332]: time="2025-05-10T00:46:22.906908918Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 10 00:46:22.907239 env[1332]: time="2025-05-10T00:46:22.907149162Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" May 10 00:46:22.907414 env[1332]: time="2025-05-10T00:46:22.907387195Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 10 00:46:22.907742 env[1332]: time="2025-05-10T00:46:22.907714656Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 10 00:46:22.908520 env[1332]: time="2025-05-10T00:46:22.908478612Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 10 00:46:22.909112 env[1332]: time="2025-05-10T00:46:22.909047500Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 10 00:46:22.911209 env[1332]: time="2025-05-10T00:46:22.911176496Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 10 00:46:22.912293 env[1332]: time="2025-05-10T00:46:22.912259688Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" May 10 00:46:22.912487 env[1332]: time="2025-05-10T00:46:22.912462370Z" level=info msg="metadata content store policy set" policy=shared May 10 00:46:22.925534 env[1332]: time="2025-05-10T00:46:22.925338692Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 May 10 00:46:22.925768 env[1332]: time="2025-05-10T00:46:22.925740709Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 10 00:46:22.925917 env[1332]: time="2025-05-10T00:46:22.925894597Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 10 00:46:22.926126 env[1332]: time="2025-05-10T00:46:22.926093523Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 10 00:46:22.926318 env[1332]: time="2025-05-10T00:46:22.926296535Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 10 00:46:22.926450 env[1332]: time="2025-05-10T00:46:22.926428490Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 10 00:46:22.926575 env[1332]: time="2025-05-10T00:46:22.926554044Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 10 00:46:22.926715 env[1332]: time="2025-05-10T00:46:22.926694383Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 10 00:46:22.926842 env[1332]: time="2025-05-10T00:46:22.926821250Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 May 10 00:46:22.926988 env[1332]: time="2025-05-10T00:46:22.926966795Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 10 00:46:22.927144 env[1332]: time="2025-05-10T00:46:22.927122720Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 10 00:46:22.927273 env[1332]: time="2025-05-10T00:46:22.927252162Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 10 00:46:22.927614 env[1332]: time="2025-05-10T00:46:22.927589323Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 10 00:46:22.927911 env[1332]: time="2025-05-10T00:46:22.927885288Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 10 00:46:22.928781 env[1332]: time="2025-05-10T00:46:22.928749192Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 10 00:46:22.930249 env[1332]: time="2025-05-10T00:46:22.930212881Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 10 00:46:22.930436 env[1332]: time="2025-05-10T00:46:22.930411046Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 10 00:46:22.930670 env[1332]: time="2025-05-10T00:46:22.930646205Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 10 00:46:22.933037 env[1332]: time="2025-05-10T00:46:22.933002068Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 10 00:46:22.933243 env[1332]: time="2025-05-10T00:46:22.933216453Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 10 00:46:22.933389 env[1332]: time="2025-05-10T00:46:22.933365946Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 May 10 00:46:22.933530 env[1332]: time="2025-05-10T00:46:22.933507032Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 10 00:46:22.933673 env[1332]: time="2025-05-10T00:46:22.933650058Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 10 00:46:22.933807 env[1332]: time="2025-05-10T00:46:22.933785881Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 10 00:46:22.933936 env[1332]: time="2025-05-10T00:46:22.933914732Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 10 00:46:22.934103 env[1332]: time="2025-05-10T00:46:22.934081408Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 10 00:46:22.934529 env[1332]: time="2025-05-10T00:46:22.934503123Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 10 00:46:22.936205 env[1332]: time="2025-05-10T00:46:22.936170719Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 10 00:46:22.936388 env[1332]: time="2025-05-10T00:46:22.936363686Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 10 00:46:22.936537 env[1332]: time="2025-05-10T00:46:22.936514580Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 10 00:46:22.936689 env[1332]: time="2025-05-10T00:46:22.936660544Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 May 10 00:46:22.936813 env[1332]: time="2025-05-10T00:46:22.936791386Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 10 00:46:22.936966 env[1332]: time="2025-05-10T00:46:22.936944706Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" May 10 00:46:22.937203 env[1332]: time="2025-05-10T00:46:22.937181963Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 May 10 00:46:22.938736 env[1332]: time="2025-05-10T00:46:22.938605670Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 10 00:46:22.943329 env[1332]: time="2025-05-10T00:46:22.943283167Z" level=info msg="Connect containerd service" May 10 00:46:22.943607 env[1332]: time="2025-05-10T00:46:22.943580559Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 10 00:46:22.948621 env[1332]: time="2025-05-10T00:46:22.948565392Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 10 00:46:22.955419 env[1332]: time="2025-05-10T00:46:22.955331610Z" level=info msg="Start subscribing containerd event" May 10 00:46:22.955558 env[1332]: time="2025-05-10T00:46:22.955441836Z" level=info msg="Start recovering state" May 10 00:46:22.955618 env[1332]: time="2025-05-10T00:46:22.955563855Z" level=info msg="Start event monitor" May 10 00:46:22.955618 env[1332]: time="2025-05-10T00:46:22.955585977Z" level=info msg="Start snapshots syncer" May 10 00:46:22.955618 env[1332]: time="2025-05-10T00:46:22.955605892Z" level=info msg="Start cni network conf syncer for default" May 10 00:46:22.955765 env[1332]: time="2025-05-10T00:46:22.955621210Z" level=info msg="Start streaming server" May 10 00:46:22.961172 env[1332]: time="2025-05-10T00:46:22.961117638Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc May 10 00:46:22.962373 env[1332]: time="2025-05-10T00:46:22.962339926Z" level=info msg=serving... address=/run/containerd/containerd.sock May 10 00:46:22.963913 systemd[1]: Started containerd.service. May 10 00:46:22.966420 env[1332]: time="2025-05-10T00:46:22.966383431Z" level=info msg="containerd successfully booted in 0.448528s" May 10 00:46:23.067781 dbus-daemon[1296]: [system] Successfully activated service 'org.freedesktop.hostname1' May 10 00:46:23.068062 systemd[1]: Started systemd-hostnamed.service. May 10 00:46:23.068969 dbus-daemon[1296]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1370 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") May 10 00:46:23.086267 systemd[1]: Starting polkit.service... May 10 00:46:23.172880 polkitd[1402]: Started polkitd version 121 May 10 00:46:23.207039 polkitd[1402]: Loading rules from directory /etc/polkit-1/rules.d May 10 00:46:23.207581 polkitd[1402]: Loading rules from directory /usr/share/polkit-1/rules.d May 10 00:46:23.219456 polkitd[1402]: Finished loading, compiling and executing 2 rules May 10 00:46:23.220606 dbus-daemon[1296]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' May 10 00:46:23.220935 systemd[1]: Started polkit.service. May 10 00:46:23.222076 polkitd[1402]: Acquired the name org.freedesktop.PolicyKit1 on the system bus May 10 00:46:23.270828 systemd-hostnamed[1370]: Hostname set to (transient) May 10 00:46:23.274294 systemd-resolved[1230]: System hostname changed to 'ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403'. May 10 00:46:24.248950 tar[1330]: linux-amd64/LICENSE May 10 00:46:24.248950 tar[1330]: linux-amd64/README.md May 10 00:46:24.271068 systemd[1]: Finished prepare-helm.service. May 10 00:46:24.660692 systemd[1]: Started kubelet.service. May 10 00:46:25.282655 sshd_keygen[1331]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 10 00:46:25.340140 systemd[1]: Finished sshd-keygen.service. May 10 00:46:25.352792 systemd[1]: Starting issuegen.service... May 10 00:46:25.364444 systemd[1]: issuegen.service: Deactivated successfully. May 10 00:46:25.364880 systemd[1]: Finished issuegen.service. May 10 00:46:25.375755 systemd[1]: Starting systemd-user-sessions.service... May 10 00:46:25.404879 systemd[1]: Finished systemd-user-sessions.service. May 10 00:46:25.418729 systemd[1]: Started getty@tty1.service. May 10 00:46:25.430768 systemd[1]: Started serial-getty@ttyS0.service. May 10 00:46:25.439711 systemd[1]: Reached target getty.target. May 10 00:46:25.491000 locksmithd[1385]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 10 00:46:26.048001 kubelet[1417]: E0510 00:46:26.047912 1417 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 10 00:46:26.050892 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 10 00:46:26.051237 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 10 00:46:28.477955 systemd[1]: var-lib-flatcar\x2doem\x2dgce.mount: Deactivated successfully. May 10 00:46:30.671566 systemd[1]: Created slice system-sshd.slice. 
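containerd is now serving on /run/containerd/containerd.sock, but its CRI plugin warned earlier that /etc/cni/net.d holds no network config yet. Two quick checks against that state, as a sketch (assumes crictl is installed, which this log does not show):
  # talk to the CRI endpoint containerd just exposed
  crictl --runtime-endpoint unix:///run/containerd/containerd.sock version
  # the "no network config found" warning clears once a CNI conflist lands here
  ls /etc/cni/net.d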
May 10 00:46:30.674098 systemd[1]: Started sshd@0-10.128.0.77:22-147.75.109.163:53448.service. May 10 00:46:30.749095 kernel: loop2: detected capacity change from 0 to 2097152 May 10 00:46:30.769317 systemd-nspawn[1450]: Spawning container oem-gce on /var/lib/flatcar-oem-gce.img. May 10 00:46:30.769317 systemd-nspawn[1450]: Press ^] three times within 1s to kill container. May 10 00:46:30.782110 kernel: EXT4-fs (loop2): mounted filesystem without journal. Opts: norecovery. Quota mode: none. May 10 00:46:30.860652 systemd[1]: Started oem-gce.service. May 10 00:46:30.861167 systemd[1]: Reached target multi-user.target. May 10 00:46:30.863473 systemd[1]: Starting systemd-update-utmp-runlevel.service... May 10 00:46:30.875118 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. May 10 00:46:30.875526 systemd[1]: Finished systemd-update-utmp-runlevel.service. May 10 00:46:30.875997 systemd[1]: Startup finished in 9.179s (kernel) + 16.941s (userspace) = 26.120s. May 10 00:46:30.979662 systemd-nspawn[1450]: + '[' -e /etc/default/instance_configs.cfg.template ']' May 10 00:46:30.980132 systemd-nspawn[1450]: + echo -e '[InstanceSetup]\nset_host_keys = false' May 10 00:46:30.980132 systemd-nspawn[1450]: + /usr/bin/google_instance_setup May 10 00:46:30.996115 sshd[1447]: Accepted publickey for core from 147.75.109.163 port 53448 ssh2: RSA SHA256:euqJNJC5P+Wq5j+dl78lhvyXKvYvUXQDTjxbGHC2Bdk May 10 00:46:31.000270 sshd[1447]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:46:31.017453 systemd[1]: Created slice user-500.slice. May 10 00:46:31.019218 systemd[1]: Starting user-runtime-dir@500.service... May 10 00:46:31.027356 systemd-logind[1318]: New session 1 of user core. May 10 00:46:31.037889 systemd[1]: Finished user-runtime-dir@500.service. May 10 00:46:31.040118 systemd[1]: Starting user@500.service... May 10 00:46:31.060952 (systemd)[1461]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 10 00:46:31.198208 systemd[1461]: Queued start job for default target default.target. May 10 00:46:31.199419 systemd[1461]: Reached target paths.target. May 10 00:46:31.199464 systemd[1461]: Reached target sockets.target. May 10 00:46:31.199488 systemd[1461]: Reached target timers.target. May 10 00:46:31.199511 systemd[1461]: Reached target basic.target. May 10 00:46:31.199728 systemd[1]: Started user@500.service. May 10 00:46:31.201212 systemd[1]: Started session-1.scope. May 10 00:46:31.205388 systemd[1461]: Reached target default.target. May 10 00:46:31.206272 systemd[1461]: Startup finished in 135ms. May 10 00:46:31.434404 systemd[1]: Started sshd@1-10.128.0.77:22-147.75.109.163:53464.service. May 10 00:46:31.707714 instance-setup[1457]: INFO Running google_set_multiqueue. May 10 00:46:31.729810 instance-setup[1457]: INFO Set channels for eth0 to 2. May 10 00:46:31.735203 instance-setup[1457]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1. May 10 00:46:31.736634 sshd[1470]: Accepted publickey for core from 147.75.109.163 port 53464 ssh2: RSA SHA256:euqJNJC5P+Wq5j+dl78lhvyXKvYvUXQDTjxbGHC2Bdk May 10 00:46:31.737830 instance-setup[1457]: INFO /proc/irq/31/smp_affinity_list: real affinity 0 May 10 00:46:31.738014 instance-setup[1457]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1. 
May 10 00:46:31.738488 sshd[1470]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:46:31.741961 instance-setup[1457]: INFO /proc/irq/32/smp_affinity_list: real affinity 0 May 10 00:46:31.742318 instance-setup[1457]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1. May 10 00:46:31.744983 instance-setup[1457]: INFO /proc/irq/33/smp_affinity_list: real affinity 1 May 10 00:46:31.745311 instance-setup[1457]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1. May 10 00:46:31.747402 systemd[1]: Started session-2.scope. May 10 00:46:31.747980 instance-setup[1457]: INFO /proc/irq/34/smp_affinity_list: real affinity 1 May 10 00:46:31.749149 systemd-logind[1318]: New session 2 of user core. May 10 00:46:31.769742 instance-setup[1457]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus May 10 00:46:31.770376 instance-setup[1457]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus May 10 00:46:31.810116 systemd-nspawn[1450]: + /usr/bin/google_metadata_script_runner --script-type startup May 10 00:46:31.956014 sshd[1470]: pam_unix(sshd:session): session closed for user core May 10 00:46:31.963112 systemd-logind[1318]: Session 2 logged out. Waiting for processes to exit. May 10 00:46:31.966466 systemd[1]: sshd@1-10.128.0.77:22-147.75.109.163:53464.service: Deactivated successfully. May 10 00:46:31.967715 systemd[1]: session-2.scope: Deactivated successfully. May 10 00:46:31.969907 systemd-logind[1318]: Removed session 2. May 10 00:46:32.003359 systemd[1]: Started sshd@2-10.128.0.77:22-147.75.109.163:53470.service. May 10 00:46:32.167567 startup-script[1504]: INFO Starting startup scripts. May 10 00:46:32.179407 startup-script[1504]: INFO No startup scripts found in metadata. May 10 00:46:32.179567 startup-script[1504]: INFO Finished running startup scripts. May 10 00:46:32.217437 systemd-nspawn[1450]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM May 10 00:46:32.217437 systemd-nspawn[1450]: + daemon_pids=() May 10 00:46:32.217437 systemd-nspawn[1450]: + for d in accounts clock_skew network May 10 00:46:32.217437 systemd-nspawn[1450]: + daemon_pids+=($!) May 10 00:46:32.217437 systemd-nspawn[1450]: + /usr/bin/google_accounts_daemon May 10 00:46:32.217437 systemd-nspawn[1450]: + for d in accounts clock_skew network May 10 00:46:32.218152 systemd-nspawn[1450]: + daemon_pids+=($!) May 10 00:46:32.218152 systemd-nspawn[1450]: + for d in accounts clock_skew network May 10 00:46:32.218152 systemd-nspawn[1450]: + daemon_pids+=($!) May 10 00:46:32.218152 systemd-nspawn[1450]: + NOTIFY_SOCKET=/run/systemd/notify May 10 00:46:32.218152 systemd-nspawn[1450]: + /usr/bin/systemd-notify --ready May 10 00:46:32.218704 systemd-nspawn[1450]: + /usr/bin/google_clock_skew_daemon May 10 00:46:32.218815 systemd-nspawn[1450]: + /usr/bin/google_network_daemon May 10 00:46:32.277655 systemd-nspawn[1450]: + wait -n 36 37 38 May 10 00:46:32.311437 sshd[1508]: Accepted publickey for core from 147.75.109.163 port 53470 ssh2: RSA SHA256:euqJNJC5P+Wq5j+dl78lhvyXKvYvUXQDTjxbGHC2Bdk May 10 00:46:32.313055 sshd[1508]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:46:32.321785 systemd[1]: Started session-3.scope. May 10 00:46:32.323879 systemd-logind[1318]: New session 3 of user core. May 10 00:46:32.526388 sshd[1508]: pam_unix(sshd:session): session closed for user core May 10 00:46:32.536173 systemd[1]: sshd@2-10.128.0.77:22-147.75.109.163:53470.service: Deactivated successfully. 
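google_set_multiqueue above pins each virtio-net queue's interrupt to one vCPU and sets an XPS mask per transmit queue. In proc/sysfs terms that boils down to writes like the following (the IRQ number, interface name, and mask values are the ones this particular boot reports):
  # pin IRQ 31 (virtio1 queue) to CPU 0, as logged above
  echo 0 > /proc/irq/31/smp_affinity_list
  # steer tx-0 to CPU 0 via XPS; mask 1 selects CPU 0, mask 2 selects CPU 1
  echo 1 > /sys/class/net/eth0/queues/tx-0/xps_cpus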
May 10 00:46:32.537960 systemd[1]: session-3.scope: Deactivated successfully. May 10 00:46:32.537960 systemd-logind[1318]: Session 3 logged out. Waiting for processes to exit. May 10 00:46:32.540142 systemd-logind[1318]: Removed session 3. May 10 00:46:32.569340 systemd[1]: Started sshd@3-10.128.0.77:22-147.75.109.163:53482.service. May 10 00:46:32.886051 sshd[1521]: Accepted publickey for core from 147.75.109.163 port 53482 ssh2: RSA SHA256:euqJNJC5P+Wq5j+dl78lhvyXKvYvUXQDTjxbGHC2Bdk May 10 00:46:32.886567 sshd[1521]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:46:32.896646 systemd-logind[1318]: New session 4 of user core. May 10 00:46:32.897592 systemd[1]: Started session-4.scope. May 10 00:46:32.953775 google-networking[1514]: INFO Starting Google Networking daemon. May 10 00:46:33.038841 groupadd[1532]: group added to /etc/group: name=google-sudoers, GID=1000 May 10 00:46:33.042220 groupadd[1532]: group added to /etc/gshadow: name=google-sudoers May 10 00:46:33.057627 groupadd[1532]: new group: name=google-sudoers, GID=1000 May 10 00:46:33.083368 google-accounts[1512]: INFO Starting Google Accounts daemon. May 10 00:46:33.108425 sshd[1521]: pam_unix(sshd:session): session closed for user core May 10 00:46:33.113591 systemd[1]: sshd@3-10.128.0.77:22-147.75.109.163:53482.service: Deactivated successfully. May 10 00:46:33.115178 systemd[1]: session-4.scope: Deactivated successfully. May 10 00:46:33.117654 systemd-logind[1318]: Session 4 logged out. Waiting for processes to exit. May 10 00:46:33.122150 systemd-logind[1318]: Removed session 4. May 10 00:46:33.125470 google-clock-skew[1513]: INFO Starting Google Clock Skew daemon. May 10 00:46:33.141972 google-clock-skew[1513]: INFO Clock drift token has changed: 0. May 10 00:46:33.145638 google-accounts[1512]: WARNING OS Login not installed. May 10 00:46:33.147129 google-accounts[1512]: INFO Creating a new user account for 0. May 10 00:46:33.149664 systemd-nspawn[1450]: hwclock: Cannot access the Hardware Clock via any known method. May 10 00:46:33.149664 systemd-nspawn[1450]: hwclock: Use the --verbose option to see the details of our search for an access method. May 10 00:46:33.153010 google-clock-skew[1513]: WARNING Failed to sync system time with hardware clock. May 10 00:46:33.154044 systemd[1]: Started sshd@4-10.128.0.77:22-147.75.109.163:53484.service. May 10 00:46:33.162432 systemd-nspawn[1450]: useradd: invalid user name '0': use --badname to ignore May 10 00:46:33.163738 google-accounts[1512]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. May 10 00:46:33.458313 sshd[1545]: Accepted publickey for core from 147.75.109.163 port 53484 ssh2: RSA SHA256:euqJNJC5P+Wq5j+dl78lhvyXKvYvUXQDTjxbGHC2Bdk May 10 00:46:33.459973 sshd[1545]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:46:33.467604 systemd[1]: Started session-5.scope. May 10 00:46:33.467966 systemd-logind[1318]: New session 5 of user core. May 10 00:46:33.664271 sudo[1550]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 10 00:46:33.664747 sudo[1550]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 10 00:46:33.703766 systemd[1]: Starting docker.service... 
May 10 00:46:33.763287 env[1560]: time="2025-05-10T00:46:33.763126666Z" level=info msg="Starting up" May 10 00:46:33.765870 env[1560]: time="2025-05-10T00:46:33.765837325Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 10 00:46:33.766041 env[1560]: time="2025-05-10T00:46:33.766022881Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 10 00:46:33.766737 env[1560]: time="2025-05-10T00:46:33.766691397Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 10 00:46:33.766737 env[1560]: time="2025-05-10T00:46:33.766731056Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 10 00:46:33.769310 env[1560]: time="2025-05-10T00:46:33.769263805Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 10 00:46:33.769310 env[1560]: time="2025-05-10T00:46:33.769286399Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 10 00:46:33.769501 env[1560]: time="2025-05-10T00:46:33.769328865Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 10 00:46:33.769501 env[1560]: time="2025-05-10T00:46:33.769343936Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 10 00:46:33.781228 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1179821778-merged.mount: Deactivated successfully. May 10 00:46:34.284666 env[1560]: time="2025-05-10T00:46:34.284569872Z" level=warning msg="Your kernel does not support cgroup blkio weight" May 10 00:46:34.284666 env[1560]: time="2025-05-10T00:46:34.284620839Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" May 10 00:46:34.285110 env[1560]: time="2025-05-10T00:46:34.285045725Z" level=info msg="Loading containers: start." May 10 00:46:34.464098 kernel: Initializing XFRM netlink socket May 10 00:46:34.510881 env[1560]: time="2025-05-10T00:46:34.510808147Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" May 10 00:46:34.597933 systemd-networkd[1075]: docker0: Link UP May 10 00:46:34.617113 env[1560]: time="2025-05-10T00:46:34.617033659Z" level=info msg="Loading containers: done." May 10 00:46:34.636946 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck481850552-merged.mount: Deactivated successfully. May 10 00:46:34.640701 env[1560]: time="2025-05-10T00:46:34.640636263Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 10 00:46:34.640995 env[1560]: time="2025-05-10T00:46:34.640953174Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 May 10 00:46:34.641176 env[1560]: time="2025-05-10T00:46:34.641133442Z" level=info msg="Daemon has completed initialization" May 10 00:46:34.662817 systemd[1]: Started docker.service. May 10 00:46:34.676913 env[1560]: time="2025-05-10T00:46:34.676820574Z" level=info msg="API listen on /run/docker.sock" May 10 00:46:35.865992 env[1332]: time="2025-05-10T00:46:35.865901960Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\"" May 10 00:46:36.292112 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 10 00:46:36.292455 systemd[1]: Stopped kubelet.service. 
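The docker daemon above assigns the default 172.17.0.0/16 to docker0 and points at --bip for overriding it. The equivalent persistent setting goes in /etc/docker/daemon.json; the subnet below is only an illustrative value:
  # persist a different docker0 bridge address (pick a range that does not collide locally)
  echo '{ "bip": "172.18.0.1/16" }' > /etc/docker/daemon.json
  systemctl restart docker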
May 10 00:46:36.295732 systemd[1]: Starting kubelet.service... May 10 00:46:36.494140 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2344833193.mount: Deactivated successfully. May 10 00:46:36.552993 systemd[1]: Started kubelet.service. May 10 00:46:36.647976 kubelet[1697]: E0510 00:46:36.647913 1697 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 10 00:46:36.653857 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 10 00:46:36.654203 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 10 00:46:38.450069 env[1332]: time="2025-05-10T00:46:38.449960685Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:46:38.452826 env[1332]: time="2025-05-10T00:46:38.452765302Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:46:38.455378 env[1332]: time="2025-05-10T00:46:38.455327684Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:46:38.457447 env[1332]: time="2025-05-10T00:46:38.457404105Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:46:38.458657 env[1332]: time="2025-05-10T00:46:38.458601595Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\"" May 10 00:46:38.475677 env[1332]: time="2025-05-10T00:46:38.475613786Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\"" May 10 00:46:40.315141 env[1332]: time="2025-05-10T00:46:40.315047960Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:46:40.317946 env[1332]: time="2025-05-10T00:46:40.317897923Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:46:40.320360 env[1332]: time="2025-05-10T00:46:40.320317508Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:46:40.322475 env[1332]: time="2025-05-10T00:46:40.322433024Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:46:40.323462 env[1332]: time="2025-05-10T00:46:40.323406601Z" level=info msg="PullImage 
\"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\"" May 10 00:46:40.339241 env[1332]: time="2025-05-10T00:46:40.339197909Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\"" May 10 00:46:41.579867 env[1332]: time="2025-05-10T00:46:41.579770495Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:46:41.582748 env[1332]: time="2025-05-10T00:46:41.582701653Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:46:41.585364 env[1332]: time="2025-05-10T00:46:41.585322699Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:46:41.588128 env[1332]: time="2025-05-10T00:46:41.588015649Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:46:41.591046 env[1332]: time="2025-05-10T00:46:41.590981316Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\"" May 10 00:46:41.607796 env[1332]: time="2025-05-10T00:46:41.607728531Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\"" May 10 00:46:42.680775 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount745903526.mount: Deactivated successfully. May 10 00:46:43.386567 env[1332]: time="2025-05-10T00:46:43.386486443Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:46:43.389313 env[1332]: time="2025-05-10T00:46:43.389253559Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:46:43.391398 env[1332]: time="2025-05-10T00:46:43.391351025Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:46:43.393042 env[1332]: time="2025-05-10T00:46:43.393004467Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:46:43.393639 env[1332]: time="2025-05-10T00:46:43.393591042Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\"" May 10 00:46:43.411669 env[1332]: time="2025-05-10T00:46:43.411606980Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 10 00:46:43.787656 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1731533007.mount: Deactivated successfully. 
May 10 00:46:44.952126 env[1332]: time="2025-05-10T00:46:44.952033171Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:46:44.954823 env[1332]: time="2025-05-10T00:46:44.954772399Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:46:44.957360 env[1332]: time="2025-05-10T00:46:44.957317134Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:46:44.959404 env[1332]: time="2025-05-10T00:46:44.959362683Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:46:44.960541 env[1332]: time="2025-05-10T00:46:44.960490689Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" May 10 00:46:44.975018 env[1332]: time="2025-05-10T00:46:44.974959792Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" May 10 00:46:45.341614 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1983069349.mount: Deactivated successfully. May 10 00:46:45.347508 env[1332]: time="2025-05-10T00:46:45.347445588Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:46:45.349677 env[1332]: time="2025-05-10T00:46:45.349629823Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:46:45.351774 env[1332]: time="2025-05-10T00:46:45.351732422Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:46:45.353993 env[1332]: time="2025-05-10T00:46:45.353953512Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:46:45.354783 env[1332]: time="2025-05-10T00:46:45.354736760Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" May 10 00:46:45.368469 env[1332]: time="2025-05-10T00:46:45.368412102Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" May 10 00:46:45.778395 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2961422818.mount: Deactivated successfully. May 10 00:46:46.791875 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 10 00:46:46.792208 systemd[1]: Stopped kubelet.service. May 10 00:46:46.794899 systemd[1]: Starting kubelet.service... May 10 00:46:47.041132 systemd[1]: Started kubelet.service. 
May 10 00:46:47.148225 kubelet[1744]: E0510 00:46:47.147756 1744 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 10 00:46:47.150370 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 10 00:46:47.150661 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 10 00:46:48.544737 env[1332]: time="2025-05-10T00:46:48.544644602Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:46:48.547804 env[1332]: time="2025-05-10T00:46:48.547751957Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:46:48.550385 env[1332]: time="2025-05-10T00:46:48.550341105Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:46:48.559705 env[1332]: time="2025-05-10T00:46:48.559625177Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" May 10 00:46:48.561470 env[1332]: time="2025-05-10T00:46:48.561426717Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:46:51.984360 systemd[1]: Stopped kubelet.service. May 10 00:46:51.987758 systemd[1]: Starting kubelet.service... May 10 00:46:52.021917 systemd[1]: Reloading. May 10 00:46:52.178434 /usr/lib/systemd/system-generators/torcx-generator[1841]: time="2025-05-10T00:46:52Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 10 00:46:52.178490 /usr/lib/systemd/system-generators/torcx-generator[1841]: time="2025-05-10T00:46:52Z" level=info msg="torcx already run" May 10 00:46:52.312748 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 10 00:46:52.312778 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 10 00:46:52.337049 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 10 00:46:52.472865 systemd[1]: Started kubelet.service. May 10 00:46:52.478254 systemd[1]: Stopping kubelet.service... May 10 00:46:52.483464 systemd[1]: kubelet.service: Deactivated successfully. May 10 00:46:52.483883 systemd[1]: Stopped kubelet.service. May 10 00:46:52.488877 systemd[1]: Starting kubelet.service... May 10 00:46:52.705581 systemd[1]: Started kubelet.service. 
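The repeated kubelet failures above all trace back to the missing /var/lib/kubelet/config.yaml, and systemd keeps restarting the unit until that file exists. On a kubeadm-provisioned node the file is normally written by kubeadm init or kubeadm join, which fits the bootstrap activity that follows. A quick way to watch for that, as a sketch:
  # the unit flaps until something (typically kubeadm init/join) writes this file
  ls -l /var/lib/kubelet/config.yaml
  # restart counter and last exit status of the flapping unit
  systemctl status kubelet --no-pager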
May 10 00:46:52.784909 kubelet[1902]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 10 00:46:52.784909 kubelet[1902]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 10 00:46:52.784909 kubelet[1902]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 10 00:46:52.787239 kubelet[1902]: I0510 00:46:52.787147 1902 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 10 00:46:53.258382 kubelet[1902]: I0510 00:46:53.258317 1902 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 10 00:46:53.258382 kubelet[1902]: I0510 00:46:53.258356 1902 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 10 00:46:53.259617 kubelet[1902]: I0510 00:46:53.259559 1902 server.go:927] "Client rotation is on, will bootstrap in background" May 10 00:46:53.284458 kubelet[1902]: I0510 00:46:53.283676 1902 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 10 00:46:53.285258 kubelet[1902]: E0510 00:46:53.285228 1902 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.128.0.77:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.128.0.77:6443: connect: connection refused May 10 00:46:53.299857 kubelet[1902]: I0510 00:46:53.299810 1902 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 10 00:46:53.302974 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
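The "connection refused" errors against https://10.128.0.77:6443 below are expected at this stage: the kubelet is up, but the kube-apiserver it talks to only exists as a static pod manifest that this same kubelet has yet to launch. Two rough checks during that window (the /healthz probe is a generic API-server endpoint, not something this log shows):
  # refused until the kube-apiserver static pod is running on this node
  curl -sk https://10.128.0.77:6443/healthz
  # static pod manifests the kubelet watches, per "Adding static pod path" above
  ls /etc/kubernetes/manifests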
May 10 00:46:53.307890 kubelet[1902]: I0510 00:46:53.307830 1902 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 10 00:46:53.308373 kubelet[1902]: I0510 00:46:53.308033 1902 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 10 00:46:53.308666 kubelet[1902]: I0510 00:46:53.308645 1902 topology_manager.go:138] "Creating topology manager with none policy" May 10 00:46:53.308789 kubelet[1902]: I0510 00:46:53.308774 1902 container_manager_linux.go:301] "Creating device plugin manager" May 10 00:46:53.309132 kubelet[1902]: I0510 00:46:53.309042 1902 state_mem.go:36] "Initialized new in-memory state store" May 10 00:46:53.310792 kubelet[1902]: I0510 00:46:53.310768 1902 kubelet.go:400] "Attempting to sync node with API server" May 10 00:46:53.310952 kubelet[1902]: I0510 00:46:53.310933 1902 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 10 00:46:53.311214 kubelet[1902]: I0510 00:46:53.311185 1902 kubelet.go:312] "Adding apiserver pod source" May 10 00:46:53.315328 kubelet[1902]: I0510 00:46:53.315302 1902 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 10 00:46:53.334241 kubelet[1902]: W0510 00:46:53.315216 1902 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.77:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403&limit=500&resourceVersion=0": dial tcp 10.128.0.77:6443: connect: connection refused May 10 00:46:53.334402 kubelet[1902]: E0510 00:46:53.334384 1902 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.128.0.77:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403&limit=500&resourceVersion=0": dial tcp 10.128.0.77:6443: connect: connection refused May 10 00:46:53.337428 
kubelet[1902]: W0510 00:46:53.337354 1902 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.77:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.77:6443: connect: connection refused May 10 00:46:53.337621 kubelet[1902]: E0510 00:46:53.337603 1902 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.128.0.77:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.77:6443: connect: connection refused May 10 00:46:53.337862 kubelet[1902]: I0510 00:46:53.337841 1902 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 10 00:46:53.346755 kubelet[1902]: I0510 00:46:53.346715 1902 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 10 00:46:53.346866 kubelet[1902]: W0510 00:46:53.346805 1902 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 10 00:46:53.347793 kubelet[1902]: I0510 00:46:53.347608 1902 server.go:1264] "Started kubelet" May 10 00:46:53.351975 kubelet[1902]: I0510 00:46:53.351929 1902 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 10 00:46:53.353376 kubelet[1902]: I0510 00:46:53.353321 1902 server.go:455] "Adding debug handlers to kubelet server" May 10 00:46:53.372383 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). May 10 00:46:53.372562 kubelet[1902]: I0510 00:46:53.372538 1902 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 10 00:46:53.372687 kubelet[1902]: I0510 00:46:53.363560 1902 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 10 00:46:53.373044 kubelet[1902]: I0510 00:46:53.373021 1902 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 10 00:46:53.375294 kubelet[1902]: E0510 00:46:53.375146 1902 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.77:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.77:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403.183e03eb01bb3621 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403,UID:ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403,},FirstTimestamp:2025-05-10 00:46:53.347575329 +0000 UTC m=+0.627829695,LastTimestamp:2025-05-10 00:46:53.347575329 +0000 UTC m=+0.627829695,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403,}" May 10 00:46:53.381037 kubelet[1902]: I0510 00:46:53.381008 1902 volume_manager.go:291] "Starting Kubelet Volume Manager" May 10 00:46:53.383995 kubelet[1902]: I0510 00:46:53.383962 1902 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 10 00:46:53.384120 kubelet[1902]: I0510 00:46:53.384097 1902 reconciler.go:26] "Reconciler: start to sync state" May 10 
00:46:53.385408 kubelet[1902]: W0510 00:46:53.385351 1902 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.77:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.77:6443: connect: connection refused May 10 00:46:53.385529 kubelet[1902]: E0510 00:46:53.385426 1902 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.128.0.77:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.77:6443: connect: connection refused May 10 00:46:53.386200 kubelet[1902]: E0510 00:46:53.384975 1902 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.77:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403?timeout=10s\": dial tcp 10.128.0.77:6443: connect: connection refused" interval="200ms" May 10 00:46:53.389724 kubelet[1902]: I0510 00:46:53.389695 1902 factory.go:221] Registration of the containerd container factory successfully May 10 00:46:53.389724 kubelet[1902]: I0510 00:46:53.389724 1902 factory.go:221] Registration of the systemd container factory successfully May 10 00:46:53.389881 kubelet[1902]: I0510 00:46:53.389831 1902 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 10 00:46:53.412820 kubelet[1902]: E0510 00:46:53.412765 1902 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 10 00:46:53.418357 kubelet[1902]: I0510 00:46:53.418296 1902 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 10 00:46:53.420221 kubelet[1902]: I0510 00:46:53.420171 1902 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 10 00:46:53.420221 kubelet[1902]: I0510 00:46:53.420201 1902 status_manager.go:217] "Starting to sync pod status with apiserver" May 10 00:46:53.420393 kubelet[1902]: I0510 00:46:53.420231 1902 kubelet.go:2337] "Starting kubelet main sync loop" May 10 00:46:53.420393 kubelet[1902]: E0510 00:46:53.420294 1902 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 10 00:46:53.431871 kubelet[1902]: W0510 00:46:53.431808 1902 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.77:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.77:6443: connect: connection refused May 10 00:46:53.431993 kubelet[1902]: E0510 00:46:53.431874 1902 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.128.0.77:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.77:6443: connect: connection refused May 10 00:46:53.441022 kubelet[1902]: I0510 00:46:53.440986 1902 cpu_manager.go:214] "Starting CPU manager" policy="none" May 10 00:46:53.441022 kubelet[1902]: I0510 00:46:53.441007 1902 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 10 00:46:53.441191 kubelet[1902]: I0510 00:46:53.441032 1902 state_mem.go:36] "Initialized new in-memory state store" May 10 00:46:53.444029 kubelet[1902]: I0510 00:46:53.443996 1902 policy_none.go:49] "None policy: Start" May 10 00:46:53.444921 kubelet[1902]: I0510 00:46:53.444879 1902 memory_manager.go:170] "Starting memorymanager" policy="None" May 10 00:46:53.444921 kubelet[1902]: I0510 00:46:53.444916 1902 state_mem.go:35] "Initializing new in-memory state store" May 10 00:46:53.450747 kubelet[1902]: I0510 00:46:53.450704 1902 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 10 00:46:53.450947 kubelet[1902]: I0510 00:46:53.450894 1902 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 10 00:46:53.451087 kubelet[1902]: I0510 00:46:53.451051 1902 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 10 00:46:53.453683 kubelet[1902]: E0510 00:46:53.453650 1902 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403\" not found" May 10 00:46:53.487919 kubelet[1902]: I0510 00:46:53.487873 1902 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403" May 10 00:46:53.488313 kubelet[1902]: E0510 00:46:53.488278 1902 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.77:6443/api/v1/nodes\": dial tcp 10.128.0.77:6443: connect: connection refused" node="ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403" May 10 00:46:53.520799 kubelet[1902]: I0510 00:46:53.520421 1902 topology_manager.go:215] "Topology Admit Handler" podUID="0543b449ae7f45551c54f33014105b37" podNamespace="kube-system" podName="kube-apiserver-ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403" May 10 00:46:53.525787 kubelet[1902]: I0510 00:46:53.525746 1902 topology_manager.go:215] "Topology Admit Handler" podUID="c6f83aafb2c99bc5641fe9cca1f6bef0" podNamespace="kube-system" 
podName="kube-controller-manager-ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403" May 10 00:46:53.530134 kubelet[1902]: I0510 00:46:53.530102 1902 topology_manager.go:215] "Topology Admit Handler" podUID="7c4900b0a9dbca89ee62cac9266527ec" podNamespace="kube-system" podName="kube-scheduler-ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403" May 10 00:46:53.584551 kubelet[1902]: I0510 00:46:53.584485 1902 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0543b449ae7f45551c54f33014105b37-k8s-certs\") pod \"kube-apiserver-ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403\" (UID: \"0543b449ae7f45551c54f33014105b37\") " pod="kube-system/kube-apiserver-ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403" May 10 00:46:53.584551 kubelet[1902]: I0510 00:46:53.584547 1902 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0543b449ae7f45551c54f33014105b37-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403\" (UID: \"0543b449ae7f45551c54f33014105b37\") " pod="kube-system/kube-apiserver-ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403" May 10 00:46:53.584832 kubelet[1902]: I0510 00:46:53.584582 1902 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c6f83aafb2c99bc5641fe9cca1f6bef0-k8s-certs\") pod \"kube-controller-manager-ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403\" (UID: \"c6f83aafb2c99bc5641fe9cca1f6bef0\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403" May 10 00:46:53.584832 kubelet[1902]: I0510 00:46:53.584616 1902 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7c4900b0a9dbca89ee62cac9266527ec-kubeconfig\") pod \"kube-scheduler-ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403\" (UID: \"7c4900b0a9dbca89ee62cac9266527ec\") " pod="kube-system/kube-scheduler-ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403" May 10 00:46:53.584832 kubelet[1902]: I0510 00:46:53.584641 1902 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0543b449ae7f45551c54f33014105b37-ca-certs\") pod \"kube-apiserver-ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403\" (UID: \"0543b449ae7f45551c54f33014105b37\") " pod="kube-system/kube-apiserver-ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403" May 10 00:46:53.584832 kubelet[1902]: I0510 00:46:53.584667 1902 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c6f83aafb2c99bc5641fe9cca1f6bef0-flexvolume-dir\") pod \"kube-controller-manager-ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403\" (UID: \"c6f83aafb2c99bc5641fe9cca1f6bef0\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403" May 10 00:46:53.585008 kubelet[1902]: I0510 00:46:53.584693 1902 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c6f83aafb2c99bc5641fe9cca1f6bef0-kubeconfig\") pod 
\"kube-controller-manager-ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403\" (UID: \"c6f83aafb2c99bc5641fe9cca1f6bef0\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403" May 10 00:46:53.585008 kubelet[1902]: I0510 00:46:53.584721 1902 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c6f83aafb2c99bc5641fe9cca1f6bef0-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403\" (UID: \"c6f83aafb2c99bc5641fe9cca1f6bef0\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403" May 10 00:46:53.585008 kubelet[1902]: I0510 00:46:53.584753 1902 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c6f83aafb2c99bc5641fe9cca1f6bef0-ca-certs\") pod \"kube-controller-manager-ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403\" (UID: \"c6f83aafb2c99bc5641fe9cca1f6bef0\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403" May 10 00:46:53.587737 kubelet[1902]: E0510 00:46:53.587671 1902 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.77:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403?timeout=10s\": dial tcp 10.128.0.77:6443: connect: connection refused" interval="400ms" May 10 00:46:53.692780 kubelet[1902]: I0510 00:46:53.692728 1902 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403" May 10 00:46:53.693191 kubelet[1902]: E0510 00:46:53.693143 1902 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.77:6443/api/v1/nodes\": dial tcp 10.128.0.77:6443: connect: connection refused" node="ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403" May 10 00:46:53.834241 env[1332]: time="2025-05-10T00:46:53.834175488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403,Uid:0543b449ae7f45551c54f33014105b37,Namespace:kube-system,Attempt:0,}" May 10 00:46:53.838833 env[1332]: time="2025-05-10T00:46:53.838783807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403,Uid:c6f83aafb2c99bc5641fe9cca1f6bef0,Namespace:kube-system,Attempt:0,}" May 10 00:46:53.844276 env[1332]: time="2025-05-10T00:46:53.844216433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403,Uid:7c4900b0a9dbca89ee62cac9266527ec,Namespace:kube-system,Attempt:0,}" May 10 00:46:53.989440 kubelet[1902]: E0510 00:46:53.989370 1902 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.77:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403?timeout=10s\": dial tcp 10.128.0.77:6443: connect: connection refused" interval="800ms" May 10 00:46:54.098601 kubelet[1902]: I0510 00:46:54.098460 1902 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403" May 10 00:46:54.099248 kubelet[1902]: E0510 00:46:54.099184 1902 
kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.77:6443/api/v1/nodes\": dial tcp 10.128.0.77:6443: connect: connection refused" node="ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403" May 10 00:46:54.217330 kubelet[1902]: W0510 00:46:54.217242 1902 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.77:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403&limit=500&resourceVersion=0": dial tcp 10.128.0.77:6443: connect: connection refused May 10 00:46:54.217330 kubelet[1902]: E0510 00:46:54.217331 1902 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.128.0.77:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403&limit=500&resourceVersion=0": dial tcp 10.128.0.77:6443: connect: connection refused May 10 00:46:54.267874 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2269488195.mount: Deactivated successfully. May 10 00:46:54.279326 env[1332]: time="2025-05-10T00:46:54.279277125Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:46:54.284916 env[1332]: time="2025-05-10T00:46:54.284869047Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:46:54.286404 env[1332]: time="2025-05-10T00:46:54.286367044Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:46:54.288039 env[1332]: time="2025-05-10T00:46:54.287995233Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:46:54.291130 env[1332]: time="2025-05-10T00:46:54.291096910Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:46:54.293580 env[1332]: time="2025-05-10T00:46:54.293546147Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:46:54.296081 env[1332]: time="2025-05-10T00:46:54.296013779Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:46:54.298150 env[1332]: time="2025-05-10T00:46:54.298103963Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:46:54.299414 env[1332]: time="2025-05-10T00:46:54.299377071Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:46:54.303194 env[1332]: time="2025-05-10T00:46:54.303145837Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:46:54.307078 env[1332]: time="2025-05-10T00:46:54.307023264Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:46:54.312816 env[1332]: time="2025-05-10T00:46:54.312760482Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:46:54.362385 env[1332]: time="2025-05-10T00:46:54.361460642Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:46:54.362678 env[1332]: time="2025-05-10T00:46:54.362621507Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:46:54.362867 env[1332]: time="2025-05-10T00:46:54.362817681Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:46:54.363341 env[1332]: time="2025-05-10T00:46:54.363278601Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e66bb89550c46fb1b96f55fe846ba13dddfca5e430e8763787aaa73cf02337e2 pid=1952 runtime=io.containerd.runc.v2 May 10 00:46:54.364154 env[1332]: time="2025-05-10T00:46:54.364067066Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:46:54.364255 env[1332]: time="2025-05-10T00:46:54.364180452Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:46:54.364318 env[1332]: time="2025-05-10T00:46:54.364225982Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:46:54.364538 env[1332]: time="2025-05-10T00:46:54.364488213Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3aac689c670e6a73025174d41c3e26f97eaa9e584beba8c997b51db977c88101 pid=1951 runtime=io.containerd.runc.v2 May 10 00:46:54.378503 kubelet[1902]: W0510 00:46:54.378402 1902 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.77:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.77:6443: connect: connection refused May 10 00:46:54.378503 kubelet[1902]: E0510 00:46:54.378460 1902 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.128.0.77:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.77:6443: connect: connection refused May 10 00:46:54.384412 env[1332]: time="2025-05-10T00:46:54.384309018Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:46:54.384595 env[1332]: time="2025-05-10T00:46:54.384443294Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:46:54.384595 env[1332]: time="2025-05-10T00:46:54.384487749Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:46:54.385844 env[1332]: time="2025-05-10T00:46:54.385775376Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bd53f27a7ef60c1685174a2dbe33e5fd69b9c414f223c11764c82941864515ab pid=1982 runtime=io.containerd.runc.v2 May 10 00:46:54.498562 kubelet[1902]: W0510 00:46:54.498401 1902 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.77:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.77:6443: connect: connection refused May 10 00:46:54.498562 kubelet[1902]: E0510 00:46:54.498514 1902 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.128.0.77:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.77:6443: connect: connection refused May 10 00:46:54.518120 env[1332]: time="2025-05-10T00:46:54.518052009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403,Uid:c6f83aafb2c99bc5641fe9cca1f6bef0,Namespace:kube-system,Attempt:0,} returns sandbox id \"e66bb89550c46fb1b96f55fe846ba13dddfca5e430e8763787aaa73cf02337e2\"" May 10 00:46:54.521425 kubelet[1902]: E0510 00:46:54.520916 1902 kubelet_pods.go:513] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-3510-3-7-nightly-20250509-2100-c62e4" May 10 00:46:54.524558 env[1332]: time="2025-05-10T00:46:54.524510261Z" level=info msg="CreateContainer within sandbox \"e66bb89550c46fb1b96f55fe846ba13dddfca5e430e8763787aaa73cf02337e2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 10 00:46:54.535557 env[1332]: time="2025-05-10T00:46:54.535386049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403,Uid:7c4900b0a9dbca89ee62cac9266527ec,Namespace:kube-system,Attempt:0,} returns sandbox id \"3aac689c670e6a73025174d41c3e26f97eaa9e584beba8c997b51db977c88101\"" May 10 00:46:54.537764 kubelet[1902]: E0510 00:46:54.537721 1902 kubelet_pods.go:513] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6" May 10 00:46:54.539323 env[1332]: time="2025-05-10T00:46:54.539282908Z" level=info msg="CreateContainer within sandbox \"3aac689c670e6a73025174d41c3e26f97eaa9e584beba8c997b51db977c88101\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 10 00:46:54.549204 env[1332]: time="2025-05-10T00:46:54.549147351Z" level=info msg="CreateContainer within sandbox \"e66bb89550c46fb1b96f55fe846ba13dddfca5e430e8763787aaa73cf02337e2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"70c8ab6fcd641f7413ebe5bd53a4c216148ac53803e079ce55d5e9e0b078b165\"" May 10 00:46:54.550321 env[1332]: time="2025-05-10T00:46:54.550283312Z" level=info msg="StartContainer for \"70c8ab6fcd641f7413ebe5bd53a4c216148ac53803e079ce55d5e9e0b078b165\"" May 
10 00:46:54.564565 env[1332]: time="2025-05-10T00:46:54.564507167Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403,Uid:0543b449ae7f45551c54f33014105b37,Namespace:kube-system,Attempt:0,} returns sandbox id \"bd53f27a7ef60c1685174a2dbe33e5fd69b9c414f223c11764c82941864515ab\"" May 10 00:46:54.567233 kubelet[1902]: E0510 00:46:54.566832 1902 kubelet_pods.go:513] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6" May 10 00:46:54.569373 env[1332]: time="2025-05-10T00:46:54.569322445Z" level=info msg="CreateContainer within sandbox \"bd53f27a7ef60c1685174a2dbe33e5fd69b9c414f223c11764c82941864515ab\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 10 00:46:54.574552 env[1332]: time="2025-05-10T00:46:54.574503008Z" level=info msg="CreateContainer within sandbox \"3aac689c670e6a73025174d41c3e26f97eaa9e584beba8c997b51db977c88101\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f37355118fe06919869e090c2f1d2a59ce8ef0f969d6e348594cedbeb67d2b69\"" May 10 00:46:54.576197 env[1332]: time="2025-05-10T00:46:54.575974464Z" level=info msg="StartContainer for \"f37355118fe06919869e090c2f1d2a59ce8ef0f969d6e348594cedbeb67d2b69\"" May 10 00:46:54.591813 env[1332]: time="2025-05-10T00:46:54.591074798Z" level=info msg="CreateContainer within sandbox \"bd53f27a7ef60c1685174a2dbe33e5fd69b9c414f223c11764c82941864515ab\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f88ab3761c9522bbd40cf48c459503d0d259ea2297e2a9531f2cd1fdaa005799\"" May 10 00:46:54.596097 env[1332]: time="2025-05-10T00:46:54.595713639Z" level=info msg="StartContainer for \"f88ab3761c9522bbd40cf48c459503d0d259ea2297e2a9531f2cd1fdaa005799\"" May 10 00:46:54.657564 kubelet[1902]: W0510 00:46:54.657404 1902 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.77:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.77:6443: connect: connection refused May 10 00:46:54.657564 kubelet[1902]: E0510 00:46:54.657486 1902 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.128.0.77:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.77:6443: connect: connection refused May 10 00:46:54.711932 env[1332]: time="2025-05-10T00:46:54.711873293Z" level=info msg="StartContainer for \"70c8ab6fcd641f7413ebe5bd53a4c216148ac53803e079ce55d5e9e0b078b165\" returns successfully" May 10 00:46:54.761588 env[1332]: time="2025-05-10T00:46:54.761527222Z" level=info msg="StartContainer for \"f88ab3761c9522bbd40cf48c459503d0d259ea2297e2a9531f2cd1fdaa005799\" returns successfully" May 10 00:46:54.790287 kubelet[1902]: E0510 00:46:54.790230 1902 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.77:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403?timeout=10s\": dial tcp 10.128.0.77:6443: connect: connection refused" interval="1.6s" May 10 00:46:54.835575 env[1332]: time="2025-05-10T00:46:54.835505378Z" level=info msg="StartContainer for \"f37355118fe06919869e090c2f1d2a59ce8ef0f969d6e348594cedbeb67d2b69\" returns 
successfully" May 10 00:46:54.906388 kubelet[1902]: I0510 00:46:54.906345 1902 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403" May 10 00:46:58.888560 kubelet[1902]: E0510 00:46:58.888439 1902 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403\" not found" node="ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403" May 10 00:46:59.012159 kubelet[1902]: I0510 00:46:59.012076 1902 kubelet_node_status.go:76] "Successfully registered node" node="ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403" May 10 00:46:59.340507 kubelet[1902]: I0510 00:46:59.340447 1902 apiserver.go:52] "Watching apiserver" May 10 00:46:59.384430 kubelet[1902]: I0510 00:46:59.384371 1902 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 10 00:47:01.182499 systemd[1]: Reloading. May 10 00:47:01.269003 /usr/lib/systemd/system-generators/torcx-generator[2194]: time="2025-05-10T00:47:01Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 10 00:47:01.269783 /usr/lib/systemd/system-generators/torcx-generator[2194]: time="2025-05-10T00:47:01Z" level=info msg="torcx already run" May 10 00:47:01.425291 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 10 00:47:01.425319 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 10 00:47:01.449753 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 10 00:47:01.586091 systemd[1]: Stopping kubelet.service... May 10 00:47:01.586619 kubelet[1902]: E0510 00:47:01.585988 1902 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403.183e03eb01bb3621 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403,UID:ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403,},FirstTimestamp:2025-05-10 00:46:53.347575329 +0000 UTC m=+0.627829695,LastTimestamp:2025-05-10 00:46:53.347575329 +0000 UTC m=+0.627829695,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403,}" May 10 00:47:01.587399 kubelet[1902]: I0510 00:47:01.587280 1902 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 10 00:47:01.603037 systemd[1]: kubelet.service: Deactivated successfully. May 10 00:47:01.603830 systemd[1]: Stopped kubelet.service. May 10 00:47:01.608577 systemd[1]: Starting kubelet.service... 
May 10 00:47:01.871945 systemd[1]: Started kubelet.service. May 10 00:47:01.975467 kubelet[2253]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 10 00:47:01.976052 kubelet[2253]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 10 00:47:01.976345 kubelet[2253]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 10 00:47:01.976593 kubelet[2253]: I0510 00:47:01.976546 2253 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 10 00:47:01.983175 kubelet[2253]: I0510 00:47:01.983110 2253 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 10 00:47:01.983355 kubelet[2253]: I0510 00:47:01.983338 2253 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 10 00:47:01.983631 kubelet[2253]: I0510 00:47:01.983616 2253 server.go:927] "Client rotation is on, will bootstrap in background" May 10 00:47:01.985768 kubelet[2253]: I0510 00:47:01.985744 2253 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 10 00:47:01.987626 kubelet[2253]: I0510 00:47:01.987598 2253 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 10 00:47:01.996993 kubelet[2253]: I0510 00:47:01.996941 2253 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 10 00:47:01.997648 kubelet[2253]: I0510 00:47:01.997596 2253 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 10 00:47:01.997866 kubelet[2253]: I0510 00:47:01.997632 2253 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 10 00:47:01.998108 kubelet[2253]: I0510 00:47:01.997878 2253 topology_manager.go:138] "Creating topology manager with none policy" May 10 00:47:01.998108 kubelet[2253]: I0510 00:47:01.997897 2253 container_manager_linux.go:301] "Creating device plugin manager" May 10 00:47:01.998108 kubelet[2253]: I0510 00:47:01.997958 2253 state_mem.go:36] "Initialized new in-memory state store" May 10 00:47:01.998304 kubelet[2253]: I0510 00:47:01.998129 2253 kubelet.go:400] "Attempting to sync node with API server" May 10 00:47:01.998304 kubelet[2253]: I0510 00:47:01.998149 2253 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 10 00:47:01.998304 kubelet[2253]: I0510 00:47:01.998183 2253 kubelet.go:312] "Adding apiserver pod source" May 10 00:47:01.998304 kubelet[2253]: I0510 00:47:01.998208 2253 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 10 00:47:02.008519 kubelet[2253]: I0510 00:47:02.008487 2253 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 10 00:47:02.009396 kubelet[2253]: I0510 00:47:02.009376 2253 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 10 00:47:02.017269 kubelet[2253]: I0510 00:47:02.017237 2253 server.go:1264] "Started kubelet" May 10 00:47:02.022974 kubelet[2253]: I0510 00:47:02.022937 2253 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 10 00:47:02.025608 kubelet[2253]: I0510 00:47:02.025565 2253 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 10 00:47:02.027332 kubelet[2253]: I0510 00:47:02.027305 
2253 server.go:455] "Adding debug handlers to kubelet server" May 10 00:47:02.027962 kubelet[2253]: I0510 00:47:02.027934 2253 volume_manager.go:291] "Starting Kubelet Volume Manager" May 10 00:47:02.031987 kubelet[2253]: I0510 00:47:02.031955 2253 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 10 00:47:02.032236 kubelet[2253]: I0510 00:47:02.032213 2253 reconciler.go:26] "Reconciler: start to sync state" May 10 00:47:02.033582 kubelet[2253]: I0510 00:47:02.032532 2253 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 10 00:47:02.033582 kubelet[2253]: I0510 00:47:02.032825 2253 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 10 00:47:02.034383 kubelet[2253]: I0510 00:47:02.034354 2253 factory.go:221] Registration of the systemd container factory successfully May 10 00:47:02.034519 kubelet[2253]: I0510 00:47:02.034489 2253 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 10 00:47:02.042032 kubelet[2253]: I0510 00:47:02.041999 2253 factory.go:221] Registration of the containerd container factory successfully May 10 00:47:02.047548 kubelet[2253]: I0510 00:47:02.047465 2253 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 10 00:47:02.054421 kubelet[2253]: I0510 00:47:02.054387 2253 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 10 00:47:02.054632 kubelet[2253]: I0510 00:47:02.054617 2253 status_manager.go:217] "Starting to sync pod status with apiserver" May 10 00:47:02.054725 kubelet[2253]: I0510 00:47:02.054713 2253 kubelet.go:2337] "Starting kubelet main sync loop" May 10 00:47:02.054847 kubelet[2253]: E0510 00:47:02.054828 2253 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 10 00:47:02.136685 kubelet[2253]: I0510 00:47:02.136548 2253 cpu_manager.go:214] "Starting CPU manager" policy="none" May 10 00:47:02.136685 kubelet[2253]: I0510 00:47:02.136576 2253 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 10 00:47:02.136685 kubelet[2253]: I0510 00:47:02.136606 2253 state_mem.go:36] "Initialized new in-memory state store" May 10 00:47:02.138735 kubelet[2253]: I0510 00:47:02.138686 2253 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 10 00:47:02.138735 kubelet[2253]: I0510 00:47:02.138710 2253 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 10 00:47:02.138735 kubelet[2253]: I0510 00:47:02.138742 2253 policy_none.go:49] "None policy: Start" May 10 00:47:02.140026 kubelet[2253]: I0510 00:47:02.140003 2253 memory_manager.go:170] "Starting memorymanager" policy="None" May 10 00:47:02.140226 kubelet[2253]: I0510 00:47:02.140209 2253 state_mem.go:35] "Initializing new in-memory state store" May 10 00:47:02.140497 kubelet[2253]: I0510 00:47:02.140472 2253 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403" May 10 00:47:02.140740 kubelet[2253]: I0510 00:47:02.140719 2253 state_mem.go:75] "Updated machine memory state" May 10 00:47:02.142746 kubelet[2253]: I0510 00:47:02.142718 2253 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 10 
00:47:02.143251 kubelet[2253]: I0510 00:47:02.143204 2253 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 10 00:47:02.143494 kubelet[2253]: I0510 00:47:02.143476 2253 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 10 00:47:02.151466 kubelet[2253]: I0510 00:47:02.151443 2253 kubelet_node_status.go:112] "Node was previously registered" node="ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403" May 10 00:47:02.151735 kubelet[2253]: I0510 00:47:02.151715 2253 kubelet_node_status.go:76] "Successfully registered node" node="ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403" May 10 00:47:02.155321 kubelet[2253]: I0510 00:47:02.155283 2253 topology_manager.go:215] "Topology Admit Handler" podUID="7c4900b0a9dbca89ee62cac9266527ec" podNamespace="kube-system" podName="kube-scheduler-ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403" May 10 00:47:02.161463 kubelet[2253]: I0510 00:47:02.161436 2253 topology_manager.go:215] "Topology Admit Handler" podUID="0543b449ae7f45551c54f33014105b37" podNamespace="kube-system" podName="kube-apiserver-ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403" May 10 00:47:02.167926 kubelet[2253]: I0510 00:47:02.167893 2253 topology_manager.go:215] "Topology Admit Handler" podUID="c6f83aafb2c99bc5641fe9cca1f6bef0" podNamespace="kube-system" podName="kube-controller-manager-ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403" May 10 00:47:02.187497 kubelet[2253]: W0510 00:47:02.186867 2253 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters] May 10 00:47:02.187497 kubelet[2253]: W0510 00:47:02.187392 2253 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters] May 10 00:47:02.191914 kubelet[2253]: W0510 00:47:02.188416 2253 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters] May 10 00:47:02.204626 sudo[2284]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 10 00:47:02.205084 sudo[2284]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) May 10 00:47:02.332793 kubelet[2253]: I0510 00:47:02.332739 2253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0543b449ae7f45551c54f33014105b37-ca-certs\") pod \"kube-apiserver-ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403\" (UID: \"0543b449ae7f45551c54f33014105b37\") " pod="kube-system/kube-apiserver-ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403" May 10 00:47:02.333019 kubelet[2253]: I0510 00:47:02.332802 2253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0543b449ae7f45551c54f33014105b37-k8s-certs\") pod \"kube-apiserver-ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403\" (UID: \"0543b449ae7f45551c54f33014105b37\") " pod="kube-system/kube-apiserver-ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403" May 10 00:47:02.333019 kubelet[2253]: I0510 00:47:02.332841 2253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/0543b449ae7f45551c54f33014105b37-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403\" (UID: \"0543b449ae7f45551c54f33014105b37\") " pod="kube-system/kube-apiserver-ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403" May 10 00:47:02.333019 kubelet[2253]: I0510 00:47:02.332872 2253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c6f83aafb2c99bc5641fe9cca1f6bef0-flexvolume-dir\") pod \"kube-controller-manager-ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403\" (UID: \"c6f83aafb2c99bc5641fe9cca1f6bef0\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403" May 10 00:47:02.333019 kubelet[2253]: I0510 00:47:02.332899 2253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c6f83aafb2c99bc5641fe9cca1f6bef0-kubeconfig\") pod \"kube-controller-manager-ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403\" (UID: \"c6f83aafb2c99bc5641fe9cca1f6bef0\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403" May 10 00:47:02.333309 kubelet[2253]: I0510 00:47:02.332926 2253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7c4900b0a9dbca89ee62cac9266527ec-kubeconfig\") pod \"kube-scheduler-ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403\" (UID: \"7c4900b0a9dbca89ee62cac9266527ec\") " pod="kube-system/kube-scheduler-ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403" May 10 00:47:02.333309 kubelet[2253]: I0510 00:47:02.332951 2253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c6f83aafb2c99bc5641fe9cca1f6bef0-ca-certs\") pod \"kube-controller-manager-ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403\" (UID: \"c6f83aafb2c99bc5641fe9cca1f6bef0\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403" May 10 00:47:02.333309 kubelet[2253]: I0510 00:47:02.332978 2253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c6f83aafb2c99bc5641fe9cca1f6bef0-k8s-certs\") pod \"kube-controller-manager-ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403\" (UID: \"c6f83aafb2c99bc5641fe9cca1f6bef0\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403" May 10 00:47:02.333309 kubelet[2253]: I0510 00:47:02.333007 2253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c6f83aafb2c99bc5641fe9cca1f6bef0-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403\" (UID: \"c6f83aafb2c99bc5641fe9cca1f6bef0\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403" May 10 00:47:02.932631 sudo[2284]: pam_unix(sudo:session): session closed for user root May 10 00:47:03.007103 kubelet[2253]: I0510 00:47:03.007030 2253 apiserver.go:52] "Watching apiserver" May 10 00:47:03.032805 kubelet[2253]: I0510 00:47:03.032754 2253 
desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 10 00:47:03.138688 kubelet[2253]: I0510 00:47:03.138579 2253 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403" podStartSLOduration=1.138551998 podStartE2EDuration="1.138551998s" podCreationTimestamp="2025-05-10 00:47:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:47:03.125538077 +0000 UTC m=+1.232856398" watchObservedRunningTime="2025-05-10 00:47:03.138551998 +0000 UTC m=+1.245870311" May 10 00:47:03.155948 kubelet[2253]: I0510 00:47:03.155870 2253 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403" podStartSLOduration=1.155818612 podStartE2EDuration="1.155818612s" podCreationTimestamp="2025-05-10 00:47:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:47:03.153782092 +0000 UTC m=+1.261100414" watchObservedRunningTime="2025-05-10 00:47:03.155818612 +0000 UTC m=+1.263136924" May 10 00:47:03.156430 kubelet[2253]: I0510 00:47:03.156374 2253 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403" podStartSLOduration=1.156360561 podStartE2EDuration="1.156360561s" podCreationTimestamp="2025-05-10 00:47:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:47:03.141158561 +0000 UTC m=+1.248476874" watchObservedRunningTime="2025-05-10 00:47:03.156360561 +0000 UTC m=+1.263678883" May 10 00:47:04.727149 sudo[1550]: pam_unix(sudo:session): session closed for user root May 10 00:47:04.771916 sshd[1545]: pam_unix(sshd:session): session closed for user core May 10 00:47:04.778275 systemd[1]: sshd@4-10.128.0.77:22-147.75.109.163:53484.service: Deactivated successfully. May 10 00:47:04.779804 systemd[1]: session-5.scope: Deactivated successfully. May 10 00:47:04.780345 systemd-logind[1318]: Session 5 logged out. Waiting for processes to exit. May 10 00:47:04.782726 systemd-logind[1318]: Removed session 5. May 10 00:47:07.758873 update_engine[1319]: I0510 00:47:07.758785 1319 update_attempter.cc:509] Updating boot flags... May 10 00:47:16.852464 kubelet[2253]: I0510 00:47:16.852405 2253 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 10 00:47:16.853362 env[1332]: time="2025-05-10T00:47:16.853027206Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
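The pod_startup_latency_tracker entries above report podStartE2EDuration as the gap between podCreationTimestamp and the watch-observed running time; for the kube-scheduler static pod that is 00:47:02 → 00:47:03.138551998, i.e. the 1.138551998s figure in the log. The snippet below only reproduces that subtraction with the values copied from the log (truncated to microseconds, the finest resolution Python's datetime offers); it is a worked example of the arithmetic, not kubelet code.

```python
from datetime import datetime, timezone

# Values copied from the kube-scheduler pod_startup_latency_tracker entry above,
# with nanoseconds truncated to microseconds.
created  = datetime(2025, 5, 10, 0, 47, 2, tzinfo=timezone.utc)          # podCreationTimestamp
observed = datetime(2025, 5, 10, 0, 47, 3, 138551, tzinfo=timezone.utc)  # watchObservedRunningTime

print((observed - created).total_seconds())
# 1.138551 -- the log reports podStartE2EDuration="1.138551998s"
```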
May 10 00:47:16.853901 kubelet[2253]: I0510 00:47:16.853378 2253 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 10 00:47:17.727175 kubelet[2253]: I0510 00:47:17.727114 2253 topology_manager.go:215] "Topology Admit Handler" podUID="dd5b0f50-4e7f-4b17-ba1b-034deaf7a31b" podNamespace="kube-system" podName="kube-proxy-xsvpn" May 10 00:47:17.735998 kubelet[2253]: I0510 00:47:17.735945 2253 topology_manager.go:215] "Topology Admit Handler" podUID="3fe866e9-5df7-4c04-9eac-5731ca781012" podNamespace="kube-system" podName="cilium-cnn5n" May 10 00:47:17.828158 kubelet[2253]: I0510 00:47:17.828087 2253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3fe866e9-5df7-4c04-9eac-5731ca781012-host-proc-sys-kernel\") pod \"cilium-cnn5n\" (UID: \"3fe866e9-5df7-4c04-9eac-5731ca781012\") " pod="kube-system/cilium-cnn5n" May 10 00:47:17.828550 kubelet[2253]: I0510 00:47:17.828509 2253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/dd5b0f50-4e7f-4b17-ba1b-034deaf7a31b-kube-proxy\") pod \"kube-proxy-xsvpn\" (UID: \"dd5b0f50-4e7f-4b17-ba1b-034deaf7a31b\") " pod="kube-system/kube-proxy-xsvpn" May 10 00:47:17.828715 kubelet[2253]: I0510 00:47:17.828695 2253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3fe866e9-5df7-4c04-9eac-5731ca781012-hostproc\") pod \"cilium-cnn5n\" (UID: \"3fe866e9-5df7-4c04-9eac-5731ca781012\") " pod="kube-system/cilium-cnn5n" May 10 00:47:17.829524 kubelet[2253]: I0510 00:47:17.829478 2253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3fe866e9-5df7-4c04-9eac-5731ca781012-host-proc-sys-net\") pod \"cilium-cnn5n\" (UID: \"3fe866e9-5df7-4c04-9eac-5731ca781012\") " pod="kube-system/cilium-cnn5n" May 10 00:47:17.829677 kubelet[2253]: I0510 00:47:17.829529 2253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3fe866e9-5df7-4c04-9eac-5731ca781012-cilium-cgroup\") pod \"cilium-cnn5n\" (UID: \"3fe866e9-5df7-4c04-9eac-5731ca781012\") " pod="kube-system/cilium-cnn5n" May 10 00:47:17.829677 kubelet[2253]: I0510 00:47:17.829563 2253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfh2r\" (UniqueName: \"kubernetes.io/projected/3fe866e9-5df7-4c04-9eac-5731ca781012-kube-api-access-pfh2r\") pod \"cilium-cnn5n\" (UID: \"3fe866e9-5df7-4c04-9eac-5731ca781012\") " pod="kube-system/cilium-cnn5n" May 10 00:47:17.829677 kubelet[2253]: I0510 00:47:17.829595 2253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4vr8\" (UniqueName: \"kubernetes.io/projected/dd5b0f50-4e7f-4b17-ba1b-034deaf7a31b-kube-api-access-d4vr8\") pod \"kube-proxy-xsvpn\" (UID: \"dd5b0f50-4e7f-4b17-ba1b-034deaf7a31b\") " pod="kube-system/kube-proxy-xsvpn" May 10 00:47:17.829677 kubelet[2253]: I0510 00:47:17.829627 2253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3fe866e9-5df7-4c04-9eac-5731ca781012-etc-cni-netd\") pod \"cilium-cnn5n\" (UID: 
\"3fe866e9-5df7-4c04-9eac-5731ca781012\") " pod="kube-system/cilium-cnn5n" May 10 00:47:17.829677 kubelet[2253]: I0510 00:47:17.829655 2253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3fe866e9-5df7-4c04-9eac-5731ca781012-hubble-tls\") pod \"cilium-cnn5n\" (UID: \"3fe866e9-5df7-4c04-9eac-5731ca781012\") " pod="kube-system/cilium-cnn5n" May 10 00:47:17.829965 kubelet[2253]: I0510 00:47:17.829684 2253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3fe866e9-5df7-4c04-9eac-5731ca781012-bpf-maps\") pod \"cilium-cnn5n\" (UID: \"3fe866e9-5df7-4c04-9eac-5731ca781012\") " pod="kube-system/cilium-cnn5n" May 10 00:47:17.829965 kubelet[2253]: I0510 00:47:17.829721 2253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3fe866e9-5df7-4c04-9eac-5731ca781012-cni-path\") pod \"cilium-cnn5n\" (UID: \"3fe866e9-5df7-4c04-9eac-5731ca781012\") " pod="kube-system/cilium-cnn5n" May 10 00:47:17.829965 kubelet[2253]: I0510 00:47:17.829750 2253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dd5b0f50-4e7f-4b17-ba1b-034deaf7a31b-xtables-lock\") pod \"kube-proxy-xsvpn\" (UID: \"dd5b0f50-4e7f-4b17-ba1b-034deaf7a31b\") " pod="kube-system/kube-proxy-xsvpn" May 10 00:47:17.829965 kubelet[2253]: I0510 00:47:17.829776 2253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3fe866e9-5df7-4c04-9eac-5731ca781012-cilium-run\") pod \"cilium-cnn5n\" (UID: \"3fe866e9-5df7-4c04-9eac-5731ca781012\") " pod="kube-system/cilium-cnn5n" May 10 00:47:17.829965 kubelet[2253]: I0510 00:47:17.829807 2253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3fe866e9-5df7-4c04-9eac-5731ca781012-cilium-config-path\") pod \"cilium-cnn5n\" (UID: \"3fe866e9-5df7-4c04-9eac-5731ca781012\") " pod="kube-system/cilium-cnn5n" May 10 00:47:17.829965 kubelet[2253]: I0510 00:47:17.829859 2253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3fe866e9-5df7-4c04-9eac-5731ca781012-lib-modules\") pod \"cilium-cnn5n\" (UID: \"3fe866e9-5df7-4c04-9eac-5731ca781012\") " pod="kube-system/cilium-cnn5n" May 10 00:47:17.830307 kubelet[2253]: I0510 00:47:17.829898 2253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3fe866e9-5df7-4c04-9eac-5731ca781012-xtables-lock\") pod \"cilium-cnn5n\" (UID: \"3fe866e9-5df7-4c04-9eac-5731ca781012\") " pod="kube-system/cilium-cnn5n" May 10 00:47:17.830307 kubelet[2253]: I0510 00:47:17.829930 2253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3fe866e9-5df7-4c04-9eac-5731ca781012-clustermesh-secrets\") pod \"cilium-cnn5n\" (UID: \"3fe866e9-5df7-4c04-9eac-5731ca781012\") " pod="kube-system/cilium-cnn5n" May 10 00:47:17.830307 kubelet[2253]: I0510 00:47:17.829962 2253 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dd5b0f50-4e7f-4b17-ba1b-034deaf7a31b-lib-modules\") pod \"kube-proxy-xsvpn\" (UID: \"dd5b0f50-4e7f-4b17-ba1b-034deaf7a31b\") " pod="kube-system/kube-proxy-xsvpn" May 10 00:47:17.868646 kubelet[2253]: I0510 00:47:17.868578 2253 topology_manager.go:215] "Topology Admit Handler" podUID="eefaac8c-31a6-4209-bcc4-adfef94244d7" podNamespace="kube-system" podName="cilium-operator-599987898-7z9t4" May 10 00:47:17.931088 kubelet[2253]: I0510 00:47:17.931000 2253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/eefaac8c-31a6-4209-bcc4-adfef94244d7-cilium-config-path\") pod \"cilium-operator-599987898-7z9t4\" (UID: \"eefaac8c-31a6-4209-bcc4-adfef94244d7\") " pod="kube-system/cilium-operator-599987898-7z9t4" May 10 00:47:17.931404 kubelet[2253]: I0510 00:47:17.931176 2253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpzbn\" (UniqueName: \"kubernetes.io/projected/eefaac8c-31a6-4209-bcc4-adfef94244d7-kube-api-access-mpzbn\") pod \"cilium-operator-599987898-7z9t4\" (UID: \"eefaac8c-31a6-4209-bcc4-adfef94244d7\") " pod="kube-system/cilium-operator-599987898-7z9t4" May 10 00:47:18.043221 env[1332]: time="2025-05-10T00:47:18.043123417Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cnn5n,Uid:3fe866e9-5df7-4c04-9eac-5731ca781012,Namespace:kube-system,Attempt:0,}" May 10 00:47:18.049453 env[1332]: time="2025-05-10T00:47:18.049362665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xsvpn,Uid:dd5b0f50-4e7f-4b17-ba1b-034deaf7a31b,Namespace:kube-system,Attempt:0,}" May 10 00:47:18.097701 env[1332]: time="2025-05-10T00:47:18.092548714Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:47:18.097701 env[1332]: time="2025-05-10T00:47:18.092596518Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:47:18.097701 env[1332]: time="2025-05-10T00:47:18.092614044Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:47:18.097701 env[1332]: time="2025-05-10T00:47:18.092838085Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e70b2e7f7c47dbbf68d635590cbf0c7f30a076ee3315e46521ee997c2008993d pid=2361 runtime=io.containerd.runc.v2 May 10 00:47:18.100205 env[1332]: time="2025-05-10T00:47:18.089688192Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:47:18.100205 env[1332]: time="2025-05-10T00:47:18.089748607Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:47:18.100205 env[1332]: time="2025-05-10T00:47:18.089767827Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:47:18.100205 env[1332]: time="2025-05-10T00:47:18.089988753Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/221d9ad954f59007b56b557a693d454c6b602af8c4ada4b553c57e8ccd4f97ef pid=2357 runtime=io.containerd.runc.v2 May 10 00:47:18.175422 env[1332]: time="2025-05-10T00:47:18.175356673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-7z9t4,Uid:eefaac8c-31a6-4209-bcc4-adfef94244d7,Namespace:kube-system,Attempt:0,}" May 10 00:47:18.215930 env[1332]: time="2025-05-10T00:47:18.215848438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xsvpn,Uid:dd5b0f50-4e7f-4b17-ba1b-034deaf7a31b,Namespace:kube-system,Attempt:0,} returns sandbox id \"221d9ad954f59007b56b557a693d454c6b602af8c4ada4b553c57e8ccd4f97ef\"" May 10 00:47:18.224374 env[1332]: time="2025-05-10T00:47:18.224311001Z" level=info msg="CreateContainer within sandbox \"221d9ad954f59007b56b557a693d454c6b602af8c4ada4b553c57e8ccd4f97ef\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 10 00:47:18.239119 env[1332]: time="2025-05-10T00:47:18.239031110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cnn5n,Uid:3fe866e9-5df7-4c04-9eac-5731ca781012,Namespace:kube-system,Attempt:0,} returns sandbox id \"e70b2e7f7c47dbbf68d635590cbf0c7f30a076ee3315e46521ee997c2008993d\"" May 10 00:47:18.245190 env[1332]: time="2025-05-10T00:47:18.245029171Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 10 00:47:18.250709 env[1332]: time="2025-05-10T00:47:18.250462623Z" level=info msg="CreateContainer within sandbox \"221d9ad954f59007b56b557a693d454c6b602af8c4ada4b553c57e8ccd4f97ef\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ceecda1cf67dae7163e42a9ad4130fba50743efac6d8793c3db19b3649a99f18\"" May 10 00:47:18.251797 env[1332]: time="2025-05-10T00:47:18.251675122Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:47:18.252101 env[1332]: time="2025-05-10T00:47:18.251761874Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:47:18.252101 env[1332]: time="2025-05-10T00:47:18.251795196Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:47:18.252101 env[1332]: time="2025-05-10T00:47:18.252033213Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1bba67bd82636e2ac7f71003a08af055ad75975479a82cedce3f2485dfdf9c28 pid=2438 runtime=io.containerd.runc.v2 May 10 00:47:18.255255 env[1332]: time="2025-05-10T00:47:18.253088867Z" level=info msg="StartContainer for \"ceecda1cf67dae7163e42a9ad4130fba50743efac6d8793c3db19b3649a99f18\"" May 10 00:47:18.396181 env[1332]: time="2025-05-10T00:47:18.389896322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-7z9t4,Uid:eefaac8c-31a6-4209-bcc4-adfef94244d7,Namespace:kube-system,Attempt:0,} returns sandbox id \"1bba67bd82636e2ac7f71003a08af055ad75975479a82cedce3f2485dfdf9c28\"" May 10 00:47:18.396181 env[1332]: time="2025-05-10T00:47:18.391780998Z" level=info msg="StartContainer for \"ceecda1cf67dae7163e42a9ad4130fba50743efac6d8793c3db19b3649a99f18\" returns successfully" May 10 00:47:19.155151 kubelet[2253]: I0510 00:47:19.152544 2253 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-xsvpn" podStartSLOduration=2.152483968 podStartE2EDuration="2.152483968s" podCreationTimestamp="2025-05-10 00:47:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:47:19.149292962 +0000 UTC m=+17.256611284" watchObservedRunningTime="2025-05-10 00:47:19.152483968 +0000 UTC m=+17.259802291" May 10 00:47:24.177849 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3157932468.mount: Deactivated successfully. May 10 00:47:27.698829 env[1332]: time="2025-05-10T00:47:27.698737518Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:47:27.702089 env[1332]: time="2025-05-10T00:47:27.702019903Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:47:27.708288 env[1332]: time="2025-05-10T00:47:27.708214404Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:47:27.708740 env[1332]: time="2025-05-10T00:47:27.708683288Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" May 10 00:47:27.711964 env[1332]: time="2025-05-10T00:47:27.711911537Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 10 00:47:27.716137 env[1332]: time="2025-05-10T00:47:27.715248029Z" level=info msg="CreateContainer within sandbox \"e70b2e7f7c47dbbf68d635590cbf0c7f30a076ee3315e46521ee997c2008993d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 10 00:47:27.737342 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount647236348.mount: Deactivated successfully. 
May 10 00:47:27.747075 env[1332]: time="2025-05-10T00:47:27.747000965Z" level=info msg="CreateContainer within sandbox \"e70b2e7f7c47dbbf68d635590cbf0c7f30a076ee3315e46521ee997c2008993d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"64fcca34bdb84e37c475c1aee748d822fad4cf8c8196c41546f0628f4ec303d8\"" May 10 00:47:27.748652 env[1332]: time="2025-05-10T00:47:27.748577217Z" level=info msg="StartContainer for \"64fcca34bdb84e37c475c1aee748d822fad4cf8c8196c41546f0628f4ec303d8\"" May 10 00:47:27.864481 env[1332]: time="2025-05-10T00:47:27.864388861Z" level=info msg="StartContainer for \"64fcca34bdb84e37c475c1aee748d822fad4cf8c8196c41546f0628f4ec303d8\" returns successfully" May 10 00:47:28.730234 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-64fcca34bdb84e37c475c1aee748d822fad4cf8c8196c41546f0628f4ec303d8-rootfs.mount: Deactivated successfully. May 10 00:47:30.021012 env[1332]: time="2025-05-10T00:47:30.020927376Z" level=info msg="shim disconnected" id=64fcca34bdb84e37c475c1aee748d822fad4cf8c8196c41546f0628f4ec303d8 May 10 00:47:30.021012 env[1332]: time="2025-05-10T00:47:30.021010689Z" level=warning msg="cleaning up after shim disconnected" id=64fcca34bdb84e37c475c1aee748d822fad4cf8c8196c41546f0628f4ec303d8 namespace=k8s.io May 10 00:47:30.021012 env[1332]: time="2025-05-10T00:47:30.021027125Z" level=info msg="cleaning up dead shim" May 10 00:47:30.036267 env[1332]: time="2025-05-10T00:47:30.036189017Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:47:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2683 runtime=io.containerd.runc.v2\n" May 10 00:47:30.172016 env[1332]: time="2025-05-10T00:47:30.171596712Z" level=info msg="CreateContainer within sandbox \"e70b2e7f7c47dbbf68d635590cbf0c7f30a076ee3315e46521ee997c2008993d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 10 00:47:30.228031 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3630724946.mount: Deactivated successfully. May 10 00:47:30.236599 env[1332]: time="2025-05-10T00:47:30.236517932Z" level=info msg="CreateContainer within sandbox \"e70b2e7f7c47dbbf68d635590cbf0c7f30a076ee3315e46521ee997c2008993d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"4cea9404750c4fb32f52ceccad5a891644268983dd207c431afdc5ca592c0626\"" May 10 00:47:30.239596 env[1332]: time="2025-05-10T00:47:30.238049199Z" level=info msg="StartContainer for \"4cea9404750c4fb32f52ceccad5a891644268983dd207c431afdc5ca592c0626\"" May 10 00:47:30.392189 env[1332]: time="2025-05-10T00:47:30.392118950Z" level=info msg="StartContainer for \"4cea9404750c4fb32f52ceccad5a891644268983dd207c431afdc5ca592c0626\" returns successfully" May 10 00:47:30.400244 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 10 00:47:30.404233 systemd[1]: Stopped systemd-sysctl.service. May 10 00:47:30.404538 systemd[1]: Stopping systemd-sysctl.service... May 10 00:47:30.410229 systemd[1]: Starting systemd-sysctl.service... May 10 00:47:30.439175 systemd[1]: Finished systemd-sysctl.service. 
May 10 00:47:30.523441 env[1332]: time="2025-05-10T00:47:30.523358490Z" level=info msg="shim disconnected" id=4cea9404750c4fb32f52ceccad5a891644268983dd207c431afdc5ca592c0626 May 10 00:47:30.523441 env[1332]: time="2025-05-10T00:47:30.523441949Z" level=warning msg="cleaning up after shim disconnected" id=4cea9404750c4fb32f52ceccad5a891644268983dd207c431afdc5ca592c0626 namespace=k8s.io May 10 00:47:30.523900 env[1332]: time="2025-05-10T00:47:30.523458475Z" level=info msg="cleaning up dead shim" May 10 00:47:30.546675 env[1332]: time="2025-05-10T00:47:30.546593219Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:47:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2750 runtime=io.containerd.runc.v2\n" May 10 00:47:31.184203 env[1332]: time="2025-05-10T00:47:31.184135116Z" level=info msg="CreateContainer within sandbox \"e70b2e7f7c47dbbf68d635590cbf0c7f30a076ee3315e46521ee997c2008993d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 10 00:47:31.215340 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4cea9404750c4fb32f52ceccad5a891644268983dd207c431afdc5ca592c0626-rootfs.mount: Deactivated successfully. May 10 00:47:31.227493 env[1332]: time="2025-05-10T00:47:31.227405948Z" level=info msg="CreateContainer within sandbox \"e70b2e7f7c47dbbf68d635590cbf0c7f30a076ee3315e46521ee997c2008993d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ae9e2cf35371275b1d87286810f6d876870fc9a91445f30c363a0b2116c7e1fa\"" May 10 00:47:31.230582 env[1332]: time="2025-05-10T00:47:31.229869244Z" level=info msg="StartContainer for \"ae9e2cf35371275b1d87286810f6d876870fc9a91445f30c363a0b2116c7e1fa\"" May 10 00:47:31.382546 env[1332]: time="2025-05-10T00:47:31.382472476Z" level=info msg="StartContainer for \"ae9e2cf35371275b1d87286810f6d876870fc9a91445f30c363a0b2116c7e1fa\" returns successfully" May 10 00:47:31.432580 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ae9e2cf35371275b1d87286810f6d876870fc9a91445f30c363a0b2116c7e1fa-rootfs.mount: Deactivated successfully. 
May 10 00:47:31.588338 env[1332]: time="2025-05-10T00:47:31.588259693Z" level=info msg="shim disconnected" id=ae9e2cf35371275b1d87286810f6d876870fc9a91445f30c363a0b2116c7e1fa May 10 00:47:31.588729 env[1332]: time="2025-05-10T00:47:31.588676221Z" level=warning msg="cleaning up after shim disconnected" id=ae9e2cf35371275b1d87286810f6d876870fc9a91445f30c363a0b2116c7e1fa namespace=k8s.io May 10 00:47:31.588729 env[1332]: time="2025-05-10T00:47:31.588704932Z" level=info msg="cleaning up dead shim" May 10 00:47:31.612789 env[1332]: time="2025-05-10T00:47:31.612711671Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:47:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2807 runtime=io.containerd.runc.v2\n" May 10 00:47:31.616043 env[1332]: time="2025-05-10T00:47:31.615978944Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:47:31.618281 env[1332]: time="2025-05-10T00:47:31.618233883Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:47:31.620568 env[1332]: time="2025-05-10T00:47:31.620525322Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:47:31.621308 env[1332]: time="2025-05-10T00:47:31.621255653Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" May 10 00:47:31.627189 env[1332]: time="2025-05-10T00:47:31.627145742Z" level=info msg="CreateContainer within sandbox \"1bba67bd82636e2ac7f71003a08af055ad75975479a82cedce3f2485dfdf9c28\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 10 00:47:31.645374 env[1332]: time="2025-05-10T00:47:31.645283271Z" level=info msg="CreateContainer within sandbox \"1bba67bd82636e2ac7f71003a08af055ad75975479a82cedce3f2485dfdf9c28\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"fc9573dbbd1bd515ee68c4cf1b008e0cef0dffd0d746f35d3586418d4bc25cff\"" May 10 00:47:31.648553 env[1332]: time="2025-05-10T00:47:31.646270272Z" level=info msg="StartContainer for \"fc9573dbbd1bd515ee68c4cf1b008e0cef0dffd0d746f35d3586418d4bc25cff\"" May 10 00:47:31.729864 env[1332]: time="2025-05-10T00:47:31.729794663Z" level=info msg="StartContainer for \"fc9573dbbd1bd515ee68c4cf1b008e0cef0dffd0d746f35d3586418d4bc25cff\" returns successfully" May 10 00:47:32.185519 env[1332]: time="2025-05-10T00:47:32.185002715Z" level=info msg="CreateContainer within sandbox \"e70b2e7f7c47dbbf68d635590cbf0c7f30a076ee3315e46521ee997c2008993d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 10 00:47:32.208732 env[1332]: time="2025-05-10T00:47:32.208639239Z" level=info msg="CreateContainer within sandbox \"e70b2e7f7c47dbbf68d635590cbf0c7f30a076ee3315e46521ee997c2008993d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"fba315dfe8422cced95d65e8d5ce316704686193c2b16fe65aeccfd0736acc70\"" May 10 00:47:32.210301 env[1332]: 
time="2025-05-10T00:47:32.210242912Z" level=info msg="StartContainer for \"fba315dfe8422cced95d65e8d5ce316704686193c2b16fe65aeccfd0736acc70\"" May 10 00:47:32.324261 kubelet[2253]: I0510 00:47:32.324155 2253 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-7z9t4" podStartSLOduration=2.093325853 podStartE2EDuration="15.324124863s" podCreationTimestamp="2025-05-10 00:47:17 +0000 UTC" firstStartedPulling="2025-05-10 00:47:18.391988098 +0000 UTC m=+16.499306398" lastFinishedPulling="2025-05-10 00:47:31.622787095 +0000 UTC m=+29.730105408" observedRunningTime="2025-05-10 00:47:32.323714218 +0000 UTC m=+30.431032539" watchObservedRunningTime="2025-05-10 00:47:32.324124863 +0000 UTC m=+30.431443192" May 10 00:47:32.375316 systemd[1]: run-containerd-runc-k8s.io-fba315dfe8422cced95d65e8d5ce316704686193c2b16fe65aeccfd0736acc70-runc.csXxbb.mount: Deactivated successfully. May 10 00:47:32.555162 env[1332]: time="2025-05-10T00:47:32.555074861Z" level=info msg="StartContainer for \"fba315dfe8422cced95d65e8d5ce316704686193c2b16fe65aeccfd0736acc70\" returns successfully" May 10 00:47:32.631721 env[1332]: time="2025-05-10T00:47:32.631636466Z" level=info msg="shim disconnected" id=fba315dfe8422cced95d65e8d5ce316704686193c2b16fe65aeccfd0736acc70 May 10 00:47:32.631721 env[1332]: time="2025-05-10T00:47:32.631723515Z" level=warning msg="cleaning up after shim disconnected" id=fba315dfe8422cced95d65e8d5ce316704686193c2b16fe65aeccfd0736acc70 namespace=k8s.io May 10 00:47:32.632193 env[1332]: time="2025-05-10T00:47:32.631741226Z" level=info msg="cleaning up dead shim" May 10 00:47:32.660679 env[1332]: time="2025-05-10T00:47:32.660600238Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:47:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2896 runtime=io.containerd.runc.v2\n" May 10 00:47:33.194759 env[1332]: time="2025-05-10T00:47:33.194687870Z" level=info msg="CreateContainer within sandbox \"e70b2e7f7c47dbbf68d635590cbf0c7f30a076ee3315e46521ee997c2008993d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 10 00:47:33.215263 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fba315dfe8422cced95d65e8d5ce316704686193c2b16fe65aeccfd0736acc70-rootfs.mount: Deactivated successfully. May 10 00:47:33.234223 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3668541691.mount: Deactivated successfully. 
May 10 00:47:33.244947 env[1332]: time="2025-05-10T00:47:33.244858929Z" level=info msg="CreateContainer within sandbox \"e70b2e7f7c47dbbf68d635590cbf0c7f30a076ee3315e46521ee997c2008993d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"0606ad09df4f8342e073c6499dddd6da1662b4b7b86b2172db6d1a3ad21ddff4\"" May 10 00:47:33.246116 env[1332]: time="2025-05-10T00:47:33.246028357Z" level=info msg="StartContainer for \"0606ad09df4f8342e073c6499dddd6da1662b4b7b86b2172db6d1a3ad21ddff4\"" May 10 00:47:33.371602 env[1332]: time="2025-05-10T00:47:33.371525661Z" level=info msg="StartContainer for \"0606ad09df4f8342e073c6499dddd6da1662b4b7b86b2172db6d1a3ad21ddff4\" returns successfully" May 10 00:47:33.539087 kubelet[2253]: I0510 00:47:33.538915 2253 kubelet_node_status.go:497] "Fast updating node status as it just became ready" May 10 00:47:33.586015 kubelet[2253]: I0510 00:47:33.585935 2253 topology_manager.go:215] "Topology Admit Handler" podUID="2c4eb160-117f-4216-927c-f99a696dbe03" podNamespace="kube-system" podName="coredns-7db6d8ff4d-ksrqv" May 10 00:47:33.662778 kubelet[2253]: I0510 00:47:33.662704 2253 topology_manager.go:215] "Topology Admit Handler" podUID="616c645f-2c79-4282-9f24-23fca94555a4" podNamespace="kube-system" podName="coredns-7db6d8ff4d-kqjd4" May 10 00:47:33.763783 kubelet[2253]: I0510 00:47:33.763704 2253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2c4eb160-117f-4216-927c-f99a696dbe03-config-volume\") pod \"coredns-7db6d8ff4d-ksrqv\" (UID: \"2c4eb160-117f-4216-927c-f99a696dbe03\") " pod="kube-system/coredns-7db6d8ff4d-ksrqv" May 10 00:47:33.763783 kubelet[2253]: I0510 00:47:33.763782 2253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwz5b\" (UniqueName: \"kubernetes.io/projected/2c4eb160-117f-4216-927c-f99a696dbe03-kube-api-access-qwz5b\") pod \"coredns-7db6d8ff4d-ksrqv\" (UID: \"2c4eb160-117f-4216-927c-f99a696dbe03\") " pod="kube-system/coredns-7db6d8ff4d-ksrqv" May 10 00:47:33.764188 kubelet[2253]: I0510 00:47:33.763821 2253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96vbt\" (UniqueName: \"kubernetes.io/projected/616c645f-2c79-4282-9f24-23fca94555a4-kube-api-access-96vbt\") pod \"coredns-7db6d8ff4d-kqjd4\" (UID: \"616c645f-2c79-4282-9f24-23fca94555a4\") " pod="kube-system/coredns-7db6d8ff4d-kqjd4" May 10 00:47:33.764188 kubelet[2253]: I0510 00:47:33.763860 2253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/616c645f-2c79-4282-9f24-23fca94555a4-config-volume\") pod \"coredns-7db6d8ff4d-kqjd4\" (UID: \"616c645f-2c79-4282-9f24-23fca94555a4\") " pod="kube-system/coredns-7db6d8ff4d-kqjd4" May 10 00:47:33.937922 env[1332]: time="2025-05-10T00:47:33.937842150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-ksrqv,Uid:2c4eb160-117f-4216-927c-f99a696dbe03,Namespace:kube-system,Attempt:0,}" May 10 00:47:33.981940 env[1332]: time="2025-05-10T00:47:33.981017998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-kqjd4,Uid:616c645f-2c79-4282-9f24-23fca94555a4,Namespace:kube-system,Attempt:0,}" May 10 00:47:34.228470 kubelet[2253]: I0510 00:47:34.228273 2253 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-cnn5n" 
podStartSLOduration=7.7596692560000005 podStartE2EDuration="17.228239728s" podCreationTimestamp="2025-05-10 00:47:17 +0000 UTC" firstStartedPulling="2025-05-10 00:47:18.241907887 +0000 UTC m=+16.349226182" lastFinishedPulling="2025-05-10 00:47:27.710478361 +0000 UTC m=+25.817796654" observedRunningTime="2025-05-10 00:47:34.226042492 +0000 UTC m=+32.333360814" watchObservedRunningTime="2025-05-10 00:47:34.228239728 +0000 UTC m=+32.335558091" May 10 00:47:36.009627 systemd-networkd[1075]: cilium_host: Link UP May 10 00:47:36.019087 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready May 10 00:47:36.021156 systemd-networkd[1075]: cilium_net: Link UP May 10 00:47:36.028346 systemd-networkd[1075]: cilium_net: Gained carrier May 10 00:47:36.031761 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready May 10 00:47:36.029701 systemd-networkd[1075]: cilium_host: Gained carrier May 10 00:47:36.032395 systemd-networkd[1075]: cilium_net: Gained IPv6LL May 10 00:47:36.197644 systemd-networkd[1075]: cilium_vxlan: Link UP May 10 00:47:36.197657 systemd-networkd[1075]: cilium_vxlan: Gained carrier May 10 00:47:36.497131 kernel: NET: Registered PF_ALG protocol family May 10 00:47:36.844326 systemd-networkd[1075]: cilium_host: Gained IPv6LL May 10 00:47:37.425897 systemd-networkd[1075]: lxc_health: Link UP May 10 00:47:37.480122 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready May 10 00:47:37.481506 systemd-networkd[1075]: lxc_health: Gained carrier May 10 00:47:37.868818 systemd-networkd[1075]: cilium_vxlan: Gained IPv6LL May 10 00:47:38.013897 systemd-networkd[1075]: lxc12a421415d54: Link UP May 10 00:47:38.024105 kernel: eth0: renamed from tmp166f6 May 10 00:47:38.039100 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc12a421415d54: link becomes ready May 10 00:47:38.049344 systemd-networkd[1075]: lxc12a421415d54: Gained carrier May 10 00:47:38.088300 systemd-networkd[1075]: lxc4461b83dd4c4: Link UP May 10 00:47:38.109578 kernel: eth0: renamed from tmpd4a1f May 10 00:47:38.144124 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc4461b83dd4c4: link becomes ready May 10 00:47:38.147415 systemd-networkd[1075]: lxc4461b83dd4c4: Gained carrier May 10 00:47:38.636288 systemd-networkd[1075]: lxc_health: Gained IPv6LL May 10 00:47:39.212290 systemd-networkd[1075]: lxc12a421415d54: Gained IPv6LL May 10 00:47:39.532645 systemd-networkd[1075]: lxc4461b83dd4c4: Gained IPv6LL May 10 00:47:43.232910 env[1332]: time="2025-05-10T00:47:43.232812956Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:47:43.233781 env[1332]: time="2025-05-10T00:47:43.233715524Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:47:43.234012 env[1332]: time="2025-05-10T00:47:43.233961270Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:47:43.237553 env[1332]: time="2025-05-10T00:47:43.237473779Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:47:43.237789 env[1332]: time="2025-05-10T00:47:43.237736201Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:47:43.237980 env[1332]: time="2025-05-10T00:47:43.237929163Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:47:43.238464 env[1332]: time="2025-05-10T00:47:43.238409839Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/166f662f2b9ee1c666868fceafaea83c86724dc7f4661c20114b7364c9f39b27 pid=3436 runtime=io.containerd.runc.v2 May 10 00:47:43.238801 env[1332]: time="2025-05-10T00:47:43.238749188Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d4a1f6a7dc53312e02187e2c28a0af04c32d2e57dff0e33efbab78cb39a4fbae pid=3441 runtime=io.containerd.runc.v2 May 10 00:47:43.340116 systemd[1]: run-containerd-runc-k8s.io-d4a1f6a7dc53312e02187e2c28a0af04c32d2e57dff0e33efbab78cb39a4fbae-runc.O39lsE.mount: Deactivated successfully. May 10 00:47:43.467225 env[1332]: time="2025-05-10T00:47:43.467131786Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-kqjd4,Uid:616c645f-2c79-4282-9f24-23fca94555a4,Namespace:kube-system,Attempt:0,} returns sandbox id \"d4a1f6a7dc53312e02187e2c28a0af04c32d2e57dff0e33efbab78cb39a4fbae\"" May 10 00:47:43.474607 env[1332]: time="2025-05-10T00:47:43.474541541Z" level=info msg="CreateContainer within sandbox \"d4a1f6a7dc53312e02187e2c28a0af04c32d2e57dff0e33efbab78cb39a4fbae\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 10 00:47:43.495963 env[1332]: time="2025-05-10T00:47:43.495795707Z" level=info msg="CreateContainer within sandbox \"d4a1f6a7dc53312e02187e2c28a0af04c32d2e57dff0e33efbab78cb39a4fbae\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ab93b2cafe58220a4205bf043a6e3ab437974f43cb66f91d0a8e1cc729766bc0\"" May 10 00:47:43.498019 env[1332]: time="2025-05-10T00:47:43.497969253Z" level=info msg="StartContainer for \"ab93b2cafe58220a4205bf043a6e3ab437974f43cb66f91d0a8e1cc729766bc0\"" May 10 00:47:43.550383 env[1332]: time="2025-05-10T00:47:43.550293204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-ksrqv,Uid:2c4eb160-117f-4216-927c-f99a696dbe03,Namespace:kube-system,Attempt:0,} returns sandbox id \"166f662f2b9ee1c666868fceafaea83c86724dc7f4661c20114b7364c9f39b27\"" May 10 00:47:43.555946 env[1332]: time="2025-05-10T00:47:43.555859145Z" level=info msg="CreateContainer within sandbox \"166f662f2b9ee1c666868fceafaea83c86724dc7f4661c20114b7364c9f39b27\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 10 00:47:43.584378 env[1332]: time="2025-05-10T00:47:43.584304306Z" level=info msg="CreateContainer within sandbox \"166f662f2b9ee1c666868fceafaea83c86724dc7f4661c20114b7364c9f39b27\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"82565a3a7316c2f31690a949e3e0abd593c02b3960dfa7e6bbaccb31a282f33e\"" May 10 00:47:43.585753 env[1332]: time="2025-05-10T00:47:43.585705331Z" level=info msg="StartContainer for \"82565a3a7316c2f31690a949e3e0abd593c02b3960dfa7e6bbaccb31a282f33e\"" May 10 00:47:43.646159 env[1332]: time="2025-05-10T00:47:43.646076287Z" level=info msg="StartContainer for \"ab93b2cafe58220a4205bf043a6e3ab437974f43cb66f91d0a8e1cc729766bc0\" returns successfully" May 10 00:47:43.713205 env[1332]: time="2025-05-10T00:47:43.713126205Z" level=info msg="StartContainer for \"82565a3a7316c2f31690a949e3e0abd593c02b3960dfa7e6bbaccb31a282f33e\" returns successfully" May 10 
00:47:44.251006 systemd[1]: run-containerd-runc-k8s.io-166f662f2b9ee1c666868fceafaea83c86724dc7f4661c20114b7364c9f39b27-runc.aEfsHY.mount: Deactivated successfully. May 10 00:47:44.280432 kubelet[2253]: I0510 00:47:44.280294 2253 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-kqjd4" podStartSLOduration=27.280256066 podStartE2EDuration="27.280256066s" podCreationTimestamp="2025-05-10 00:47:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:47:44.254265304 +0000 UTC m=+42.361583655" watchObservedRunningTime="2025-05-10 00:47:44.280256066 +0000 UTC m=+42.387574388" May 10 00:47:44.306363 kubelet[2253]: I0510 00:47:44.306268 2253 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-ksrqv" podStartSLOduration=27.306228492 podStartE2EDuration="27.306228492s" podCreationTimestamp="2025-05-10 00:47:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:47:44.282715473 +0000 UTC m=+42.390033794" watchObservedRunningTime="2025-05-10 00:47:44.306228492 +0000 UTC m=+42.413546817" May 10 00:47:57.922206 systemd[1]: Started sshd@5-10.128.0.77:22-147.75.109.163:39052.service. May 10 00:47:58.208577 sshd[3595]: Accepted publickey for core from 147.75.109.163 port 39052 ssh2: RSA SHA256:euqJNJC5P+Wq5j+dl78lhvyXKvYvUXQDTjxbGHC2Bdk May 10 00:47:58.211536 sshd[3595]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:47:58.220240 systemd-logind[1318]: New session 6 of user core. May 10 00:47:58.220557 systemd[1]: Started session-6.scope. May 10 00:47:58.539819 sshd[3595]: pam_unix(sshd:session): session closed for user core May 10 00:47:58.548593 systemd[1]: sshd@5-10.128.0.77:22-147.75.109.163:39052.service: Deactivated successfully. May 10 00:47:58.550340 systemd-logind[1318]: Session 6 logged out. Waiting for processes to exit. May 10 00:47:58.550980 systemd[1]: session-6.scope: Deactivated successfully. May 10 00:47:58.553023 systemd-logind[1318]: Removed session 6. May 10 00:48:03.585964 systemd[1]: Started sshd@6-10.128.0.77:22-147.75.109.163:39068.service. May 10 00:48:03.876730 sshd[3611]: Accepted publickey for core from 147.75.109.163 port 39068 ssh2: RSA SHA256:euqJNJC5P+Wq5j+dl78lhvyXKvYvUXQDTjxbGHC2Bdk May 10 00:48:03.879087 sshd[3611]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:48:03.886964 systemd[1]: Started session-7.scope. May 10 00:48:03.888379 systemd-logind[1318]: New session 7 of user core. May 10 00:48:04.175546 sshd[3611]: pam_unix(sshd:session): session closed for user core May 10 00:48:04.181250 systemd[1]: sshd@6-10.128.0.77:22-147.75.109.163:39068.service: Deactivated successfully. May 10 00:48:04.182887 systemd[1]: session-7.scope: Deactivated successfully. May 10 00:48:04.186323 systemd-logind[1318]: Session 7 logged out. Waiting for processes to exit. May 10 00:48:04.188275 systemd-logind[1318]: Removed session 7. May 10 00:48:09.222831 systemd[1]: Started sshd@7-10.128.0.77:22-147.75.109.163:39678.service. 
May 10 00:48:09.518210 sshd[3625]: Accepted publickey for core from 147.75.109.163 port 39678 ssh2: RSA SHA256:euqJNJC5P+Wq5j+dl78lhvyXKvYvUXQDTjxbGHC2Bdk May 10 00:48:09.520632 sshd[3625]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:48:09.528859 systemd[1]: Started session-8.scope. May 10 00:48:09.530535 systemd-logind[1318]: New session 8 of user core. May 10 00:48:09.815993 sshd[3625]: pam_unix(sshd:session): session closed for user core May 10 00:48:09.822181 systemd[1]: sshd@7-10.128.0.77:22-147.75.109.163:39678.service: Deactivated successfully. May 10 00:48:09.825003 systemd-logind[1318]: Session 8 logged out. Waiting for processes to exit. May 10 00:48:09.825120 systemd[1]: session-8.scope: Deactivated successfully. May 10 00:48:09.826971 systemd-logind[1318]: Removed session 8. May 10 00:48:14.861212 systemd[1]: Started sshd@8-10.128.0.77:22-147.75.109.163:39680.service. May 10 00:48:15.149032 sshd[3638]: Accepted publickey for core from 147.75.109.163 port 39680 ssh2: RSA SHA256:euqJNJC5P+Wq5j+dl78lhvyXKvYvUXQDTjxbGHC2Bdk May 10 00:48:15.151184 sshd[3638]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:48:15.159302 systemd[1]: Started session-9.scope. May 10 00:48:15.160596 systemd-logind[1318]: New session 9 of user core. May 10 00:48:15.445194 sshd[3638]: pam_unix(sshd:session): session closed for user core May 10 00:48:15.450731 systemd[1]: sshd@8-10.128.0.77:22-147.75.109.163:39680.service: Deactivated successfully. May 10 00:48:15.453626 systemd-logind[1318]: Session 9 logged out. Waiting for processes to exit. May 10 00:48:15.454826 systemd[1]: session-9.scope: Deactivated successfully. May 10 00:48:15.457200 systemd-logind[1318]: Removed session 9. May 10 00:48:20.491688 systemd[1]: Started sshd@9-10.128.0.77:22-147.75.109.163:36568.service. May 10 00:48:20.781201 sshd[3654]: Accepted publickey for core from 147.75.109.163 port 36568 ssh2: RSA SHA256:euqJNJC5P+Wq5j+dl78lhvyXKvYvUXQDTjxbGHC2Bdk May 10 00:48:20.783434 sshd[3654]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:48:20.791991 systemd[1]: Started session-10.scope. May 10 00:48:20.793411 systemd-logind[1318]: New session 10 of user core. May 10 00:48:21.074471 sshd[3654]: pam_unix(sshd:session): session closed for user core May 10 00:48:21.080743 systemd-logind[1318]: Session 10 logged out. Waiting for processes to exit. May 10 00:48:21.082207 systemd[1]: sshd@9-10.128.0.77:22-147.75.109.163:36568.service: Deactivated successfully. May 10 00:48:21.083645 systemd[1]: session-10.scope: Deactivated successfully. May 10 00:48:21.085631 systemd-logind[1318]: Removed session 10. May 10 00:48:21.124776 systemd[1]: Started sshd@10-10.128.0.77:22-147.75.109.163:36580.service. May 10 00:48:21.424081 sshd[3668]: Accepted publickey for core from 147.75.109.163 port 36580 ssh2: RSA SHA256:euqJNJC5P+Wq5j+dl78lhvyXKvYvUXQDTjxbGHC2Bdk May 10 00:48:21.425948 sshd[3668]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:48:21.433829 systemd[1]: Started session-11.scope. May 10 00:48:21.434838 systemd-logind[1318]: New session 11 of user core. May 10 00:48:21.771400 sshd[3668]: pam_unix(sshd:session): session closed for user core May 10 00:48:21.777692 systemd-logind[1318]: Session 11 logged out. Waiting for processes to exit. May 10 00:48:21.778214 systemd[1]: sshd@10-10.128.0.77:22-147.75.109.163:36580.service: Deactivated successfully. 
May 10 00:48:21.779722 systemd[1]: session-11.scope: Deactivated successfully. May 10 00:48:21.782402 systemd-logind[1318]: Removed session 11. May 10 00:48:21.814854 systemd[1]: Started sshd@11-10.128.0.77:22-147.75.109.163:36584.service. May 10 00:48:22.102879 sshd[3678]: Accepted publickey for core from 147.75.109.163 port 36584 ssh2: RSA SHA256:euqJNJC5P+Wq5j+dl78lhvyXKvYvUXQDTjxbGHC2Bdk May 10 00:48:22.105662 sshd[3678]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:48:22.113162 systemd-logind[1318]: New session 12 of user core. May 10 00:48:22.113622 systemd[1]: Started session-12.scope. May 10 00:48:22.398377 sshd[3678]: pam_unix(sshd:session): session closed for user core May 10 00:48:22.404836 systemd[1]: sshd@11-10.128.0.77:22-147.75.109.163:36584.service: Deactivated successfully. May 10 00:48:22.406138 systemd-logind[1318]: Session 12 logged out. Waiting for processes to exit. May 10 00:48:22.407369 systemd[1]: session-12.scope: Deactivated successfully. May 10 00:48:22.408611 systemd-logind[1318]: Removed session 12. May 10 00:48:27.445455 systemd[1]: Started sshd@12-10.128.0.77:22-147.75.109.163:59882.service. May 10 00:48:27.739373 sshd[3691]: Accepted publickey for core from 147.75.109.163 port 59882 ssh2: RSA SHA256:euqJNJC5P+Wq5j+dl78lhvyXKvYvUXQDTjxbGHC2Bdk May 10 00:48:27.741814 sshd[3691]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:48:27.750222 systemd[1]: Started session-13.scope. May 10 00:48:27.751371 systemd-logind[1318]: New session 13 of user core. May 10 00:48:28.043779 sshd[3691]: pam_unix(sshd:session): session closed for user core May 10 00:48:28.049134 systemd[1]: sshd@12-10.128.0.77:22-147.75.109.163:59882.service: Deactivated successfully. May 10 00:48:28.051147 systemd[1]: session-13.scope: Deactivated successfully. May 10 00:48:28.051284 systemd-logind[1318]: Session 13 logged out. Waiting for processes to exit. May 10 00:48:28.054384 systemd-logind[1318]: Removed session 13. May 10 00:48:33.088533 systemd[1]: Started sshd@13-10.128.0.77:22-147.75.109.163:59892.service. May 10 00:48:33.376300 sshd[3704]: Accepted publickey for core from 147.75.109.163 port 59892 ssh2: RSA SHA256:euqJNJC5P+Wq5j+dl78lhvyXKvYvUXQDTjxbGHC2Bdk May 10 00:48:33.378426 sshd[3704]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:48:33.387316 systemd[1]: Started session-14.scope. May 10 00:48:33.388707 systemd-logind[1318]: New session 14 of user core. May 10 00:48:33.669982 sshd[3704]: pam_unix(sshd:session): session closed for user core May 10 00:48:33.675733 systemd[1]: sshd@13-10.128.0.77:22-147.75.109.163:59892.service: Deactivated successfully. May 10 00:48:33.678343 systemd-logind[1318]: Session 14 logged out. Waiting for processes to exit. May 10 00:48:33.679401 systemd[1]: session-14.scope: Deactivated successfully. May 10 00:48:33.681713 systemd-logind[1318]: Removed session 14. May 10 00:48:33.716901 systemd[1]: Started sshd@14-10.128.0.77:22-147.75.109.163:59902.service. May 10 00:48:34.014342 sshd[3717]: Accepted publickey for core from 147.75.109.163 port 59902 ssh2: RSA SHA256:euqJNJC5P+Wq5j+dl78lhvyXKvYvUXQDTjxbGHC2Bdk May 10 00:48:34.017051 sshd[3717]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:48:34.024885 systemd-logind[1318]: New session 15 of user core. May 10 00:48:34.026006 systemd[1]: Started session-15.scope. 
May 10 00:48:34.393708 sshd[3717]: pam_unix(sshd:session): session closed for user core May 10 00:48:34.400017 systemd[1]: sshd@14-10.128.0.77:22-147.75.109.163:59902.service: Deactivated successfully. May 10 00:48:34.401186 systemd-logind[1318]: Session 15 logged out. Waiting for processes to exit. May 10 00:48:34.402453 systemd[1]: session-15.scope: Deactivated successfully. May 10 00:48:34.403554 systemd-logind[1318]: Removed session 15. May 10 00:48:34.438708 systemd[1]: Started sshd@15-10.128.0.77:22-147.75.109.163:59908.service. May 10 00:48:34.727950 sshd[3728]: Accepted publickey for core from 147.75.109.163 port 59908 ssh2: RSA SHA256:euqJNJC5P+Wq5j+dl78lhvyXKvYvUXQDTjxbGHC2Bdk May 10 00:48:34.730852 sshd[3728]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:48:34.739079 systemd[1]: Started session-16.scope. May 10 00:48:34.740665 systemd-logind[1318]: New session 16 of user core. May 10 00:48:36.761461 sshd[3728]: pam_unix(sshd:session): session closed for user core May 10 00:48:36.768318 systemd-logind[1318]: Session 16 logged out. Waiting for processes to exit. May 10 00:48:36.768680 systemd[1]: sshd@15-10.128.0.77:22-147.75.109.163:59908.service: Deactivated successfully. May 10 00:48:36.770863 systemd[1]: session-16.scope: Deactivated successfully. May 10 00:48:36.771779 systemd-logind[1318]: Removed session 16. May 10 00:48:36.805556 systemd[1]: Started sshd@16-10.128.0.77:22-147.75.109.163:51204.service. May 10 00:48:37.094819 sshd[3746]: Accepted publickey for core from 147.75.109.163 port 51204 ssh2: RSA SHA256:euqJNJC5P+Wq5j+dl78lhvyXKvYvUXQDTjxbGHC2Bdk May 10 00:48:37.097272 sshd[3746]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:48:37.105802 systemd[1]: Started session-17.scope. May 10 00:48:37.106261 systemd-logind[1318]: New session 17 of user core. May 10 00:48:37.561924 sshd[3746]: pam_unix(sshd:session): session closed for user core May 10 00:48:37.568034 systemd[1]: sshd@16-10.128.0.77:22-147.75.109.163:51204.service: Deactivated successfully. May 10 00:48:37.569893 systemd-logind[1318]: Session 17 logged out. Waiting for processes to exit. May 10 00:48:37.570026 systemd[1]: session-17.scope: Deactivated successfully. May 10 00:48:37.573570 systemd-logind[1318]: Removed session 17. May 10 00:48:37.605890 systemd[1]: Started sshd@17-10.128.0.77:22-147.75.109.163:51214.service. May 10 00:48:37.892352 sshd[3757]: Accepted publickey for core from 147.75.109.163 port 51214 ssh2: RSA SHA256:euqJNJC5P+Wq5j+dl78lhvyXKvYvUXQDTjxbGHC2Bdk May 10 00:48:37.895120 sshd[3757]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:48:37.903240 systemd[1]: Started session-18.scope. May 10 00:48:37.905128 systemd-logind[1318]: New session 18 of user core. May 10 00:48:38.191744 sshd[3757]: pam_unix(sshd:session): session closed for user core May 10 00:48:38.197324 systemd[1]: sshd@17-10.128.0.77:22-147.75.109.163:51214.service: Deactivated successfully. May 10 00:48:38.199351 systemd[1]: session-18.scope: Deactivated successfully. May 10 00:48:38.199976 systemd-logind[1318]: Session 18 logged out. Waiting for processes to exit. May 10 00:48:38.202100 systemd-logind[1318]: Removed session 18. May 10 00:48:43.239104 systemd[1]: Started sshd@18-10.128.0.77:22-147.75.109.163:51220.service. 
May 10 00:48:43.530435 sshd[3772]: Accepted publickey for core from 147.75.109.163 port 51220 ssh2: RSA SHA256:euqJNJC5P+Wq5j+dl78lhvyXKvYvUXQDTjxbGHC2Bdk May 10 00:48:43.532897 sshd[3772]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:48:43.541217 systemd-logind[1318]: New session 19 of user core. May 10 00:48:43.541918 systemd[1]: Started session-19.scope. May 10 00:48:43.827319 sshd[3772]: pam_unix(sshd:session): session closed for user core May 10 00:48:43.833106 systemd[1]: sshd@18-10.128.0.77:22-147.75.109.163:51220.service: Deactivated successfully. May 10 00:48:43.836231 systemd-logind[1318]: Session 19 logged out. Waiting for processes to exit. May 10 00:48:43.836698 systemd[1]: session-19.scope: Deactivated successfully. May 10 00:48:43.839221 systemd-logind[1318]: Removed session 19. May 10 00:48:48.873870 systemd[1]: Started sshd@19-10.128.0.77:22-147.75.109.163:36634.service. May 10 00:48:49.162695 sshd[3787]: Accepted publickey for core from 147.75.109.163 port 36634 ssh2: RSA SHA256:euqJNJC5P+Wq5j+dl78lhvyXKvYvUXQDTjxbGHC2Bdk May 10 00:48:49.164813 sshd[3787]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:48:49.172817 systemd[1]: Started session-20.scope. May 10 00:48:49.174443 systemd-logind[1318]: New session 20 of user core. May 10 00:48:49.463330 sshd[3787]: pam_unix(sshd:session): session closed for user core May 10 00:48:49.470380 systemd[1]: sshd@19-10.128.0.77:22-147.75.109.163:36634.service: Deactivated successfully. May 10 00:48:49.472362 systemd-logind[1318]: Session 20 logged out. Waiting for processes to exit. May 10 00:48:49.472474 systemd[1]: session-20.scope: Deactivated successfully. May 10 00:48:49.477405 systemd-logind[1318]: Removed session 20. May 10 00:48:54.509051 systemd[1]: Started sshd@20-10.128.0.77:22-147.75.109.163:36640.service. May 10 00:48:54.796573 sshd[3800]: Accepted publickey for core from 147.75.109.163 port 36640 ssh2: RSA SHA256:euqJNJC5P+Wq5j+dl78lhvyXKvYvUXQDTjxbGHC2Bdk May 10 00:48:54.799576 sshd[3800]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:48:54.808175 systemd[1]: Started session-21.scope. May 10 00:48:54.809136 systemd-logind[1318]: New session 21 of user core. May 10 00:48:55.085424 sshd[3800]: pam_unix(sshd:session): session closed for user core May 10 00:48:55.091027 systemd[1]: sshd@20-10.128.0.77:22-147.75.109.163:36640.service: Deactivated successfully. May 10 00:48:55.095115 systemd-logind[1318]: Session 21 logged out. Waiting for processes to exit. May 10 00:48:55.096384 systemd[1]: session-21.scope: Deactivated successfully. May 10 00:48:55.098388 systemd-logind[1318]: Removed session 21. May 10 00:48:55.131753 systemd[1]: Started sshd@21-10.128.0.77:22-147.75.109.163:36650.service. May 10 00:48:55.421131 sshd[3813]: Accepted publickey for core from 147.75.109.163 port 36650 ssh2: RSA SHA256:euqJNJC5P+Wq5j+dl78lhvyXKvYvUXQDTjxbGHC2Bdk May 10 00:48:55.423924 sshd[3813]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:48:55.433029 systemd[1]: Started session-22.scope. May 10 00:48:55.433436 systemd-logind[1318]: New session 22 of user core. 
May 10 00:48:57.379282 env[1332]: time="2025-05-10T00:48:57.379196256Z" level=info msg="StopContainer for \"fc9573dbbd1bd515ee68c4cf1b008e0cef0dffd0d746f35d3586418d4bc25cff\" with timeout 30 (s)" May 10 00:48:57.380905 env[1332]: time="2025-05-10T00:48:57.380859935Z" level=info msg="Stop container \"fc9573dbbd1bd515ee68c4cf1b008e0cef0dffd0d746f35d3586418d4bc25cff\" with signal terminated" May 10 00:48:57.443658 systemd[1]: run-containerd-runc-k8s.io-0606ad09df4f8342e073c6499dddd6da1662b4b7b86b2172db6d1a3ad21ddff4-runc.n53P82.mount: Deactivated successfully. May 10 00:48:57.484948 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fc9573dbbd1bd515ee68c4cf1b008e0cef0dffd0d746f35d3586418d4bc25cff-rootfs.mount: Deactivated successfully. May 10 00:48:57.490778 env[1332]: time="2025-05-10T00:48:57.490683953Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 10 00:48:57.503272 env[1332]: time="2025-05-10T00:48:57.502486457Z" level=info msg="StopContainer for \"0606ad09df4f8342e073c6499dddd6da1662b4b7b86b2172db6d1a3ad21ddff4\" with timeout 2 (s)" May 10 00:48:57.503829 env[1332]: time="2025-05-10T00:48:57.503777607Z" level=info msg="Stop container \"0606ad09df4f8342e073c6499dddd6da1662b4b7b86b2172db6d1a3ad21ddff4\" with signal terminated" May 10 00:48:57.519226 systemd-networkd[1075]: lxc_health: Link DOWN May 10 00:48:57.519238 systemd-networkd[1075]: lxc_health: Lost carrier May 10 00:48:57.524588 env[1332]: time="2025-05-10T00:48:57.524462451Z" level=info msg="shim disconnected" id=fc9573dbbd1bd515ee68c4cf1b008e0cef0dffd0d746f35d3586418d4bc25cff May 10 00:48:57.524588 env[1332]: time="2025-05-10T00:48:57.524561244Z" level=warning msg="cleaning up after shim disconnected" id=fc9573dbbd1bd515ee68c4cf1b008e0cef0dffd0d746f35d3586418d4bc25cff namespace=k8s.io May 10 00:48:57.524889 env[1332]: time="2025-05-10T00:48:57.524631049Z" level=info msg="cleaning up dead shim" May 10 00:48:57.560108 env[1332]: time="2025-05-10T00:48:57.559983156Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:48:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3867 runtime=io.containerd.runc.v2\n" May 10 00:48:57.562900 env[1332]: time="2025-05-10T00:48:57.562832512Z" level=info msg="StopContainer for \"fc9573dbbd1bd515ee68c4cf1b008e0cef0dffd0d746f35d3586418d4bc25cff\" returns successfully" May 10 00:48:57.564024 env[1332]: time="2025-05-10T00:48:57.563970991Z" level=info msg="StopPodSandbox for \"1bba67bd82636e2ac7f71003a08af055ad75975479a82cedce3f2485dfdf9c28\"" May 10 00:48:57.564205 env[1332]: time="2025-05-10T00:48:57.564119291Z" level=info msg="Container to stop \"fc9573dbbd1bd515ee68c4cf1b008e0cef0dffd0d746f35d3586418d4bc25cff\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 10 00:48:57.568551 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1bba67bd82636e2ac7f71003a08af055ad75975479a82cedce3f2485dfdf9c28-shm.mount: Deactivated successfully. 
May 10 00:48:57.625364 env[1332]: time="2025-05-10T00:48:57.625259948Z" level=info msg="shim disconnected" id=0606ad09df4f8342e073c6499dddd6da1662b4b7b86b2172db6d1a3ad21ddff4 May 10 00:48:57.625364 env[1332]: time="2025-05-10T00:48:57.625370215Z" level=warning msg="cleaning up after shim disconnected" id=0606ad09df4f8342e073c6499dddd6da1662b4b7b86b2172db6d1a3ad21ddff4 namespace=k8s.io May 10 00:48:57.625364 env[1332]: time="2025-05-10T00:48:57.625386983Z" level=info msg="cleaning up dead shim" May 10 00:48:57.651594 env[1332]: time="2025-05-10T00:48:57.651392508Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:48:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3912 runtime=io.containerd.runc.v2\n" May 10 00:48:57.654376 env[1332]: time="2025-05-10T00:48:57.654301896Z" level=info msg="shim disconnected" id=1bba67bd82636e2ac7f71003a08af055ad75975479a82cedce3f2485dfdf9c28 May 10 00:48:57.654575 env[1332]: time="2025-05-10T00:48:57.654380718Z" level=warning msg="cleaning up after shim disconnected" id=1bba67bd82636e2ac7f71003a08af055ad75975479a82cedce3f2485dfdf9c28 namespace=k8s.io May 10 00:48:57.654575 env[1332]: time="2025-05-10T00:48:57.654397834Z" level=info msg="cleaning up dead shim" May 10 00:48:57.658046 env[1332]: time="2025-05-10T00:48:57.657987727Z" level=info msg="StopContainer for \"0606ad09df4f8342e073c6499dddd6da1662b4b7b86b2172db6d1a3ad21ddff4\" returns successfully" May 10 00:48:57.658815 env[1332]: time="2025-05-10T00:48:57.658772205Z" level=info msg="StopPodSandbox for \"e70b2e7f7c47dbbf68d635590cbf0c7f30a076ee3315e46521ee997c2008993d\"" May 10 00:48:57.659413 env[1332]: time="2025-05-10T00:48:57.659041712Z" level=info msg="Container to stop \"64fcca34bdb84e37c475c1aee748d822fad4cf8c8196c41546f0628f4ec303d8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 10 00:48:57.659683 env[1332]: time="2025-05-10T00:48:57.659636359Z" level=info msg="Container to stop \"ae9e2cf35371275b1d87286810f6d876870fc9a91445f30c363a0b2116c7e1fa\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 10 00:48:57.659864 env[1332]: time="2025-05-10T00:48:57.659822858Z" level=info msg="Container to stop \"4cea9404750c4fb32f52ceccad5a891644268983dd207c431afdc5ca592c0626\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 10 00:48:57.659996 env[1332]: time="2025-05-10T00:48:57.659968738Z" level=info msg="Container to stop \"fba315dfe8422cced95d65e8d5ce316704686193c2b16fe65aeccfd0736acc70\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 10 00:48:57.660144 env[1332]: time="2025-05-10T00:48:57.660118208Z" level=info msg="Container to stop \"0606ad09df4f8342e073c6499dddd6da1662b4b7b86b2172db6d1a3ad21ddff4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 10 00:48:57.687435 env[1332]: time="2025-05-10T00:48:57.687351742Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:48:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3931 runtime=io.containerd.runc.v2\n" May 10 00:48:57.688586 env[1332]: time="2025-05-10T00:48:57.688529569Z" level=info msg="TearDown network for sandbox \"1bba67bd82636e2ac7f71003a08af055ad75975479a82cedce3f2485dfdf9c28\" successfully" May 10 00:48:57.688853 env[1332]: time="2025-05-10T00:48:57.688794204Z" level=info msg="StopPodSandbox for \"1bba67bd82636e2ac7f71003a08af055ad75975479a82cedce3f2485dfdf9c28\" returns successfully" May 10 00:48:57.724432 env[1332]: 
time="2025-05-10T00:48:57.724324762Z" level=info msg="shim disconnected" id=e70b2e7f7c47dbbf68d635590cbf0c7f30a076ee3315e46521ee997c2008993d May 10 00:48:57.724860 env[1332]: time="2025-05-10T00:48:57.724821010Z" level=warning msg="cleaning up after shim disconnected" id=e70b2e7f7c47dbbf68d635590cbf0c7f30a076ee3315e46521ee997c2008993d namespace=k8s.io May 10 00:48:57.725136 env[1332]: time="2025-05-10T00:48:57.725105847Z" level=info msg="cleaning up dead shim" May 10 00:48:57.740730 env[1332]: time="2025-05-10T00:48:57.740655305Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:48:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3966 runtime=io.containerd.runc.v2\n" May 10 00:48:57.741348 env[1332]: time="2025-05-10T00:48:57.741283844Z" level=info msg="TearDown network for sandbox \"e70b2e7f7c47dbbf68d635590cbf0c7f30a076ee3315e46521ee997c2008993d\" successfully" May 10 00:48:57.741348 env[1332]: time="2025-05-10T00:48:57.741329088Z" level=info msg="StopPodSandbox for \"e70b2e7f7c47dbbf68d635590cbf0c7f30a076ee3315e46521ee997c2008993d\" returns successfully" May 10 00:48:57.774226 kubelet[2253]: I0510 00:48:57.774142 2253 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mpzbn\" (UniqueName: \"kubernetes.io/projected/eefaac8c-31a6-4209-bcc4-adfef94244d7-kube-api-access-mpzbn\") pod \"eefaac8c-31a6-4209-bcc4-adfef94244d7\" (UID: \"eefaac8c-31a6-4209-bcc4-adfef94244d7\") " May 10 00:48:57.775264 kubelet[2253]: I0510 00:48:57.775224 2253 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/eefaac8c-31a6-4209-bcc4-adfef94244d7-cilium-config-path\") pod \"eefaac8c-31a6-4209-bcc4-adfef94244d7\" (UID: \"eefaac8c-31a6-4209-bcc4-adfef94244d7\") " May 10 00:48:57.780048 kubelet[2253]: I0510 00:48:57.779938 2253 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eefaac8c-31a6-4209-bcc4-adfef94244d7-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "eefaac8c-31a6-4209-bcc4-adfef94244d7" (UID: "eefaac8c-31a6-4209-bcc4-adfef94244d7"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 10 00:48:57.781026 kubelet[2253]: I0510 00:48:57.780974 2253 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eefaac8c-31a6-4209-bcc4-adfef94244d7-kube-api-access-mpzbn" (OuterVolumeSpecName: "kube-api-access-mpzbn") pod "eefaac8c-31a6-4209-bcc4-adfef94244d7" (UID: "eefaac8c-31a6-4209-bcc4-adfef94244d7"). InnerVolumeSpecName "kube-api-access-mpzbn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" May 10 00:48:57.876622 kubelet[2253]: I0510 00:48:57.876548 2253 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3fe866e9-5df7-4c04-9eac-5731ca781012-cilium-run\") pod \"3fe866e9-5df7-4c04-9eac-5731ca781012\" (UID: \"3fe866e9-5df7-4c04-9eac-5731ca781012\") " May 10 00:48:57.876622 kubelet[2253]: I0510 00:48:57.876631 2253 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3fe866e9-5df7-4c04-9eac-5731ca781012-host-proc-sys-kernel\") pod \"3fe866e9-5df7-4c04-9eac-5731ca781012\" (UID: \"3fe866e9-5df7-4c04-9eac-5731ca781012\") " May 10 00:48:57.877003 kubelet[2253]: I0510 00:48:57.876672 2253 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3fe866e9-5df7-4c04-9eac-5731ca781012-clustermesh-secrets\") pod \"3fe866e9-5df7-4c04-9eac-5731ca781012\" (UID: \"3fe866e9-5df7-4c04-9eac-5731ca781012\") " May 10 00:48:57.877003 kubelet[2253]: I0510 00:48:57.876701 2253 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3fe866e9-5df7-4c04-9eac-5731ca781012-etc-cni-netd\") pod \"3fe866e9-5df7-4c04-9eac-5731ca781012\" (UID: \"3fe866e9-5df7-4c04-9eac-5731ca781012\") " May 10 00:48:57.877003 kubelet[2253]: I0510 00:48:57.876725 2253 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3fe866e9-5df7-4c04-9eac-5731ca781012-bpf-maps\") pod \"3fe866e9-5df7-4c04-9eac-5731ca781012\" (UID: \"3fe866e9-5df7-4c04-9eac-5731ca781012\") " May 10 00:48:57.877003 kubelet[2253]: I0510 00:48:57.876759 2253 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3fe866e9-5df7-4c04-9eac-5731ca781012-cni-path\") pod \"3fe866e9-5df7-4c04-9eac-5731ca781012\" (UID: \"3fe866e9-5df7-4c04-9eac-5731ca781012\") " May 10 00:48:57.877003 kubelet[2253]: I0510 00:48:57.876789 2253 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3fe866e9-5df7-4c04-9eac-5731ca781012-hubble-tls\") pod \"3fe866e9-5df7-4c04-9eac-5731ca781012\" (UID: \"3fe866e9-5df7-4c04-9eac-5731ca781012\") " May 10 00:48:57.877003 kubelet[2253]: I0510 00:48:57.876816 2253 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3fe866e9-5df7-4c04-9eac-5731ca781012-lib-modules\") pod \"3fe866e9-5df7-4c04-9eac-5731ca781012\" (UID: \"3fe866e9-5df7-4c04-9eac-5731ca781012\") " May 10 00:48:57.877524 kubelet[2253]: I0510 00:48:57.876839 2253 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3fe866e9-5df7-4c04-9eac-5731ca781012-hostproc\") pod \"3fe866e9-5df7-4c04-9eac-5731ca781012\" (UID: \"3fe866e9-5df7-4c04-9eac-5731ca781012\") " May 10 00:48:57.877524 kubelet[2253]: I0510 00:48:57.876874 2253 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pfh2r\" (UniqueName: \"kubernetes.io/projected/3fe866e9-5df7-4c04-9eac-5731ca781012-kube-api-access-pfh2r\") pod \"3fe866e9-5df7-4c04-9eac-5731ca781012\" (UID: \"3fe866e9-5df7-4c04-9eac-5731ca781012\") " May 10 
00:48:57.877524 kubelet[2253]: I0510 00:48:57.876901 2253 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3fe866e9-5df7-4c04-9eac-5731ca781012-cilium-config-path\") pod \"3fe866e9-5df7-4c04-9eac-5731ca781012\" (UID: \"3fe866e9-5df7-4c04-9eac-5731ca781012\") " May 10 00:48:57.877524 kubelet[2253]: I0510 00:48:57.876924 2253 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3fe866e9-5df7-4c04-9eac-5731ca781012-xtables-lock\") pod \"3fe866e9-5df7-4c04-9eac-5731ca781012\" (UID: \"3fe866e9-5df7-4c04-9eac-5731ca781012\") " May 10 00:48:57.877524 kubelet[2253]: I0510 00:48:57.876951 2253 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3fe866e9-5df7-4c04-9eac-5731ca781012-host-proc-sys-net\") pod \"3fe866e9-5df7-4c04-9eac-5731ca781012\" (UID: \"3fe866e9-5df7-4c04-9eac-5731ca781012\") " May 10 00:48:57.877524 kubelet[2253]: I0510 00:48:57.876978 2253 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3fe866e9-5df7-4c04-9eac-5731ca781012-cilium-cgroup\") pod \"3fe866e9-5df7-4c04-9eac-5731ca781012\" (UID: \"3fe866e9-5df7-4c04-9eac-5731ca781012\") " May 10 00:48:57.877839 kubelet[2253]: I0510 00:48:57.877051 2253 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/eefaac8c-31a6-4209-bcc4-adfef94244d7-cilium-config-path\") on node \"ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403\" DevicePath \"\"" May 10 00:48:57.877839 kubelet[2253]: I0510 00:48:57.877190 2253 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-mpzbn\" (UniqueName: \"kubernetes.io/projected/eefaac8c-31a6-4209-bcc4-adfef94244d7-kube-api-access-mpzbn\") on node \"ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403\" DevicePath \"\"" May 10 00:48:57.877839 kubelet[2253]: I0510 00:48:57.877269 2253 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3fe866e9-5df7-4c04-9eac-5731ca781012-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "3fe866e9-5df7-4c04-9eac-5731ca781012" (UID: "3fe866e9-5df7-4c04-9eac-5731ca781012"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:48:57.877839 kubelet[2253]: I0510 00:48:57.877331 2253 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3fe866e9-5df7-4c04-9eac-5731ca781012-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "3fe866e9-5df7-4c04-9eac-5731ca781012" (UID: "3fe866e9-5df7-4c04-9eac-5731ca781012"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:48:57.877839 kubelet[2253]: I0510 00:48:57.877362 2253 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3fe866e9-5df7-4c04-9eac-5731ca781012-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "3fe866e9-5df7-4c04-9eac-5731ca781012" (UID: "3fe866e9-5df7-4c04-9eac-5731ca781012"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:48:57.878161 kubelet[2253]: I0510 00:48:57.877921 2253 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3fe866e9-5df7-4c04-9eac-5731ca781012-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "3fe866e9-5df7-4c04-9eac-5731ca781012" (UID: "3fe866e9-5df7-4c04-9eac-5731ca781012"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:48:57.878161 kubelet[2253]: I0510 00:48:57.877967 2253 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3fe866e9-5df7-4c04-9eac-5731ca781012-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "3fe866e9-5df7-4c04-9eac-5731ca781012" (UID: "3fe866e9-5df7-4c04-9eac-5731ca781012"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:48:57.878161 kubelet[2253]: I0510 00:48:57.877993 2253 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3fe866e9-5df7-4c04-9eac-5731ca781012-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "3fe866e9-5df7-4c04-9eac-5731ca781012" (UID: "3fe866e9-5df7-4c04-9eac-5731ca781012"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:48:57.878161 kubelet[2253]: I0510 00:48:57.878016 2253 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3fe866e9-5df7-4c04-9eac-5731ca781012-cni-path" (OuterVolumeSpecName: "cni-path") pod "3fe866e9-5df7-4c04-9eac-5731ca781012" (UID: "3fe866e9-5df7-4c04-9eac-5731ca781012"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:48:57.879173 kubelet[2253]: I0510 00:48:57.879119 2253 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3fe866e9-5df7-4c04-9eac-5731ca781012-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "3fe866e9-5df7-4c04-9eac-5731ca781012" (UID: "3fe866e9-5df7-4c04-9eac-5731ca781012"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:48:57.879331 kubelet[2253]: I0510 00:48:57.879177 2253 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3fe866e9-5df7-4c04-9eac-5731ca781012-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "3fe866e9-5df7-4c04-9eac-5731ca781012" (UID: "3fe866e9-5df7-4c04-9eac-5731ca781012"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:48:57.879871 kubelet[2253]: I0510 00:48:57.879829 2253 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3fe866e9-5df7-4c04-9eac-5731ca781012-hostproc" (OuterVolumeSpecName: "hostproc") pod "3fe866e9-5df7-4c04-9eac-5731ca781012" (UID: "3fe866e9-5df7-4c04-9eac-5731ca781012"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:48:57.882535 kubelet[2253]: I0510 00:48:57.882490 2253 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3fe866e9-5df7-4c04-9eac-5731ca781012-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3fe866e9-5df7-4c04-9eac-5731ca781012" (UID: "3fe866e9-5df7-4c04-9eac-5731ca781012"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" May 10 00:48:57.887732 kubelet[2253]: I0510 00:48:57.887688 2253 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fe866e9-5df7-4c04-9eac-5731ca781012-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "3fe866e9-5df7-4c04-9eac-5731ca781012" (UID: "3fe866e9-5df7-4c04-9eac-5731ca781012"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 10 00:48:57.888391 kubelet[2253]: I0510 00:48:57.888352 2253 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fe866e9-5df7-4c04-9eac-5731ca781012-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "3fe866e9-5df7-4c04-9eac-5731ca781012" (UID: "3fe866e9-5df7-4c04-9eac-5731ca781012"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" May 10 00:48:57.890740 kubelet[2253]: I0510 00:48:57.890682 2253 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fe866e9-5df7-4c04-9eac-5731ca781012-kube-api-access-pfh2r" (OuterVolumeSpecName: "kube-api-access-pfh2r") pod "3fe866e9-5df7-4c04-9eac-5731ca781012" (UID: "3fe866e9-5df7-4c04-9eac-5731ca781012"). InnerVolumeSpecName "kube-api-access-pfh2r". PluginName "kubernetes.io/projected", VolumeGidValue "" May 10 00:48:57.978645 kubelet[2253]: I0510 00:48:57.978264 2253 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3fe866e9-5df7-4c04-9eac-5731ca781012-host-proc-sys-kernel\") on node \"ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403\" DevicePath \"\"" May 10 00:48:57.978645 kubelet[2253]: I0510 00:48:57.978338 2253 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3fe866e9-5df7-4c04-9eac-5731ca781012-clustermesh-secrets\") on node \"ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403\" DevicePath \"\"" May 10 00:48:57.978645 kubelet[2253]: I0510 00:48:57.978358 2253 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3fe866e9-5df7-4c04-9eac-5731ca781012-etc-cni-netd\") on node \"ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403\" DevicePath \"\"" May 10 00:48:57.978645 kubelet[2253]: I0510 00:48:57.978385 2253 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3fe866e9-5df7-4c04-9eac-5731ca781012-bpf-maps\") on node \"ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403\" DevicePath \"\"" May 10 00:48:57.978645 kubelet[2253]: I0510 00:48:57.978407 2253 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3fe866e9-5df7-4c04-9eac-5731ca781012-cni-path\") on node \"ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403\" DevicePath \"\"" May 10 00:48:57.978645 kubelet[2253]: I0510 00:48:57.978423 2253 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3fe866e9-5df7-4c04-9eac-5731ca781012-hubble-tls\") on node \"ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403\" DevicePath \"\"" May 10 00:48:57.978645 kubelet[2253]: I0510 00:48:57.978439 2253 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3fe866e9-5df7-4c04-9eac-5731ca781012-lib-modules\") on node 
\"ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403\" DevicePath \"\"" May 10 00:48:57.979331 kubelet[2253]: I0510 00:48:57.978461 2253 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3fe866e9-5df7-4c04-9eac-5731ca781012-hostproc\") on node \"ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403\" DevicePath \"\"" May 10 00:48:57.979331 kubelet[2253]: I0510 00:48:57.978480 2253 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-pfh2r\" (UniqueName: \"kubernetes.io/projected/3fe866e9-5df7-4c04-9eac-5731ca781012-kube-api-access-pfh2r\") on node \"ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403\" DevicePath \"\"" May 10 00:48:57.979331 kubelet[2253]: I0510 00:48:57.978499 2253 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3fe866e9-5df7-4c04-9eac-5731ca781012-cilium-config-path\") on node \"ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403\" DevicePath \"\"" May 10 00:48:57.979331 kubelet[2253]: I0510 00:48:57.978516 2253 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3fe866e9-5df7-4c04-9eac-5731ca781012-xtables-lock\") on node \"ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403\" DevicePath \"\"" May 10 00:48:57.979331 kubelet[2253]: I0510 00:48:57.978533 2253 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3fe866e9-5df7-4c04-9eac-5731ca781012-host-proc-sys-net\") on node \"ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403\" DevicePath \"\"" May 10 00:48:57.979331 kubelet[2253]: I0510 00:48:57.978548 2253 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3fe866e9-5df7-4c04-9eac-5731ca781012-cilium-cgroup\") on node \"ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403\" DevicePath \"\"" May 10 00:48:57.979331 kubelet[2253]: I0510 00:48:57.978565 2253 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3fe866e9-5df7-4c04-9eac-5731ca781012-cilium-run\") on node \"ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403\" DevicePath \"\"" May 10 00:48:58.418799 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0606ad09df4f8342e073c6499dddd6da1662b4b7b86b2172db6d1a3ad21ddff4-rootfs.mount: Deactivated successfully. May 10 00:48:58.419105 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1bba67bd82636e2ac7f71003a08af055ad75975479a82cedce3f2485dfdf9c28-rootfs.mount: Deactivated successfully. May 10 00:48:58.419279 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e70b2e7f7c47dbbf68d635590cbf0c7f30a076ee3315e46521ee997c2008993d-rootfs.mount: Deactivated successfully. May 10 00:48:58.419449 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e70b2e7f7c47dbbf68d635590cbf0c7f30a076ee3315e46521ee997c2008993d-shm.mount: Deactivated successfully. May 10 00:48:58.419622 systemd[1]: var-lib-kubelet-pods-eefaac8c\x2d31a6\x2d4209\x2dbcc4\x2dadfef94244d7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmpzbn.mount: Deactivated successfully. May 10 00:48:58.419836 systemd[1]: var-lib-kubelet-pods-3fe866e9\x2d5df7\x2d4c04\x2d9eac\x2d5731ca781012-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpfh2r.mount: Deactivated successfully. 
May 10 00:48:58.420023 systemd[1]: var-lib-kubelet-pods-3fe866e9\x2d5df7\x2d4c04\x2d9eac\x2d5731ca781012-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 10 00:48:58.420256 systemd[1]: var-lib-kubelet-pods-3fe866e9\x2d5df7\x2d4c04\x2d9eac\x2d5731ca781012-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 10 00:48:58.454688 kubelet[2253]: I0510 00:48:58.454639 2253 scope.go:117] "RemoveContainer" containerID="fc9573dbbd1bd515ee68c4cf1b008e0cef0dffd0d746f35d3586418d4bc25cff" May 10 00:48:58.457274 env[1332]: time="2025-05-10T00:48:58.457207829Z" level=info msg="RemoveContainer for \"fc9573dbbd1bd515ee68c4cf1b008e0cef0dffd0d746f35d3586418d4bc25cff\"" May 10 00:48:58.480732 env[1332]: time="2025-05-10T00:48:58.480654890Z" level=info msg="RemoveContainer for \"fc9573dbbd1bd515ee68c4cf1b008e0cef0dffd0d746f35d3586418d4bc25cff\" returns successfully" May 10 00:48:58.481747 kubelet[2253]: I0510 00:48:58.481549 2253 scope.go:117] "RemoveContainer" containerID="fc9573dbbd1bd515ee68c4cf1b008e0cef0dffd0d746f35d3586418d4bc25cff" May 10 00:48:58.482367 env[1332]: time="2025-05-10T00:48:58.482252092Z" level=error msg="ContainerStatus for \"fc9573dbbd1bd515ee68c4cf1b008e0cef0dffd0d746f35d3586418d4bc25cff\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fc9573dbbd1bd515ee68c4cf1b008e0cef0dffd0d746f35d3586418d4bc25cff\": not found" May 10 00:48:58.482777 kubelet[2253]: E0510 00:48:58.482735 2253 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fc9573dbbd1bd515ee68c4cf1b008e0cef0dffd0d746f35d3586418d4bc25cff\": not found" containerID="fc9573dbbd1bd515ee68c4cf1b008e0cef0dffd0d746f35d3586418d4bc25cff" May 10 00:48:58.482990 kubelet[2253]: I0510 00:48:58.482792 2253 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fc9573dbbd1bd515ee68c4cf1b008e0cef0dffd0d746f35d3586418d4bc25cff"} err="failed to get container status \"fc9573dbbd1bd515ee68c4cf1b008e0cef0dffd0d746f35d3586418d4bc25cff\": rpc error: code = NotFound desc = an error occurred when try to find container \"fc9573dbbd1bd515ee68c4cf1b008e0cef0dffd0d746f35d3586418d4bc25cff\": not found" May 10 00:48:58.482990 kubelet[2253]: I0510 00:48:58.482909 2253 scope.go:117] "RemoveContainer" containerID="0606ad09df4f8342e073c6499dddd6da1662b4b7b86b2172db6d1a3ad21ddff4" May 10 00:48:58.485670 env[1332]: time="2025-05-10T00:48:58.485624454Z" level=info msg="RemoveContainer for \"0606ad09df4f8342e073c6499dddd6da1662b4b7b86b2172db6d1a3ad21ddff4\"" May 10 00:48:58.495021 env[1332]: time="2025-05-10T00:48:58.494945993Z" level=info msg="RemoveContainer for \"0606ad09df4f8342e073c6499dddd6da1662b4b7b86b2172db6d1a3ad21ddff4\" returns successfully" May 10 00:48:58.495305 kubelet[2253]: I0510 00:48:58.495254 2253 scope.go:117] "RemoveContainer" containerID="fba315dfe8422cced95d65e8d5ce316704686193c2b16fe65aeccfd0736acc70" May 10 00:48:58.496984 env[1332]: time="2025-05-10T00:48:58.496935516Z" level=info msg="RemoveContainer for \"fba315dfe8422cced95d65e8d5ce316704686193c2b16fe65aeccfd0736acc70\"" May 10 00:48:58.502139 env[1332]: time="2025-05-10T00:48:58.501189343Z" level=info msg="RemoveContainer for \"fba315dfe8422cced95d65e8d5ce316704686193c2b16fe65aeccfd0736acc70\" returns successfully" May 10 00:48:58.502311 kubelet[2253]: I0510 00:48:58.502266 2253 scope.go:117] "RemoveContainer" 
containerID="ae9e2cf35371275b1d87286810f6d876870fc9a91445f30c363a0b2116c7e1fa" May 10 00:48:58.504838 env[1332]: time="2025-05-10T00:48:58.504776120Z" level=info msg="RemoveContainer for \"ae9e2cf35371275b1d87286810f6d876870fc9a91445f30c363a0b2116c7e1fa\"" May 10 00:48:58.509137 env[1332]: time="2025-05-10T00:48:58.509039947Z" level=info msg="RemoveContainer for \"ae9e2cf35371275b1d87286810f6d876870fc9a91445f30c363a0b2116c7e1fa\" returns successfully" May 10 00:48:58.509663 kubelet[2253]: I0510 00:48:58.509521 2253 scope.go:117] "RemoveContainer" containerID="4cea9404750c4fb32f52ceccad5a891644268983dd207c431afdc5ca592c0626" May 10 00:48:58.511326 env[1332]: time="2025-05-10T00:48:58.511276050Z" level=info msg="RemoveContainer for \"4cea9404750c4fb32f52ceccad5a891644268983dd207c431afdc5ca592c0626\"" May 10 00:48:58.515005 env[1332]: time="2025-05-10T00:48:58.514956632Z" level=info msg="RemoveContainer for \"4cea9404750c4fb32f52ceccad5a891644268983dd207c431afdc5ca592c0626\" returns successfully" May 10 00:48:58.515244 kubelet[2253]: I0510 00:48:58.515197 2253 scope.go:117] "RemoveContainer" containerID="64fcca34bdb84e37c475c1aee748d822fad4cf8c8196c41546f0628f4ec303d8" May 10 00:48:58.516699 env[1332]: time="2025-05-10T00:48:58.516644895Z" level=info msg="RemoveContainer for \"64fcca34bdb84e37c475c1aee748d822fad4cf8c8196c41546f0628f4ec303d8\"" May 10 00:48:58.520329 env[1332]: time="2025-05-10T00:48:58.520282245Z" level=info msg="RemoveContainer for \"64fcca34bdb84e37c475c1aee748d822fad4cf8c8196c41546f0628f4ec303d8\" returns successfully" May 10 00:48:58.520513 kubelet[2253]: I0510 00:48:58.520484 2253 scope.go:117] "RemoveContainer" containerID="0606ad09df4f8342e073c6499dddd6da1662b4b7b86b2172db6d1a3ad21ddff4" May 10 00:48:58.521357 env[1332]: time="2025-05-10T00:48:58.520742214Z" level=error msg="ContainerStatus for \"0606ad09df4f8342e073c6499dddd6da1662b4b7b86b2172db6d1a3ad21ddff4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0606ad09df4f8342e073c6499dddd6da1662b4b7b86b2172db6d1a3ad21ddff4\": not found" May 10 00:48:58.521357 env[1332]: time="2025-05-10T00:48:58.521291701Z" level=error msg="ContainerStatus for \"fba315dfe8422cced95d65e8d5ce316704686193c2b16fe65aeccfd0736acc70\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fba315dfe8422cced95d65e8d5ce316704686193c2b16fe65aeccfd0736acc70\": not found" May 10 00:48:58.521540 kubelet[2253]: E0510 00:48:58.520960 2253 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0606ad09df4f8342e073c6499dddd6da1662b4b7b86b2172db6d1a3ad21ddff4\": not found" containerID="0606ad09df4f8342e073c6499dddd6da1662b4b7b86b2172db6d1a3ad21ddff4" May 10 00:48:58.521540 kubelet[2253]: I0510 00:48:58.520997 2253 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0606ad09df4f8342e073c6499dddd6da1662b4b7b86b2172db6d1a3ad21ddff4"} err="failed to get container status \"0606ad09df4f8342e073c6499dddd6da1662b4b7b86b2172db6d1a3ad21ddff4\": rpc error: code = NotFound desc = an error occurred when try to find container \"0606ad09df4f8342e073c6499dddd6da1662b4b7b86b2172db6d1a3ad21ddff4\": not found" May 10 00:48:58.521540 kubelet[2253]: I0510 00:48:58.521032 2253 scope.go:117] "RemoveContainer" containerID="fba315dfe8422cced95d65e8d5ce316704686193c2b16fe65aeccfd0736acc70" May 10 00:48:58.521540 kubelet[2253]: E0510 
00:48:58.521486 2253 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fba315dfe8422cced95d65e8d5ce316704686193c2b16fe65aeccfd0736acc70\": not found" containerID="fba315dfe8422cced95d65e8d5ce316704686193c2b16fe65aeccfd0736acc70" May 10 00:48:58.521540 kubelet[2253]: I0510 00:48:58.521520 2253 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fba315dfe8422cced95d65e8d5ce316704686193c2b16fe65aeccfd0736acc70"} err="failed to get container status \"fba315dfe8422cced95d65e8d5ce316704686193c2b16fe65aeccfd0736acc70\": rpc error: code = NotFound desc = an error occurred when try to find container \"fba315dfe8422cced95d65e8d5ce316704686193c2b16fe65aeccfd0736acc70\": not found" May 10 00:48:58.521837 kubelet[2253]: I0510 00:48:58.521555 2253 scope.go:117] "RemoveContainer" containerID="ae9e2cf35371275b1d87286810f6d876870fc9a91445f30c363a0b2116c7e1fa" May 10 00:48:58.521921 env[1332]: time="2025-05-10T00:48:58.521779607Z" level=error msg="ContainerStatus for \"ae9e2cf35371275b1d87286810f6d876870fc9a91445f30c363a0b2116c7e1fa\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ae9e2cf35371275b1d87286810f6d876870fc9a91445f30c363a0b2116c7e1fa\": not found" May 10 00:48:58.521988 kubelet[2253]: E0510 00:48:58.521959 2253 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ae9e2cf35371275b1d87286810f6d876870fc9a91445f30c363a0b2116c7e1fa\": not found" containerID="ae9e2cf35371275b1d87286810f6d876870fc9a91445f30c363a0b2116c7e1fa" May 10 00:48:58.522046 kubelet[2253]: I0510 00:48:58.521994 2253 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ae9e2cf35371275b1d87286810f6d876870fc9a91445f30c363a0b2116c7e1fa"} err="failed to get container status \"ae9e2cf35371275b1d87286810f6d876870fc9a91445f30c363a0b2116c7e1fa\": rpc error: code = NotFound desc = an error occurred when try to find container \"ae9e2cf35371275b1d87286810f6d876870fc9a91445f30c363a0b2116c7e1fa\": not found" May 10 00:48:58.522046 kubelet[2253]: I0510 00:48:58.522037 2253 scope.go:117] "RemoveContainer" containerID="4cea9404750c4fb32f52ceccad5a891644268983dd207c431afdc5ca592c0626" May 10 00:48:58.522846 env[1332]: time="2025-05-10T00:48:58.522759829Z" level=error msg="ContainerStatus for \"4cea9404750c4fb32f52ceccad5a891644268983dd207c431afdc5ca592c0626\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4cea9404750c4fb32f52ceccad5a891644268983dd207c431afdc5ca592c0626\": not found" May 10 00:48:58.523012 kubelet[2253]: E0510 00:48:58.522969 2253 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4cea9404750c4fb32f52ceccad5a891644268983dd207c431afdc5ca592c0626\": not found" containerID="4cea9404750c4fb32f52ceccad5a891644268983dd207c431afdc5ca592c0626" May 10 00:48:58.523131 kubelet[2253]: I0510 00:48:58.523016 2253 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4cea9404750c4fb32f52ceccad5a891644268983dd207c431afdc5ca592c0626"} err="failed to get container status \"4cea9404750c4fb32f52ceccad5a891644268983dd207c431afdc5ca592c0626\": rpc error: code = NotFound desc = an error occurred when try to find 
container \"4cea9404750c4fb32f52ceccad5a891644268983dd207c431afdc5ca592c0626\": not found" May 10 00:48:58.523131 kubelet[2253]: I0510 00:48:58.523042 2253 scope.go:117] "RemoveContainer" containerID="64fcca34bdb84e37c475c1aee748d822fad4cf8c8196c41546f0628f4ec303d8" May 10 00:48:58.523381 env[1332]: time="2025-05-10T00:48:58.523297751Z" level=error msg="ContainerStatus for \"64fcca34bdb84e37c475c1aee748d822fad4cf8c8196c41546f0628f4ec303d8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"64fcca34bdb84e37c475c1aee748d822fad4cf8c8196c41546f0628f4ec303d8\": not found" May 10 00:48:58.523522 kubelet[2253]: E0510 00:48:58.523488 2253 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"64fcca34bdb84e37c475c1aee748d822fad4cf8c8196c41546f0628f4ec303d8\": not found" containerID="64fcca34bdb84e37c475c1aee748d822fad4cf8c8196c41546f0628f4ec303d8" May 10 00:48:58.523623 kubelet[2253]: I0510 00:48:58.523530 2253 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"64fcca34bdb84e37c475c1aee748d822fad4cf8c8196c41546f0628f4ec303d8"} err="failed to get container status \"64fcca34bdb84e37c475c1aee748d822fad4cf8c8196c41546f0628f4ec303d8\": rpc error: code = NotFound desc = an error occurred when try to find container \"64fcca34bdb84e37c475c1aee748d822fad4cf8c8196c41546f0628f4ec303d8\": not found" May 10 00:48:59.349531 sshd[3813]: pam_unix(sshd:session): session closed for user core May 10 00:48:59.355923 systemd[1]: sshd@21-10.128.0.77:22-147.75.109.163:36650.service: Deactivated successfully. May 10 00:48:59.359287 systemd[1]: session-22.scope: Deactivated successfully. May 10 00:48:59.361803 systemd-logind[1318]: Session 22 logged out. Waiting for processes to exit. May 10 00:48:59.364905 systemd-logind[1318]: Removed session 22. May 10 00:48:59.394694 systemd[1]: Started sshd@22-10.128.0.77:22-147.75.109.163:55640.service. May 10 00:48:59.682979 sshd[3985]: Accepted publickey for core from 147.75.109.163 port 55640 ssh2: RSA SHA256:euqJNJC5P+Wq5j+dl78lhvyXKvYvUXQDTjxbGHC2Bdk May 10 00:48:59.685249 sshd[3985]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:48:59.694090 systemd[1]: Started session-23.scope. May 10 00:48:59.694503 systemd-logind[1318]: New session 23 of user core. 
May 10 00:49:00.060382 kubelet[2253]: I0510 00:49:00.060322 2253 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3fe866e9-5df7-4c04-9eac-5731ca781012" path="/var/lib/kubelet/pods/3fe866e9-5df7-4c04-9eac-5731ca781012/volumes" May 10 00:49:00.061554 kubelet[2253]: I0510 00:49:00.061518 2253 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eefaac8c-31a6-4209-bcc4-adfef94244d7" path="/var/lib/kubelet/pods/eefaac8c-31a6-4209-bcc4-adfef94244d7/volumes" May 10 00:49:00.868066 kubelet[2253]: I0510 00:49:00.867985 2253 topology_manager.go:215] "Topology Admit Handler" podUID="2d967515-0207-42f4-afe4-edaf345591bc" podNamespace="kube-system" podName="cilium-c628n" May 10 00:49:00.868622 kubelet[2253]: E0510 00:49:00.868594 2253 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3fe866e9-5df7-4c04-9eac-5731ca781012" containerName="apply-sysctl-overwrites" May 10 00:49:00.868819 kubelet[2253]: E0510 00:49:00.868799 2253 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3fe866e9-5df7-4c04-9eac-5731ca781012" containerName="mount-bpf-fs" May 10 00:49:00.868968 kubelet[2253]: E0510 00:49:00.868949 2253 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="eefaac8c-31a6-4209-bcc4-adfef94244d7" containerName="cilium-operator" May 10 00:49:00.869194 kubelet[2253]: E0510 00:49:00.869152 2253 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3fe866e9-5df7-4c04-9eac-5731ca781012" containerName="clean-cilium-state" May 10 00:49:00.869366 kubelet[2253]: E0510 00:49:00.869347 2253 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3fe866e9-5df7-4c04-9eac-5731ca781012" containerName="cilium-agent" May 10 00:49:00.869508 kubelet[2253]: E0510 00:49:00.869485 2253 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3fe866e9-5df7-4c04-9eac-5731ca781012" containerName="mount-cgroup" May 10 00:49:00.869722 kubelet[2253]: I0510 00:49:00.869680 2253 memory_manager.go:354] "RemoveStaleState removing state" podUID="3fe866e9-5df7-4c04-9eac-5731ca781012" containerName="cilium-agent" May 10 00:49:00.869865 kubelet[2253]: I0510 00:49:00.869845 2253 memory_manager.go:354] "RemoveStaleState removing state" podUID="eefaac8c-31a6-4209-bcc4-adfef94244d7" containerName="cilium-operator" May 10 00:49:00.888905 sshd[3985]: pam_unix(sshd:session): session closed for user core May 10 00:49:00.894507 systemd[1]: sshd@22-10.128.0.77:22-147.75.109.163:55640.service: Deactivated successfully. May 10 00:49:00.896036 systemd[1]: session-23.scope: Deactivated successfully. May 10 00:49:00.901669 systemd-logind[1318]: Session 23 logged out. Waiting for processes to exit. May 10 00:49:00.909215 systemd-logind[1318]: Removed session 23. May 10 00:49:00.936551 systemd[1]: Started sshd@23-10.128.0.77:22-147.75.109.163:55646.service. 
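When the replacement pod cilium-c628n is admitted, the cpu_manager and memory_manager drop their per-container state for the deleted pods, one RemoveStaleState entry per (pod UID, container name) pair, as seen above. A sketch that groups those entries by pod UID, assuming the structured key=value fields shown in these kubelet records; illustrative only:

    import re
    from collections import defaultdict

    STALE = re.compile(
        r'RemoveStaleState[^"]*" podUID="([0-9a-f-]{36})" containerName="([^"]+)"')

    def stale_state_by_pod(journal_lines):
        """Map old pod UID -> container names whose manager state was dropped."""
        by_pod = defaultdict(set)
        for line in journal_lines:
            for uid, name in STALE.findall(line):
                by_pod[uid].add(name)
        return by_pod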
May 10 00:49:01.001264 kubelet[2253]: I0510 00:49:01.001199 2253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2d967515-0207-42f4-afe4-edaf345591bc-hostproc\") pod \"cilium-c628n\" (UID: \"2d967515-0207-42f4-afe4-edaf345591bc\") " pod="kube-system/cilium-c628n" May 10 00:49:01.001264 kubelet[2253]: I0510 00:49:01.001276 2253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2d967515-0207-42f4-afe4-edaf345591bc-cilium-cgroup\") pod \"cilium-c628n\" (UID: \"2d967515-0207-42f4-afe4-edaf345591bc\") " pod="kube-system/cilium-c628n" May 10 00:49:01.001645 kubelet[2253]: I0510 00:49:01.001314 2253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75lll\" (UniqueName: \"kubernetes.io/projected/2d967515-0207-42f4-afe4-edaf345591bc-kube-api-access-75lll\") pod \"cilium-c628n\" (UID: \"2d967515-0207-42f4-afe4-edaf345591bc\") " pod="kube-system/cilium-c628n" May 10 00:49:01.001645 kubelet[2253]: I0510 00:49:01.001346 2253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2d967515-0207-42f4-afe4-edaf345591bc-etc-cni-netd\") pod \"cilium-c628n\" (UID: \"2d967515-0207-42f4-afe4-edaf345591bc\") " pod="kube-system/cilium-c628n" May 10 00:49:01.001645 kubelet[2253]: I0510 00:49:01.001375 2253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2d967515-0207-42f4-afe4-edaf345591bc-host-proc-sys-net\") pod \"cilium-c628n\" (UID: \"2d967515-0207-42f4-afe4-edaf345591bc\") " pod="kube-system/cilium-c628n" May 10 00:49:01.001645 kubelet[2253]: I0510 00:49:01.001408 2253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2d967515-0207-42f4-afe4-edaf345591bc-bpf-maps\") pod \"cilium-c628n\" (UID: \"2d967515-0207-42f4-afe4-edaf345591bc\") " pod="kube-system/cilium-c628n" May 10 00:49:01.001645 kubelet[2253]: I0510 00:49:01.001438 2253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2d967515-0207-42f4-afe4-edaf345591bc-cni-path\") pod \"cilium-c628n\" (UID: \"2d967515-0207-42f4-afe4-edaf345591bc\") " pod="kube-system/cilium-c628n" May 10 00:49:01.001645 kubelet[2253]: I0510 00:49:01.001470 2253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2d967515-0207-42f4-afe4-edaf345591bc-clustermesh-secrets\") pod \"cilium-c628n\" (UID: \"2d967515-0207-42f4-afe4-edaf345591bc\") " pod="kube-system/cilium-c628n" May 10 00:49:01.001989 kubelet[2253]: I0510 00:49:01.001498 2253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2d967515-0207-42f4-afe4-edaf345591bc-cilium-run\") pod \"cilium-c628n\" (UID: \"2d967515-0207-42f4-afe4-edaf345591bc\") " pod="kube-system/cilium-c628n" May 10 00:49:01.001989 kubelet[2253]: I0510 00:49:01.001526 2253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/2d967515-0207-42f4-afe4-edaf345591bc-xtables-lock\") pod \"cilium-c628n\" (UID: \"2d967515-0207-42f4-afe4-edaf345591bc\") " pod="kube-system/cilium-c628n" May 10 00:49:01.001989 kubelet[2253]: I0510 00:49:01.001556 2253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2d967515-0207-42f4-afe4-edaf345591bc-lib-modules\") pod \"cilium-c628n\" (UID: \"2d967515-0207-42f4-afe4-edaf345591bc\") " pod="kube-system/cilium-c628n" May 10 00:49:01.001989 kubelet[2253]: I0510 00:49:01.001587 2253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2d967515-0207-42f4-afe4-edaf345591bc-cilium-config-path\") pod \"cilium-c628n\" (UID: \"2d967515-0207-42f4-afe4-edaf345591bc\") " pod="kube-system/cilium-c628n" May 10 00:49:01.001989 kubelet[2253]: I0510 00:49:01.001620 2253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2d967515-0207-42f4-afe4-edaf345591bc-hubble-tls\") pod \"cilium-c628n\" (UID: \"2d967515-0207-42f4-afe4-edaf345591bc\") " pod="kube-system/cilium-c628n" May 10 00:49:01.001989 kubelet[2253]: I0510 00:49:01.001659 2253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2d967515-0207-42f4-afe4-edaf345591bc-cilium-ipsec-secrets\") pod \"cilium-c628n\" (UID: \"2d967515-0207-42f4-afe4-edaf345591bc\") " pod="kube-system/cilium-c628n" May 10 00:49:01.002471 kubelet[2253]: I0510 00:49:01.001693 2253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2d967515-0207-42f4-afe4-edaf345591bc-host-proc-sys-kernel\") pod \"cilium-c628n\" (UID: \"2d967515-0207-42f4-afe4-edaf345591bc\") " pod="kube-system/cilium-c628n" May 10 00:49:01.184862 env[1332]: time="2025-05-10T00:49:01.184678568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-c628n,Uid:2d967515-0207-42f4-afe4-edaf345591bc,Namespace:kube-system,Attempt:0,}" May 10 00:49:01.215368 env[1332]: time="2025-05-10T00:49:01.215262286Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:49:01.215368 env[1332]: time="2025-05-10T00:49:01.215321304Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:49:01.215691 env[1332]: time="2025-05-10T00:49:01.215341268Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:49:01.215691 env[1332]: time="2025-05-10T00:49:01.215575270Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/69816b5952cdc2ca5e955d6be61867cef141f8cac28688340532b6e778fe3126 pid=4010 runtime=io.containerd.runc.v2 May 10 00:49:01.261179 sshd[3996]: Accepted publickey for core from 147.75.109.163 port 55646 ssh2: RSA SHA256:euqJNJC5P+Wq5j+dl78lhvyXKvYvUXQDTjxbGHC2Bdk May 10 00:49:01.271573 sshd[3996]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:49:01.280943 systemd-logind[1318]: New session 24 of user core. 
May 10 00:49:01.282877 systemd[1]: Started session-24.scope. May 10 00:49:01.321688 env[1332]: time="2025-05-10T00:49:01.321618794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-c628n,Uid:2d967515-0207-42f4-afe4-edaf345591bc,Namespace:kube-system,Attempt:0,} returns sandbox id \"69816b5952cdc2ca5e955d6be61867cef141f8cac28688340532b6e778fe3126\"" May 10 00:49:01.327476 env[1332]: time="2025-05-10T00:49:01.327411240Z" level=info msg="CreateContainer within sandbox \"69816b5952cdc2ca5e955d6be61867cef141f8cac28688340532b6e778fe3126\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 10 00:49:01.344432 env[1332]: time="2025-05-10T00:49:01.344345411Z" level=info msg="CreateContainer within sandbox \"69816b5952cdc2ca5e955d6be61867cef141f8cac28688340532b6e778fe3126\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"edf98626875f2b470ca08454a1afc69725c945c1be161b1b0fadb4909c6888bc\"" May 10 00:49:01.347168 env[1332]: time="2025-05-10T00:49:01.345633205Z" level=info msg="StartContainer for \"edf98626875f2b470ca08454a1afc69725c945c1be161b1b0fadb4909c6888bc\"" May 10 00:49:01.457710 env[1332]: time="2025-05-10T00:49:01.457554012Z" level=info msg="StartContainer for \"edf98626875f2b470ca08454a1afc69725c945c1be161b1b0fadb4909c6888bc\" returns successfully" May 10 00:49:01.548105 env[1332]: time="2025-05-10T00:49:01.548009437Z" level=info msg="shim disconnected" id=edf98626875f2b470ca08454a1afc69725c945c1be161b1b0fadb4909c6888bc May 10 00:49:01.548640 env[1332]: time="2025-05-10T00:49:01.548590310Z" level=warning msg="cleaning up after shim disconnected" id=edf98626875f2b470ca08454a1afc69725c945c1be161b1b0fadb4909c6888bc namespace=k8s.io May 10 00:49:01.548909 env[1332]: time="2025-05-10T00:49:01.548881403Z" level=info msg="cleaning up dead shim" May 10 00:49:01.576407 env[1332]: time="2025-05-10T00:49:01.576330792Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:49:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4101 runtime=io.containerd.runc.v2\n" May 10 00:49:01.654220 sshd[3996]: pam_unix(sshd:session): session closed for user core May 10 00:49:01.660317 systemd[1]: sshd@23-10.128.0.77:22-147.75.109.163:55646.service: Deactivated successfully. May 10 00:49:01.662265 systemd[1]: session-24.scope: Deactivated successfully. May 10 00:49:01.663087 systemd-logind[1318]: Session 24 logged out. Waiting for processes to exit. May 10 00:49:01.664908 systemd-logind[1318]: Removed session 24. May 10 00:49:01.700099 systemd[1]: Started sshd@24-10.128.0.77:22-147.75.109.163:55652.service. May 10 00:49:01.991107 sshd[4117]: Accepted publickey for core from 147.75.109.163 port 55652 ssh2: RSA SHA256:euqJNJC5P+Wq5j+dl78lhvyXKvYvUXQDTjxbGHC2Bdk May 10 00:49:01.994160 sshd[4117]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:49:02.004821 systemd[1]: Started session-25.scope. May 10 00:49:02.005548 systemd-logind[1318]: New session 25 of user core. 
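In the stretch above, the mount-cgroup container edf98626… is created at 00:49:01.34, reports a successful start at 00:49:01.45, and its shim disconnects at 00:49:01.54, so the container ran and exited within the same second. A sketch that first splits these concatenated journal lines back into individual records and then builds a per-container timeline from the create/start/shim-disconnect messages; the patterns and helper names are illustrative:

    import re
    from collections import defaultdict

    def split_records(blob: str):
        # The journal lines above concatenate many records; split on the syslog prefix.
        return re.split(r'(?=May 10 \d{2}:\d{2}:\d{2}\.\d+ )', blob)

    EVENTS = [
        ("created",   re.compile(r'returns container id \\"([0-9a-f]{64})\\"')),
        ("started",   re.compile(r'StartContainer for \\"([0-9a-f]{64})\\" returns successfully')),
        ("shim-gone", re.compile(r'msg="shim disconnected" id=([0-9a-f]{64})')),
    ]
    TIMESTAMP = re.compile(r'^May 10 (\d{2}:\d{2}:\d{2}\.\d+)')

    def container_timeline(blob: str):
        """Per-container list of (time, event) tuples, in log order."""
        timeline = defaultdict(list)
        for record in split_records(blob):
            ts = TIMESTAMP.match(record)
            stamp = ts.group(1) if ts else "?"
            for label, pattern in EVENTS:
                for cid in pattern.findall(record):
                    timeline[cid].append((stamp, label))
        return timeline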
May 10 00:49:02.074888 env[1332]: time="2025-05-10T00:49:02.074818995Z" level=info msg="StopPodSandbox for \"1bba67bd82636e2ac7f71003a08af055ad75975479a82cedce3f2485dfdf9c28\"" May 10 00:49:02.075221 env[1332]: time="2025-05-10T00:49:02.074975610Z" level=info msg="TearDown network for sandbox \"1bba67bd82636e2ac7f71003a08af055ad75975479a82cedce3f2485dfdf9c28\" successfully" May 10 00:49:02.075221 env[1332]: time="2025-05-10T00:49:02.075032452Z" level=info msg="StopPodSandbox for \"1bba67bd82636e2ac7f71003a08af055ad75975479a82cedce3f2485dfdf9c28\" returns successfully" May 10 00:49:02.075902 env[1332]: time="2025-05-10T00:49:02.075838767Z" level=info msg="RemovePodSandbox for \"1bba67bd82636e2ac7f71003a08af055ad75975479a82cedce3f2485dfdf9c28\"" May 10 00:49:02.076049 env[1332]: time="2025-05-10T00:49:02.075894486Z" level=info msg="Forcibly stopping sandbox \"1bba67bd82636e2ac7f71003a08af055ad75975479a82cedce3f2485dfdf9c28\"" May 10 00:49:02.076049 env[1332]: time="2025-05-10T00:49:02.076008035Z" level=info msg="TearDown network for sandbox \"1bba67bd82636e2ac7f71003a08af055ad75975479a82cedce3f2485dfdf9c28\" successfully" May 10 00:49:02.088595 env[1332]: time="2025-05-10T00:49:02.088528322Z" level=info msg="RemovePodSandbox \"1bba67bd82636e2ac7f71003a08af055ad75975479a82cedce3f2485dfdf9c28\" returns successfully" May 10 00:49:02.089577 env[1332]: time="2025-05-10T00:49:02.089521421Z" level=info msg="StopPodSandbox for \"e70b2e7f7c47dbbf68d635590cbf0c7f30a076ee3315e46521ee997c2008993d\"" May 10 00:49:02.089727 env[1332]: time="2025-05-10T00:49:02.089644311Z" level=info msg="TearDown network for sandbox \"e70b2e7f7c47dbbf68d635590cbf0c7f30a076ee3315e46521ee997c2008993d\" successfully" May 10 00:49:02.089727 env[1332]: time="2025-05-10T00:49:02.089701018Z" level=info msg="StopPodSandbox for \"e70b2e7f7c47dbbf68d635590cbf0c7f30a076ee3315e46521ee997c2008993d\" returns successfully" May 10 00:49:02.090220 env[1332]: time="2025-05-10T00:49:02.090177040Z" level=info msg="RemovePodSandbox for \"e70b2e7f7c47dbbf68d635590cbf0c7f30a076ee3315e46521ee997c2008993d\"" May 10 00:49:02.090327 env[1332]: time="2025-05-10T00:49:02.090216794Z" level=info msg="Forcibly stopping sandbox \"e70b2e7f7c47dbbf68d635590cbf0c7f30a076ee3315e46521ee997c2008993d\"" May 10 00:49:02.090393 env[1332]: time="2025-05-10T00:49:02.090325288Z" level=info msg="TearDown network for sandbox \"e70b2e7f7c47dbbf68d635590cbf0c7f30a076ee3315e46521ee997c2008993d\" successfully" May 10 00:49:02.094454 env[1332]: time="2025-05-10T00:49:02.094405855Z" level=info msg="RemovePodSandbox \"e70b2e7f7c47dbbf68d635590cbf0c7f30a076ee3315e46521ee997c2008993d\" returns successfully" May 10 00:49:02.178613 kubelet[2253]: E0510 00:49:02.178551 2253 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 10 00:49:02.486851 env[1332]: time="2025-05-10T00:49:02.481993513Z" level=info msg="StopPodSandbox for \"69816b5952cdc2ca5e955d6be61867cef141f8cac28688340532b6e778fe3126\"" May 10 00:49:02.486851 env[1332]: time="2025-05-10T00:49:02.482145833Z" level=info msg="Container to stop \"edf98626875f2b470ca08454a1afc69725c945c1be161b1b0fadb4909c6888bc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 10 00:49:02.487638 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-69816b5952cdc2ca5e955d6be61867cef141f8cac28688340532b6e778fe3126-shm.mount: Deactivated successfully. 
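The records above remove the two old sandboxes (1bba67…, e70b2e…), report the CNI plugin as not initialized (consistent with no Cilium agent currently running on this node), and then stop the new 69816b… sandbox whose only container is already CONTAINER_EXITED. A sketch that collects the sandbox lifecycle verbs per sandbox ID using the message phrasing above; illustrative, not a containerd client:

    import re
    from collections import defaultdict

    SANDBOX_EVENT = re.compile(
        r'msg="(StopPodSandbox|TearDown network|RemovePodSandbox|Forcibly stopping sandbox)'
        r'[^"]*\\"([0-9a-f]{64})\\"')

    def sandbox_lifecycle(journal_lines):
        """Group sandbox lifecycle messages by sandbox ID, in log order."""
        lifecycle = defaultdict(list)
        for line in journal_lines:
            for verb, sandbox_id in SANDBOX_EVENT.findall(line):
                lifecycle[sandbox_id].append(verb)
        return lifecycle

Over this stretch the old sandboxes go through the full Stop → TearDown → Remove sequence, while the teardown of the new 69816b… sandbox continues in the records that follow.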
May 10 00:49:02.563897 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-69816b5952cdc2ca5e955d6be61867cef141f8cac28688340532b6e778fe3126-rootfs.mount: Deactivated successfully. May 10 00:49:02.572319 env[1332]: time="2025-05-10T00:49:02.572246050Z" level=info msg="shim disconnected" id=69816b5952cdc2ca5e955d6be61867cef141f8cac28688340532b6e778fe3126 May 10 00:49:02.574255 env[1332]: time="2025-05-10T00:49:02.574195570Z" level=warning msg="cleaning up after shim disconnected" id=69816b5952cdc2ca5e955d6be61867cef141f8cac28688340532b6e778fe3126 namespace=k8s.io May 10 00:49:02.574753 env[1332]: time="2025-05-10T00:49:02.574714343Z" level=info msg="cleaning up dead shim" May 10 00:49:02.600397 env[1332]: time="2025-05-10T00:49:02.600323242Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:49:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4151 runtime=io.containerd.runc.v2\n" May 10 00:49:02.601271 env[1332]: time="2025-05-10T00:49:02.601218421Z" level=info msg="TearDown network for sandbox \"69816b5952cdc2ca5e955d6be61867cef141f8cac28688340532b6e778fe3126\" successfully" May 10 00:49:02.601488 env[1332]: time="2025-05-10T00:49:02.601457757Z" level=info msg="StopPodSandbox for \"69816b5952cdc2ca5e955d6be61867cef141f8cac28688340532b6e778fe3126\" returns successfully" May 10 00:49:02.727014 kubelet[2253]: I0510 00:49:02.726946 2253 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2d967515-0207-42f4-afe4-edaf345591bc-cni-path\") pod \"2d967515-0207-42f4-afe4-edaf345591bc\" (UID: \"2d967515-0207-42f4-afe4-edaf345591bc\") " May 10 00:49:02.727384 kubelet[2253]: I0510 00:49:02.727030 2253 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2d967515-0207-42f4-afe4-edaf345591bc-clustermesh-secrets\") pod \"2d967515-0207-42f4-afe4-edaf345591bc\" (UID: \"2d967515-0207-42f4-afe4-edaf345591bc\") " May 10 00:49:02.727384 kubelet[2253]: I0510 00:49:02.727087 2253 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2d967515-0207-42f4-afe4-edaf345591bc-host-proc-sys-kernel\") pod \"2d967515-0207-42f4-afe4-edaf345591bc\" (UID: \"2d967515-0207-42f4-afe4-edaf345591bc\") " May 10 00:49:02.727384 kubelet[2253]: I0510 00:49:02.727124 2253 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2d967515-0207-42f4-afe4-edaf345591bc-cilium-cgroup\") pod \"2d967515-0207-42f4-afe4-edaf345591bc\" (UID: \"2d967515-0207-42f4-afe4-edaf345591bc\") " May 10 00:49:02.727384 kubelet[2253]: I0510 00:49:02.727149 2253 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2d967515-0207-42f4-afe4-edaf345591bc-xtables-lock\") pod \"2d967515-0207-42f4-afe4-edaf345591bc\" (UID: \"2d967515-0207-42f4-afe4-edaf345591bc\") " May 10 00:49:02.727384 kubelet[2253]: I0510 00:49:02.727178 2253 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2d967515-0207-42f4-afe4-edaf345591bc-host-proc-sys-net\") pod \"2d967515-0207-42f4-afe4-edaf345591bc\" (UID: \"2d967515-0207-42f4-afe4-edaf345591bc\") " May 10 00:49:02.727384 kubelet[2253]: I0510 00:49:02.727209 2253 reconciler_common.go:161] 
"operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2d967515-0207-42f4-afe4-edaf345591bc-cilium-config-path\") pod \"2d967515-0207-42f4-afe4-edaf345591bc\" (UID: \"2d967515-0207-42f4-afe4-edaf345591bc\") " May 10 00:49:02.727721 kubelet[2253]: I0510 00:49:02.727237 2253 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2d967515-0207-42f4-afe4-edaf345591bc-hostproc\") pod \"2d967515-0207-42f4-afe4-edaf345591bc\" (UID: \"2d967515-0207-42f4-afe4-edaf345591bc\") " May 10 00:49:02.727721 kubelet[2253]: I0510 00:49:02.727263 2253 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2d967515-0207-42f4-afe4-edaf345591bc-bpf-maps\") pod \"2d967515-0207-42f4-afe4-edaf345591bc\" (UID: \"2d967515-0207-42f4-afe4-edaf345591bc\") " May 10 00:49:02.727721 kubelet[2253]: I0510 00:49:02.727291 2253 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2d967515-0207-42f4-afe4-edaf345591bc-cilium-ipsec-secrets\") pod \"2d967515-0207-42f4-afe4-edaf345591bc\" (UID: \"2d967515-0207-42f4-afe4-edaf345591bc\") " May 10 00:49:02.727721 kubelet[2253]: I0510 00:49:02.727318 2253 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-75lll\" (UniqueName: \"kubernetes.io/projected/2d967515-0207-42f4-afe4-edaf345591bc-kube-api-access-75lll\") pod \"2d967515-0207-42f4-afe4-edaf345591bc\" (UID: \"2d967515-0207-42f4-afe4-edaf345591bc\") " May 10 00:49:02.727721 kubelet[2253]: I0510 00:49:02.727344 2253 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2d967515-0207-42f4-afe4-edaf345591bc-etc-cni-netd\") pod \"2d967515-0207-42f4-afe4-edaf345591bc\" (UID: \"2d967515-0207-42f4-afe4-edaf345591bc\") " May 10 00:49:02.727721 kubelet[2253]: I0510 00:49:02.727372 2253 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2d967515-0207-42f4-afe4-edaf345591bc-cilium-run\") pod \"2d967515-0207-42f4-afe4-edaf345591bc\" (UID: \"2d967515-0207-42f4-afe4-edaf345591bc\") " May 10 00:49:02.728036 kubelet[2253]: I0510 00:49:02.727397 2253 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2d967515-0207-42f4-afe4-edaf345591bc-lib-modules\") pod \"2d967515-0207-42f4-afe4-edaf345591bc\" (UID: \"2d967515-0207-42f4-afe4-edaf345591bc\") " May 10 00:49:02.728036 kubelet[2253]: I0510 00:49:02.727425 2253 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2d967515-0207-42f4-afe4-edaf345591bc-hubble-tls\") pod \"2d967515-0207-42f4-afe4-edaf345591bc\" (UID: \"2d967515-0207-42f4-afe4-edaf345591bc\") " May 10 00:49:02.731242 kubelet[2253]: I0510 00:49:02.731172 2253 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d967515-0207-42f4-afe4-edaf345591bc-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "2d967515-0207-42f4-afe4-edaf345591bc" (UID: "2d967515-0207-42f4-afe4-edaf345591bc"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" May 10 00:49:02.731443 kubelet[2253]: I0510 00:49:02.731259 2253 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d967515-0207-42f4-afe4-edaf345591bc-cni-path" (OuterVolumeSpecName: "cni-path") pod "2d967515-0207-42f4-afe4-edaf345591bc" (UID: "2d967515-0207-42f4-afe4-edaf345591bc"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:49:02.733714 kubelet[2253]: I0510 00:49:02.733668 2253 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d967515-0207-42f4-afe4-edaf345591bc-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "2d967515-0207-42f4-afe4-edaf345591bc" (UID: "2d967515-0207-42f4-afe4-edaf345591bc"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:49:02.733937 kubelet[2253]: I0510 00:49:02.733669 2253 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d967515-0207-42f4-afe4-edaf345591bc-hostproc" (OuterVolumeSpecName: "hostproc") pod "2d967515-0207-42f4-afe4-edaf345591bc" (UID: "2d967515-0207-42f4-afe4-edaf345591bc"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:49:02.734285 kubelet[2253]: I0510 00:49:02.733709 2253 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d967515-0207-42f4-afe4-edaf345591bc-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "2d967515-0207-42f4-afe4-edaf345591bc" (UID: "2d967515-0207-42f4-afe4-edaf345591bc"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:49:02.734483 kubelet[2253]: I0510 00:49:02.734453 2253 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d967515-0207-42f4-afe4-edaf345591bc-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "2d967515-0207-42f4-afe4-edaf345591bc" (UID: "2d967515-0207-42f4-afe4-edaf345591bc"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:49:02.734717 kubelet[2253]: I0510 00:49:02.734638 2253 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d967515-0207-42f4-afe4-edaf345591bc-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "2d967515-0207-42f4-afe4-edaf345591bc" (UID: "2d967515-0207-42f4-afe4-edaf345591bc"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:49:02.734717 kubelet[2253]: I0510 00:49:02.734661 2253 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d967515-0207-42f4-afe4-edaf345591bc-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "2d967515-0207-42f4-afe4-edaf345591bc" (UID: "2d967515-0207-42f4-afe4-edaf345591bc"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:49:02.734875 kubelet[2253]: I0510 00:49:02.734745 2253 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d967515-0207-42f4-afe4-edaf345591bc-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "2d967515-0207-42f4-afe4-edaf345591bc" (UID: "2d967515-0207-42f4-afe4-edaf345591bc"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:49:02.744796 systemd[1]: var-lib-kubelet-pods-2d967515\x2d0207\x2d42f4\x2dafe4\x2dedaf345591bc-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 10 00:49:02.749830 systemd[1]: var-lib-kubelet-pods-2d967515\x2d0207\x2d42f4\x2dafe4\x2dedaf345591bc-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. May 10 00:49:02.754083 kubelet[2253]: I0510 00:49:02.754009 2253 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d967515-0207-42f4-afe4-edaf345591bc-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "2d967515-0207-42f4-afe4-edaf345591bc" (UID: "2d967515-0207-42f4-afe4-edaf345591bc"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:49:02.754696 kubelet[2253]: I0510 00:49:02.754303 2253 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d967515-0207-42f4-afe4-edaf345591bc-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "2d967515-0207-42f4-afe4-edaf345591bc" (UID: "2d967515-0207-42f4-afe4-edaf345591bc"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:49:02.754859 kubelet[2253]: I0510 00:49:02.754502 2253 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d967515-0207-42f4-afe4-edaf345591bc-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "2d967515-0207-42f4-afe4-edaf345591bc" (UID: "2d967515-0207-42f4-afe4-edaf345591bc"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 10 00:49:02.755028 kubelet[2253]: I0510 00:49:02.754602 2253 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d967515-0207-42f4-afe4-edaf345591bc-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "2d967515-0207-42f4-afe4-edaf345591bc" (UID: "2d967515-0207-42f4-afe4-edaf345591bc"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" May 10 00:49:02.755293 kubelet[2253]: I0510 00:49:02.755262 2253 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d967515-0207-42f4-afe4-edaf345591bc-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "2d967515-0207-42f4-afe4-edaf345591bc" (UID: "2d967515-0207-42f4-afe4-edaf345591bc"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 10 00:49:02.756765 kubelet[2253]: I0510 00:49:02.756722 2253 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d967515-0207-42f4-afe4-edaf345591bc-kube-api-access-75lll" (OuterVolumeSpecName: "kube-api-access-75lll") pod "2d967515-0207-42f4-afe4-edaf345591bc" (UID: "2d967515-0207-42f4-afe4-edaf345591bc"). InnerVolumeSpecName "kube-api-access-75lll". 
PluginName "kubernetes.io/projected", VolumeGidValue "" May 10 00:49:02.841382 kubelet[2253]: I0510 00:49:02.841327 2253 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2d967515-0207-42f4-afe4-edaf345591bc-cni-path\") on node \"ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403\" DevicePath \"\"" May 10 00:49:02.841920 kubelet[2253]: I0510 00:49:02.841854 2253 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2d967515-0207-42f4-afe4-edaf345591bc-clustermesh-secrets\") on node \"ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403\" DevicePath \"\"" May 10 00:49:02.842176 kubelet[2253]: I0510 00:49:02.842149 2253 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2d967515-0207-42f4-afe4-edaf345591bc-host-proc-sys-kernel\") on node \"ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403\" DevicePath \"\"" May 10 00:49:02.842397 kubelet[2253]: I0510 00:49:02.842344 2253 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2d967515-0207-42f4-afe4-edaf345591bc-cilium-cgroup\") on node \"ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403\" DevicePath \"\"" May 10 00:49:02.842568 kubelet[2253]: I0510 00:49:02.842547 2253 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2d967515-0207-42f4-afe4-edaf345591bc-xtables-lock\") on node \"ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403\" DevicePath \"\"" May 10 00:49:02.842791 kubelet[2253]: I0510 00:49:02.842765 2253 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2d967515-0207-42f4-afe4-edaf345591bc-cilium-config-path\") on node \"ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403\" DevicePath \"\"" May 10 00:49:02.842970 kubelet[2253]: I0510 00:49:02.842947 2253 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2d967515-0207-42f4-afe4-edaf345591bc-hostproc\") on node \"ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403\" DevicePath \"\"" May 10 00:49:02.843491 kubelet[2253]: I0510 00:49:02.843446 2253 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2d967515-0207-42f4-afe4-edaf345591bc-host-proc-sys-net\") on node \"ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403\" DevicePath \"\"" May 10 00:49:02.843677 kubelet[2253]: I0510 00:49:02.843654 2253 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2d967515-0207-42f4-afe4-edaf345591bc-bpf-maps\") on node \"ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403\" DevicePath \"\"" May 10 00:49:02.843887 kubelet[2253]: I0510 00:49:02.843865 2253 reconciler_common.go:289] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2d967515-0207-42f4-afe4-edaf345591bc-cilium-ipsec-secrets\") on node \"ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403\" DevicePath \"\"" May 10 00:49:02.844050 kubelet[2253]: I0510 00:49:02.844029 2253 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-75lll\" (UniqueName: \"kubernetes.io/projected/2d967515-0207-42f4-afe4-edaf345591bc-kube-api-access-75lll\") on node \"ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403\" DevicePath \"\"" May 10 
00:49:02.848179 kubelet[2253]: I0510 00:49:02.848149 2253 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2d967515-0207-42f4-afe4-edaf345591bc-etc-cni-netd\") on node \"ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403\" DevicePath \"\"" May 10 00:49:02.848305 kubelet[2253]: I0510 00:49:02.848199 2253 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2d967515-0207-42f4-afe4-edaf345591bc-cilium-run\") on node \"ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403\" DevicePath \"\"" May 10 00:49:02.848305 kubelet[2253]: I0510 00:49:02.848217 2253 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2d967515-0207-42f4-afe4-edaf345591bc-lib-modules\") on node \"ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403\" DevicePath \"\"" May 10 00:49:02.848305 kubelet[2253]: I0510 00:49:02.848234 2253 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2d967515-0207-42f4-afe4-edaf345591bc-hubble-tls\") on node \"ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403\" DevicePath \"\"" May 10 00:49:03.116287 systemd[1]: var-lib-kubelet-pods-2d967515\x2d0207\x2d42f4\x2dafe4\x2dedaf345591bc-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d75lll.mount: Deactivated successfully. May 10 00:49:03.116553 systemd[1]: var-lib-kubelet-pods-2d967515\x2d0207\x2d42f4\x2dafe4\x2dedaf345591bc-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 10 00:49:03.486875 kubelet[2253]: I0510 00:49:03.486447 2253 scope.go:117] "RemoveContainer" containerID="edf98626875f2b470ca08454a1afc69725c945c1be161b1b0fadb4909c6888bc" May 10 00:49:03.495994 env[1332]: time="2025-05-10T00:49:03.495930150Z" level=info msg="RemoveContainer for \"edf98626875f2b470ca08454a1afc69725c945c1be161b1b0fadb4909c6888bc\"" May 10 00:49:03.502167 env[1332]: time="2025-05-10T00:49:03.501985552Z" level=info msg="RemoveContainer for \"edf98626875f2b470ca08454a1afc69725c945c1be161b1b0fadb4909c6888bc\" returns successfully" May 10 00:49:03.546366 kubelet[2253]: I0510 00:49:03.546301 2253 topology_manager.go:215] "Topology Admit Handler" podUID="7a9aff01-8221-4326-891e-66c2d31cd940" podNamespace="kube-system" podName="cilium-zgwnw" May 10 00:49:03.546798 kubelet[2253]: E0510 00:49:03.546769 2253 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2d967515-0207-42f4-afe4-edaf345591bc" containerName="mount-cgroup" May 10 00:49:03.547017 kubelet[2253]: I0510 00:49:03.546992 2253 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d967515-0207-42f4-afe4-edaf345591bc" containerName="mount-cgroup" May 10 00:49:03.656664 kubelet[2253]: I0510 00:49:03.656593 2253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7a9aff01-8221-4326-891e-66c2d31cd940-clustermesh-secrets\") pod \"cilium-zgwnw\" (UID: \"7a9aff01-8221-4326-891e-66c2d31cd940\") " pod="kube-system/cilium-zgwnw" May 10 00:49:03.656664 kubelet[2253]: I0510 00:49:03.656663 2253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7a9aff01-8221-4326-891e-66c2d31cd940-etc-cni-netd\") pod \"cilium-zgwnw\" (UID: \"7a9aff01-8221-4326-891e-66c2d31cd940\") " pod="kube-system/cilium-zgwnw" May 10 
00:49:03.657180 kubelet[2253]: I0510 00:49:03.656701 2253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7a9aff01-8221-4326-891e-66c2d31cd940-hostproc\") pod \"cilium-zgwnw\" (UID: \"7a9aff01-8221-4326-891e-66c2d31cd940\") " pod="kube-system/cilium-zgwnw" May 10 00:49:03.657180 kubelet[2253]: I0510 00:49:03.656731 2253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7a9aff01-8221-4326-891e-66c2d31cd940-host-proc-sys-net\") pod \"cilium-zgwnw\" (UID: \"7a9aff01-8221-4326-891e-66c2d31cd940\") " pod="kube-system/cilium-zgwnw" May 10 00:49:03.657180 kubelet[2253]: I0510 00:49:03.656761 2253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4flj\" (UniqueName: \"kubernetes.io/projected/7a9aff01-8221-4326-891e-66c2d31cd940-kube-api-access-v4flj\") pod \"cilium-zgwnw\" (UID: \"7a9aff01-8221-4326-891e-66c2d31cd940\") " pod="kube-system/cilium-zgwnw" May 10 00:49:03.657180 kubelet[2253]: I0510 00:49:03.656826 2253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/7a9aff01-8221-4326-891e-66c2d31cd940-cilium-ipsec-secrets\") pod \"cilium-zgwnw\" (UID: \"7a9aff01-8221-4326-891e-66c2d31cd940\") " pod="kube-system/cilium-zgwnw" May 10 00:49:03.657180 kubelet[2253]: I0510 00:49:03.656852 2253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7a9aff01-8221-4326-891e-66c2d31cd940-bpf-maps\") pod \"cilium-zgwnw\" (UID: \"7a9aff01-8221-4326-891e-66c2d31cd940\") " pod="kube-system/cilium-zgwnw" May 10 00:49:03.657180 kubelet[2253]: I0510 00:49:03.656879 2253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7a9aff01-8221-4326-891e-66c2d31cd940-cilium-cgroup\") pod \"cilium-zgwnw\" (UID: \"7a9aff01-8221-4326-891e-66c2d31cd940\") " pod="kube-system/cilium-zgwnw" May 10 00:49:03.657519 kubelet[2253]: I0510 00:49:03.656910 2253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7a9aff01-8221-4326-891e-66c2d31cd940-cilium-config-path\") pod \"cilium-zgwnw\" (UID: \"7a9aff01-8221-4326-891e-66c2d31cd940\") " pod="kube-system/cilium-zgwnw" May 10 00:49:03.657519 kubelet[2253]: I0510 00:49:03.656943 2253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7a9aff01-8221-4326-891e-66c2d31cd940-host-proc-sys-kernel\") pod \"cilium-zgwnw\" (UID: \"7a9aff01-8221-4326-891e-66c2d31cd940\") " pod="kube-system/cilium-zgwnw" May 10 00:49:03.657519 kubelet[2253]: I0510 00:49:03.656969 2253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7a9aff01-8221-4326-891e-66c2d31cd940-hubble-tls\") pod \"cilium-zgwnw\" (UID: \"7a9aff01-8221-4326-891e-66c2d31cd940\") " pod="kube-system/cilium-zgwnw" May 10 00:49:03.657519 kubelet[2253]: I0510 00:49:03.657007 2253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cni-path\" (UniqueName: \"kubernetes.io/host-path/7a9aff01-8221-4326-891e-66c2d31cd940-cni-path\") pod \"cilium-zgwnw\" (UID: \"7a9aff01-8221-4326-891e-66c2d31cd940\") " pod="kube-system/cilium-zgwnw" May 10 00:49:03.657519 kubelet[2253]: I0510 00:49:03.657049 2253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7a9aff01-8221-4326-891e-66c2d31cd940-cilium-run\") pod \"cilium-zgwnw\" (UID: \"7a9aff01-8221-4326-891e-66c2d31cd940\") " pod="kube-system/cilium-zgwnw" May 10 00:49:03.657519 kubelet[2253]: I0510 00:49:03.657140 2253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7a9aff01-8221-4326-891e-66c2d31cd940-xtables-lock\") pod \"cilium-zgwnw\" (UID: \"7a9aff01-8221-4326-891e-66c2d31cd940\") " pod="kube-system/cilium-zgwnw" May 10 00:49:03.657763 kubelet[2253]: I0510 00:49:03.657172 2253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7a9aff01-8221-4326-891e-66c2d31cd940-lib-modules\") pod \"cilium-zgwnw\" (UID: \"7a9aff01-8221-4326-891e-66c2d31cd940\") " pod="kube-system/cilium-zgwnw" May 10 00:49:03.861224 env[1332]: time="2025-05-10T00:49:03.861151602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zgwnw,Uid:7a9aff01-8221-4326-891e-66c2d31cd940,Namespace:kube-system,Attempt:0,}" May 10 00:49:03.893032 env[1332]: time="2025-05-10T00:49:03.892903893Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:49:03.893032 env[1332]: time="2025-05-10T00:49:03.892978552Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:49:03.893481 env[1332]: time="2025-05-10T00:49:03.892997609Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:49:03.893481 env[1332]: time="2025-05-10T00:49:03.893382875Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7e546c30d0c9d94b5e29dba60eff12c89487cadb1b721a8e2c70fe9e66ab85a8 pid=4182 runtime=io.containerd.runc.v2 May 10 00:49:03.958606 env[1332]: time="2025-05-10T00:49:03.958540310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zgwnw,Uid:7a9aff01-8221-4326-891e-66c2d31cd940,Namespace:kube-system,Attempt:0,} returns sandbox id \"7e546c30d0c9d94b5e29dba60eff12c89487cadb1b721a8e2c70fe9e66ab85a8\"" May 10 00:49:03.967531 env[1332]: time="2025-05-10T00:49:03.967456713Z" level=info msg="CreateContainer within sandbox \"7e546c30d0c9d94b5e29dba60eff12c89487cadb1b721a8e2c70fe9e66ab85a8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 10 00:49:03.988587 env[1332]: time="2025-05-10T00:49:03.988527314Z" level=info msg="CreateContainer within sandbox \"7e546c30d0c9d94b5e29dba60eff12c89487cadb1b721a8e2c70fe9e66ab85a8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ac68a2d5e65051ee60bf9898d78af2d4acc02eab4667eeb350b1f2dd2ce9954e\"" May 10 00:49:03.991408 env[1332]: time="2025-05-10T00:49:03.991362275Z" level=info msg="StartContainer for \"ac68a2d5e65051ee60bf9898d78af2d4acc02eab4667eeb350b1f2dd2ce9954e\"" May 10 00:49:04.060105 kubelet[2253]: I0510 00:49:04.059861 2253 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d967515-0207-42f4-afe4-edaf345591bc" path="/var/lib/kubelet/pods/2d967515-0207-42f4-afe4-edaf345591bc/volumes" May 10 00:49:04.080249 env[1332]: time="2025-05-10T00:49:04.080177727Z" level=info msg="StartContainer for \"ac68a2d5e65051ee60bf9898d78af2d4acc02eab4667eeb350b1f2dd2ce9954e\" returns successfully" May 10 00:49:04.134206 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ac68a2d5e65051ee60bf9898d78af2d4acc02eab4667eeb350b1f2dd2ce9954e-rootfs.mount: Deactivated successfully. 
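The mount units systemd reports in these records (the `\x2d`/`\x7e` sequences in the kubelet pod-volume units, and the `...-rootfs.mount` unit just deactivated) use systemd's path escaping. As an illustrative aside, not part of any tool appearing in this log, the sketch below reverses that escaping so a unit name can be read back as the filesystem path it stands for; the file name and function name are invented for the example.

```go
// unescape_unit.go - a minimal sketch (not part of the log's tooling) that
// reverses systemd's unit-name escaping: "\xHH" encodes a literal byte and a
// plain "-" encodes "/".
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// unescapeUnitPath turns a .mount unit name back into the path it represents.
func unescapeUnitPath(unit string) string {
	name := strings.TrimSuffix(unit, ".mount")
	var b strings.Builder
	for i := 0; i < len(name); i++ {
		switch {
		case strings.HasPrefix(name[i:], `\x`) && i+3 < len(name):
			if v, err := strconv.ParseUint(name[i+2:i+4], 16, 8); err == nil {
				b.WriteByte(byte(v))
				i += 3 // plus the loop's i++ skips the full \xHH sequence
				continue
			}
			b.WriteByte(name[i])
		case name[i] == '-':
			b.WriteByte('/')
		default:
			b.WriteByte(name[i])
		}
	}
	return "/" + b.String()
}

func main() {
	fmt.Println(unescapeUnitPath(`var-lib-kubelet-pods-2d967515\x2d0207\x2d42f4\x2dafe4\x2dedaf345591bc-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount`))
	// -> /var/lib/kubelet/pods/2d967515-0207-42f4-afe4-edaf345591bc/volumes/kubernetes.io~secret/clustermesh-secrets
}
```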
May 10 00:49:04.141593 env[1332]: time="2025-05-10T00:49:04.141515070Z" level=info msg="shim disconnected" id=ac68a2d5e65051ee60bf9898d78af2d4acc02eab4667eeb350b1f2dd2ce9954e May 10 00:49:04.141889 env[1332]: time="2025-05-10T00:49:04.141596810Z" level=warning msg="cleaning up after shim disconnected" id=ac68a2d5e65051ee60bf9898d78af2d4acc02eab4667eeb350b1f2dd2ce9954e namespace=k8s.io May 10 00:49:04.141889 env[1332]: time="2025-05-10T00:49:04.141614533Z" level=info msg="cleaning up dead shim" May 10 00:49:04.155833 env[1332]: time="2025-05-10T00:49:04.155744748Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:49:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4266 runtime=io.containerd.runc.v2\n" May 10 00:49:04.391593 kubelet[2253]: I0510 00:49:04.391107 2253 setters.go:580] "Node became not ready" node="ci-3510-3-7-nightly-20250509-2100-c62e4d80dae2c6317403" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-10T00:49:04Z","lastTransitionTime":"2025-05-10T00:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} May 10 00:49:04.496448 env[1332]: time="2025-05-10T00:49:04.496372767Z" level=info msg="CreateContainer within sandbox \"7e546c30d0c9d94b5e29dba60eff12c89487cadb1b721a8e2c70fe9e66ab85a8\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 10 00:49:04.530399 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2354628200.mount: Deactivated successfully. May 10 00:49:04.536681 env[1332]: time="2025-05-10T00:49:04.536608318Z" level=info msg="CreateContainer within sandbox \"7e546c30d0c9d94b5e29dba60eff12c89487cadb1b721a8e2c70fe9e66ab85a8\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d4df6cbda24be5b8d39967480fd240598fe951f10de9bc90d52a1accab02af1b\"" May 10 00:49:04.541366 env[1332]: time="2025-05-10T00:49:04.539498602Z" level=info msg="StartContainer for \"d4df6cbda24be5b8d39967480fd240598fe951f10de9bc90d52a1accab02af1b\"" May 10 00:49:04.677140 env[1332]: time="2025-05-10T00:49:04.676850225Z" level=info msg="StartContainer for \"d4df6cbda24be5b8d39967480fd240598fe951f10de9bc90d52a1accab02af1b\" returns successfully" May 10 00:49:04.726446 env[1332]: time="2025-05-10T00:49:04.726356946Z" level=info msg="shim disconnected" id=d4df6cbda24be5b8d39967480fd240598fe951f10de9bc90d52a1accab02af1b May 10 00:49:04.726446 env[1332]: time="2025-05-10T00:49:04.726445894Z" level=warning msg="cleaning up after shim disconnected" id=d4df6cbda24be5b8d39967480fd240598fe951f10de9bc90d52a1accab02af1b namespace=k8s.io May 10 00:49:04.726446 env[1332]: time="2025-05-10T00:49:04.726462612Z" level=info msg="cleaning up dead shim" May 10 00:49:04.741842 env[1332]: time="2025-05-10T00:49:04.741676265Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:49:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4330 runtime=io.containerd.runc.v2\n" May 10 00:49:05.116365 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d4df6cbda24be5b8d39967480fd240598fe951f10de9bc90d52a1accab02af1b-rootfs.mount: Deactivated successfully. 
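The `setters.go` record above embeds the node's Ready condition as JSON. Purely as a reading aid, the following self-contained sketch decodes that exact payload with a locally defined struct that mirrors only the fields visible in the log line; it is not the kubelet's own type.

```go
// Decode the Ready condition payload logged when the node went NotReady.
package main

import (
	"encoding/json"
	"fmt"
)

// nodeCondition mirrors only the fields present in the logged condition.
type nodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	// Verbatim payload from the setters.go record above.
	payload := `{"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-10T00:49:04Z","lastTransitionTime":"2025-05-10T00:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}`

	var c nodeCondition
	if err := json.Unmarshal([]byte(payload), &c); err != nil {
		panic(err)
	}
	fmt.Printf("%s=%s (%s): %s\n", c.Type, c.Status, c.Reason, c.Message)
	// Ready=False (KubeletNotReady): container runtime network not ready: ...
}
```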
May 10 00:49:05.515729 env[1332]: time="2025-05-10T00:49:05.515197481Z" level=info msg="CreateContainer within sandbox \"7e546c30d0c9d94b5e29dba60eff12c89487cadb1b721a8e2c70fe9e66ab85a8\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 10 00:49:05.544703 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3878286631.mount: Deactivated successfully. May 10 00:49:05.562629 env[1332]: time="2025-05-10T00:49:05.562545978Z" level=info msg="CreateContainer within sandbox \"7e546c30d0c9d94b5e29dba60eff12c89487cadb1b721a8e2c70fe9e66ab85a8\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d73d4d0ea679be0996eb2a3d1c411d2c19a3c9b654de384e1ee4fcd8b78d1b2e\"" May 10 00:49:05.564148 env[1332]: time="2025-05-10T00:49:05.564095174Z" level=info msg="StartContainer for \"d73d4d0ea679be0996eb2a3d1c411d2c19a3c9b654de384e1ee4fcd8b78d1b2e\"" May 10 00:49:05.661426 env[1332]: time="2025-05-10T00:49:05.661351083Z" level=info msg="StartContainer for \"d73d4d0ea679be0996eb2a3d1c411d2c19a3c9b654de384e1ee4fcd8b78d1b2e\" returns successfully" May 10 00:49:05.700034 env[1332]: time="2025-05-10T00:49:05.699950160Z" level=info msg="shim disconnected" id=d73d4d0ea679be0996eb2a3d1c411d2c19a3c9b654de384e1ee4fcd8b78d1b2e May 10 00:49:05.700034 env[1332]: time="2025-05-10T00:49:05.700036389Z" level=warning msg="cleaning up after shim disconnected" id=d73d4d0ea679be0996eb2a3d1c411d2c19a3c9b654de384e1ee4fcd8b78d1b2e namespace=k8s.io May 10 00:49:05.700661 env[1332]: time="2025-05-10T00:49:05.700277283Z" level=info msg="cleaning up dead shim" May 10 00:49:05.715655 env[1332]: time="2025-05-10T00:49:05.715573909Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:49:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4388 runtime=io.containerd.runc.v2\n" May 10 00:49:06.116452 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d73d4d0ea679be0996eb2a3d1c411d2c19a3c9b654de384e1ee4fcd8b78d1b2e-rootfs.mount: Deactivated successfully. May 10 00:49:06.520400 env[1332]: time="2025-05-10T00:49:06.519822449Z" level=info msg="CreateContainer within sandbox \"7e546c30d0c9d94b5e29dba60eff12c89487cadb1b721a8e2c70fe9e66ab85a8\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 10 00:49:06.551302 env[1332]: time="2025-05-10T00:49:06.551211131Z" level=info msg="CreateContainer within sandbox \"7e546c30d0c9d94b5e29dba60eff12c89487cadb1b721a8e2c70fe9e66ab85a8\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"debe50c2ddd1e30262643382fb9d5ef778936e184251afedc417c1932aa0f185\"" May 10 00:49:06.553294 env[1332]: time="2025-05-10T00:49:06.553226881Z" level=info msg="StartContainer for \"debe50c2ddd1e30262643382fb9d5ef778936e184251afedc417c1932aa0f185\"" May 10 00:49:06.556081 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4185114581.mount: Deactivated successfully. 
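The `CreateContainer within sandbox ... &ContainerMetadata{Name:...}` records spell out the order in which this pod's containers are created (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, and now clean-cilium-state). If one wanted to recover that ordering mechanically from a saved journal, a rough regex-based helper along these lines would do; it is a reader-side convenience, not part of containerd, and the sample lines are abbreviated copies of the records above.

```go
// List container names per containerd "CreateContainer within sandbox" record.
package main

import (
	"fmt"
	"regexp"
)

// Matches the escaped sandbox id and the ContainerMetadata name field.
var createRe = regexp.MustCompile(`CreateContainer within sandbox \\?"([0-9a-f]+)\\?" for container &ContainerMetadata\{Name:([^,]+),`)

// containerSequence returns the container names in the order they appear.
func containerSequence(lines []string) []string {
	var seq []string
	for _, l := range lines {
		if m := createRe.FindStringSubmatch(l); m != nil {
			seq = append(seq, m[2])
		}
	}
	return seq
}

func main() {
	sample := []string{
		`level=info msg="CreateContainer within sandbox \"7e546c30d0c9d94b5e29dba60eff12c89487cadb1b721a8e2c70fe9e66ab85a8\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"`,
		`level=info msg="CreateContainer within sandbox \"7e546c30d0c9d94b5e29dba60eff12c89487cadb1b721a8e2c70fe9e66ab85a8\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"`,
	}
	fmt.Println(containerSequence(sample)) // [mount-bpf-fs clean-cilium-state]
}
```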
May 10 00:49:06.644020 env[1332]: time="2025-05-10T00:49:06.643948852Z" level=info msg="StartContainer for \"debe50c2ddd1e30262643382fb9d5ef778936e184251afedc417c1932aa0f185\" returns successfully" May 10 00:49:06.676230 env[1332]: time="2025-05-10T00:49:06.676136658Z" level=info msg="shim disconnected" id=debe50c2ddd1e30262643382fb9d5ef778936e184251afedc417c1932aa0f185 May 10 00:49:06.676697 env[1332]: time="2025-05-10T00:49:06.676650234Z" level=warning msg="cleaning up after shim disconnected" id=debe50c2ddd1e30262643382fb9d5ef778936e184251afedc417c1932aa0f185 namespace=k8s.io May 10 00:49:06.677115 env[1332]: time="2025-05-10T00:49:06.677077099Z" level=info msg="cleaning up dead shim" May 10 00:49:06.692382 env[1332]: time="2025-05-10T00:49:06.692299282Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:49:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4442 runtime=io.containerd.runc.v2\n" May 10 00:49:07.116668 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-debe50c2ddd1e30262643382fb9d5ef778936e184251afedc417c1932aa0f185-rootfs.mount: Deactivated successfully. May 10 00:49:07.181110 kubelet[2253]: E0510 00:49:07.181024 2253 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 10 00:49:07.527241 env[1332]: time="2025-05-10T00:49:07.526556908Z" level=info msg="CreateContainer within sandbox \"7e546c30d0c9d94b5e29dba60eff12c89487cadb1b721a8e2c70fe9e66ab85a8\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 10 00:49:07.552415 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1345515663.mount: Deactivated successfully. May 10 00:49:07.574232 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1800176097.mount: Deactivated successfully. May 10 00:49:07.577404 env[1332]: time="2025-05-10T00:49:07.577328564Z" level=info msg="CreateContainer within sandbox \"7e546c30d0c9d94b5e29dba60eff12c89487cadb1b721a8e2c70fe9e66ab85a8\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d0265c6f119c6b2ef0eee193e48244b0cd498d1197ebeb16c0ea135eceb7b70f\"" May 10 00:49:07.579535 env[1332]: time="2025-05-10T00:49:07.579486598Z" level=info msg="StartContainer for \"d0265c6f119c6b2ef0eee193e48244b0cd498d1197ebeb16c0ea135eceb7b70f\"" May 10 00:49:07.672513 env[1332]: time="2025-05-10T00:49:07.672431725Z" level=info msg="StartContainer for \"d0265c6f119c6b2ef0eee193e48244b0cd498d1197ebeb16c0ea135eceb7b70f\" returns successfully" May 10 00:49:08.164126 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) May 10 00:49:10.663951 systemd[1]: run-containerd-runc-k8s.io-d0265c6f119c6b2ef0eee193e48244b0cd498d1197ebeb16c0ea135eceb7b70f-runc.DnET7f.mount: Deactivated successfully. 
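The containerd `env` records carry RFC 3339 timestamps, so the elapsed time from the first init container's StartContainer request (mount-cgroup at 00:49:03.991362275Z, earlier in this log) to the cilium-agent container returning successfully (00:49:07.672431725Z, just above) can be read straight off the log. A minimal stand-alone calculation, assuming only the two quoted timestamps:

```go
// Compute the gap between two containerd log timestamps.
package main

import (
	"fmt"
	"time"
)

func main() {
	first, _ := time.Parse(time.RFC3339Nano, "2025-05-10T00:49:03.991362275Z") // mount-cgroup StartContainer
	last, _ := time.Parse(time.RFC3339Nano, "2025-05-10T00:49:07.672431725Z")  // cilium-agent started
	fmt.Println(last.Sub(first)) // 3.68106945s
}
```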
May 10 00:49:11.559452 systemd-networkd[1075]: lxc_health: Link UP May 10 00:49:11.575125 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready May 10 00:49:11.577402 systemd-networkd[1075]: lxc_health: Gained carrier May 10 00:49:11.902588 kubelet[2253]: I0510 00:49:11.902352 2253 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-zgwnw" podStartSLOduration=8.902313252 podStartE2EDuration="8.902313252s" podCreationTimestamp="2025-05-10 00:49:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:49:08.566307589 +0000 UTC m=+126.673625911" watchObservedRunningTime="2025-05-10 00:49:11.902313252 +0000 UTC m=+130.009631573" May 10 00:49:13.100456 systemd-networkd[1075]: lxc_health: Gained IPv6LL May 10 00:49:17.739630 systemd[1]: run-containerd-runc-k8s.io-d0265c6f119c6b2ef0eee193e48244b0cd498d1197ebeb16c0ea135eceb7b70f-runc.iPwtLn.mount: Deactivated successfully. May 10 00:49:17.983433 sshd[4117]: pam_unix(sshd:session): session closed for user core May 10 00:49:17.991373 systemd-logind[1318]: Session 25 logged out. Waiting for processes to exit. May 10 00:49:17.991665 systemd[1]: sshd@24-10.128.0.77:22-147.75.109.163:55652.service: Deactivated successfully. May 10 00:49:17.994318 systemd[1]: session-25.scope: Deactivated successfully. May 10 00:49:17.995249 systemd-logind[1318]: Removed session 25.
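The pod_startup_latency_tracker record above reports podStartSLOduration=8.902313252s with both image-pull timestamps at the zero time, so the figure is simply observedRunningTime minus podCreationTimestamp. A quick stand-alone check of that arithmetic:

```go
// Verify podStartSLOduration = observedRunningTime - podCreationTimestamp.
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST" // matches the timestamps as logged
	created, _ := time.Parse(layout, "2025-05-10 00:49:03 +0000 UTC")
	running, _ := time.Parse(layout, "2025-05-10 00:49:11.902313252 +0000 UTC")
	fmt.Println(running.Sub(created)) // 8.902313252s
}
```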