May 17 00:40:38.159900 kernel: Linux version 5.15.182-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri May 16 23:09:52 -00 2025 May 17 00:40:38.159957 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=4aad7caeadb0359f379975532748a0b4ae6bb9b229507353e0f5ae84cb9335a0 May 17 00:40:38.159982 kernel: BIOS-provided physical RAM map: May 17 00:40:38.160001 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved May 17 00:40:38.160020 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable May 17 00:40:38.160039 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved May 17 00:40:38.160066 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable May 17 00:40:38.160085 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved May 17 00:40:38.160103 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bd277fff] usable May 17 00:40:38.160120 kernel: BIOS-e820: [mem 0x00000000bd278000-0x00000000bd281fff] ACPI data May 17 00:40:38.160134 kernel: BIOS-e820: [mem 0x00000000bd282000-0x00000000bf8ecfff] usable May 17 00:40:38.160147 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bfb6cfff] reserved May 17 00:40:38.160160 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data May 17 00:40:38.160185 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS May 17 00:40:38.160207 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable May 17 00:40:38.160230 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved May 17 00:40:38.160249 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] 
usable May 17 00:40:38.160269 kernel: NX (Execute Disable) protection: active May 17 00:40:38.160288 kernel: efi: EFI v2.70 by EDK II May 17 00:40:38.160304 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000 RNG=0xbfb73018 TPMEventLog=0xbd278018 May 17 00:40:38.160320 kernel: random: crng init done May 17 00:40:38.160335 kernel: SMBIOS 2.4 present. May 17 00:40:38.160355 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/12/2025 May 17 00:40:38.160370 kernel: Hypervisor detected: KVM May 17 00:40:38.160385 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 May 17 00:40:38.160400 kernel: kvm-clock: cpu 0, msr 11c19a001, primary cpu clock May 17 00:40:38.160416 kernel: kvm-clock: using sched offset of 13660418190 cycles May 17 00:40:38.160432 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns May 17 00:40:38.160448 kernel: tsc: Detected 2299.998 MHz processor May 17 00:40:38.160464 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved May 17 00:40:38.160480 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable May 17 00:40:38.160496 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000 May 17 00:40:38.160516 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT May 17 00:40:38.160532 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000 May 17 00:40:38.160548 kernel: Using GB pages for direct mapping May 17 00:40:38.160564 kernel: Secure boot disabled May 17 00:40:38.160580 kernel: ACPI: Early table checksum verification disabled May 17 00:40:38.160596 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google) May 17 00:40:38.160613 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013) May 17 00:40:38.160650 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001) May 17 00:40:38.160678 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google 
GOOGDSDT 00000001 GOOG 00000001) May 17 00:40:38.160697 kernel: ACPI: FACS 0x00000000BFBF2000 000040 May 17 00:40:38.160717 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20240322) May 17 00:40:38.160738 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001) May 17 00:40:38.160756 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001) May 17 00:40:38.160773 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001) May 17 00:40:38.160794 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001) May 17 00:40:38.160812 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001) May 17 00:40:38.160829 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3] May 17 00:40:38.160848 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63] May 17 00:40:38.160865 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f] May 17 00:40:38.160882 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315] May 17 00:40:38.160900 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033] May 17 00:40:38.160919 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7] May 17 00:40:38.160938 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075] May 17 00:40:38.160959 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f] May 17 00:40:38.160976 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027] May 17 00:40:38.160997 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 May 17 00:40:38.161018 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 May 17 00:40:38.161035 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] May 17 00:40:38.161053 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff] May 17 00:40:38.161071 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 
0x100000000-0x21fffffff] May 17 00:40:38.161093 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff] May 17 00:40:38.161111 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff] May 17 00:40:38.161133 kernel: NODE_DATA(0) allocated [mem 0x21fffa000-0x21fffffff] May 17 00:40:38.161151 kernel: Zone ranges: May 17 00:40:38.161287 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] May 17 00:40:38.161340 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] May 17 00:40:38.161362 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff] May 17 00:40:38.161381 kernel: Movable zone start for each node May 17 00:40:38.161400 kernel: Early memory node ranges May 17 00:40:38.161417 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff] May 17 00:40:38.161436 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff] May 17 00:40:38.161464 kernel: node 0: [mem 0x0000000000100000-0x00000000bd277fff] May 17 00:40:38.161483 kernel: node 0: [mem 0x00000000bd282000-0x00000000bf8ecfff] May 17 00:40:38.161502 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff] May 17 00:40:38.161540 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff] May 17 00:40:38.161558 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff] May 17 00:40:38.161576 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 17 00:40:38.161608 kernel: On node 0, zone DMA: 11 pages in unavailable ranges May 17 00:40:38.161671 kernel: On node 0, zone DMA: 104 pages in unavailable ranges May 17 00:40:38.161691 kernel: On node 0, zone DMA32: 10 pages in unavailable ranges May 17 00:40:38.161716 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges May 17 00:40:38.161735 kernel: On node 0, zone Normal: 32 pages in unavailable ranges May 17 00:40:38.161754 kernel: ACPI: PM-Timer IO Port: 0xb008 May 17 00:40:38.161773 kernel: ACPI: LAPIC_NMI 
(acpi_id[0xff] dfl dfl lint[0x1]) May 17 00:40:38.161791 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 May 17 00:40:38.161810 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) May 17 00:40:38.161828 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) May 17 00:40:38.161846 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) May 17 00:40:38.161877 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) May 17 00:40:38.161901 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 17 00:40:38.161920 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs May 17 00:40:38.161940 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices May 17 00:40:38.161959 kernel: Booting paravirtualized kernel on KVM May 17 00:40:38.161988 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 17 00:40:38.162011 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1 May 17 00:40:38.162029 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576 May 17 00:40:38.162048 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152 May 17 00:40:38.162067 kernel: pcpu-alloc: [0] 0 1 May 17 00:40:38.162091 kernel: kvm-guest: PV spinlocks enabled May 17 00:40:38.162113 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) May 17 00:40:38.162137 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 1932270 May 17 00:40:38.162157 kernel: Policy zone: Normal May 17 00:40:38.162183 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=4aad7caeadb0359f379975532748a0b4ae6bb9b229507353e0f5ae84cb9335a0 May 17 00:40:38.162206 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 17 00:40:38.162227 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) May 17 00:40:38.162246 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 17 00:40:38.162265 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 17 00:40:38.162288 kernel: Memory: 7515412K/7860544K available (12294K kernel code, 2276K rwdata, 13724K rodata, 47472K init, 4108K bss, 344872K reserved, 0K cma-reserved) May 17 00:40:38.162305 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 May 17 00:40:38.162348 kernel: Kernel/User page tables isolation: enabled May 17 00:40:38.162366 kernel: ftrace: allocating 34585 entries in 136 pages May 17 00:40:38.162383 kernel: ftrace: allocated 136 pages with 2 groups May 17 00:40:38.162401 kernel: rcu: Hierarchical RCU implementation. May 17 00:40:38.162421 kernel: rcu: RCU event tracing is enabled. May 17 00:40:38.162439 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. May 17 00:40:38.162461 kernel: Rude variant of Tasks RCU enabled. May 17 00:40:38.162493 kernel: Tracing variant of Tasks RCU enabled. May 17 00:40:38.162512 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
May 17 00:40:38.162534 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 May 17 00:40:38.162553 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 May 17 00:40:38.162572 kernel: Console: colour dummy device 80x25 May 17 00:40:38.162592 kernel: printk: console [ttyS0] enabled May 17 00:40:38.162611 kernel: ACPI: Core revision 20210730 May 17 00:40:38.162630 kernel: APIC: Switch to symmetric I/O mode setup May 17 00:40:38.165856 kernel: x2apic enabled May 17 00:40:38.165889 kernel: Switched APIC routing to physical x2apic. May 17 00:40:38.165910 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1 May 17 00:40:38.165929 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns May 17 00:40:38.165949 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998) May 17 00:40:38.165968 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024 May 17 00:40:38.165986 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4 May 17 00:40:38.166007 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 17 00:40:38.166031 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit May 17 00:40:38.166051 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall May 17 00:40:38.166071 kernel: Spectre V2 : Mitigation: IBRS May 17 00:40:38.166091 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT May 17 00:40:38.166110 kernel: RETBleed: Mitigation: IBRS May 17 00:40:38.166130 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier May 17 00:40:38.166149 kernel: Spectre V2 : User space: Mitigation: STIBP via seccomp and prctl May 17 00:40:38.166169 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp May 17 00:40:38.166188 kernel: MDS: Mitigation: Clear CPU buffers May 17 
00:40:38.166213 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode May 17 00:40:38.166232 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' May 17 00:40:38.166251 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' May 17 00:40:38.166271 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' May 17 00:40:38.166290 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 May 17 00:40:38.166310 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. May 17 00:40:38.166336 kernel: Freeing SMP alternatives memory: 32K May 17 00:40:38.166355 kernel: pid_max: default: 32768 minimum: 301 May 17 00:40:38.166374 kernel: LSM: Security Framework initializing May 17 00:40:38.166398 kernel: SELinux: Initializing. May 17 00:40:38.166417 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) May 17 00:40:38.166437 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) May 17 00:40:38.166456 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0) May 17 00:40:38.166482 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. May 17 00:40:38.166502 kernel: signal: max sigframe size: 1776 May 17 00:40:38.166522 kernel: rcu: Hierarchical SRCU implementation. May 17 00:40:38.166541 kernel: NMI watchdog: Perf NMI watchdog permanently disabled May 17 00:40:38.166560 kernel: smp: Bringing up secondary CPUs ... May 17 00:40:38.166584 kernel: x86: Booting SMP configuration: May 17 00:40:38.166602 kernel: .... node #0, CPUs: #1 May 17 00:40:38.166641 kernel: kvm-clock: cpu 1, msr 11c19a041, secondary cpu clock May 17 00:40:38.166663 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. 
May 17 00:40:38.166684 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. May 17 00:40:38.166703 kernel: smp: Brought up 1 node, 2 CPUs May 17 00:40:38.166723 kernel: smpboot: Max logical packages: 1 May 17 00:40:38.166742 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) May 17 00:40:38.166767 kernel: devtmpfs: initialized May 17 00:40:38.166787 kernel: x86/mm: Memory block size: 128MB May 17 00:40:38.166806 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes) May 17 00:40:38.166826 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 17 00:40:38.166846 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) May 17 00:40:38.166865 kernel: pinctrl core: initialized pinctrl subsystem May 17 00:40:38.166884 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 17 00:40:38.166904 kernel: audit: initializing netlink subsys (disabled) May 17 00:40:38.166923 kernel: audit: type=2000 audit(1747442436.973:1): state=initialized audit_enabled=0 res=1 May 17 00:40:38.166946 kernel: thermal_sys: Registered thermal governor 'step_wise' May 17 00:40:38.166966 kernel: thermal_sys: Registered thermal governor 'user_space' May 17 00:40:38.166992 kernel: cpuidle: using governor menu May 17 00:40:38.167011 kernel: ACPI: bus type PCI registered May 17 00:40:38.167030 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 17 00:40:38.167050 kernel: dca service started, version 1.12.1 May 17 00:40:38.167069 kernel: PCI: Using configuration type 1 for base access May 17 00:40:38.167088 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
May 17 00:40:38.167107 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages May 17 00:40:38.167130 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages May 17 00:40:38.167151 kernel: ACPI: Added _OSI(Module Device) May 17 00:40:38.167170 kernel: ACPI: Added _OSI(Processor Device) May 17 00:40:38.167189 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 17 00:40:38.167210 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 17 00:40:38.167229 kernel: ACPI: Added _OSI(Linux-Dell-Video) May 17 00:40:38.167249 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) May 17 00:40:38.167267 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) May 17 00:40:38.167287 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded May 17 00:40:38.167321 kernel: ACPI: Interpreter enabled May 17 00:40:38.167340 kernel: ACPI: PM: (supports S0 S3 S5) May 17 00:40:38.167359 kernel: ACPI: Using IOAPIC for interrupt routing May 17 00:40:38.167380 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 17 00:40:38.167399 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F May 17 00:40:38.167419 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 17 00:40:38.168303 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] May 17 00:40:38.168999 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge. 
May 17 00:40:38.169039 kernel: PCI host bridge to bus 0000:00 May 17 00:40:38.169232 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 17 00:40:38.169418 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] May 17 00:40:38.169599 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 17 00:40:38.175440 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window] May 17 00:40:38.175667 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 17 00:40:38.175870 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 May 17 00:40:38.176105 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 May 17 00:40:38.176322 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 May 17 00:40:38.176510 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI May 17 00:40:38.176754 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 May 17 00:40:38.176939 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f] May 17 00:40:38.177131 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f] May 17 00:40:38.177333 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 May 17 00:40:38.177538 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f] May 17 00:40:38.177808 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f] May 17 00:40:38.178027 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 May 17 00:40:38.178249 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f] May 17 00:40:38.178463 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f] May 17 00:40:38.178487 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 May 17 00:40:38.178518 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 May 17 00:40:38.178538 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 May 17 00:40:38.178564 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 May 17 00:40:38.178586 kernel: ACPI: PCI: 
Interrupt link LNKS configured for IRQ 9 May 17 00:40:38.178607 kernel: iommu: Default domain type: Translated May 17 00:40:38.178647 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 17 00:40:38.178667 kernel: vgaarb: loaded May 17 00:40:38.178693 kernel: pps_core: LinuxPPS API ver. 1 registered May 17 00:40:38.178713 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti May 17 00:40:38.178741 kernel: PTP clock support registered May 17 00:40:38.178762 kernel: Registered efivars operations May 17 00:40:38.178787 kernel: PCI: Using ACPI for IRQ routing May 17 00:40:38.178808 kernel: PCI: pci_cache_line_size set to 64 bytes May 17 00:40:38.178827 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff] May 17 00:40:38.178847 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff] May 17 00:40:38.178868 kernel: e820: reserve RAM buffer [mem 0xbd278000-0xbfffffff] May 17 00:40:38.178891 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff] May 17 00:40:38.178914 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff] May 17 00:40:38.178937 kernel: clocksource: Switched to clocksource kvm-clock May 17 00:40:38.178963 kernel: VFS: Disk quotas dquot_6.6.0 May 17 00:40:38.178983 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 17 00:40:38.179009 kernel: pnp: PnP ACPI init May 17 00:40:38.179028 kernel: pnp: PnP ACPI: found 7 devices May 17 00:40:38.179054 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 17 00:40:38.179075 kernel: NET: Registered PF_INET protocol family May 17 00:40:38.179107 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) May 17 00:40:38.179129 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) May 17 00:40:38.179154 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 17 00:40:38.179175 kernel: TCP established hash table 
entries: 65536 (order: 7, 524288 bytes, linear) May 17 00:40:38.179201 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear) May 17 00:40:38.179224 kernel: TCP: Hash tables configured (established 65536 bind 65536) May 17 00:40:38.179244 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) May 17 00:40:38.179269 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) May 17 00:40:38.179290 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 17 00:40:38.179314 kernel: NET: Registered PF_XDP protocol family May 17 00:40:38.179514 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] May 17 00:40:38.179731 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] May 17 00:40:38.179924 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] May 17 00:40:38.180140 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window] May 17 00:40:38.180355 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers May 17 00:40:38.180391 kernel: PCI: CLS 0 bytes, default 64 May 17 00:40:38.180418 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) May 17 00:40:38.180444 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB) May 17 00:40:38.180476 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer May 17 00:40:38.180503 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns May 17 00:40:38.180529 kernel: clocksource: Switched to clocksource tsc May 17 00:40:38.180555 kernel: Initialise system trusted keyrings May 17 00:40:38.180581 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 May 17 00:40:38.180606 kernel: Key type asymmetric registered May 17 00:40:38.180932 kernel: Asymmetric key parser 'x509' registered May 17 00:40:38.180957 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) May 17 00:40:38.181151 
kernel: io scheduler mq-deadline registered May 17 00:40:38.181178 kernel: io scheduler kyber registered May 17 00:40:38.181199 kernel: io scheduler bfq registered May 17 00:40:38.181381 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 17 00:40:38.181405 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 May 17 00:40:38.192056 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver May 17 00:40:38.192115 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 May 17 00:40:38.192331 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver May 17 00:40:38.192359 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 May 17 00:40:38.192572 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver May 17 00:40:38.192604 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 17 00:40:38.192641 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 17 00:40:38.192668 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A May 17 00:40:38.192687 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A May 17 00:40:38.192711 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A May 17 00:40:38.192918 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0) May 17 00:40:38.192948 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 May 17 00:40:38.192976 kernel: i8042: Warning: Keylock active May 17 00:40:38.193000 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 May 17 00:40:38.193021 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 17 00:40:38.193228 kernel: rtc_cmos 00:00: RTC can wake from S4 May 17 00:40:38.193413 kernel: rtc_cmos 00:00: registered as rtc0 May 17 00:40:38.193601 kernel: rtc_cmos 00:00: setting system clock to 2025-05-17T00:40:37 UTC (1747442437) May 17 00:40:38.193800 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram May 17 00:40:38.193826 kernel: intel_pstate: CPU model 
not supported May 17 00:40:38.193853 kernel: pstore: Registered efi as persistent store backend May 17 00:40:38.193872 kernel: NET: Registered PF_INET6 protocol family May 17 00:40:38.193893 kernel: Segment Routing with IPv6 May 17 00:40:38.193919 kernel: In-situ OAM (IOAM) with IPv6 May 17 00:40:38.193941 kernel: NET: Registered PF_PACKET protocol family May 17 00:40:38.193962 kernel: Key type dns_resolver registered May 17 00:40:38.193985 kernel: IPI shorthand broadcast: enabled May 17 00:40:38.194006 kernel: sched_clock: Marking stable (835712311, 170923145)->(1118105382, -111469926) May 17 00:40:38.194029 kernel: registered taskstats version 1 May 17 00:40:38.194054 kernel: Loading compiled-in X.509 certificates May 17 00:40:38.194076 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 May 17 00:40:38.194108 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.182-flatcar: 01ca23caa8e5879327538f9287e5164b3e97ac0c' May 17 00:40:38.194128 kernel: Key type .fscrypt registered May 17 00:40:38.194147 kernel: Key type fscrypt-provisioning registered May 17 00:40:38.194167 kernel: pstore: Using crash dump compression: deflate May 17 00:40:38.194188 kernel: ima: Allocated hash algorithm: sha1 May 17 00:40:38.194214 kernel: ima: No architecture policies found May 17 00:40:38.194235 kernel: clk: Disabling unused clocks May 17 00:40:38.194263 kernel: Freeing unused kernel image (initmem) memory: 47472K May 17 00:40:38.194284 kernel: Write protecting the kernel read-only data: 28672k May 17 00:40:38.194309 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K May 17 00:40:38.194329 kernel: Freeing unused kernel image (rodata/data gap) memory: 612K May 17 00:40:38.194349 kernel: Run /init as init process May 17 00:40:38.194374 kernel: with arguments: May 17 00:40:38.194395 kernel: /init May 17 00:40:38.194417 kernel: with environment: May 17 00:40:38.194437 kernel: HOME=/ May 17 00:40:38.194460 kernel: 
TERM=linux May 17 00:40:38.194481 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 17 00:40:38.194506 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 17 00:40:38.194538 systemd[1]: Detected virtualization kvm. May 17 00:40:38.194560 systemd[1]: Detected architecture x86-64. May 17 00:40:38.194585 systemd[1]: Running in initrd. May 17 00:40:38.194605 systemd[1]: No hostname configured, using default hostname. May 17 00:40:38.194648 systemd[1]: Hostname set to . May 17 00:40:38.194669 systemd[1]: Initializing machine ID from VM UUID. May 17 00:40:38.194695 systemd[1]: Queued start job for default target initrd.target. May 17 00:40:38.194717 systemd[1]: Started systemd-ask-password-console.path. May 17 00:40:38.194741 systemd[1]: Reached target cryptsetup.target. May 17 00:40:38.194763 systemd[1]: Reached target paths.target. May 17 00:40:38.194783 systemd[1]: Reached target slices.target. May 17 00:40:38.194803 systemd[1]: Reached target swap.target. May 17 00:40:38.194833 systemd[1]: Reached target timers.target. May 17 00:40:38.194858 systemd[1]: Listening on iscsid.socket. May 17 00:40:38.194881 systemd[1]: Listening on iscsiuio.socket. May 17 00:40:38.194905 systemd[1]: Listening on systemd-journald-audit.socket. May 17 00:40:38.194928 systemd[1]: Listening on systemd-journald-dev-log.socket. May 17 00:40:38.194951 systemd[1]: Listening on systemd-journald.socket. May 17 00:40:38.194974 systemd[1]: Listening on systemd-networkd.socket. May 17 00:40:38.194996 systemd[1]: Listening on systemd-udevd-control.socket. May 17 00:40:38.195026 systemd[1]: Listening on systemd-udevd-kernel.socket. May 17 00:40:38.195047 systemd[1]: Reached target sockets.target. 
May 17 00:40:38.195101 systemd[1]: Starting kmod-static-nodes.service... May 17 00:40:38.195129 systemd[1]: Finished network-cleanup.service. May 17 00:40:38.195156 systemd[1]: Starting systemd-fsck-usr.service... May 17 00:40:38.195177 systemd[1]: Starting systemd-journald.service... May 17 00:40:38.195204 systemd[1]: Starting systemd-modules-load.service... May 17 00:40:38.195231 systemd[1]: Starting systemd-resolved.service... May 17 00:40:38.195257 systemd[1]: Starting systemd-vconsole-setup.service... May 17 00:40:38.195280 systemd[1]: Finished kmod-static-nodes.service. May 17 00:40:38.195305 kernel: audit: type=1130 audit(1747442438.168:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:38.195332 systemd[1]: Finished systemd-fsck-usr.service. May 17 00:40:38.195353 kernel: audit: type=1130 audit(1747442438.180:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:38.195376 systemd[1]: Finished systemd-vconsole-setup.service. May 17 00:40:38.195402 systemd-journald[189]: Journal started May 17 00:40:38.195512 systemd-journald[189]: Runtime Journal (/run/log/journal/cd36fac3edfc5eb55f700f929dd2d8e7) is 8.0M, max 148.8M, 140.8M free. May 17 00:40:38.168000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:38.180000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:38.217702 systemd[1]: Started systemd-journald.service. 
May 17 00:40:38.217788 kernel: audit: type=1130 audit(1747442438.199:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:38.199000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:38.197990 systemd-modules-load[190]: Inserted module 'overlay'
May 17 00:40:38.218408 systemd[1]: Starting dracut-cmdline-ask.service...
May 17 00:40:38.211000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:38.224708 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
May 17 00:40:38.235655 kernel: audit: type=1130 audit(1747442438.211:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:38.250715 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
May 17 00:40:38.249000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:38.261677 kernel: audit: type=1130 audit(1747442438.249:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:38.271723 systemd-resolved[191]: Positive Trust Anchors:
May 17 00:40:38.273122 systemd-resolved[191]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 17 00:40:38.273279 systemd-resolved[191]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
May 17 00:40:38.282782 systemd-resolved[191]: Defaulting to hostname 'linux'.
May 17 00:40:38.311210 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 17 00:40:38.285191 systemd[1]: Finished dracut-cmdline-ask.service.
May 17 00:40:38.327379 kernel: Bridge firewalling registered
May 17 00:40:38.327430 kernel: audit: type=1130 audit(1747442438.316:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:38.316000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:38.317106 systemd-modules-load[190]: Inserted module 'br_netfilter'
May 17 00:40:38.349800 kernel: audit: type=1130 audit(1747442438.331:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:38.331000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:38.317396 systemd[1]: Started systemd-resolved.service.
May 17 00:40:38.332979 systemd[1]: Reached target nss-lookup.target.
May 17 00:40:38.365307 dracut-cmdline[205]: dracut-dracut-053
May 17 00:40:38.344285 systemd[1]: Starting dracut-cmdline.service...
May 17 00:40:38.382788 kernel: SCSI subsystem initialized
May 17 00:40:38.382828 dracut-cmdline[205]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=4aad7caeadb0359f379975532748a0b4ae6bb9b229507353e0f5ae84cb9335a0
May 17 00:40:38.451797 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 17 00:40:38.451853 kernel: device-mapper: uevent: version 1.0.3
May 17 00:40:38.451885 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
May 17 00:40:38.429752 systemd-modules-load[190]: Inserted module 'dm_multipath'
May 17 00:40:38.458000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:38.480679 kernel: audit: type=1130 audit(1747442438.458:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:38.430959 systemd[1]: Finished systemd-modules-load.service.
May 17 00:40:38.540823 kernel: Loading iSCSI transport class v2.0-870.
May 17 00:40:38.540867 kernel: audit: type=1130 audit(1747442438.510:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:38.510000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:38.461303 systemd[1]: Starting systemd-sysctl.service...
May 17 00:40:38.497519 systemd[1]: Finished systemd-sysctl.service.
May 17 00:40:38.559833 kernel: iscsi: registered transport (tcp)
May 17 00:40:38.590241 kernel: iscsi: registered transport (qla4xxx)
May 17 00:40:38.590336 kernel: QLogic iSCSI HBA Driver
May 17 00:40:38.641649 systemd[1]: Finished dracut-cmdline.service.
May 17 00:40:38.648000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:38.651465 systemd[1]: Starting dracut-pre-udev.service...
May 17 00:40:38.716691 kernel: raid6: avx2x4 gen() 18055 MB/s
May 17 00:40:38.737680 kernel: raid6: avx2x4 xor() 7972 MB/s
May 17 00:40:38.758711 kernel: raid6: avx2x2 gen() 17771 MB/s
May 17 00:40:38.779671 kernel: raid6: avx2x2 xor() 17714 MB/s
May 17 00:40:38.800671 kernel: raid6: avx2x1 gen() 14055 MB/s
May 17 00:40:38.821670 kernel: raid6: avx2x1 xor() 15595 MB/s
May 17 00:40:38.842677 kernel: raid6: sse2x4 gen() 10933 MB/s
May 17 00:40:38.863672 kernel: raid6: sse2x4 xor() 6596 MB/s
May 17 00:40:38.884655 kernel: raid6: sse2x2 gen() 11878 MB/s
May 17 00:40:38.905653 kernel: raid6: sse2x2 xor() 7281 MB/s
May 17 00:40:38.926660 kernel: raid6: sse2x1 gen() 10322 MB/s
May 17 00:40:38.952734 kernel: raid6: sse2x1 xor() 5151 MB/s
May 17 00:40:38.952805 kernel: raid6: using algorithm avx2x4 gen() 18055 MB/s
May 17 00:40:38.952835 kernel: raid6: .... xor() 7972 MB/s, rmw enabled
May 17 00:40:38.957929 kernel: raid6: using avx2x2 recovery algorithm
May 17 00:40:38.983677 kernel: xor: automatically using best checksumming function avx
May 17 00:40:39.109664 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
May 17 00:40:39.123251 systemd[1]: Finished dracut-pre-udev.service.
May 17 00:40:39.130000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:39.130000 audit: BPF prog-id=7 op=LOAD
May 17 00:40:39.130000 audit: BPF prog-id=8 op=LOAD
May 17 00:40:39.133499 systemd[1]: Starting systemd-udevd.service...
May 17 00:40:39.151465 systemd-udevd[389]: Using default interface naming scheme 'v252'.
May 17 00:40:39.160306 systemd[1]: Started systemd-udevd.service.
May 17 00:40:39.176000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:39.179151 systemd[1]: Starting dracut-pre-trigger.service...
May 17 00:40:39.196917 dracut-pre-trigger[397]: rd.md=0: removing MD RAID activation
May 17 00:40:39.238049 systemd[1]: Finished dracut-pre-trigger.service.
May 17 00:40:39.245000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:39.248111 systemd[1]: Starting systemd-udev-trigger.service...
May 17 00:40:39.328877 systemd[1]: Finished systemd-udev-trigger.service.
May 17 00:40:39.336000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:39.461653 kernel: cryptd: max_cpu_qlen set to 1000
May 17 00:40:39.519653 kernel: scsi host0: Virtio SCSI HBA
May 17 00:40:39.538663 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6
May 17 00:40:39.600663 kernel: AVX2 version of gcm_enc/dec engaged.
May 17 00:40:39.623657 kernel: AES CTR mode by8 optimization enabled
May 17 00:40:39.650505 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB)
May 17 00:40:39.715665 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks
May 17 00:40:39.715956 kernel: sd 0:0:1:0: [sda] Write Protect is off
May 17 00:40:39.716221 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08
May 17 00:40:39.716467 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
May 17 00:40:39.716735 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 17 00:40:39.716769 kernel: GPT:17805311 != 25165823
May 17 00:40:39.716799 kernel: GPT:Alternate GPT header not at the end of the disk.
May 17 00:40:39.716832 kernel: GPT:17805311 != 25165823
May 17 00:40:39.716862 kernel: GPT: Use GNU Parted to correct GPT errors.
May 17 00:40:39.716892 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 17 00:40:39.716925 kernel: sd 0:0:1:0: [sda] Attached SCSI disk
May 17 00:40:39.783663 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (429)
May 17 00:40:39.791095 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
May 17 00:40:39.811819 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
May 17 00:40:39.818018 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
May 17 00:40:39.859106 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
May 17 00:40:39.864987 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
May 17 00:40:39.886095 systemd[1]: Starting disk-uuid.service...
May 17 00:40:39.909000 disk-uuid[510]: Primary Header is updated.
May 17 00:40:39.909000 disk-uuid[510]: Secondary Entries is updated.
May 17 00:40:39.909000 disk-uuid[510]: Secondary Header is updated.
May 17 00:40:39.933774 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 17 00:40:39.949663 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 17 00:40:39.978850 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 17 00:40:40.971664 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 17 00:40:40.971828 disk-uuid[511]: The operation has completed successfully.
May 17 00:40:41.045387 systemd[1]: disk-uuid.service: Deactivated successfully.
May 17 00:40:41.044000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:41.044000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:41.045549 systemd[1]: Finished disk-uuid.service.
May 17 00:40:41.066721 systemd[1]: Starting verity-setup.service...
May 17 00:40:41.097781 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
May 17 00:40:41.181867 systemd[1]: Found device dev-mapper-usr.device.
May 17 00:40:41.196217 systemd[1]: Finished verity-setup.service.
May 17 00:40:41.194000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:41.197676 systemd[1]: Mounting sysusr-usr.mount...
May 17 00:40:41.313422 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
May 17 00:40:41.313209 systemd[1]: Mounted sysusr-usr.mount.
May 17 00:40:41.321169 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
May 17 00:40:41.322236 systemd[1]: Starting ignition-setup.service...
May 17 00:40:41.373801 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
May 17 00:40:41.373850 kernel: BTRFS info (device sda6): using free space tree
May 17 00:40:41.373892 kernel: BTRFS info (device sda6): has skinny extents
May 17 00:40:41.368015 systemd[1]: Starting parse-ip-for-networkd.service...
May 17 00:40:41.388815 kernel: BTRFS info (device sda6): enabling ssd optimizations
May 17 00:40:41.402228 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 17 00:40:41.417761 systemd[1]: Finished ignition-setup.service.
May 17 00:40:41.416000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:41.419596 systemd[1]: Starting ignition-fetch-offline.service...
May 17 00:40:41.501506 systemd[1]: Finished parse-ip-for-networkd.service.
May 17 00:40:41.500000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:41.501000 audit: BPF prog-id=9 op=LOAD
May 17 00:40:41.504392 systemd[1]: Starting systemd-networkd.service...
May 17 00:40:41.545592 systemd-networkd[685]: lo: Link UP
May 17 00:40:41.545612 systemd-networkd[685]: lo: Gained carrier
May 17 00:40:41.552000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:41.546823 systemd-networkd[685]: Enumeration completed
May 17 00:40:41.547005 systemd[1]: Started systemd-networkd.service.
May 17 00:40:41.547471 systemd-networkd[685]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 17 00:40:41.550118 systemd-networkd[685]: eth0: Link UP
May 17 00:40:41.550127 systemd-networkd[685]: eth0: Gained carrier
May 17 00:40:41.554173 systemd[1]: Reached target network.target.
May 17 00:40:41.564014 systemd-networkd[685]: eth0: Overlong DHCP hostname received, shortened from 'ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2.c.flatcar-212911.internal' to 'ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2'
May 17 00:40:41.564033 systemd-networkd[685]: eth0: DHCPv4 address 10.128.0.28/32, gateway 10.128.0.1 acquired from 169.254.169.254
May 17 00:40:41.568685 systemd[1]: Starting iscsiuio.service...
May 17 00:40:41.667000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:41.684000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:41.626208 systemd[1]: Started iscsiuio.service.
May 17 00:40:41.700799 iscsid[695]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
May 17 00:40:41.700799 iscsid[695]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log
May 17 00:40:41.700799 iscsid[695]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier].
May 17 00:40:41.700799 iscsid[695]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
May 17 00:40:41.700799 iscsid[695]: If using hardware iscsi like qla4xxx this message can be ignored.
May 17 00:40:41.700799 iscsid[695]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
May 17 00:40:41.700799 iscsid[695]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
May 17 00:40:41.776000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:41.801000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:41.670508 systemd[1]: Starting iscsid.service...
May 17 00:40:41.737540 ignition[609]: Ignition 2.14.0
May 17 00:40:41.678929 systemd[1]: Started iscsid.service.
May 17 00:40:41.737554 ignition[609]: Stage: fetch-offline
May 17 00:40:41.687697 systemd[1]: Starting dracut-initqueue.service...
May 17 00:40:41.737666 ignition[609]: reading system config file "/usr/lib/ignition/base.d/base.ign"
May 17 00:40:41.769265 systemd[1]: Finished ignition-fetch-offline.service.
May 17 00:40:41.737726 ignition[609]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6
May 17 00:40:41.778467 systemd[1]: Finished dracut-initqueue.service.
May 17 00:40:41.925000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:41.759465 ignition[609]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
May 17 00:40:41.940000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:41.803093 systemd[1]: Reached target remote-fs-pre.target.
May 17 00:40:41.759752 ignition[609]: parsed url from cmdline: ""
May 17 00:40:41.823826 systemd[1]: Reached target remote-cryptsetup.target.
May 17 00:40:41.759758 ignition[609]: no config URL provided
May 17 00:40:41.841844 systemd[1]: Reached target remote-fs.target.
May 17 00:40:42.024976 kernel: kauditd_printk_skb: 20 callbacks suppressed
May 17 00:40:42.025034 kernel: audit: type=1130 audit(1747442441.991:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:41.991000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:41.759768 ignition[609]: reading system config file "/usr/lib/ignition/user.ign"
May 17 00:40:41.843239 systemd[1]: Starting dracut-pre-mount.service...
May 17 00:40:42.077790 kernel: audit: type=1130 audit(1747442442.048:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:42.048000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:41.759783 ignition[609]: no config at "/usr/lib/ignition/user.ign"
May 17 00:40:41.870443 systemd[1]: Starting ignition-fetch.service...
May 17 00:40:41.759793 ignition[609]: failed to fetch config: resource requires networking
May 17 00:40:41.884490 systemd[1]: Finished dracut-pre-mount.service.
May 17 00:40:41.759991 ignition[609]: Ignition finished successfully
May 17 00:40:41.907907 unknown[710]: fetched base config from "system"
May 17 00:40:41.884703 ignition[710]: Ignition 2.14.0
May 17 00:40:41.907925 unknown[710]: fetched base config from "system"
May 17 00:40:41.884714 ignition[710]: Stage: fetch
May 17 00:40:41.907951 unknown[710]: fetched user config from "gcp"
May 17 00:40:41.884857 ignition[710]: reading system config file "/usr/lib/ignition/base.d/base.ign"
May 17 00:40:41.927350 systemd[1]: Finished ignition-fetch.service.
May 17 00:40:41.884899 ignition[710]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6
May 17 00:40:41.943243 systemd[1]: Starting ignition-kargs.service...
May 17 00:40:41.894517 ignition[710]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
May 17 00:40:41.978300 systemd[1]: Finished ignition-kargs.service.
May 17 00:40:41.894965 ignition[710]: parsed url from cmdline: ""
May 17 00:40:41.994742 systemd[1]: Starting ignition-disks.service...
May 17 00:40:41.894973 ignition[710]: no config URL provided
May 17 00:40:42.040054 systemd[1]: Finished ignition-disks.service.
May 17 00:40:41.894983 ignition[710]: reading system config file "/usr/lib/ignition/user.ign"
May 17 00:40:42.050088 systemd[1]: Reached target initrd-root-device.target.
May 17 00:40:41.895000 ignition[710]: no config at "/usr/lib/ignition/user.ign"
May 17 00:40:42.095033 systemd[1]: Reached target local-fs-pre.target.
May 17 00:40:41.895043 ignition[710]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1
May 17 00:40:42.110041 systemd[1]: Reached target local-fs.target.
May 17 00:40:41.901660 ignition[710]: GET result: OK
May 17 00:40:42.133974 systemd[1]: Reached target sysinit.target.
May 17 00:40:41.901767 ignition[710]: parsing config with SHA512: aeddbda18e97833a7c8494f6da69dcc5f797d15ff60b1d80f6838e30b7c246f515aceefd3a20869152280b489fb29639c636a7c4779cb7a3c385a6c63652bde7
May 17 00:40:42.141015 systemd[1]: Reached target basic.target.
May 17 00:40:41.911090 ignition[710]: fetch: fetch complete
May 17 00:40:42.166476 systemd[1]: Starting systemd-fsck-root.service...
May 17 00:40:41.911104 ignition[710]: fetch: fetch passed
May 17 00:40:41.911220 ignition[710]: Ignition finished successfully
May 17 00:40:41.957489 ignition[716]: Ignition 2.14.0
May 17 00:40:41.957499 ignition[716]: Stage: kargs
May 17 00:40:41.957669 ignition[716]: reading system config file "/usr/lib/ignition/base.d/base.ign"
May 17 00:40:41.957717 ignition[716]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6
May 17 00:40:41.966212 ignition[716]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
May 17 00:40:41.967824 ignition[716]: kargs: kargs passed
May 17 00:40:41.967880 ignition[716]: Ignition finished successfully
May 17 00:40:42.028537 ignition[722]: Ignition 2.14.0
May 17 00:40:42.028546 ignition[722]: Stage: disks
May 17 00:40:42.028726 ignition[722]: reading system config file "/usr/lib/ignition/base.d/base.ign"
May 17 00:40:42.028763 ignition[722]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6
May 17 00:40:42.037138 ignition[722]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
May 17 00:40:42.038753 ignition[722]: disks: disks passed
May 17 00:40:42.038812 ignition[722]: Ignition finished successfully
May 17 00:40:42.207783 systemd-fsck[730]: ROOT: clean, 619/1628000 files, 124060/1617920 blocks
May 17 00:40:42.393724 systemd[1]: Finished systemd-fsck-root.service.
May 17 00:40:42.427861 kernel: audit: type=1130 audit(1747442442.392:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:42.392000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:42.395506 systemd[1]: Mounting sysroot.mount...
May 17 00:40:42.450896 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
May 17 00:40:42.445383 systemd[1]: Mounted sysroot.mount.
May 17 00:40:42.458243 systemd[1]: Reached target initrd-root-fs.target.
May 17 00:40:42.475620 systemd[1]: Mounting sysroot-usr.mount...
May 17 00:40:42.490473 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
May 17 00:40:42.490539 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 17 00:40:42.490578 systemd[1]: Reached target ignition-diskful.target.
May 17 00:40:42.577519 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (736)
May 17 00:40:42.577570 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
May 17 00:40:42.577602 kernel: BTRFS info (device sda6): using free space tree
May 17 00:40:42.577646 kernel: BTRFS info (device sda6): has skinny extents
May 17 00:40:42.507269 systemd[1]: Mounted sysroot-usr.mount.
May 17 00:40:42.531353 systemd[1]: Mounting sysroot-usr-share-oem.mount...
May 17 00:40:42.604844 kernel: BTRFS info (device sda6): enabling ssd optimizations
May 17 00:40:42.604950 initrd-setup-root[741]: cut: /sysroot/etc/passwd: No such file or directory
May 17 00:40:42.542534 systemd[1]: Starting initrd-setup-root.service...
May 17 00:40:42.641806 initrd-setup-root[749]: cut: /sysroot/etc/group: No such file or directory
May 17 00:40:42.616598 systemd[1]: Mounted sysroot-usr-share-oem.mount.
May 17 00:40:42.660838 initrd-setup-root[773]: cut: /sysroot/etc/shadow: No such file or directory
May 17 00:40:42.670769 initrd-setup-root[783]: cut: /sysroot/etc/gshadow: No such file or directory
May 17 00:40:42.684434 systemd[1]: Finished initrd-setup-root.service.
May 17 00:40:42.718953 kernel: audit: type=1130 audit(1747442442.683:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:42.683000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:42.686350 systemd[1]: Starting ignition-mount.service...
May 17 00:40:42.727149 systemd[1]: Starting sysroot-boot.service...
May 17 00:40:42.741334 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
May 17 00:40:42.741514 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
May 17 00:40:42.766784 ignition[802]: INFO : Ignition 2.14.0
May 17 00:40:42.766784 ignition[802]: INFO : Stage: mount
May 17 00:40:42.766784 ignition[802]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
May 17 00:40:42.766784 ignition[802]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6
May 17 00:40:42.869833 kernel: audit: type=1130 audit(1747442442.791:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:42.870055 kernel: audit: type=1130 audit(1747442442.822:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:42.791000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:42.822000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:42.776436 systemd[1]: Finished sysroot-boot.service.
May 17 00:40:42.883897 ignition[802]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
May 17 00:40:42.883897 ignition[802]: INFO : mount: mount passed
May 17 00:40:42.883897 ignition[802]: INFO : Ignition finished successfully
May 17 00:40:42.929873 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (811)
May 17 00:40:42.929917 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
May 17 00:40:42.929936 kernel: BTRFS info (device sda6): using free space tree
May 17 00:40:42.929953 kernel: BTRFS info (device sda6): has skinny extents
May 17 00:40:42.793359 systemd[1]: Finished ignition-mount.service.
May 17 00:40:42.956973 kernel: BTRFS info (device sda6): enabling ssd optimizations
May 17 00:40:42.825834 systemd[1]: Starting ignition-files.service...
May 17 00:40:42.881200 systemd[1]: Mounting sysroot-usr-share-oem.mount...
May 17 00:40:42.953191 systemd[1]: Mounted sysroot-usr-share-oem.mount.
May 17 00:40:42.989831 ignition[830]: INFO : Ignition 2.14.0
May 17 00:40:42.989831 ignition[830]: INFO : Stage: files
May 17 00:40:42.989831 ignition[830]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
May 17 00:40:42.989831 ignition[830]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6
May 17 00:40:43.012452 unknown[830]: wrote ssh authorized keys file for user: core
May 17 00:40:43.044786 ignition[830]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
May 17 00:40:43.044786 ignition[830]: DEBUG : files: compiled without relabeling support, skipping
May 17 00:40:43.044786 ignition[830]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 17 00:40:43.044786 ignition[830]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 17 00:40:43.044786 ignition[830]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 17 00:40:43.044786 ignition[830]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 17 00:40:43.044786 ignition[830]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 17 00:40:43.044786 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/hosts"
May 17 00:40:43.044786 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(3): oem config not found in "/usr/share/oem", looking on oem partition
May 17 00:40:43.044786 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem118392700"
May 17 00:40:43.044786 ignition[830]: CRITICAL : files: createFilesystemsFiles: createFiles: op(3): op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem118392700": device or resource busy
May 17 00:40:43.044786 ignition[830]: ERROR : files: createFilesystemsFiles: createFiles: op(3): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem118392700", trying btrfs: device or resource busy
May 17 00:40:43.044786 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem118392700"
May 17 00:40:43.044786 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem118392700"
May 17 00:40:43.044786 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(6): [started] unmounting "/mnt/oem118392700"
May 17 00:40:43.044786 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(6): [finished] unmounting "/mnt/oem118392700"
May 17 00:40:43.044786 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/hosts"
May 17 00:40:43.044786 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
May 17 00:40:43.030824 systemd-networkd[685]: eth0: Gained IPv6LL
May 17 00:40:43.315833 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
May 17 00:40:43.315833 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK
May 17 00:40:43.380870 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
May 17 00:40:43.397839 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 17 00:40:43.397839 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
May 17 00:40:43.700873 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK
May 17 00:40:43.855348 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 17 00:40:43.871833 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/profile.d/google-cloud-sdk.sh"
May 17 00:40:43.871833 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(9): oem config not found in "/usr/share/oem", looking on oem partition
May 17 00:40:43.871833 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(9): op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem756948315"
May 17 00:40:43.871833 ignition[830]: CRITICAL : files: createFilesystemsFiles: createFiles: op(9): op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem756948315": device or resource busy
May 17 00:40:43.871833 ignition[830]: ERROR : files: createFilesystemsFiles: createFiles: op(9): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem756948315", trying btrfs: device or resource busy
May 17 00:40:43.871833 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(9): op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem756948315"
May 17 00:40:43.871833 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(9): op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem756948315"
May 17 00:40:43.995837 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(9): op(c): [started] unmounting "/mnt/oem756948315"
May 17 00:40:43.995837 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(9): op(c): [finished] unmounting "/mnt/oem756948315"
May 17 00:40:43.995837 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/profile.d/google-cloud-sdk.sh"
May 17 00:40:43.995837 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/install.sh"
May 17 00:40:43.995837 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/install.sh"
May 17 00:40:43.995837 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nginx.yaml"
May 17 00:40:43.995837 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 17 00:40:43.995837 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 17 00:40:43.995837 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 17 00:40:43.995837 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file
"/sysroot/home/core/nfs-pvc.yaml" May 17 00:40:43.995837 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 17 00:40:43.995837 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(11): [started] writing file "/sysroot/etc/flatcar/update.conf" May 17 00:40:43.995837 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(11): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 17 00:40:43.995837 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(12): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" May 17 00:40:43.995837 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(12): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" May 17 00:40:43.875967 systemd[1]: mnt-oem756948315.mount: Deactivated successfully. 
May 17 00:40:44.249871 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(13): [started] writing file "/sysroot/etc/systemd/system/oem-gce-enable-oslogin.service"
May 17 00:40:44.249871 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(13): oem config not found in "/usr/share/oem", looking on oem partition
May 17 00:40:44.249871 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(14): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1728604041"
May 17 00:40:44.249871 ignition[830]: CRITICAL : files: createFilesystemsFiles: createFiles: op(13): op(14): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1728604041": device or resource busy
May 17 00:40:44.249871 ignition[830]: ERROR : files: createFilesystemsFiles: createFiles: op(13): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1728604041", trying btrfs: device or resource busy
May 17 00:40:44.249871 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(15): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1728604041"
May 17 00:40:44.249871 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(15): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1728604041"
May 17 00:40:44.249871 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(16): [started] unmounting "/mnt/oem1728604041"
May 17 00:40:44.249871 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(16): [finished] unmounting "/mnt/oem1728604041"
May 17 00:40:44.249871 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(13): [finished] writing file "/sysroot/etc/systemd/system/oem-gce-enable-oslogin.service"
May 17 00:40:44.249871 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(17): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
May 17 00:40:44.249871 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(17): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
May 17 00:40:43.900619 systemd[1]: mnt-oem1728604041.mount: Deactivated successfully.
May 17 00:40:44.473906 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(17): GET result: OK
May 17 00:40:44.751130 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(17): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
May 17 00:40:44.769807 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(18): [started] writing file "/sysroot/etc/systemd/system/oem-gce.service"
May 17 00:40:44.769807 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(18): oem config not found in "/usr/share/oem", looking on oem partition
May 17 00:40:44.769807 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(18): op(19): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem256872157"
May 17 00:40:44.769807 ignition[830]: CRITICAL : files: createFilesystemsFiles: createFiles: op(18): op(19): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem256872157": device or resource busy
May 17 00:40:44.769807 ignition[830]: ERROR : files: createFilesystemsFiles: createFiles: op(18): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem256872157", trying btrfs: device or resource busy
May 17 00:40:44.769807 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(18): op(1a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem256872157"
May 17 00:40:44.769807 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(18): op(1a): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem256872157"
May 17 00:40:44.769807 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(18): op(1b): [started] unmounting "/mnt/oem256872157"
May 17 00:40:44.769807 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(18): op(1b): [finished] unmounting "/mnt/oem256872157"
May 17 00:40:44.769807 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(18): [finished] writing file "/sysroot/etc/systemd/system/oem-gce.service"
May 17 00:40:44.769807 ignition[830]: INFO : files: op(1c): [started] processing unit "coreos-metadata-sshkeys@.service"
May 17 00:40:44.769807 ignition[830]: INFO : files: op(1c): [finished] processing unit "coreos-metadata-sshkeys@.service"
May 17 00:40:44.769807 ignition[830]: INFO : files: op(1d): [started] processing unit "oem-gce.service"
May 17 00:40:44.769807 ignition[830]: INFO : files: op(1d): [finished] processing unit "oem-gce.service"
May 17 00:40:44.769807 ignition[830]: INFO : files: op(1e): [started] processing unit "oem-gce-enable-oslogin.service"
May 17 00:40:44.769807 ignition[830]: INFO : files: op(1e): [finished] processing unit "oem-gce-enable-oslogin.service"
May 17 00:40:44.769807 ignition[830]: INFO : files: op(1f): [started] processing unit "prepare-helm.service"
May 17 00:40:45.162091 kernel: audit: type=1130 audit(1747442444.785:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:45.162154 kernel: audit: type=1130 audit(1747442444.916:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:45.162184 kernel: audit: type=1130 audit(1747442444.962:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:45.162210 kernel: audit: type=1131 audit(1747442444.962:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:44.785000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:44.916000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:44.962000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:44.962000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:45.095000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:45.095000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:45.162595 ignition[830]: INFO : files: op(1f): op(20): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 17 00:40:45.162595 ignition[830]: INFO : files: op(1f): op(20): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 17 00:40:45.162595 ignition[830]: INFO : files: op(1f): [finished] processing unit "prepare-helm.service"
May 17 00:40:45.162595 ignition[830]: INFO : files: op(21): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service "
May 17 00:40:45.162595 ignition[830]: INFO : files: op(21): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service "
May 17 00:40:45.162595 ignition[830]: INFO : files: op(22): [started] setting preset to enabled for "oem-gce.service"
May 17 00:40:45.162595 ignition[830]: INFO : files: op(22): [finished] setting preset to enabled for "oem-gce.service"
May 17 00:40:45.162595 ignition[830]: INFO : files: op(23): [started] setting preset to enabled for "oem-gce-enable-oslogin.service"
May 17 00:40:45.162595 ignition[830]: INFO : files: op(23): [finished] setting preset to enabled for "oem-gce-enable-oslogin.service"
May 17 00:40:45.162595 ignition[830]: INFO : files: op(24): [started] setting preset to enabled for "prepare-helm.service"
May 17 00:40:45.162595 ignition[830]: INFO : files: op(24): [finished] setting preset to enabled for "prepare-helm.service"
May 17 00:40:45.162595 ignition[830]: INFO : files: createResultFile: createFiles: op(25): [started] writing file "/sysroot/etc/.ignition-result.json"
May 17 00:40:45.162595 ignition[830]: INFO : files: createResultFile: createFiles: op(25): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 17 00:40:45.162595 ignition[830]: INFO : files: files passed
May 17 00:40:45.162595 ignition[830]: INFO : Ignition finished successfully
May 17 00:40:45.202000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:45.309000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:44.772455 systemd[1]: Finished ignition-files.service.
May 17 00:40:44.800377 systemd[1]: Starting initrd-setup-root-after-ignition.service...
May 17 00:40:45.474882 initrd-setup-root-after-ignition[853]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 17 00:40:44.854011 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
May 17 00:40:44.855281 systemd[1]: Starting ignition-quench.service...
May 17 00:40:44.901387 systemd[1]: Finished initrd-setup-root-after-ignition.service.
May 17 00:40:44.918580 systemd[1]: ignition-quench.service: Deactivated successfully.
May 17 00:40:44.918855 systemd[1]: Finished ignition-quench.service.
May 17 00:40:45.577000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:44.964419 systemd[1]: Reached target ignition-complete.target.
May 17 00:40:45.047167 systemd[1]: Starting initrd-parse-etc.service...
May 17 00:40:45.609000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:45.085242 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 17 00:40:45.627000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:45.085378 systemd[1]: Finished initrd-parse-etc.service.
May 17 00:40:45.644000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:45.097312 systemd[1]: Reached target initrd-fs.target.
May 17 00:40:45.659801 ignition[868]: INFO : Ignition 2.14.0
May 17 00:40:45.659801 ignition[868]: INFO : Stage: umount
May 17 00:40:45.659801 ignition[868]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
May 17 00:40:45.659801 ignition[868]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6
May 17 00:40:45.130904 systemd[1]: Reached target initrd.target.
May 17 00:40:45.723000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:45.732108 ignition[868]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
May 17 00:40:45.732108 ignition[868]: INFO : umount: umount passed
May 17 00:40:45.732108 ignition[868]: INFO : Ignition finished successfully
May 17 00:40:45.762000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:45.777000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:45.148994 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
May 17 00:40:45.793000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:45.150288 systemd[1]: Starting dracut-pre-pivot.service...
May 17 00:40:45.813000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:45.179192 systemd[1]: Finished dracut-pre-pivot.service.
May 17 00:40:45.829000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:45.205267 systemd[1]: Starting initrd-cleanup.service...
May 17 00:40:45.844000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:45.248956 systemd[1]: Stopped target nss-lookup.target.
May 17 00:40:45.860000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:45.252281 systemd[1]: Stopped target remote-cryptsetup.target.
May 17 00:40:45.875000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:45.273334 systemd[1]: Stopped target timers.target.
May 17 00:40:45.292353 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 17 00:40:45.909000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:45.292697 systemd[1]: Stopped dracut-pre-pivot.service.
May 17 00:40:45.311617 systemd[1]: Stopped target initrd.target.
May 17 00:40:45.351231 systemd[1]: Stopped target basic.target.
May 17 00:40:45.364271 systemd[1]: Stopped target ignition-complete.target.
May 17 00:40:45.403270 systemd[1]: Stopped target ignition-diskful.target.
May 17 00:40:45.419220 systemd[1]: Stopped target initrd-root-device.target.
May 17 00:40:45.449188 systemd[1]: Stopped target remote-fs.target.
May 17 00:40:46.012000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:45.465203 systemd[1]: Stopped target remote-fs-pre.target.
May 17 00:40:46.028000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:45.483245 systemd[1]: Stopped target sysinit.target.
May 17 00:40:45.506175 systemd[1]: Stopped target local-fs.target.
May 17 00:40:45.531243 systemd[1]: Stopped target local-fs-pre.target.
May 17 00:40:46.072000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:45.547186 systemd[1]: Stopped target swap.target.
May 17 00:40:46.088000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:45.563116 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 17 00:40:46.104000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:46.104000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:46.104000 audit: BPF prog-id=6 op=UNLOAD
May 17 00:40:45.563336 systemd[1]: Stopped dracut-pre-mount.service.
May 17 00:40:45.579332 systemd[1]: Stopped target cryptsetup.target.
May 17 00:40:45.594141 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 17 00:40:46.140000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:45.594378 systemd[1]: Stopped dracut-initqueue.service.
May 17 00:40:46.164000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:45.611314 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 17 00:40:46.179000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:45.611523 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
May 17 00:40:45.629262 systemd[1]: ignition-files.service: Deactivated successfully.
May 17 00:40:46.205000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:45.629471 systemd[1]: Stopped ignition-files.service.
May 17 00:40:45.648075 systemd[1]: Stopping ignition-mount.service...
May 17 00:40:45.682308 systemd[1]: Stopping iscsiuio.service...
May 17 00:40:46.257000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:45.692998 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 17 00:40:46.264000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:45.693249 systemd[1]: Stopped kmod-static-nodes.service.
May 17 00:40:46.288000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:45.726593 systemd[1]: Stopping sysroot-boot.service...
May 17 00:40:45.744180 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 17 00:40:45.744693 systemd[1]: Stopped systemd-udev-trigger.service.
May 17 00:40:46.327000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:45.764163 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 17 00:40:46.343000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:45.764371 systemd[1]: Stopped dracut-pre-trigger.service.
May 17 00:40:46.361000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:46.361000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:45.783760 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 17 00:40:45.785088 systemd[1]: iscsiuio.service: Deactivated successfully.
May 17 00:40:45.785220 systemd[1]: Stopped iscsiuio.service.
May 17 00:40:45.795893 systemd[1]: ignition-mount.service: Deactivated successfully.
May 17 00:40:46.427091 systemd-journald[189]: Received SIGTERM from PID 1 (n/a).
May 17 00:40:45.796035 systemd[1]: Stopped ignition-mount.service.
May 17 00:40:46.434816 iscsid[695]: iscsid shutting down.
May 17 00:40:45.815591 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 17 00:40:45.815747 systemd[1]: Stopped sysroot-boot.service.
May 17 00:40:45.831889 systemd[1]: ignition-disks.service: Deactivated successfully.
May 17 00:40:45.832104 systemd[1]: Stopped ignition-disks.service.
May 17 00:40:45.846007 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 17 00:40:45.846098 systemd[1]: Stopped ignition-kargs.service.
May 17 00:40:45.861998 systemd[1]: ignition-fetch.service: Deactivated successfully.
May 17 00:40:45.862086 systemd[1]: Stopped ignition-fetch.service.
May 17 00:40:45.876960 systemd[1]: Stopped target network.target.
May 17 00:40:45.894847 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 17 00:40:45.894990 systemd[1]: Stopped ignition-fetch-offline.service.
May 17 00:40:45.910985 systemd[1]: Stopped target paths.target.
May 17 00:40:45.925814 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 17 00:40:45.927720 systemd[1]: Stopped systemd-ask-password-console.path.
May 17 00:40:45.941841 systemd[1]: Stopped target slices.target.
May 17 00:40:45.954810 systemd[1]: Stopped target sockets.target.
May 17 00:40:45.967948 systemd[1]: iscsid.socket: Deactivated successfully.
May 17 00:40:45.968026 systemd[1]: Closed iscsid.socket.
May 17 00:40:45.982915 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 17 00:40:45.982998 systemd[1]: Closed iscsiuio.socket.
May 17 00:40:45.997903 systemd[1]: ignition-setup.service: Deactivated successfully.
May 17 00:40:45.998077 systemd[1]: Stopped ignition-setup.service.
May 17 00:40:46.013973 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 17 00:40:46.014056 systemd[1]: Stopped initrd-setup-root.service.
May 17 00:40:46.030377 systemd[1]: Stopping systemd-networkd.service...
May 17 00:40:46.033709 systemd-networkd[685]: eth0: DHCPv6 lease lost
May 17 00:40:46.046087 systemd[1]: Stopping systemd-resolved.service...
May 17 00:40:46.053896 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 17 00:40:46.054040 systemd[1]: Stopped systemd-resolved.service.
May 17 00:40:46.074751 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 17 00:40:46.074898 systemd[1]: Stopped systemd-networkd.service.
May 17 00:40:46.090702 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 17 00:40:46.090830 systemd[1]: Finished initrd-cleanup.service.
May 17 00:40:46.107124 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 17 00:40:46.107175 systemd[1]: Closed systemd-networkd.socket.
May 17 00:40:46.123057 systemd[1]: Stopping network-cleanup.service...
May 17 00:40:46.128981 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 17 00:40:46.129066 systemd[1]: Stopped parse-ip-for-networkd.service.
May 17 00:40:46.142178 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 17 00:40:46.142254 systemd[1]: Stopped systemd-sysctl.service.
May 17 00:40:46.166154 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 17 00:40:46.166225 systemd[1]: Stopped systemd-modules-load.service.
May 17 00:40:46.181214 systemd[1]: Stopping systemd-udevd.service...
May 17 00:40:46.194803 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 17 00:40:46.195719 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 17 00:40:46.195901 systemd[1]: Stopped systemd-udevd.service.
May 17 00:40:46.208850 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 17 00:40:46.208951 systemd[1]: Closed systemd-udevd-control.socket.
May 17 00:40:46.228919 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 17 00:40:46.228991 systemd[1]: Closed systemd-udevd-kernel.socket.
May 17 00:40:46.243967 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 17 00:40:46.244048 systemd[1]: Stopped dracut-pre-udev.service.
May 17 00:40:46.259087 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 17 00:40:46.259167 systemd[1]: Stopped dracut-cmdline.service.
May 17 00:40:46.266274 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 17 00:40:46.266352 systemd[1]: Stopped dracut-cmdline-ask.service.
May 17 00:40:46.291267 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
May 17 00:40:46.304264 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 17 00:40:46.304358 systemd[1]: Stopped systemd-vconsole-setup.service.
May 17 00:40:46.329511 systemd[1]: network-cleanup.service: Deactivated successfully.
May 17 00:40:46.329715 systemd[1]: Stopped network-cleanup.service.
May 17 00:40:46.345310 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 17 00:40:46.345443 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
May 17 00:40:46.363152 systemd[1]: Reached target initrd-switch-root.target.
May 17 00:40:46.380067 systemd[1]: Starting initrd-switch-root.service...
May 17 00:40:46.395906 systemd[1]: Switching root.
May 17 00:40:46.445902 systemd-journald[189]: Journal stopped
May 17 00:40:51.451710 kernel: SELinux: Class mctp_socket not defined in policy.
May 17 00:40:51.451840 kernel: SELinux: Class anon_inode not defined in policy.
May 17 00:40:51.451868 kernel: SELinux: the above unknown classes and permissions will be allowed
May 17 00:40:51.451893 kernel: SELinux: policy capability network_peer_controls=1
May 17 00:40:51.451919 kernel: SELinux: policy capability open_perms=1
May 17 00:40:51.451945 kernel: SELinux: policy capability extended_socket_class=1
May 17 00:40:51.451985 kernel: SELinux: policy capability always_check_network=0
May 17 00:40:51.452013 kernel: SELinux: policy capability cgroup_seclabel=1
May 17 00:40:51.452037 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 17 00:40:51.452062 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 17 00:40:51.452087 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 17 00:40:51.452114 systemd[1]: Successfully loaded SELinux policy in 114.370ms.
May 17 00:40:51.452164 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.053ms.
May 17 00:40:51.452194 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
May 17 00:40:51.452222 systemd[1]: Detected virtualization kvm.
May 17 00:40:51.452259 systemd[1]: Detected architecture x86-64.
May 17 00:40:51.452288 systemd[1]: Detected first boot.
May 17 00:40:51.452329 systemd[1]: Initializing machine ID from VM UUID.
May 17 00:40:51.452357 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). May 17 00:40:51.452386 kernel: kauditd_printk_skb: 44 callbacks suppressed May 17 00:40:51.452419 kernel: audit: type=1400 audit(1747442447.058:85): avc: denied { associate } for pid=901 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" May 17 00:40:51.452459 kernel: audit: type=1300 audit(1747442447.058:85): arch=c000003e syscall=188 success=yes exit=0 a0=c0001878d2 a1=c00002ae40 a2=c000029100 a3=32 items=0 ppid=884 pid=901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:40:51.452491 kernel: audit: type=1327 audit(1747442447.058:85): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 May 17 00:40:51.452522 kernel: audit: type=1400 audit(1747442447.094:86): avc: denied { associate } for pid=901 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 May 17 00:40:51.452548 kernel: audit: type=1300 audit(1747442447.094:86): arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001879a9 a2=1ed a3=0 items=2 ppid=884 pid=901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:40:51.452572 kernel: 
audit: type=1307 audit(1747442447.094:86): cwd="/" May 17 00:40:51.452598 kernel: audit: type=1302 audit(1747442447.094:86): item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:51.452658 kernel: audit: type=1302 audit(1747442447.094:86): item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:51.452697 kernel: audit: type=1327 audit(1747442447.094:86): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 May 17 00:40:51.452731 systemd[1]: Populated /etc with preset unit settings. May 17 00:40:51.452762 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 17 00:40:51.452791 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 17 00:40:51.452819 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:40:51.452846 kernel: audit: type=1334 audit(1747442450.583:87): prog-id=12 op=LOAD May 17 00:40:51.452878 systemd[1]: iscsid.service: Deactivated successfully. May 17 00:40:51.452906 systemd[1]: Stopped iscsid.service. May 17 00:40:51.452933 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 17 00:40:51.452959 systemd[1]: Stopped initrd-switch-root.service. 
May 17 00:40:51.452988 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 17 00:40:51.453015 systemd[1]: Created slice system-addon\x2dconfig.slice. May 17 00:40:51.453043 systemd[1]: Created slice system-addon\x2drun.slice. May 17 00:40:51.453071 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. May 17 00:40:51.453102 systemd[1]: Created slice system-getty.slice. May 17 00:40:51.453129 systemd[1]: Created slice system-modprobe.slice. May 17 00:40:51.453158 systemd[1]: Created slice system-serial\x2dgetty.slice. May 17 00:40:51.453187 systemd[1]: Created slice system-system\x2dcloudinit.slice. May 17 00:40:51.453215 systemd[1]: Created slice system-systemd\x2dfsck.slice. May 17 00:40:51.453242 systemd[1]: Created slice user.slice. May 17 00:40:51.453268 systemd[1]: Started systemd-ask-password-console.path. May 17 00:40:51.453304 systemd[1]: Started systemd-ask-password-wall.path. May 17 00:40:51.453347 systemd[1]: Set up automount boot.automount. May 17 00:40:51.453383 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. May 17 00:40:51.453416 systemd[1]: Stopped target initrd-switch-root.target. May 17 00:40:51.453450 systemd[1]: Stopped target initrd-fs.target. May 17 00:40:51.453480 systemd[1]: Stopped target initrd-root-fs.target. May 17 00:40:51.453506 systemd[1]: Reached target integritysetup.target. May 17 00:40:51.453532 systemd[1]: Reached target remote-cryptsetup.target. May 17 00:40:51.453559 systemd[1]: Reached target remote-fs.target. May 17 00:40:51.453589 systemd[1]: Reached target slices.target. May 17 00:40:51.453616 systemd[1]: Reached target swap.target. May 17 00:40:51.453684 systemd[1]: Reached target torcx.target. May 17 00:40:51.453715 systemd[1]: Reached target veritysetup.target. May 17 00:40:51.453743 systemd[1]: Listening on systemd-coredump.socket. May 17 00:40:51.453774 systemd[1]: Listening on systemd-initctl.socket. 
May 17 00:40:51.453809 systemd[1]: Listening on systemd-networkd.socket. May 17 00:40:51.453887 systemd[1]: Listening on systemd-udevd-control.socket. May 17 00:40:51.453916 systemd[1]: Listening on systemd-udevd-kernel.socket. May 17 00:40:51.453945 systemd[1]: Listening on systemd-userdbd.socket. May 17 00:40:51.453975 systemd[1]: Mounting dev-hugepages.mount... May 17 00:40:51.454011 systemd[1]: Mounting dev-mqueue.mount... May 17 00:40:51.454051 systemd[1]: Mounting media.mount... May 17 00:40:51.454079 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:40:51.454115 systemd[1]: Mounting sys-kernel-debug.mount... May 17 00:40:51.454151 systemd[1]: Mounting sys-kernel-tracing.mount... May 17 00:40:51.454185 systemd[1]: Mounting tmp.mount... May 17 00:40:51.454216 systemd[1]: Starting flatcar-tmpfiles.service... May 17 00:40:51.454249 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 17 00:40:51.454282 systemd[1]: Starting kmod-static-nodes.service... May 17 00:40:51.454320 systemd[1]: Starting modprobe@configfs.service... May 17 00:40:51.454352 systemd[1]: Starting modprobe@dm_mod.service... May 17 00:40:51.454378 systemd[1]: Starting modprobe@drm.service... May 17 00:40:51.454404 systemd[1]: Starting modprobe@efi_pstore.service... May 17 00:40:51.454429 systemd[1]: Starting modprobe@fuse.service... May 17 00:40:51.454455 systemd[1]: Starting modprobe@loop.service... May 17 00:40:51.454480 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 17 00:40:51.454506 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 17 00:40:51.454533 systemd[1]: Stopped systemd-fsck-root.service. 
May 17 00:40:51.454562 kernel: fuse: init (API version 7.34) May 17 00:40:51.454590 kernel: loop: module loaded May 17 00:40:51.454646 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 17 00:40:51.454678 systemd[1]: Stopped systemd-fsck-usr.service. May 17 00:40:51.454715 systemd[1]: Stopped systemd-journald.service. May 17 00:40:51.454743 systemd[1]: Starting systemd-journald.service... May 17 00:40:51.454769 systemd[1]: Starting systemd-modules-load.service... May 17 00:40:51.454798 systemd[1]: Starting systemd-network-generator.service... May 17 00:40:51.454829 systemd-journald[992]: Journal started May 17 00:40:51.454947 systemd-journald[992]: Runtime Journal (/run/log/journal/cd36fac3edfc5eb55f700f929dd2d8e7) is 8.0M, max 148.8M, 140.8M free. May 17 00:40:46.444000 audit: BPF prog-id=9 op=UNLOAD May 17 00:40:46.741000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 May 17 00:40:46.895000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 17 00:40:46.895000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 17 00:40:46.895000 audit: BPF prog-id=10 op=LOAD May 17 00:40:46.895000 audit: BPF prog-id=10 op=UNLOAD May 17 00:40:46.895000 audit: BPF prog-id=11 op=LOAD May 17 00:40:46.895000 audit: BPF prog-id=11 op=UNLOAD May 17 00:40:47.058000 audit[901]: AVC avc: denied { associate } for pid=901 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" May 17 00:40:47.058000 audit[901]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001878d2 a1=c00002ae40 a2=c000029100 a3=32 
items=0 ppid=884 pid=901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:40:47.058000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 May 17 00:40:47.094000 audit[901]: AVC avc: denied { associate } for pid=901 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 May 17 00:40:47.094000 audit[901]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001879a9 a2=1ed a3=0 items=2 ppid=884 pid=901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:40:47.094000 audit: CWD cwd="/" May 17 00:40:47.094000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:47.094000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:47.094000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 May 17 00:40:50.583000 audit: BPF prog-id=12 op=LOAD May 17 
00:40:50.583000 audit: BPF prog-id=3 op=UNLOAD May 17 00:40:50.590000 audit: BPF prog-id=13 op=LOAD May 17 00:40:50.591000 audit: BPF prog-id=14 op=LOAD May 17 00:40:50.591000 audit: BPF prog-id=4 op=UNLOAD May 17 00:40:50.591000 audit: BPF prog-id=5 op=UNLOAD May 17 00:40:50.592000 audit: BPF prog-id=15 op=LOAD May 17 00:40:50.592000 audit: BPF prog-id=12 op=UNLOAD May 17 00:40:50.592000 audit: BPF prog-id=16 op=LOAD May 17 00:40:50.592000 audit: BPF prog-id=17 op=LOAD May 17 00:40:50.592000 audit: BPF prog-id=13 op=UNLOAD May 17 00:40:50.592000 audit: BPF prog-id=14 op=UNLOAD May 17 00:40:50.594000 audit: BPF prog-id=18 op=LOAD May 17 00:40:50.594000 audit: BPF prog-id=15 op=UNLOAD May 17 00:40:50.594000 audit: BPF prog-id=19 op=LOAD May 17 00:40:50.594000 audit: BPF prog-id=20 op=LOAD May 17 00:40:50.594000 audit: BPF prog-id=16 op=UNLOAD May 17 00:40:50.594000 audit: BPF prog-id=17 op=UNLOAD May 17 00:40:50.596000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:50.603000 audit: BPF prog-id=18 op=UNLOAD May 17 00:40:50.615000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:50.637000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:50.637000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:40:51.366000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:51.388000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:51.402000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:51.402000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:51.403000 audit: BPF prog-id=21 op=LOAD May 17 00:40:51.403000 audit: BPF prog-id=22 op=LOAD May 17 00:40:51.403000 audit: BPF prog-id=23 op=LOAD May 17 00:40:51.403000 audit: BPF prog-id=19 op=UNLOAD May 17 00:40:51.404000 audit: BPF prog-id=20 op=UNLOAD May 17 00:40:51.447000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 May 17 00:40:51.447000 audit[992]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7fffd3c07610 a2=4000 a3=7fffd3c076ac items=0 ppid=1 pid=992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:40:51.447000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" May 17 00:40:47.053927 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2025-05-17T00:40:47Z" level=debug msg="common configuration parsed" 
base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 17 00:40:50.582987 systemd[1]: Queued start job for default target multi-user.target. May 17 00:40:51.465771 systemd[1]: Starting systemd-remount-fs.service... May 17 00:40:47.055014 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2025-05-17T00:40:47Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json May 17 00:40:50.583003 systemd[1]: Unnecessary job was removed for dev-sda6.device. May 17 00:40:47.055061 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2025-05-17T00:40:47Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json May 17 00:40:50.597311 systemd[1]: systemd-journald.service: Deactivated successfully. May 17 00:40:47.055129 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2025-05-17T00:40:47Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" May 17 00:40:47.055158 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2025-05-17T00:40:47Z" level=debug msg="skipped missing lower profile" missing profile=oem May 17 00:40:47.055229 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2025-05-17T00:40:47Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" May 17 00:40:47.055262 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2025-05-17T00:40:47Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= May 17 00:40:47.055702 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2025-05-17T00:40:47Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack May 17 00:40:47.055806 
/usr/lib/systemd/system-generators/torcx-generator[901]: time="2025-05-17T00:40:47Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json May 17 00:40:47.055839 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2025-05-17T00:40:47Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json May 17 00:40:47.058880 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2025-05-17T00:40:47Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 May 17 00:40:47.058972 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2025-05-17T00:40:47Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl May 17 00:40:47.059015 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2025-05-17T00:40:47Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.7: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.7 May 17 00:40:47.059052 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2025-05-17T00:40:47Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store May 17 00:40:47.059095 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2025-05-17T00:40:47Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.7: no such file or directory" path=/var/lib/torcx/store/3510.3.7 May 17 00:40:47.059129 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2025-05-17T00:40:47Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store May 17 00:40:49.880286 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2025-05-17T00:40:49Z" level=debug msg="image 
unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 17 00:40:49.880658 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2025-05-17T00:40:49Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 17 00:40:49.880906 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2025-05-17T00:40:49Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 17 00:40:49.881888 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2025-05-17T00:40:49Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 17 00:40:49.881984 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2025-05-17T00:40:49Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= May 17 00:40:49.882099 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2025-05-17T00:40:49Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx May 17 00:40:51.479718 systemd[1]: Starting systemd-udev-trigger.service... May 17 00:40:51.499245 systemd[1]: verity-setup.service: Deactivated successfully. 
May 17 00:40:51.499373 systemd[1]: Stopped verity-setup.service. May 17 00:40:51.505000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:51.519662 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:40:51.530683 systemd[1]: Started systemd-journald.service. May 17 00:40:51.537000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:51.540385 systemd[1]: Mounted dev-hugepages.mount. May 17 00:40:51.548058 systemd[1]: Mounted dev-mqueue.mount. May 17 00:40:51.556066 systemd[1]: Mounted media.mount. May 17 00:40:51.564067 systemd[1]: Mounted sys-kernel-debug.mount. May 17 00:40:51.573038 systemd[1]: Mounted sys-kernel-tracing.mount. May 17 00:40:51.582011 systemd[1]: Mounted tmp.mount. May 17 00:40:51.589352 systemd[1]: Finished flatcar-tmpfiles.service. May 17 00:40:51.596000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:51.598330 systemd[1]: Finished kmod-static-nodes.service. May 17 00:40:51.605000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:51.607307 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 17 00:40:51.607568 systemd[1]: Finished modprobe@configfs.service. 
May 17 00:40:51.615000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:51.615000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:51.617439 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:40:51.617745 systemd[1]: Finished modprobe@dm_mod.service. May 17 00:40:51.624000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:51.624000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:51.626315 systemd[1]: modprobe@drm.service: Deactivated successfully. May 17 00:40:51.626566 systemd[1]: Finished modprobe@drm.service. May 17 00:40:51.633000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:51.633000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:51.635370 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:40:51.635619 systemd[1]: Finished modprobe@efi_pstore.service. 
May 17 00:40:51.642000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:51.642000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:51.644336 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 17 00:40:51.644605 systemd[1]: Finished modprobe@fuse.service. May 17 00:40:51.652000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:51.652000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:51.654325 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:40:51.654594 systemd[1]: Finished modprobe@loop.service. May 17 00:40:51.662000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:51.662000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:51.664345 systemd[1]: Finished systemd-modules-load.service. 
May 17 00:40:51.671000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:51.673330 systemd[1]: Finished systemd-network-generator.service. May 17 00:40:51.680000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:51.682272 systemd[1]: Finished systemd-remount-fs.service. May 17 00:40:51.690000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:51.692335 systemd[1]: Finished systemd-udev-trigger.service. May 17 00:40:51.699000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:51.701708 systemd[1]: Reached target network-pre.target. May 17 00:40:51.711506 systemd[1]: Mounting sys-fs-fuse-connections.mount... May 17 00:40:51.721552 systemd[1]: Mounting sys-kernel-config.mount... May 17 00:40:51.729809 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 17 00:40:51.733173 systemd[1]: Starting systemd-hwdb-update.service... May 17 00:40:51.742973 systemd[1]: Starting systemd-journal-flush.service... May 17 00:40:51.751871 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:40:51.754116 systemd[1]: Starting systemd-random-seed.service... 
May 17 00:40:51.760864 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 17 00:40:51.762927 systemd[1]: Starting systemd-sysctl.service... May 17 00:40:51.768769 systemd-journald[992]: Time spent on flushing to /var/log/journal/cd36fac3edfc5eb55f700f929dd2d8e7 is 59.666ms for 1167 entries. May 17 00:40:51.768769 systemd-journald[992]: System Journal (/var/log/journal/cd36fac3edfc5eb55f700f929dd2d8e7) is 8.0M, max 584.8M, 576.8M free. May 17 00:40:51.875979 systemd-journald[992]: Received client request to flush runtime journal. May 17 00:40:51.827000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:51.836000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:51.779242 systemd[1]: Starting systemd-sysusers.service... May 17 00:40:51.790047 systemd[1]: Starting systemd-udev-settle.service... May 17 00:40:51.802111 systemd[1]: Mounted sys-fs-fuse-connections.mount. May 17 00:40:51.877191 udevadm[1006]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. May 17 00:40:51.810973 systemd[1]: Mounted sys-kernel-config.mount. May 17 00:40:51.820173 systemd[1]: Finished systemd-random-seed.service. May 17 00:40:51.829334 systemd[1]: Finished systemd-sysctl.service. May 17 00:40:51.841416 systemd[1]: Reached target first-boot-complete.target. May 17 00:40:51.873129 systemd[1]: Finished systemd-sysusers.service. 
May 17 00:40:51.880000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:51.882380 systemd[1]: Finished systemd-journal-flush.service. May 17 00:40:51.890000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:52.514000 systemd[1]: Finished systemd-hwdb-update.service. May 17 00:40:52.550971 kernel: kauditd_printk_skb: 58 callbacks suppressed May 17 00:40:52.551111 kernel: audit: type=1130 audit(1747442452.521:144): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:52.521000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:52.526000 audit: BPF prog-id=24 op=LOAD May 17 00:40:52.552353 systemd[1]: Starting systemd-udevd.service... May 17 00:40:52.558604 kernel: audit: type=1334 audit(1747442452.526:145): prog-id=24 op=LOAD May 17 00:40:52.558801 kernel: audit: type=1334 audit(1747442452.549:146): prog-id=25 op=LOAD May 17 00:40:52.558846 kernel: audit: type=1334 audit(1747442452.549:147): prog-id=7 op=UNLOAD May 17 00:40:52.558888 kernel: audit: type=1334 audit(1747442452.549:148): prog-id=8 op=UNLOAD May 17 00:40:52.549000 audit: BPF prog-id=25 op=LOAD May 17 00:40:52.549000 audit: BPF prog-id=7 op=UNLOAD May 17 00:40:52.549000 audit: BPF prog-id=8 op=UNLOAD May 17 00:40:52.601072 systemd-udevd[1009]: Using default interface naming scheme 'v252'. 
May 17 00:40:52.660783 systemd[1]: Started systemd-udevd.service. May 17 00:40:52.691665 kernel: audit: type=1130 audit(1747442452.667:149): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:52.667000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:52.696000 audit: BPF prog-id=26 op=LOAD May 17 00:40:52.701592 systemd[1]: Starting systemd-networkd.service... May 17 00:40:52.708655 kernel: audit: type=1334 audit(1747442452.696:150): prog-id=26 op=LOAD May 17 00:40:52.721000 audit: BPF prog-id=27 op=LOAD May 17 00:40:52.730763 kernel: audit: type=1334 audit(1747442452.721:151): prog-id=27 op=LOAD May 17 00:40:52.732052 systemd[1]: Starting systemd-userdbd.service... May 17 00:40:52.729000 audit: BPF prog-id=28 op=LOAD May 17 00:40:52.741654 kernel: audit: type=1334 audit(1747442452.729:152): prog-id=28 op=LOAD May 17 00:40:52.729000 audit: BPF prog-id=29 op=LOAD May 17 00:40:52.751683 kernel: audit: type=1334 audit(1747442452.729:153): prog-id=29 op=LOAD May 17 00:40:52.795461 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. May 17 00:40:52.817040 systemd[1]: Started systemd-userdbd.service. May 17 00:40:52.824000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:52.960559 systemd-networkd[1020]: lo: Link UP May 17 00:40:52.960577 systemd-networkd[1020]: lo: Gained carrier May 17 00:40:52.961551 systemd-networkd[1020]: Enumeration completed May 17 00:40:52.961799 systemd[1]: Started systemd-networkd.service. 
May 17 00:40:52.962778 systemd-networkd[1020]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 00:40:52.965580 systemd-networkd[1020]: eth0: Link UP May 17 00:40:52.965596 systemd-networkd[1020]: eth0: Gained carrier May 17 00:40:52.967673 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 May 17 00:40:52.974000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:52.982461 systemd-networkd[1020]: eth0: Overlong DHCP hostname received, shortened from 'ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2.c.flatcar-212911.internal' to 'ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2' May 17 00:40:52.982503 systemd-networkd[1020]: eth0: DHCPv4 address 10.128.0.28/32, gateway 10.128.0.1 acquired from 169.254.169.254 May 17 00:40:53.034000 audit[1031]: AVC avc: denied { confidentiality } for pid=1031 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 May 17 00:40:53.034000 audit[1031]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=557c0532be90 a1=338ac a2=7fe1ec165bc5 a3=5 items=110 ppid=1009 pid=1031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:40:53.034000 audit: CWD cwd="/" May 17 00:40:53.034000 audit: PATH item=0 name=(null) inode=1042 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:53.034000 audit: PATH item=1 name=(null) inode=13824 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:53.034000 audit: PATH item=2 name=(null) inode=13824 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:53.034000 audit: PATH item=3 name=(null) inode=13825 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:53.034000 audit: PATH item=4 name=(null) inode=13824 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:53.034000 audit: PATH item=5 name=(null) inode=13826 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:53.034000 audit: PATH item=6 name=(null) inode=13824 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:53.034000 audit: PATH item=7 name=(null) inode=13827 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:53.034000 audit: PATH item=8 name=(null) inode=13827 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:53.034000 audit: PATH item=9 name=(null) inode=13828 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:53.034000 audit: PATH item=10 name=(null) inode=13827 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 
00:40:53.034000 audit: PATH item=11 name=(null) inode=13829 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:53.034000 audit: PATH item=12 name=(null) inode=13827 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:53.034000 audit: PATH item=13 name=(null) inode=13830 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:53.034000 audit: PATH item=14 name=(null) inode=13827 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:53.034000 audit: PATH item=15 name=(null) inode=13831 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:53.034000 audit: PATH item=16 name=(null) inode=13827 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:53.034000 audit: PATH item=17 name=(null) inode=13832 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:53.034000 audit: PATH item=18 name=(null) inode=13824 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:53.034000 audit: PATH item=19 name=(null) inode=13833 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:53.034000 audit: PATH item=20 
name=(null) inode=13833 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:53.034000 audit: PATH item=21 name=(null) inode=13834 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:53.034000 audit: PATH item=22 name=(null) inode=13833 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:53.034000 audit: PATH item=23 name=(null) inode=13835 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:53.034000 audit: PATH item=24 name=(null) inode=13833 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:53.034000 audit: PATH item=25 name=(null) inode=13836 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:53.034000 audit: PATH item=26 name=(null) inode=13833 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:53.034000 audit: PATH item=27 name=(null) inode=13837 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:53.034000 audit: PATH item=28 name=(null) inode=13833 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:53.034000 audit: PATH item=29 name=(null) inode=13838 dev=00:0b 
mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:53.034000 audit: PATH item=30 name=(null) inode=13824 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:53.034000 audit: PATH item=31 name=(null) inode=13839 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:53.034000 audit: PATH item=32 name=(null) inode=13839 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:53.034000 audit: PATH item=33 name=(null) inode=13840 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:53.034000 audit: PATH item=34 name=(null) inode=13839 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:53.034000 audit: PATH item=35 name=(null) inode=13841 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:53.034000 audit: PATH item=36 name=(null) inode=13839 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:53.034000 audit: PATH item=37 name=(null) inode=13842 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:53.034000 audit: PATH item=38 name=(null) inode=13839 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:53.034000 audit: PATH item=39 name=(null) inode=13843 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:53.034000 audit: PATH item=40 name=(null) inode=13839 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:53.034000 audit: PATH item=41 name=(null) inode=13844 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:53.034000 audit: PATH item=42 name=(null) inode=13824 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:53.034000 audit: PATH item=43 name=(null) inode=13845 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:53.034000 audit: PATH item=44 name=(null) inode=13845 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:53.034000 audit: PATH item=45 name=(null) inode=13846 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:53.034000 audit: PATH item=46 name=(null) inode=13845 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:53.034000 audit: PATH item=47 name=(null) inode=13847 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:53.034000 audit: PATH item=48 name=(null) inode=13845 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:53.034000 audit: PATH item=49 name=(null) inode=13848 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:53.034000 audit: PATH item=50 name=(null) inode=13845 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:53.034000 audit: PATH item=51 name=(null) inode=13849 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:53.034000 audit: PATH item=52 name=(null) inode=13845 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:53.034000 audit: PATH item=53 name=(null) inode=13850 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:53.034000 audit: PATH item=54 name=(null) inode=1042 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:53.034000 audit: PATH item=55 name=(null) inode=13851 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:53.034000 audit: PATH item=56 name=(null) inode=13851 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 May 17 00:40:53.034000 audit: PATH item=57 name=(null) inode=13852 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:53.034000 audit: PATH item=58 name=(null) inode=13851 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:53.034000 audit: PATH item=59 name=(null) inode=13853 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:53.034000 audit: PATH item=60 name=(null) inode=13851 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:53.034000 audit: PATH item=61 name=(null) inode=13854 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:53.034000 audit: PATH item=62 name=(null) inode=13854 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:53.034000 audit: PATH item=63 name=(null) inode=13855 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:53.034000 audit: PATH item=64 name=(null) inode=13854 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:53.034000 audit: PATH item=65 name=(null) inode=13856 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 
00:40:53.034000 audit: PATH item=66 name=(null) inode=13854 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:53.034000 audit: PATH item=67 name=(null) inode=13857 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:53.034000 audit: PATH item=68 name=(null) inode=13854 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:53.034000 audit: PATH item=69 name=(null) inode=13858 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:53.034000 audit: PATH item=70 name=(null) inode=13854 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:53.034000 audit: PATH item=71 name=(null) inode=13859 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:53.034000 audit: PATH item=72 name=(null) inode=13851 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:53.034000 audit: PATH item=73 name=(null) inode=13860 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:53.034000 audit: PATH item=74 name=(null) inode=13860 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:53.034000 audit: PATH item=75 
name=(null) inode=13861 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:53.034000 audit: PATH item=76 name=(null) inode=13860 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:53.034000 audit: PATH item=77 name=(null) inode=13862 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:53.034000 audit: PATH item=78 name=(null) inode=13860 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:53.034000 audit: PATH item=79 name=(null) inode=13863 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:53.034000 audit: PATH item=80 name=(null) inode=13860 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:53.034000 audit: PATH item=81 name=(null) inode=13864 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:53.034000 audit: PATH item=82 name=(null) inode=13860 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:53.034000 audit: PATH item=83 name=(null) inode=13865 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:53.034000 audit: PATH item=84 name=(null) inode=13851 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:53.034000 audit: PATH item=85 name=(null) inode=13866 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:53.034000 audit: PATH item=86 name=(null) inode=13866 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:53.034000 audit: PATH item=87 name=(null) inode=13867 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:53.034000 audit: PATH item=88 name=(null) inode=13866 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:53.034000 audit: PATH item=89 name=(null) inode=13868 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:53.034000 audit: PATH item=90 name=(null) inode=13866 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:53.034000 audit: PATH item=91 name=(null) inode=13869 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:53.034000 audit: PATH item=92 name=(null) inode=13866 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:53.034000 audit: PATH item=93 name=(null) inode=13870 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:53.034000 audit: PATH item=94 name=(null) inode=13866 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:53.034000 audit: PATH item=95 name=(null) inode=13871 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:53.034000 audit: PATH item=96 name=(null) inode=13851 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:53.034000 audit: PATH item=97 name=(null) inode=13872 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:53.034000 audit: PATH item=98 name=(null) inode=13872 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:53.061977 kernel: ACPI: button: Power Button [PWRF] May 17 00:40:53.062034 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3 May 17 00:40:53.034000 audit: PATH item=99 name=(null) inode=13873 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:53.034000 audit: PATH item=100 name=(null) inode=13872 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:53.034000 audit: PATH item=101 name=(null) inode=13874 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 May 17 00:40:53.034000 audit: PATH item=102 name=(null) inode=13872 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:53.034000 audit: PATH item=103 name=(null) inode=13875 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:53.034000 audit: PATH item=104 name=(null) inode=13872 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:53.034000 audit: PATH item=105 name=(null) inode=13876 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:53.034000 audit: PATH item=106 name=(null) inode=13872 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:53.034000 audit: PATH item=107 name=(null) inode=13877 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:53.034000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:53.034000 audit: PATH item=109 name=(null) inode=13878 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:53.034000 audit: PROCTITLE proctitle="(udev-worker)" May 17 00:40:53.072867 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. 
May 17 00:40:53.086666 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr May 17 00:40:53.095489 kernel: ACPI: button: Sleep Button [SLPF] May 17 00:40:53.112583 kernel: EDAC MC: Ver: 3.0.0 May 17 00:40:53.145700 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 May 17 00:40:53.179656 kernel: mousedev: PS/2 mouse device common for all mice May 17 00:40:53.197274 systemd[1]: Finished systemd-udev-settle.service. May 17 00:40:53.204000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:53.207816 systemd[1]: Starting lvm2-activation-early.service... May 17 00:40:53.238026 lvm[1043]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 17 00:40:53.271203 systemd[1]: Finished lvm2-activation-early.service. May 17 00:40:53.278000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:53.280072 systemd[1]: Reached target cryptsetup.target. May 17 00:40:53.290534 systemd[1]: Starting lvm2-activation.service... May 17 00:40:53.296085 lvm[1044]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 17 00:40:53.326194 systemd[1]: Finished lvm2-activation.service. May 17 00:40:53.333000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:53.335096 systemd[1]: Reached target local-fs-pre.target. 
May 17 00:40:53.343861 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 17 00:40:53.343921 systemd[1]: Reached target local-fs.target. May 17 00:40:53.352832 systemd[1]: Reached target machines.target. May 17 00:40:53.362615 systemd[1]: Starting ldconfig.service... May 17 00:40:53.371803 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 17 00:40:53.371916 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:40:53.374230 systemd[1]: Starting systemd-boot-update.service... May 17 00:40:53.383605 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... May 17 00:40:53.393126 systemd[1]: Starting systemd-machine-id-commit.service... May 17 00:40:53.404542 systemd[1]: Starting systemd-sysext.service... May 17 00:40:53.405617 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1046 (bootctl) May 17 00:40:53.408776 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... May 17 00:40:53.424000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:53.426077 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. May 17 00:40:53.438601 systemd[1]: Unmounting usr-share-oem.mount... May 17 00:40:53.452438 systemd[1]: usr-share-oem.mount: Deactivated successfully. May 17 00:40:53.452783 systemd[1]: Unmounted usr-share-oem.mount. 
May 17 00:40:53.479686 kernel: loop0: detected capacity change from 0 to 224512 May 17 00:40:53.584860 systemd-fsck[1055]: fsck.fat 4.2 (2021-01-31) May 17 00:40:53.584860 systemd-fsck[1055]: /dev/sda1: 790 files, 120726/258078 clusters May 17 00:40:53.589764 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. May 17 00:40:53.599000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:53.603053 systemd[1]: Mounting boot.mount... May 17 00:40:53.624006 systemd[1]: Mounted boot.mount. May 17 00:40:53.658416 systemd[1]: Finished systemd-boot-update.service. May 17 00:40:53.665000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:54.000672 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 17 00:40:54.004883 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 17 00:40:54.006104 systemd[1]: Finished systemd-machine-id-commit.service. May 17 00:40:54.013000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:54.031688 kernel: loop1: detected capacity change from 0 to 224512 May 17 00:40:54.056989 ldconfig[1045]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 17 00:40:54.063289 systemd[1]: Finished ldconfig.service. 
May 17 00:40:54.069000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:54.072664 (sd-sysext)[1061]: Using extensions 'kubernetes'. May 17 00:40:54.073406 (sd-sysext)[1061]: Merged extensions into '/usr'. May 17 00:40:54.098962 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:40:54.101417 systemd[1]: Mounting usr-share-oem.mount... May 17 00:40:54.109063 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 17 00:40:54.111440 systemd[1]: Starting modprobe@dm_mod.service... May 17 00:40:54.121157 systemd[1]: Starting modprobe@efi_pstore.service... May 17 00:40:54.131110 systemd[1]: Starting modprobe@loop.service... May 17 00:40:54.138884 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 17 00:40:54.139260 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:40:54.139570 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:40:54.144527 systemd[1]: Mounted usr-share-oem.mount. May 17 00:40:54.152611 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:40:54.152955 systemd[1]: Finished modprobe@dm_mod.service. May 17 00:40:54.160000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:40:54.160000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:54.162734 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:40:54.163007 systemd[1]: Finished modprobe@efi_pstore.service. May 17 00:40:54.170000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:54.170000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:54.172684 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:40:54.172946 systemd[1]: Finished modprobe@loop.service. May 17 00:40:54.180000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:54.180000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:54.182810 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:40:54.183087 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 17 00:40:54.184998 systemd[1]: Finished systemd-sysext.service. 
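The `(sd-sysext)` messages above show systemd-sysext merging the 'kubernetes' extension into '/usr'. Conceptually, extension images are stacked as layers above the base /usr tree, with upper layers shadowing lower ones on path conflicts. A rough sketch of that layering rule (this is not systemd code, and the file names below are invented for illustration):

```python
# Base /usr contents and one hypothetical extension layer.
base_usr = {"bin/ls": "base", "lib/libc.so.6": "base"}
extensions = [
    {"name": "kubernetes", "files": {"bin/kubectl": "kubernetes"}},
]

# Merge in order; later (upper) layers win on conflicting paths.
merged = dict(base_usr)
for ext in extensions:
    merged.update(ext["files"])

print(sorted(merged))  # base files plus extension-provided binaries
```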
May 17 00:40:54.192000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:54.195911 systemd[1]: Starting ensure-sysext.service... May 17 00:40:54.204781 systemd[1]: Starting systemd-tmpfiles-setup.service... May 17 00:40:54.218350 systemd[1]: Reloading. May 17 00:40:54.249383 systemd-tmpfiles[1068]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. May 17 00:40:54.262139 systemd-tmpfiles[1068]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 17 00:40:54.279123 systemd-tmpfiles[1068]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 17 00:40:54.327007 /usr/lib/systemd/system-generators/torcx-generator[1088]: time="2025-05-17T00:40:54Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 17 00:40:54.327070 /usr/lib/systemd/system-generators/torcx-generator[1088]: time="2025-05-17T00:40:54Z" level=info msg="torcx already run" May 17 00:40:54.539301 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 17 00:40:54.539701 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 17 00:40:54.591026 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
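The systemd-tmpfiles warnings above ("Duplicate line for path ..., ignoring") reflect its first-wins rule: the first entry parsed for a path is kept and later entries for the same path are ignored. A sketch of that behavior (the config lines and file names here are invented examples, not the actual contents of the files in the log):

```python
# Hypothetical tmpfiles.d entries, in parse order.
lines = [
    ("tmp.conf", "d /run/lock 0755 root root -"),
    ("legacy.conf", "d /run/lock 1777 root root -"),  # duplicate, ignored
]

seen = {}
for source, line in lines:
    path = line.split()[1]
    if path in seen:
        print(f'{source}: Duplicate line for path "{path}", ignoring.')
    else:
        seen[path] = (source, line)  # first definition wins
```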
May 17 00:40:54.614776 systemd-networkd[1020]: eth0: Gained IPv6LL May 17 00:40:54.675000 audit: BPF prog-id=30 op=LOAD May 17 00:40:54.675000 audit: BPF prog-id=27 op=UNLOAD May 17 00:40:54.675000 audit: BPF prog-id=31 op=LOAD May 17 00:40:54.675000 audit: BPF prog-id=32 op=LOAD May 17 00:40:54.675000 audit: BPF prog-id=28 op=UNLOAD May 17 00:40:54.675000 audit: BPF prog-id=29 op=UNLOAD May 17 00:40:54.676000 audit: BPF prog-id=33 op=LOAD May 17 00:40:54.677000 audit: BPF prog-id=34 op=LOAD May 17 00:40:54.677000 audit: BPF prog-id=24 op=UNLOAD May 17 00:40:54.677000 audit: BPF prog-id=25 op=UNLOAD May 17 00:40:54.682000 audit: BPF prog-id=35 op=LOAD May 17 00:40:54.682000 audit: BPF prog-id=26 op=UNLOAD May 17 00:40:54.683000 audit: BPF prog-id=36 op=LOAD May 17 00:40:54.683000 audit: BPF prog-id=21 op=UNLOAD May 17 00:40:54.684000 audit: BPF prog-id=37 op=LOAD May 17 00:40:54.684000 audit: BPF prog-id=38 op=LOAD May 17 00:40:54.684000 audit: BPF prog-id=22 op=UNLOAD May 17 00:40:54.684000 audit: BPF prog-id=23 op=UNLOAD May 17 00:40:54.694090 systemd[1]: Finished systemd-tmpfiles-setup.service. May 17 00:40:54.701000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:54.710107 systemd[1]: Starting audit-rules.service... May 17 00:40:54.719976 systemd[1]: Starting clean-ca-certificates.service... May 17 00:40:54.731288 systemd[1]: Starting oem-gce-enable-oslogin.service... May 17 00:40:54.742651 systemd[1]: Starting systemd-journal-catalog-update.service... May 17 00:40:54.750000 audit: BPF prog-id=39 op=LOAD May 17 00:40:54.754529 systemd[1]: Starting systemd-resolved.service... May 17 00:40:54.761000 audit: BPF prog-id=40 op=LOAD May 17 00:40:54.765600 systemd[1]: Starting systemd-timesyncd.service... May 17 00:40:54.775211 systemd[1]: Starting systemd-update-utmp.service... 
May 17 00:40:54.785576 systemd[1]: Finished clean-ca-certificates.service. May 17 00:40:54.789000 audit[1155]: SYSTEM_BOOT pid=1155 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' May 17 00:40:54.792000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:54.794850 systemd[1]: oem-gce-enable-oslogin.service: Deactivated successfully. May 17 00:40:54.795130 systemd[1]: Finished oem-gce-enable-oslogin.service. May 17 00:40:54.802000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=oem-gce-enable-oslogin comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:54.802000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=oem-gce-enable-oslogin comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:54.811358 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:40:54.812035 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. 
May 17 00:40:54.811000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 May 17 00:40:54.811000 audit[1162]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc62aa5370 a2=420 a3=0 items=0 ppid=1132 pid=1162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:40:54.811000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 May 17 00:40:54.814203 augenrules[1162]: No rules May 17 00:40:54.816792 systemd[1]: Starting modprobe@dm_mod.service... May 17 00:40:54.826187 systemd[1]: Starting modprobe@efi_pstore.service... May 17 00:40:54.836398 systemd[1]: Starting modprobe@loop.service... May 17 00:40:54.846648 systemd[1]: Starting oem-gce-enable-oslogin.service... May 17 00:40:54.855895 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 17 00:40:54.856332 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:40:54.856733 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 17 00:40:54.856925 enable-oslogin[1170]: /etc/pam.d/sshd already exists. Not enabling OS Login May 17 00:40:54.857086 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:40:54.861781 systemd[1]: Finished audit-rules.service. May 17 00:40:54.869902 systemd[1]: Finished systemd-journal-catalog-update.service. May 17 00:40:54.880985 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
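The audit PROCTITLE record above carries the invoking command line, hex-encoded with NUL bytes separating arguments. Decoding it recovers the auditctl invocation that loaded the (empty) rules file mentioned by augenrules:

```python
# PROCTITLE value copied from the audit record above.
proctitle = ("2F7362696E2F617564697463746C002D52002F6574632F61756469"
             "742F61756469742E72756C6573")

# argv elements are NUL-separated in the raw process title.
argv = [a.decode() for a in bytes.fromhex(proctitle).split(b"\x00")]
print(argv)
```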
May 17 00:40:54.881248 systemd[1]: Finished modprobe@dm_mod.service. May 17 00:40:54.890877 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:40:54.891147 systemd[1]: Finished modprobe@efi_pstore.service. May 17 00:40:54.900887 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:40:54.901151 systemd[1]: Finished modprobe@loop.service. May 17 00:40:54.915107 systemd[1]: oem-gce-enable-oslogin.service: Deactivated successfully. May 17 00:40:54.915413 systemd[1]: Finished oem-gce-enable-oslogin.service. May 17 00:40:54.924963 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:40:54.925302 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 17 00:40:54.928349 systemd[1]: Starting systemd-update-done.service... May 17 00:40:54.937029 systemd[1]: Finished systemd-update-utmp.service. May 17 00:40:54.946548 systemd[1]: Finished systemd-update-done.service. May 17 00:40:54.961898 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:40:54.962426 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 17 00:40:54.967718 systemd[1]: Starting modprobe@dm_mod.service... May 17 00:40:54.969940 systemd-timesyncd[1151]: Contacted time server 169.254.169.254:123 (169.254.169.254). May 17 00:40:54.970049 systemd-timesyncd[1151]: Initial clock synchronization to Sat 2025-05-17 00:40:55.249807 UTC. May 17 00:40:54.973263 systemd-resolved[1148]: Positive Trust Anchors: May 17 00:40:54.973286 systemd-resolved[1148]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 17 00:40:54.973359 systemd-resolved[1148]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 17 00:40:54.976856 systemd[1]: Starting modprobe@efi_pstore.service... May 17 00:40:54.987177 systemd[1]: Starting modprobe@loop.service... May 17 00:40:54.997136 systemd[1]: Starting oem-gce-enable-oslogin.service... May 17 00:40:55.004166 enable-oslogin[1176]: /etc/pam.d/sshd already exists. Not enabling OS Login May 17 00:40:55.005920 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 17 00:40:55.006275 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:40:55.006528 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 17 00:40:55.006828 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:40:55.009200 systemd[1]: Started systemd-timesyncd.service. May 17 00:40:55.015544 systemd-resolved[1148]: Defaulting to hostname 'linux'. May 17 00:40:55.020070 systemd[1]: Started systemd-resolved.service. May 17 00:40:55.029641 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:40:55.029948 systemd[1]: Finished modprobe@dm_mod.service. 
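The positive trust anchor logged by systemd-resolved above is a DS record for the root zone (the well-known KSK-2017 anchor). Its fields are owner, class, type, key tag, algorithm number, digest type, and digest; a quick sketch of pulling those apart (the algorithm/digest lookup tables are small subsets of the IANA registries):

```python
record = (". IN DS 20326 8 2 "
          "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")

owner, rrclass, rrtype, key_tag, algorithm, digest_type, digest = record.split()

algorithms = {8: "RSASHA256"}   # IANA DNSSEC algorithm numbers (subset)
digest_types = {2: "SHA-256"}   # IANA DS digest types (subset)
print(key_tag, algorithms[int(algorithm)], digest_types[int(digest_type)])
```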
May 17 00:40:55.039577 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:40:55.039913 systemd[1]: Finished modprobe@efi_pstore.service. May 17 00:40:55.050729 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:40:55.051020 systemd[1]: Finished modprobe@loop.service. May 17 00:40:55.060841 systemd[1]: oem-gce-enable-oslogin.service: Deactivated successfully. May 17 00:40:55.061175 systemd[1]: Finished oem-gce-enable-oslogin.service. May 17 00:40:55.076562 systemd[1]: Reached target network.target. May 17 00:40:55.086143 systemd[1]: Reached target nss-lookup.target. May 17 00:40:55.096190 systemd[1]: Reached target time-set.target. May 17 00:40:55.105140 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:40:55.105804 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 17 00:40:55.108446 systemd[1]: Starting modprobe@dm_mod.service... May 17 00:40:55.118058 systemd[1]: Starting modprobe@drm.service... May 17 00:40:55.128442 systemd[1]: Starting modprobe@efi_pstore.service... May 17 00:40:55.138451 systemd[1]: Starting modprobe@loop.service... May 17 00:40:55.148918 systemd[1]: Starting oem-gce-enable-oslogin.service... May 17 00:40:55.153629 enable-oslogin[1181]: /etc/pam.d/sshd already exists. Not enabling OS Login May 17 00:40:55.158994 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 17 00:40:55.159382 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:40:55.162016 systemd[1]: Starting systemd-networkd-wait-online.service... May 17 00:40:55.171030 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
May 17 00:40:55.171344 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:40:55.175299 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:40:55.175570 systemd[1]: Finished modprobe@dm_mod.service. May 17 00:40:55.185832 systemd[1]: modprobe@drm.service: Deactivated successfully. May 17 00:40:55.186117 systemd[1]: Finished modprobe@drm.service. May 17 00:40:55.195631 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:40:55.195944 systemd[1]: Finished modprobe@efi_pstore.service. May 17 00:40:55.205732 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:40:55.206017 systemd[1]: Finished modprobe@loop.service. May 17 00:40:55.215777 systemd[1]: oem-gce-enable-oslogin.service: Deactivated successfully. May 17 00:40:55.216126 systemd[1]: Finished oem-gce-enable-oslogin.service. May 17 00:40:55.225827 systemd[1]: Finished systemd-networkd-wait-online.service. May 17 00:40:55.238120 systemd[1]: Reached target network-online.target. May 17 00:40:55.247070 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:40:55.247146 systemd[1]: Reached target sysinit.target. May 17 00:40:55.257089 systemd[1]: Started motdgen.path. May 17 00:40:55.264974 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. May 17 00:40:55.276228 systemd[1]: Started logrotate.timer. May 17 00:40:55.284125 systemd[1]: Started mdadm.timer. May 17 00:40:55.291958 systemd[1]: Started systemd-tmpfiles-clean.timer. May 17 00:40:55.300953 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 17 00:40:55.301042 systemd[1]: Reached target paths.target. May 17 00:40:55.308925 systemd[1]: Reached target timers.target. May 17 00:40:55.317817 systemd[1]: Listening on dbus.socket. 
May 17 00:40:55.327556 systemd[1]: Starting docker.socket... May 17 00:40:55.341058 systemd[1]: Listening on sshd.socket. May 17 00:40:55.349084 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:40:55.349212 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 17 00:40:55.350394 systemd[1]: Finished ensure-sysext.service. May 17 00:40:55.360213 systemd[1]: Listening on docker.socket. May 17 00:40:55.369209 systemd[1]: Reached target sockets.target. May 17 00:40:55.377879 systemd[1]: Reached target basic.target. May 17 00:40:55.384958 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. May 17 00:40:55.385009 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. May 17 00:40:55.386934 systemd[1]: Starting containerd.service... May 17 00:40:55.396646 systemd[1]: Starting coreos-metadata-sshkeys@core.service... May 17 00:40:55.407887 systemd[1]: Starting dbus.service... May 17 00:40:55.415853 systemd[1]: Starting enable-oem-cloudinit.service... May 17 00:40:55.424715 systemd[1]: Starting extend-filesystems.service... May 17 00:40:55.432966 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). May 17 00:40:55.435990 systemd[1]: Starting kubelet.service... May 17 00:40:55.445269 systemd[1]: Starting motdgen.service... May 17 00:40:55.455798 systemd[1]: Starting oem-gce.service... May 17 00:40:55.465635 systemd[1]: Starting prepare-helm.service... May 17 00:40:55.476250 systemd[1]: Starting ssh-key-proc-cmdline.service... May 17 00:40:55.480694 jq[1188]: false May 17 00:40:55.486194 systemd[1]: Starting sshd-keygen.service... 
May 17 00:40:55.500828 systemd[1]: Starting systemd-logind.service... May 17 00:40:55.507854 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:40:55.508003 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionSecurity=!tpm2). May 17 00:40:55.508936 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 17 00:40:55.510581 systemd[1]: Starting update-engine.service... May 17 00:40:55.521960 systemd[1]: Starting update-ssh-keys-after-ignition.service... May 17 00:40:55.537162 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 17 00:40:55.537557 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. May 17 00:40:55.546759 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 17 00:40:55.547146 systemd[1]: Finished ssh-key-proc-cmdline.service. 
May 17 00:40:55.566247 jq[1211]: true May 17 00:40:55.586002 extend-filesystems[1189]: Found loop1 May 17 00:40:55.592190 extend-filesystems[1189]: Found sda May 17 00:40:55.592190 extend-filesystems[1189]: Found sda1 May 17 00:40:55.592190 extend-filesystems[1189]: Found sda2 May 17 00:40:55.592190 extend-filesystems[1189]: Found sda3 May 17 00:40:55.640962 mkfs.ext4[1217]: mke2fs 1.46.5 (30-Dec-2021) May 17 00:40:55.640962 mkfs.ext4[1217]: Discarding device blocks: done May 17 00:40:55.640962 mkfs.ext4[1217]: Creating filesystem with 262144 4k blocks and 65536 inodes May 17 00:40:55.640962 mkfs.ext4[1217]: Filesystem UUID: b573e309-762c-45b7-a4b1-8e079a25a785 May 17 00:40:55.640962 mkfs.ext4[1217]: Superblock backups stored on blocks: May 17 00:40:55.640962 mkfs.ext4[1217]: 32768, 98304, 163840, 229376 May 17 00:40:55.640962 mkfs.ext4[1217]: Allocating group tables: done May 17 00:40:55.640962 mkfs.ext4[1217]: Writing inode tables: done May 17 00:40:55.640962 mkfs.ext4[1217]: Creating journal (8192 blocks): done May 17 00:40:55.640962 mkfs.ext4[1217]: Writing superblocks and filesystem accounting information: done May 17 00:40:55.641588 extend-filesystems[1189]: Found usr May 17 00:40:55.641588 extend-filesystems[1189]: Found sda4 May 17 00:40:55.641588 extend-filesystems[1189]: Found sda6 May 17 00:40:55.641588 extend-filesystems[1189]: Found sda7 May 17 00:40:55.641588 extend-filesystems[1189]: Found sda9 May 17 00:40:55.641588 extend-filesystems[1189]: Checking size of /dev/sda9 May 17 00:40:55.608617 systemd[1]: motdgen.service: Deactivated successfully. May 17 00:40:55.687592 jq[1219]: true May 17 00:40:55.609147 systemd[1]: Finished motdgen.service. 
May 17 00:40:55.692242 tar[1215]: linux-amd64/LICENSE May 17 00:40:55.692242 tar[1215]: linux-amd64/helm May 17 00:40:55.696182 umount[1230]: umount: /var/lib/flatcar-oem-gce.img: not mounted. May 17 00:40:55.698224 systemd[1]: Started dbus.service. May 17 00:40:55.697985 dbus-daemon[1187]: [system] SELinux support is enabled May 17 00:40:55.703149 dbus-daemon[1187]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1020 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") May 17 00:40:55.723753 kernel: loop2: detected capacity change from 0 to 2097152 May 17 00:40:55.723888 extend-filesystems[1189]: Resized partition /dev/sda9 May 17 00:40:55.745787 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks May 17 00:40:55.710578 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 17 00:40:55.746035 extend-filesystems[1235]: resize2fs 1.46.5 (30-Dec-2021) May 17 00:40:55.710646 systemd[1]: Reached target system-config.target. May 17 00:40:55.721919 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 17 00:40:55.721963 systemd[1]: Reached target user-config.target. May 17 00:40:55.765709 dbus-daemon[1187]: [system] Successfully activated service 'org.freedesktop.systemd1' May 17 00:40:55.773992 systemd[1]: Starting systemd-hostnamed.service... May 17 00:40:55.795205 update_engine[1209]: I0517 00:40:55.795092 1209 main.cc:92] Flatcar Update Engine starting May 17 00:40:55.802224 systemd[1]: Started update-engine.service. 
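The kernel line above records the root filesystem's on-line resize from 1617920 to 2538491 4 KiB blocks (also visible in the resize2fs output further down). A quick arithmetic check of what those block counts mean in bytes and GiB:

```python
# Block counts taken from the EXT4-fs resize message above.
BLOCK = 4096  # ext4 block size (4k blocks per the mkfs output)
old_blocks, new_blocks = 1_617_920, 2_538_491

old_bytes = old_blocks * BLOCK
new_bytes = new_blocks * BLOCK
print(f"{old_bytes / 2**30:.2f} GiB -> {new_bytes / 2**30:.2f} GiB")
```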
May 17 00:40:55.802796 update_engine[1209]: I0517 00:40:55.802294 1209 update_check_scheduler.cc:74] Next update check in 3m58s May 17 00:40:55.814093 systemd[1]: Started locksmithd.service. May 17 00:40:55.877705 kernel: EXT4-fs (loop2): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. May 17 00:40:55.896710 kernel: EXT4-fs (sda9): resized filesystem to 2538491 May 17 00:40:55.944139 coreos-metadata[1186]: May 17 00:40:55.910 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 May 17 00:40:55.944139 coreos-metadata[1186]: May 17 00:40:55.914 INFO Fetch failed with 404: resource not found May 17 00:40:55.944139 coreos-metadata[1186]: May 17 00:40:55.914 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 May 17 00:40:55.944139 coreos-metadata[1186]: May 17 00:40:55.914 INFO Fetch successful May 17 00:40:55.944139 coreos-metadata[1186]: May 17 00:40:55.914 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 May 17 00:40:55.944139 coreos-metadata[1186]: May 17 00:40:55.915 INFO Fetch failed with 404: resource not found May 17 00:40:55.944139 coreos-metadata[1186]: May 17 00:40:55.915 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 May 17 00:40:55.944139 coreos-metadata[1186]: May 17 00:40:55.915 INFO Fetch failed with 404: resource not found May 17 00:40:55.944139 coreos-metadata[1186]: May 17 00:40:55.915 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 May 17 00:40:55.944139 coreos-metadata[1186]: May 17 00:40:55.918 INFO Fetch successful May 17 00:40:55.947181 extend-filesystems[1235]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required May 17 00:40:55.947181 extend-filesystems[1235]: old_desc_blocks = 1, new_desc_blocks = 2 May 17 00:40:55.947181 extend-filesystems[1235]: The 
filesystem on /dev/sda9 is now 2538491 (4k) blocks long. May 17 00:40:56.014446 extend-filesystems[1189]: Resized filesystem in /dev/sda9 May 17 00:40:56.030977 bash[1253]: Updated "/home/core/.ssh/authorized_keys" May 17 00:40:55.948966 unknown[1186]: wrote ssh authorized keys file for user: core May 17 00:40:56.031519 env[1216]: time="2025-05-17T00:40:56.027707155Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 May 17 00:40:55.961140 systemd[1]: extend-filesystems.service: Deactivated successfully. May 17 00:40:56.032119 update-ssh-keys[1259]: Updated "/home/core/.ssh/authorized_keys" May 17 00:40:55.961692 systemd[1]: Finished extend-filesystems.service. May 17 00:40:55.985575 systemd[1]: Finished update-ssh-keys-after-ignition.service. May 17 00:40:55.997232 systemd[1]: Finished coreos-metadata-sshkeys@core.service. May 17 00:40:56.178188 systemd-logind[1207]: Watching system buttons on /dev/input/event1 (Power Button) May 17 00:40:56.179924 systemd-logind[1207]: Watching system buttons on /dev/input/event2 (Sleep Button) May 17 00:40:56.180134 systemd-logind[1207]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 17 00:40:56.183987 systemd-logind[1207]: New seat seat0. May 17 00:40:56.193452 systemd[1]: Started systemd-logind.service. May 17 00:40:56.243798 env[1216]: time="2025-05-17T00:40:56.243622756Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 17 00:40:56.244006 env[1216]: time="2025-05-17T00:40:56.243954755Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 17 00:40:56.256001 env[1216]: time="2025-05-17T00:40:56.255930110Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.182-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 17 00:40:56.256172 env[1216]: time="2025-05-17T00:40:56.256019026Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 17 00:40:56.256898 env[1216]: time="2025-05-17T00:40:56.256540480Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:40:56.256898 env[1216]: time="2025-05-17T00:40:56.256604551Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 17 00:40:56.256898 env[1216]: time="2025-05-17T00:40:56.256667532Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" May 17 00:40:56.256898 env[1216]: time="2025-05-17T00:40:56.256702688Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 17 00:40:56.257221 env[1216]: time="2025-05-17T00:40:56.256921967Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 17 00:40:56.258292 env[1216]: time="2025-05-17T00:40:56.258231256Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 17 00:40:56.258680 env[1216]: time="2025-05-17T00:40:56.258612793Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:40:56.258778 env[1216]: time="2025-05-17T00:40:56.258697220Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 17 00:40:56.258937 env[1216]: time="2025-05-17T00:40:56.258840247Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" May 17 00:40:56.258937 env[1216]: time="2025-05-17T00:40:56.258884750Z" level=info msg="metadata content store policy set" policy=shared May 17 00:40:56.278096 env[1216]: time="2025-05-17T00:40:56.278009455Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 17 00:40:56.278256 env[1216]: time="2025-05-17T00:40:56.278107750Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 17 00:40:56.278256 env[1216]: time="2025-05-17T00:40:56.278157303Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 17 00:40:56.278380 env[1216]: time="2025-05-17T00:40:56.278317180Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 17 00:40:56.278380 env[1216]: time="2025-05-17T00:40:56.278349546Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 17 00:40:56.278515 env[1216]: time="2025-05-17T00:40:56.278379482Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 17 00:40:56.278515 env[1216]: time="2025-05-17T00:40:56.278434422Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 May 17 00:40:56.278515 env[1216]: time="2025-05-17T00:40:56.278461484Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 17 00:40:56.278515 env[1216]: time="2025-05-17T00:40:56.278489713Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 May 17 00:40:56.278771 env[1216]: time="2025-05-17T00:40:56.278514975Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 17 00:40:56.278771 env[1216]: time="2025-05-17T00:40:56.278550096Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 17 00:40:56.278771 env[1216]: time="2025-05-17T00:40:56.278594892Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 17 00:40:56.278957 env[1216]: time="2025-05-17T00:40:56.278816418Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 17 00:40:56.279021 env[1216]: time="2025-05-17T00:40:56.278963450Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 17 00:40:56.279690 env[1216]: time="2025-05-17T00:40:56.279645894Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 17 00:40:56.279811 env[1216]: time="2025-05-17T00:40:56.279752078Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 17 00:40:56.279811 env[1216]: time="2025-05-17T00:40:56.279783834Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 17 00:40:56.279989 env[1216]: time="2025-05-17T00:40:56.279873682Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 May 17 00:40:56.280089 env[1216]: time="2025-05-17T00:40:56.280002568Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 17 00:40:56.280089 env[1216]: time="2025-05-17T00:40:56.280033807Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 17 00:40:56.280089 env[1216]: time="2025-05-17T00:40:56.280063244Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 17 00:40:56.280259 env[1216]: time="2025-05-17T00:40:56.280086956Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 17 00:40:56.280259 env[1216]: time="2025-05-17T00:40:56.280114073Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 17 00:40:56.280259 env[1216]: time="2025-05-17T00:40:56.280136604Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 17 00:40:56.280259 env[1216]: time="2025-05-17T00:40:56.280158494Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 17 00:40:56.280259 env[1216]: time="2025-05-17T00:40:56.280188814Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 17 00:40:56.280630 env[1216]: time="2025-05-17T00:40:56.280400735Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 17 00:40:56.280630 env[1216]: time="2025-05-17T00:40:56.280433774Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 17 00:40:56.280630 env[1216]: time="2025-05-17T00:40:56.280461880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 May 17 00:40:56.280630 env[1216]: time="2025-05-17T00:40:56.280490517Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 17 00:40:56.280630 env[1216]: time="2025-05-17T00:40:56.280520327Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 May 17 00:40:56.280630 env[1216]: time="2025-05-17T00:40:56.280543597Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 17 00:40:56.280630 env[1216]: time="2025-05-17T00:40:56.280574798Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" May 17 00:40:56.280998 env[1216]: time="2025-05-17T00:40:56.280631841Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 17 00:40:56.281106 env[1216]: time="2025-05-17T00:40:56.280996137Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin 
NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 17 00:40:56.285107 env[1216]: time="2025-05-17T00:40:56.281117197Z" level=info msg="Connect containerd service" May 17 00:40:56.285107 env[1216]: time="2025-05-17T00:40:56.281179486Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 17 00:40:56.285107 env[1216]: time="2025-05-17T00:40:56.282972098Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 17 00:40:56.285107 env[1216]: time="2025-05-17T00:40:56.283385291Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 17 00:40:56.285107 env[1216]: time="2025-05-17T00:40:56.283473424Z" level=info msg=serving... 
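The CNI load error above is expected on first boot: /etc/cni/net.d is empty until a network add-on installs a config, and the CRI plugin retries later. For reference, a minimal bridge conflist of the shape containerd's conf syncer looks for — the plugin choice and subnet are illustrative assumptions, and the sketch writes to a scratch directory rather than /etc/cni/net.d so it is side-effect free:

```shell
# Write a minimal CNI conflist of the kind containerd expects to find in
# /etc/cni/net.d (scratch directory used here; values are illustrative).
confdir=$(mktemp -d)
cat > "$confdir/10-bridge.conflist" <<'EOF'
{
  "cniVersion": "0.4.0",
  "name": "bridge-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.88.0.0/16" }
    }
  ]
}
EOF
ls "$confdir"
```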
address=/run/containerd/containerd.sock May 17 00:40:56.283737 systemd[1]: Started containerd.service. May 17 00:40:56.298843 env[1216]: time="2025-05-17T00:40:56.298745328Z" level=info msg="Start subscribing containerd event" May 17 00:40:56.299013 env[1216]: time="2025-05-17T00:40:56.298855332Z" level=info msg="Start recovering state" May 17 00:40:56.299444 env[1216]: time="2025-05-17T00:40:56.299313896Z" level=info msg="Start event monitor" May 17 00:40:56.299444 env[1216]: time="2025-05-17T00:40:56.299364509Z" level=info msg="Start snapshots syncer" May 17 00:40:56.299444 env[1216]: time="2025-05-17T00:40:56.299388348Z" level=info msg="Start cni network conf syncer for default" May 17 00:40:56.299444 env[1216]: time="2025-05-17T00:40:56.299414431Z" level=info msg="Start streaming server" May 17 00:40:56.324537 dbus-daemon[1187]: [system] Successfully activated service 'org.freedesktop.hostname1' May 17 00:40:56.324811 systemd[1]: Started systemd-hostnamed.service. May 17 00:40:56.326333 dbus-daemon[1187]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1242 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") May 17 00:40:56.340718 systemd[1]: Starting polkit.service... May 17 00:40:56.356690 env[1216]: time="2025-05-17T00:40:56.350852276Z" level=info msg="containerd successfully booted in 0.435334s" May 17 00:40:56.425598 polkitd[1276]: Started polkitd version 121 May 17 00:40:56.481437 polkitd[1276]: Loading rules from directory /etc/polkit-1/rules.d May 17 00:40:56.485160 polkitd[1276]: Loading rules from directory /usr/share/polkit-1/rules.d May 17 00:40:56.489960 polkitd[1276]: Finished loading, compiling and executing 2 rules May 17 00:40:56.499118 dbus-daemon[1187]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' May 17 00:40:56.499499 systemd[1]: Started polkit.service. 
May 17 00:40:56.500593 polkitd[1276]: Acquired the name org.freedesktop.PolicyKit1 on the system bus May 17 00:40:56.536165 systemd-hostnamed[1242]: Hostname set to (transient) May 17 00:40:56.539788 systemd-resolved[1148]: System hostname changed to 'ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2'. May 17 00:40:57.907824 tar[1215]: linux-amd64/README.md May 17 00:40:57.921320 systemd[1]: Finished prepare-helm.service. May 17 00:40:58.365738 systemd[1]: Started kubelet.service. May 17 00:40:59.388755 locksmithd[1248]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 17 00:40:59.732661 kubelet[1292]: E0517 00:40:59.732503 1292 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:40:59.735930 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:40:59.736187 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:40:59.736628 systemd[1]: kubelet.service: Consumed 1.578s CPU time. May 17 00:41:00.145224 sshd_keygen[1220]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 17 00:41:00.193201 systemd[1]: Finished sshd-keygen.service. May 17 00:41:00.204774 systemd[1]: Starting issuegen.service... May 17 00:41:00.218146 systemd[1]: issuegen.service: Deactivated successfully. May 17 00:41:00.218438 systemd[1]: Finished issuegen.service. May 17 00:41:00.228587 systemd[1]: Starting systemd-user-sessions.service... May 17 00:41:00.241527 systemd[1]: Finished systemd-user-sessions.service. May 17 00:41:00.252846 systemd[1]: Started getty@tty1.service. May 17 00:41:00.263876 systemd[1]: Started serial-getty@ttyS0.service. May 17 00:41:00.273339 systemd[1]: Reached target getty.target. 
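The kubelet failure above is the usual pre-bootstrap state: the service starts before a provisioner (typically `kubeadm init`/`join`) has written /var/lib/kubelet/config.yaml, so it exits with status 1 and systemd keeps retrying. The precondition it fails on can be mirrored directly (path taken from the log):

```shell
# Mirror the check kubelet's config loader enforces: the config file must
# exist and be readable, otherwise startup fails with the error logged above.
cfg=/var/lib/kubelet/config.yaml
if [ -r "$cfg" ]; then
  echo "kubelet config present: $cfg"
else
  echo "kubelet config missing: $cfg (expected until the node is bootstrapped)"
fi
```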
May 17 00:41:02.534279 systemd[1]: var-lib-flatcar\x2doem\x2dgce.mount: Deactivated successfully. May 17 00:41:03.962249 systemd[1]: Created slice system-sshd.slice. May 17 00:41:03.974477 systemd[1]: Started sshd@0-10.128.0.28:22-139.178.89.65:42038.service. May 17 00:41:04.343788 sshd[1315]: Accepted publickey for core from 139.178.89.65 port 42038 ssh2: RSA SHA256:jyE3lnafiBGDGJK6dHnApyF/jgfCnjVgkPORJQqM9Ps May 17 00:41:04.348833 sshd[1315]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:41:04.369096 systemd[1]: Created slice user-500.slice. May 17 00:41:04.379346 systemd[1]: Starting user-runtime-dir@500.service... May 17 00:41:04.391243 systemd-logind[1207]: New session 1 of user core. May 17 00:41:04.399773 systemd[1]: Finished user-runtime-dir@500.service. May 17 00:41:04.411614 systemd[1]: Starting user@500.service... May 17 00:41:04.431666 (systemd)[1318]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 17 00:41:04.660282 systemd[1318]: Queued start job for default target default.target. May 17 00:41:04.661310 systemd[1318]: Reached target paths.target. May 17 00:41:04.661349 systemd[1318]: Reached target sockets.target. May 17 00:41:04.661375 systemd[1318]: Reached target timers.target. May 17 00:41:04.661406 systemd[1318]: Reached target basic.target. May 17 00:41:04.661580 systemd[1]: Started user@500.service. May 17 00:41:04.662041 systemd[1318]: Reached target default.target. May 17 00:41:04.662121 systemd[1318]: Startup finished in 220ms. May 17 00:41:04.672340 systemd[1]: Started session-1.scope. May 17 00:41:04.687888 kernel: loop2: detected capacity change from 0 to 2097152 May 17 00:41:04.711820 systemd-nspawn[1324]: Spawning container oem-gce on /var/lib/flatcar-oem-gce.img. May 17 00:41:04.712279 systemd-nspawn[1324]: Press ^] three times within 1s to kill container. May 17 00:41:04.728727 kernel: EXT4-fs (loop2): mounted filesystem without journal. Opts: norecovery. 
Quota mode: none. May 17 00:41:04.751398 systemd[1]: tmp-unifiedSBdTEo.mount: Deactivated successfully. May 17 00:41:04.816007 systemd[1]: Started oem-gce.service. May 17 00:41:04.823412 systemd[1]: Reached target multi-user.target. May 17 00:41:04.835578 systemd[1]: Starting systemd-update-utmp-runlevel.service... May 17 00:41:04.856955 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. May 17 00:41:04.857309 systemd[1]: Finished systemd-update-utmp-runlevel.service. May 17 00:41:04.868068 systemd[1]: Startup finished in 1.142s (kernel) + 8.782s (initrd) + 18.254s (userspace) = 28.179s. May 17 00:41:04.912994 systemd[1]: Started sshd@1-10.128.0.28:22-139.178.89.65:42048.service. May 17 00:41:04.940745 systemd-nspawn[1324]: + '[' -e /etc/default/instance_configs.cfg.template ']' May 17 00:41:04.941023 systemd-nspawn[1324]: + echo -e '[InstanceSetup]\nset_host_keys = false' May 17 00:41:04.941023 systemd-nspawn[1324]: + /usr/bin/google_instance_setup May 17 00:41:05.220937 sshd[1333]: Accepted publickey for core from 139.178.89.65 port 42048 ssh2: RSA SHA256:jyE3lnafiBGDGJK6dHnApyF/jgfCnjVgkPORJQqM9Ps May 17 00:41:05.223769 sshd[1333]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:41:05.232133 systemd-logind[1207]: New session 2 of user core. May 17 00:41:05.232819 systemd[1]: Started session-2.scope. May 17 00:41:05.443610 sshd[1333]: pam_unix(sshd:session): session closed for user core May 17 00:41:05.448408 systemd[1]: sshd@1-10.128.0.28:22-139.178.89.65:42048.service: Deactivated successfully. May 17 00:41:05.449681 systemd[1]: session-2.scope: Deactivated successfully. May 17 00:41:05.452288 systemd-logind[1207]: Session 2 logged out. Waiting for processes to exit. May 17 00:41:05.453975 systemd-logind[1207]: Removed session 2. May 17 00:41:05.490838 systemd[1]: Started sshd@2-10.128.0.28:22-139.178.89.65:42056.service. May 17 00:41:05.755228 instance-setup[1335]: INFO Running google_set_multiqueue. 
May 17 00:41:05.773763 instance-setup[1335]: INFO Set channels for eth0 to 2. May 17 00:41:05.778758 instance-setup[1335]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1. May 17 00:41:05.780558 instance-setup[1335]: INFO /proc/irq/31/smp_affinity_list: real affinity 0 May 17 00:41:05.781270 instance-setup[1335]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1. May 17 00:41:05.783952 instance-setup[1335]: INFO /proc/irq/32/smp_affinity_list: real affinity 0 May 17 00:41:05.784227 instance-setup[1335]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1. May 17 00:41:05.785411 instance-setup[1335]: INFO /proc/irq/33/smp_affinity_list: real affinity 1 May 17 00:41:05.785936 instance-setup[1335]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1. May 17 00:41:05.787555 instance-setup[1335]: INFO /proc/irq/34/smp_affinity_list: real affinity 1 May 17 00:41:05.800124 sshd[1342]: Accepted publickey for core from 139.178.89.65 port 42056 ssh2: RSA SHA256:jyE3lnafiBGDGJK6dHnApyF/jgfCnjVgkPORJQqM9Ps May 17 00:41:05.801414 sshd[1342]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:41:05.808036 instance-setup[1335]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus May 17 00:41:05.808219 instance-setup[1335]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus May 17 00:41:05.812190 systemd[1]: Started session-3.scope. May 17 00:41:05.813534 systemd-logind[1207]: New session 3 of user core. May 17 00:41:05.866818 systemd-nspawn[1324]: + /usr/bin/google_metadata_script_runner --script-type startup May 17 00:41:06.015996 sshd[1342]: pam_unix(sshd:session): session closed for user core May 17 00:41:06.020756 systemd[1]: sshd@2-10.128.0.28:22-139.178.89.65:42056.service: Deactivated successfully. May 17 00:41:06.021994 systemd[1]: session-3.scope: Deactivated successfully. May 17 00:41:06.024232 systemd-logind[1207]: Session 3 logged out. 
Waiting for processes to exit. May 17 00:41:06.025856 systemd-logind[1207]: Removed session 3. May 17 00:41:06.061080 systemd[1]: Started sshd@3-10.128.0.28:22-139.178.89.65:42058.service. May 17 00:41:06.274231 startup-script[1373]: INFO Starting startup scripts. May 17 00:41:06.289052 startup-script[1373]: INFO No startup scripts found in metadata. May 17 00:41:06.289251 startup-script[1373]: INFO Finished running startup scripts. May 17 00:41:06.333853 systemd-nspawn[1324]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM May 17 00:41:06.333853 systemd-nspawn[1324]: + daemon_pids=() May 17 00:41:06.334164 systemd-nspawn[1324]: + for d in accounts clock_skew network May 17 00:41:06.334298 systemd-nspawn[1324]: + daemon_pids+=($!) May 17 00:41:06.334496 systemd-nspawn[1324]: + for d in accounts clock_skew network May 17 00:41:06.334690 systemd-nspawn[1324]: + /usr/bin/google_accounts_daemon May 17 00:41:06.335089 systemd-nspawn[1324]: + daemon_pids+=($!) May 17 00:41:06.335281 systemd-nspawn[1324]: + for d in accounts clock_skew network May 17 00:41:06.335451 systemd-nspawn[1324]: + /usr/bin/google_clock_skew_daemon May 17 00:41:06.335906 systemd-nspawn[1324]: + daemon_pids+=($!) May 17 00:41:06.336129 systemd-nspawn[1324]: + NOTIFY_SOCKET=/run/systemd/notify May 17 00:41:06.336129 systemd-nspawn[1324]: + /usr/bin/systemd-notify --ready May 17 00:41:06.337147 systemd-nspawn[1324]: + /usr/bin/google_network_daemon May 17 00:41:06.371760 sshd[1377]: Accepted publickey for core from 139.178.89.65 port 42058 ssh2: RSA SHA256:jyE3lnafiBGDGJK6dHnApyF/jgfCnjVgkPORJQqM9Ps May 17 00:41:06.372953 sshd[1377]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:41:06.382014 systemd[1]: Started session-4.scope. May 17 00:41:06.383725 systemd-logind[1207]: New session 4 of user core. 
May 17 00:41:06.437150 systemd-nspawn[1324]: + wait -n 36 37 38 May 17 00:41:06.596411 sshd[1377]: pam_unix(sshd:session): session closed for user core May 17 00:41:06.601907 systemd[1]: sshd@3-10.128.0.28:22-139.178.89.65:42058.service: Deactivated successfully. May 17 00:41:06.603162 systemd[1]: session-4.scope: Deactivated successfully. May 17 00:41:06.605877 systemd-logind[1207]: Session 4 logged out. Waiting for processes to exit. May 17 00:41:06.608090 systemd-logind[1207]: Removed session 4. May 17 00:41:06.649086 systemd[1]: Started sshd@4-10.128.0.28:22-139.178.89.65:55990.service. May 17 00:41:06.975145 sshd[1389]: Accepted publickey for core from 139.178.89.65 port 55990 ssh2: RSA SHA256:jyE3lnafiBGDGJK6dHnApyF/jgfCnjVgkPORJQqM9Ps May 17 00:41:06.975777 sshd[1389]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:41:06.985902 systemd-logind[1207]: New session 5 of user core. May 17 00:41:06.986711 systemd[1]: Started session-5.scope. May 17 00:41:07.152112 google-clock-skew[1382]: INFO Starting Google Clock Skew daemon. May 17 00:41:07.188815 sudo[1398]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 17 00:41:07.189349 sudo[1398]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 17 00:41:07.197788 google-clock-skew[1382]: INFO Clock drift token has changed: 0. May 17 00:41:07.209389 systemd-nspawn[1324]: hwclock: Cannot access the Hardware Clock via any known method. May 17 00:41:07.209389 systemd-nspawn[1324]: hwclock: Use the --verbose option to see the details of our search for an access method. May 17 00:41:07.211338 google-clock-skew[1382]: WARNING Failed to sync system time with hardware clock. May 17 00:41:07.261367 systemd[1]: Starting docker.service... May 17 00:41:07.287389 google-networking[1383]: INFO Starting Google Networking daemon. 
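The `+` lines traced from the oem-gce container above are a small supervisor script: it launches the three guest-agent daemons in the background, remembers their PIDs, tells systemd it is ready, forwards SIGTERM to the children, and waits on them. A runnable reconstruction under stated assumptions — `sleep` stands in for the real `google_*_daemon` binaries, and `systemd-notify` is only invoked if present:

```shell
# Reconstruction of the traced supervisor loop: start each daemon in the
# background, collect its PID, signal readiness, and propagate SIGTERM.
pids=""
trap 'kill $pids 2>/dev/null || :' TERM
for d in accounts clock_skew network; do
  sleep 60 &                    # placeholder for /usr/bin/google_${d}_daemon
  pids="$pids $!"
done
command -v systemd-notify >/dev/null 2>&1 && systemd-notify --ready || :
echo "supervising PIDs:$pids"
kill $pids 2>/dev/null          # tidy up so the sketch exits promptly
```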
May 17 00:41:07.352142 env[1409]: time="2025-05-17T00:41:07.352063725Z" level=info msg="Starting up" May 17 00:41:07.355562 env[1409]: time="2025-05-17T00:41:07.355514824Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 17 00:41:07.355562 env[1409]: time="2025-05-17T00:41:07.355556788Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 17 00:41:07.355811 env[1409]: time="2025-05-17T00:41:07.355589181Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 17 00:41:07.355811 env[1409]: time="2025-05-17T00:41:07.355607654Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 17 00:41:07.358481 env[1409]: time="2025-05-17T00:41:07.358436658Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 17 00:41:07.358481 env[1409]: time="2025-05-17T00:41:07.358474389Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 17 00:41:07.358725 env[1409]: time="2025-05-17T00:41:07.358512008Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 17 00:41:07.358725 env[1409]: time="2025-05-17T00:41:07.358528277Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 17 00:41:07.376487 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3173594484-merged.mount: Deactivated successfully. May 17 00:41:07.422386 env[1409]: time="2025-05-17T00:41:07.422334713Z" level=info msg="Loading containers: start." May 17 00:41:07.432467 groupadd[1417]: group added to /etc/group: name=google-sudoers, GID=1000 May 17 00:41:07.437471 groupadd[1417]: group added to /etc/gshadow: name=google-sudoers May 17 00:41:07.442440 groupadd[1417]: new group: name=google-sudoers, GID=1000 May 17 00:41:07.460211 google-accounts[1381]: INFO Starting Google Accounts daemon. 
May 17 00:41:07.500708 google-accounts[1381]: WARNING OS Login not installed. May 17 00:41:07.502401 google-accounts[1381]: INFO Creating a new user account for 0. May 17 00:41:07.511069 systemd-nspawn[1324]: useradd: invalid user name '0': use --badname to ignore May 17 00:41:07.512272 google-accounts[1381]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. May 17 00:41:07.641670 kernel: Initializing XFRM netlink socket May 17 00:41:07.692904 env[1409]: time="2025-05-17T00:41:07.692843510Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" May 17 00:41:07.785876 systemd-networkd[1020]: docker0: Link UP May 17 00:41:07.807876 env[1409]: time="2025-05-17T00:41:07.807812187Z" level=info msg="Loading containers: done." May 17 00:41:07.826466 env[1409]: time="2025-05-17T00:41:07.826387679Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 17 00:41:07.826864 env[1409]: time="2025-05-17T00:41:07.826774961Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 May 17 00:41:07.826995 env[1409]: time="2025-05-17T00:41:07.826964739Z" level=info msg="Daemon has completed initialization" May 17 00:41:07.851011 systemd[1]: Started docker.service. May 17 00:41:07.863588 env[1409]: time="2025-05-17T00:41:07.863500003Z" level=info msg="API listen on /run/docker.sock" May 17 00:41:08.924288 env[1216]: time="2025-05-17T00:41:08.924217627Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.5\"" May 17 00:41:09.434737 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2513936374.mount: Deactivated successfully. May 17 00:41:09.836270 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
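As the daemon notes above, docker0 defaults to 172.17.0.0/16 and the `--bip` option overrides it. The same setting is normally persisted in /etc/docker/daemon.json; a sketch written to a scratch file so it is side-effect free (the 192.168.200.1/24 value is an illustrative assumption, not taken from this host):

```shell
# Persist a custom bridge IP for dockerd; on a real host this JSON would
# live at /etc/docker/daemon.json and take effect on daemon restart.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
{
  "bip": "192.168.200.1/24"
}
EOF
cat "$tmp"
```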
May 17 00:41:09.836625 systemd[1]: Stopped kubelet.service. May 17 00:41:09.836734 systemd[1]: kubelet.service: Consumed 1.578s CPU time. May 17 00:41:09.839165 systemd[1]: Starting kubelet.service... May 17 00:41:10.149894 systemd[1]: Started kubelet.service. May 17 00:41:10.249801 kubelet[1546]: E0517 00:41:10.249749 1546 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:41:10.254016 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:41:10.254211 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:41:11.368184 env[1216]: time="2025-05-17T00:41:11.368099322Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.32.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:41:11.371211 env[1216]: time="2025-05-17T00:41:11.371151007Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:495c5ce47cf7c8b58655ef50d0f0a9b43c5ae18492059dc9af4c9aacae82a5a4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:41:11.374067 env[1216]: time="2025-05-17T00:41:11.374011426Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.32.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:41:11.376528 env[1216]: time="2025-05-17T00:41:11.376477345Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:0bee1bf751fe06009678c0cde7545443ba3a8d2edf71cea4c69cbb5774b9bf47,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:41:11.377784 env[1216]: time="2025-05-17T00:41:11.377706868Z" level=info 
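The "Scheduled restart job, restart counter is at 1/2" lines come from kubelet.service's `Restart=` policy: each failed start bumps the counter and systemd re-queues the unit after `RestartSec`. A drop-in of the kind that shapes this behavior — the values below are illustrative assumptions, not read from the unit on this host, and the sketch writes to a scratch directory rather than /etc:

```shell
# Sketch of a systemd drop-in controlling the restart loop seen in the log;
# on a real host it would go in /etc/systemd/system/kubelet.service.d/.
dropin=$(mktemp -d)
cat > "$dropin/10-restart.conf" <<'EOF'
[Service]
Restart=always
RestartSec=10
EOF
cat "$dropin/10-restart.conf"
```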
msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.5\" returns image reference \"sha256:495c5ce47cf7c8b58655ef50d0f0a9b43c5ae18492059dc9af4c9aacae82a5a4\"" May 17 00:41:11.378725 env[1216]: time="2025-05-17T00:41:11.378688275Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.5\"" May 17 00:41:12.995869 env[1216]: time="2025-05-17T00:41:12.995775594Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.32.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:41:12.998727 env[1216]: time="2025-05-17T00:41:12.998662489Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:85dcaf69f000132c34fa34452e0fd8444bdf360b593fe06b1103680f6ecc7e00,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:41:13.001994 env[1216]: time="2025-05-17T00:41:13.001944645Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.32.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:41:13.004987 env[1216]: time="2025-05-17T00:41:13.004939720Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:79bcf2f5e614c336c02dcea9dfcdf485d7297aed6a21239a99c87f7164f9baca,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:41:13.006773 env[1216]: time="2025-05-17T00:41:13.006719520Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.5\" returns image reference \"sha256:85dcaf69f000132c34fa34452e0fd8444bdf360b593fe06b1103680f6ecc7e00\"" May 17 00:41:13.012664 env[1216]: time="2025-05-17T00:41:13.012591367Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.5\"" May 17 00:41:14.440957 env[1216]: time="2025-05-17T00:41:14.440857419Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.32.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:41:14.444429 env[1216]: time="2025-05-17T00:41:14.444357509Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2729fb488407e634105c62238a45a599db1692680526e20844060a7a8197b45a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:41:14.447893 env[1216]: time="2025-05-17T00:41:14.447823977Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.32.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:41:14.450963 env[1216]: time="2025-05-17T00:41:14.450910658Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:f0f39d8b9808c407cacb3a46a5a9ce4d4a4a7cf3b674ba4bd221f5bc90051d2a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:41:14.452644 env[1216]: time="2025-05-17T00:41:14.452567388Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.5\" returns image reference \"sha256:2729fb488407e634105c62238a45a599db1692680526e20844060a7a8197b45a\"" May 17 00:41:14.454831 env[1216]: time="2025-05-17T00:41:14.454653085Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.5\"" May 17 00:41:15.672136 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2908387177.mount: Deactivated successfully. 
May 17 00:41:16.506973 env[1216]: time="2025-05-17T00:41:16.506886126Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.32.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:41:16.510336 env[1216]: time="2025-05-17T00:41:16.510267952Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:41:16.513174 env[1216]: time="2025-05-17T00:41:16.513123463Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.32.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:41:16.515755 env[1216]: time="2025-05-17T00:41:16.515663714Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:9dc6553459c3319525ba4090a780db1a133d5dee68c08e07f9b9d6ba83b42a0b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:41:16.516184 env[1216]: time="2025-05-17T00:41:16.516139396Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.5\" returns image reference \"sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363\"" May 17 00:41:16.517828 env[1216]: time="2025-05-17T00:41:16.517789623Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 17 00:41:16.965694 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4264345169.mount: Deactivated successfully. 
May 17 00:41:18.348338 env[1216]: time="2025-05-17T00:41:18.348220111Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:41:18.352207 env[1216]: time="2025-05-17T00:41:18.352143674Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:41:18.355591 env[1216]: time="2025-05-17T00:41:18.355532959Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:41:18.359024 env[1216]: time="2025-05-17T00:41:18.358959977Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:41:18.360457 env[1216]: time="2025-05-17T00:41:18.360402289Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
May 17 00:41:18.362373 env[1216]: time="2025-05-17T00:41:18.362305489Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
May 17 00:41:18.747899 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount321492609.mount: Deactivated successfully.
May 17 00:41:18.758269 env[1216]: time="2025-05-17T00:41:18.758190593Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:41:18.761137 env[1216]: time="2025-05-17T00:41:18.761075477Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:41:18.763596 env[1216]: time="2025-05-17T00:41:18.763544504Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:41:18.765969 env[1216]: time="2025-05-17T00:41:18.765902261Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:41:18.766829 env[1216]: time="2025-05-17T00:41:18.766777692Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
May 17 00:41:18.767764 env[1216]: time="2025-05-17T00:41:18.767726882Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
May 17 00:41:19.179420 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1329765033.mount: Deactivated successfully.
May 17 00:41:20.336563 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
May 17 00:41:20.337016 systemd[1]: Stopped kubelet.service.
May 17 00:41:20.339689 systemd[1]: Starting kubelet.service...
May 17 00:41:20.966492 systemd[1]: Started kubelet.service.
May 17 00:41:21.064172 kubelet[1556]: E0517 00:41:21.064105 1556 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 17 00:41:21.066898 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 17 00:41:21.067175 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 17 00:41:22.689560 env[1216]: time="2025-05-17T00:41:22.689465168Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:41:22.693008 env[1216]: time="2025-05-17T00:41:22.692948466Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:41:22.696187 env[1216]: time="2025-05-17T00:41:22.696131875Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:41:22.698746 env[1216]: time="2025-05-17T00:41:22.698677616Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:41:22.701032 env[1216]: time="2025-05-17T00:41:22.700970090Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
May 17 00:41:26.556736 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
May 17 00:41:27.405758 systemd[1]: Stopped kubelet.service.
May 17 00:41:27.410676 systemd[1]: Starting kubelet.service...
May 17 00:41:27.459300 systemd[1]: Reloading.
May 17 00:41:27.606493 /usr/lib/systemd/system-generators/torcx-generator[1607]: time="2025-05-17T00:41:27Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
May 17 00:41:27.606556 /usr/lib/systemd/system-generators/torcx-generator[1607]: time="2025-05-17T00:41:27Z" level=info msg="torcx already run"
May 17 00:41:27.772755 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
May 17 00:41:27.772787 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
May 17 00:41:27.798485 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 17 00:41:27.976615 systemd[1]: Started kubelet.service.
May 17 00:41:27.991055 systemd[1]: Stopping kubelet.service...
May 17 00:41:27.992342 systemd[1]: kubelet.service: Deactivated successfully.
May 17 00:41:27.992755 systemd[1]: Stopped kubelet.service.
May 17 00:41:27.995860 systemd[1]: Starting kubelet.service...
May 17 00:41:28.321923 systemd[1]: Started kubelet.service.
May 17 00:41:28.399526 kubelet[1663]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 17 00:41:28.399526 kubelet[1663]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
May 17 00:41:28.399526 kubelet[1663]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 17 00:41:28.400228 kubelet[1663]: I0517 00:41:28.399675 1663 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 17 00:41:28.958728 kubelet[1663]: I0517 00:41:28.958661 1663 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
May 17 00:41:28.958728 kubelet[1663]: I0517 00:41:28.958708 1663 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 17 00:41:28.959202 kubelet[1663]: I0517 00:41:28.959157 1663 server.go:954] "Client rotation is on, will bootstrap in background"
May 17 00:41:29.026231 kubelet[1663]: E0517 00:41:29.026162 1663 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.128.0.28:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.128.0.28:6443: connect: connection refused" logger="UnhandledError"
May 17 00:41:29.028012 kubelet[1663]: I0517 00:41:29.027969 1663 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 17 00:41:29.037938 kubelet[1663]: E0517 00:41:29.037888 1663 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
May 17 00:41:29.037938 kubelet[1663]: I0517 00:41:29.037936 1663 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
May 17 00:41:29.042618 kubelet[1663]: I0517 00:41:29.042562 1663 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 17 00:41:29.044961 kubelet[1663]: I0517 00:41:29.044878 1663 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 17 00:41:29.045241 kubelet[1663]: I0517 00:41:29.044944 1663 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 17 00:41:29.045472 kubelet[1663]: I0517 00:41:29.045241 1663 topology_manager.go:138] "Creating topology manager with none policy"
May 17 00:41:29.045472 kubelet[1663]: I0517 00:41:29.045263 1663 container_manager_linux.go:304] "Creating device plugin manager"
May 17 00:41:29.045472 kubelet[1663]: I0517 00:41:29.045440 1663 state_mem.go:36] "Initialized new in-memory state store"
May 17 00:41:29.051447 kubelet[1663]: I0517 00:41:29.051398 1663 kubelet.go:446] "Attempting to sync node with API server"
May 17 00:41:29.051447 kubelet[1663]: I0517 00:41:29.051463 1663 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
May 17 00:41:29.051752 kubelet[1663]: I0517 00:41:29.051504 1663 kubelet.go:352] "Adding apiserver pod source"
May 17 00:41:29.051752 kubelet[1663]: I0517 00:41:29.051522 1663 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 17 00:41:29.076940 kubelet[1663]: W0517 00:41:29.076856 1663 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.28:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.128.0.28:6443: connect: connection refused
May 17 00:41:29.077184 kubelet[1663]: E0517 00:41:29.077014 1663 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.128.0.28:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.28:6443: connect: connection refused" logger="UnhandledError"
May 17 00:41:29.077282 kubelet[1663]: W0517 00:41:29.077200 1663 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.28:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2&limit=500&resourceVersion=0": dial tcp 10.128.0.28:6443: connect: connection refused
May 17 00:41:29.077355 kubelet[1663]: E0517 00:41:29.077273 1663 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.128.0.28:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2&limit=500&resourceVersion=0\": dial tcp 10.128.0.28:6443: connect: connection refused" logger="UnhandledError"
May 17 00:41:29.078620 kubelet[1663]: I0517 00:41:29.077968 1663 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
May 17 00:41:29.078995 kubelet[1663]: I0517 00:41:29.078963 1663 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 17 00:41:29.083887 kubelet[1663]: W0517 00:41:29.083835 1663 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 17 00:41:29.099760 kubelet[1663]: I0517 00:41:29.099703 1663 watchdog_linux.go:99] "Systemd watchdog is not enabled"
May 17 00:41:29.099970 kubelet[1663]: I0517 00:41:29.099776 1663 server.go:1287] "Started kubelet"
May 17 00:41:29.101220 kubelet[1663]: I0517 00:41:29.101154 1663 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
May 17 00:41:29.102708 kubelet[1663]: I0517 00:41:29.102615 1663 server.go:479] "Adding debug handlers to kubelet server"
May 17 00:41:29.109323 kubelet[1663]: I0517 00:41:29.109225 1663 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 17 00:41:29.109924 kubelet[1663]: I0517 00:41:29.109894 1663 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 17 00:41:29.122961 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
May 17 00:41:29.124080 kubelet[1663]: I0517 00:41:29.124033 1663 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 17 00:41:29.126769 kubelet[1663]: E0517 00:41:29.124375 1663 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.28:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.28:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2.184029af7c36e319 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2,UID:ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2,},FirstTimestamp:2025-05-17 00:41:29.099739929 +0000 UTC m=+0.768746065,LastTimestamp:2025-05-17 00:41:29.099739929 +0000 UTC m=+0.768746065,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2,}"
May 17 00:41:29.128663 kubelet[1663]: I0517 00:41:29.126957 1663 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 17 00:41:29.129666 kubelet[1663]: E0517 00:41:29.129587 1663 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 17 00:41:29.130055 kubelet[1663]: I0517 00:41:29.130022 1663 volume_manager.go:297] "Starting Kubelet Volume Manager"
May 17 00:41:29.130202 kubelet[1663]: I0517 00:41:29.130178 1663 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
May 17 00:41:29.130307 kubelet[1663]: I0517 00:41:29.130260 1663 reconciler.go:26] "Reconciler: start to sync state"
May 17 00:41:29.131152 kubelet[1663]: W0517 00:41:29.131054 1663 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.28:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.28:6443: connect: connection refused
May 17 00:41:29.131290 kubelet[1663]: E0517 00:41:29.131200 1663 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.128.0.28:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.28:6443: connect: connection refused" logger="UnhandledError"
May 17 00:41:29.131492 kubelet[1663]: I0517 00:41:29.131461 1663 factory.go:221] Registration of the systemd container factory successfully
May 17 00:41:29.131602 kubelet[1663]: I0517 00:41:29.131588 1663 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 17 00:41:29.133386 kubelet[1663]: E0517 00:41:29.133334 1663 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2\" not found"
May 17 00:41:29.133553 kubelet[1663]: E0517 00:41:29.133500 1663 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2?timeout=10s\": dial tcp 10.128.0.28:6443: connect: connection refused" interval="200ms"
May 17 00:41:29.133762 kubelet[1663]: I0517 00:41:29.133734 1663 factory.go:221] Registration of the containerd container factory successfully
May 17 00:41:29.159666 kubelet[1663]: I0517 00:41:29.159559 1663 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 17 00:41:29.162118 kubelet[1663]: I0517 00:41:29.162078 1663 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 17 00:41:29.162355 kubelet[1663]: I0517 00:41:29.162331 1663 status_manager.go:227] "Starting to sync pod status with apiserver"
May 17 00:41:29.162582 kubelet[1663]: I0517 00:41:29.162559 1663 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
May 17 00:41:29.162754 kubelet[1663]: I0517 00:41:29.162735 1663 kubelet.go:2382] "Starting kubelet main sync loop"
May 17 00:41:29.162987 kubelet[1663]: E0517 00:41:29.162953 1663 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 17 00:41:29.170597 kubelet[1663]: W0517 00:41:29.170525 1663 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.28:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.28:6443: connect: connection refused
May 17 00:41:29.170937 kubelet[1663]: E0517 00:41:29.170886 1663 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.128.0.28:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.28:6443: connect: connection refused" logger="UnhandledError"
May 17 00:41:29.195197 kubelet[1663]: I0517 00:41:29.195158 1663 cpu_manager.go:221] "Starting CPU manager" policy="none"
May 17 00:41:29.195197 kubelet[1663]: I0517 00:41:29.195193 1663 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
May 17 00:41:29.195504 kubelet[1663]: I0517 00:41:29.195247 1663 state_mem.go:36] "Initialized new in-memory state store"
May 17 00:41:29.198272 kubelet[1663]: I0517 00:41:29.198225 1663 policy_none.go:49] "None policy: Start"
May 17 00:41:29.198272 kubelet[1663]: I0517 00:41:29.198257 1663 memory_manager.go:186] "Starting memorymanager" policy="None"
May 17 00:41:29.198502 kubelet[1663]: I0517 00:41:29.198305 1663 state_mem.go:35] "Initializing new in-memory state store"
May 17 00:41:29.206258 systemd[1]: Created slice kubepods.slice.
May 17 00:41:29.215375 systemd[1]: Created slice kubepods-burstable.slice.
May 17 00:41:29.223406 systemd[1]: Created slice kubepods-besteffort.slice.
May 17 00:41:29.230016 kubelet[1663]: I0517 00:41:29.229979 1663 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 17 00:41:29.230478 kubelet[1663]: I0517 00:41:29.230454 1663 eviction_manager.go:189] "Eviction manager: starting control loop"
May 17 00:41:29.230725 kubelet[1663]: I0517 00:41:29.230672 1663 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 17 00:41:29.232669 kubelet[1663]: E0517 00:41:29.232377 1663 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
May 17 00:41:29.232884 kubelet[1663]: E0517 00:41:29.232861 1663 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2\" not found"
May 17 00:41:29.233255 kubelet[1663]: I0517 00:41:29.233231 1663 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 17 00:41:29.283608 systemd[1]: Created slice kubepods-burstable-pode361017feca930cc1139115d3bdaf794.slice.
May 17 00:41:29.296446 kubelet[1663]: E0517 00:41:29.296349 1663 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2\" not found" node="ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2"
May 17 00:41:29.304042 systemd[1]: Created slice kubepods-burstable-podcfcda4494fc1f0a5d97a902bf3d9a906.slice.
May 17 00:41:29.308878 kubelet[1663]: E0517 00:41:29.308531 1663 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2\" not found" node="ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2"
May 17 00:41:29.312218 systemd[1]: Created slice kubepods-burstable-pod367ac3b407a2e2db1510b5bbcb2a9d59.slice.
May 17 00:41:29.315207 kubelet[1663]: E0517 00:41:29.315156 1663 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2\" not found" node="ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2"
May 17 00:41:29.335150 kubelet[1663]: E0517 00:41:29.335071 1663 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2?timeout=10s\": dial tcp 10.128.0.28:6443: connect: connection refused" interval="400ms"
May 17 00:41:29.340837 kubelet[1663]: I0517 00:41:29.340800 1663 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2"
May 17 00:41:29.341609 kubelet[1663]: E0517 00:41:29.341542 1663 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.28:6443/api/v1/nodes\": dial tcp 10.128.0.28:6443: connect: connection refused" node="ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2"
May 17 00:41:29.432209 kubelet[1663]: I0517 00:41:29.432128 1663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e361017feca930cc1139115d3bdaf794-k8s-certs\") pod \"kube-apiserver-ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2\" (UID: \"e361017feca930cc1139115d3bdaf794\") " pod="kube-system/kube-apiserver-ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2"
May 17 00:41:29.432209 kubelet[1663]: I0517 00:41:29.432210 1663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cfcda4494fc1f0a5d97a902bf3d9a906-ca-certs\") pod \"kube-controller-manager-ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2\" (UID: \"cfcda4494fc1f0a5d97a902bf3d9a906\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2"
May 17 00:41:29.432925 kubelet[1663]: I0517 00:41:29.432248 1663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cfcda4494fc1f0a5d97a902bf3d9a906-k8s-certs\") pod \"kube-controller-manager-ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2\" (UID: \"cfcda4494fc1f0a5d97a902bf3d9a906\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2"
May 17 00:41:29.432925 kubelet[1663]: I0517 00:41:29.432283 1663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cfcda4494fc1f0a5d97a902bf3d9a906-kubeconfig\") pod \"kube-controller-manager-ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2\" (UID: \"cfcda4494fc1f0a5d97a902bf3d9a906\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2"
May 17 00:41:29.432925 kubelet[1663]: I0517 00:41:29.432321 1663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cfcda4494fc1f0a5d97a902bf3d9a906-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2\" (UID: \"cfcda4494fc1f0a5d97a902bf3d9a906\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2"
May 17 00:41:29.432925 kubelet[1663]: I0517 00:41:29.432363 1663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/367ac3b407a2e2db1510b5bbcb2a9d59-kubeconfig\") pod \"kube-scheduler-ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2\" (UID: \"367ac3b407a2e2db1510b5bbcb2a9d59\") " pod="kube-system/kube-scheduler-ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2"
May 17 00:41:29.433086 kubelet[1663]: I0517 00:41:29.432400 1663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e361017feca930cc1139115d3bdaf794-ca-certs\") pod \"kube-apiserver-ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2\" (UID: \"e361017feca930cc1139115d3bdaf794\") " pod="kube-system/kube-apiserver-ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2"
May 17 00:41:29.433086 kubelet[1663]: I0517 00:41:29.432455 1663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e361017feca930cc1139115d3bdaf794-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2\" (UID: \"e361017feca930cc1139115d3bdaf794\") " pod="kube-system/kube-apiserver-ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2"
May 17 00:41:29.433086 kubelet[1663]: I0517 00:41:29.432501 1663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/cfcda4494fc1f0a5d97a902bf3d9a906-flexvolume-dir\") pod \"kube-controller-manager-ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2\" (UID: \"cfcda4494fc1f0a5d97a902bf3d9a906\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2"
May 17 00:41:29.559696 kubelet[1663]: I0517 00:41:29.558901 1663 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2"
May 17 00:41:29.559696 kubelet[1663]: E0517 00:41:29.559366 1663 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.28:6443/api/v1/nodes\": dial tcp 10.128.0.28:6443: connect: connection refused" node="ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2"
May 17 00:41:29.598870 env[1216]: time="2025-05-17T00:41:29.598791306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2,Uid:e361017feca930cc1139115d3bdaf794,Namespace:kube-system,Attempt:0,}"
May 17 00:41:29.610307 env[1216]: time="2025-05-17T00:41:29.609915582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2,Uid:cfcda4494fc1f0a5d97a902bf3d9a906,Namespace:kube-system,Attempt:0,}"
May 17 00:41:29.616809 env[1216]: time="2025-05-17T00:41:29.616744662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2,Uid:367ac3b407a2e2db1510b5bbcb2a9d59,Namespace:kube-system,Attempt:0,}"
May 17 00:41:29.736532 kubelet[1663]: E0517 00:41:29.736435 1663 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2?timeout=10s\": dial tcp 10.128.0.28:6443: connect: connection refused" interval="800ms"
May 17 00:41:29.965371 kubelet[1663]: I0517 00:41:29.965324 1663 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2"
May 17 00:41:29.966079 kubelet[1663]: E0517 00:41:29.966028 1663 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.28:6443/api/v1/nodes\": dial tcp 10.128.0.28:6443: connect: connection refused" node="ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2"
May 17 00:41:29.996300 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1528092112.mount: Deactivated successfully.
May 17 00:41:30.008283 env[1216]: time="2025-05-17T00:41:30.008205616Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:41:30.012033 env[1216]: time="2025-05-17T00:41:30.011974664Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:41:30.013323 env[1216]: time="2025-05-17T00:41:30.013265716Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:41:30.016402 env[1216]: time="2025-05-17T00:41:30.016333742Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:41:30.018598 env[1216]: time="2025-05-17T00:41:30.018547974Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:41:30.020545 env[1216]: time="2025-05-17T00:41:30.020486028Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:41:30.022118 env[1216]: time="2025-05-17T00:41:30.022058557Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:41:30.024315 env[1216]: time="2025-05-17T00:41:30.024259208Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:41:30.025274 env[1216]: time="2025-05-17T00:41:30.025216531Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:41:30.026328 env[1216]: time="2025-05-17T00:41:30.026289029Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:41:30.030339 env[1216]: time="2025-05-17T00:41:30.030285689Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:41:30.031378 env[1216]: time="2025-05-17T00:41:30.031319724Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:41:30.082063 env[1216]: time="2025-05-17T00:41:30.081924747Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:41:30.082431 env[1216]: time="2025-05-17T00:41:30.082352710Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:41:30.082658 env[1216]: time="2025-05-17T00:41:30.082594489Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:41:30.083332 env[1216]: time="2025-05-17T00:41:30.083225907Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/047fe49ec54f337a01406ed5d3eb4ae13d628225200911d1a2c2805a5f65045f pid=1704 runtime=io.containerd.runc.v2
May 17 00:41:30.097440 env[1216]: time="2025-05-17T00:41:30.097336984Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:41:30.097619 env[1216]: time="2025-05-17T00:41:30.097482082Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:41:30.097619 env[1216]: time="2025-05-17T00:41:30.097530437Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:41:30.098765 env[1216]: time="2025-05-17T00:41:30.098700239Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2edcde743c05af751b2f5c9fb55f4a6fd1df1803aff6ea89f8e41170fd15576b pid=1716 runtime=io.containerd.runc.v2 May 17 00:41:30.109850 kubelet[1663]: W0517 00:41:30.109720 1663 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.28:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.28:6443: connect: connection refused May 17 00:41:30.109850 kubelet[1663]: E0517 00:41:30.109789 1663 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.128.0.28:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.28:6443: connect: connection refused" logger="UnhandledError" May 17 00:41:30.132021 systemd[1]: Started cri-containerd-047fe49ec54f337a01406ed5d3eb4ae13d628225200911d1a2c2805a5f65045f.scope. May 17 00:41:30.147791 systemd[1]: Started cri-containerd-2edcde743c05af751b2f5c9fb55f4a6fd1df1803aff6ea89f8e41170fd15576b.scope. May 17 00:41:30.160292 env[1216]: time="2025-05-17T00:41:30.159913757Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:41:30.160292 env[1216]: time="2025-05-17T00:41:30.160007942Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:41:30.160292 env[1216]: time="2025-05-17T00:41:30.160031079Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:41:30.160683 env[1216]: time="2025-05-17T00:41:30.160381216Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/51a3b409db78aac505266b7d9f73302169ecba61649b189f0769381e51994503 pid=1753 runtime=io.containerd.runc.v2 May 17 00:41:30.201456 systemd[1]: Started cri-containerd-51a3b409db78aac505266b7d9f73302169ecba61649b189f0769381e51994503.scope. May 17 00:41:30.264604 env[1216]: time="2025-05-17T00:41:30.264453265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2,Uid:cfcda4494fc1f0a5d97a902bf3d9a906,Namespace:kube-system,Attempt:0,} returns sandbox id \"047fe49ec54f337a01406ed5d3eb4ae13d628225200911d1a2c2805a5f65045f\"" May 17 00:41:30.270170 kubelet[1663]: E0517 00:41:30.269202 1663 kubelet_pods.go:555] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-3510-3-7-nightly-20250516-2100-838e0" May 17 00:41:30.272321 env[1216]: time="2025-05-17T00:41:30.272227158Z" level=info msg="CreateContainer within sandbox \"047fe49ec54f337a01406ed5d3eb4ae13d628225200911d1a2c2805a5f65045f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 17 00:41:30.325067 env[1216]: time="2025-05-17T00:41:30.324988879Z" level=info msg="CreateContainer within sandbox \"047fe49ec54f337a01406ed5d3eb4ae13d628225200911d1a2c2805a5f65045f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2ce23d978f285f373887a1efff9fab820ee0ee663a7715485aa3e7a18f3bc446\"" May 17 00:41:30.326151 env[1216]: time="2025-05-17T00:41:30.326104311Z" level=info msg="StartContainer for \"2ce23d978f285f373887a1efff9fab820ee0ee663a7715485aa3e7a18f3bc446\"" May 17 00:41:30.332818 env[1216]: 
time="2025-05-17T00:41:30.332760343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2,Uid:367ac3b407a2e2db1510b5bbcb2a9d59,Namespace:kube-system,Attempt:0,} returns sandbox id \"2edcde743c05af751b2f5c9fb55f4a6fd1df1803aff6ea89f8e41170fd15576b\"" May 17 00:41:30.336000 kubelet[1663]: E0517 00:41:30.335950 1663 kubelet_pods.go:555] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed" May 17 00:41:30.338047 env[1216]: time="2025-05-17T00:41:30.337991556Z" level=info msg="CreateContainer within sandbox \"2edcde743c05af751b2f5c9fb55f4a6fd1df1803aff6ea89f8e41170fd15576b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 17 00:41:30.357324 env[1216]: time="2025-05-17T00:41:30.357250859Z" level=info msg="CreateContainer within sandbox \"2edcde743c05af751b2f5c9fb55f4a6fd1df1803aff6ea89f8e41170fd15576b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"dc37672ec8a8dc5c5c7738aef4e3039dd9655adce5c3a27d964cc5e25858678c\"" May 17 00:41:30.358267 env[1216]: time="2025-05-17T00:41:30.358218274Z" level=info msg="StartContainer for \"dc37672ec8a8dc5c5c7738aef4e3039dd9655adce5c3a27d964cc5e25858678c\"" May 17 00:41:30.360323 env[1216]: time="2025-05-17T00:41:30.360271578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2,Uid:e361017feca930cc1139115d3bdaf794,Namespace:kube-system,Attempt:0,} returns sandbox id \"51a3b409db78aac505266b7d9f73302169ecba61649b189f0769381e51994503\"" May 17 00:41:30.362733 kubelet[1663]: E0517 00:41:30.362689 1663 kubelet_pods.go:555] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2" hostnameMaxLen=63 
truncatedHostname="kube-apiserver-ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed" May 17 00:41:30.364767 env[1216]: time="2025-05-17T00:41:30.364714233Z" level=info msg="CreateContainer within sandbox \"51a3b409db78aac505266b7d9f73302169ecba61649b189f0769381e51994503\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 17 00:41:30.374203 systemd[1]: Started cri-containerd-2ce23d978f285f373887a1efff9fab820ee0ee663a7715485aa3e7a18f3bc446.scope. May 17 00:41:30.384188 kubelet[1663]: W0517 00:41:30.381653 1663 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.28:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.128.0.28:6443: connect: connection refused May 17 00:41:30.384188 kubelet[1663]: E0517 00:41:30.381807 1663 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.128.0.28:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.28:6443: connect: connection refused" logger="UnhandledError" May 17 00:41:30.411855 env[1216]: time="2025-05-17T00:41:30.411788847Z" level=info msg="CreateContainer within sandbox \"51a3b409db78aac505266b7d9f73302169ecba61649b189f0769381e51994503\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8ca56dc3377aaef69b8cf3220cf646b5e23af9888415a369350db0ba8f7ca605\"" May 17 00:41:30.413057 env[1216]: time="2025-05-17T00:41:30.412997622Z" level=info msg="StartContainer for \"8ca56dc3377aaef69b8cf3220cf646b5e23af9888415a369350db0ba8f7ca605\"" May 17 00:41:30.416570 kubelet[1663]: W0517 00:41:30.416376 1663 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://10.128.0.28:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2&limit=500&resourceVersion=0": dial tcp 10.128.0.28:6443: connect: connection refused May 17 00:41:30.416570 kubelet[1663]: E0517 00:41:30.416494 1663 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.128.0.28:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2&limit=500&resourceVersion=0\": dial tcp 10.128.0.28:6443: connect: connection refused" logger="UnhandledError" May 17 00:41:30.433988 systemd[1]: Started cri-containerd-dc37672ec8a8dc5c5c7738aef4e3039dd9655adce5c3a27d964cc5e25858678c.scope. May 17 00:41:30.459686 systemd[1]: Started cri-containerd-8ca56dc3377aaef69b8cf3220cf646b5e23af9888415a369350db0ba8f7ca605.scope. May 17 00:41:30.470303 kubelet[1663]: W0517 00:41:30.470054 1663 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.28:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.28:6443: connect: connection refused May 17 00:41:30.470303 kubelet[1663]: E0517 00:41:30.470218 1663 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.128.0.28:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.28:6443: connect: connection refused" logger="UnhandledError" May 17 00:41:30.511797 env[1216]: time="2025-05-17T00:41:30.511721917Z" level=info msg="StartContainer for \"2ce23d978f285f373887a1efff9fab820ee0ee663a7715485aa3e7a18f3bc446\" returns successfully" May 17 00:41:30.539415 kubelet[1663]: E0517 00:41:30.539156 1663 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.128.0.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2?timeout=10s\": dial tcp 10.128.0.28:6443: connect: connection refused" interval="1.6s" May 17 00:41:30.608787 env[1216]: time="2025-05-17T00:41:30.608699806Z" level=info msg="StartContainer for \"8ca56dc3377aaef69b8cf3220cf646b5e23af9888415a369350db0ba8f7ca605\" returns successfully" May 17 00:41:30.620050 env[1216]: time="2025-05-17T00:41:30.619981961Z" level=info msg="StartContainer for \"dc37672ec8a8dc5c5c7738aef4e3039dd9655adce5c3a27d964cc5e25858678c\" returns successfully" May 17 00:41:30.771875 kubelet[1663]: I0517 00:41:30.771828 1663 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2" May 17 00:41:31.187226 kubelet[1663]: E0517 00:41:31.187154 1663 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2\" not found" node="ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2" May 17 00:41:31.187843 kubelet[1663]: E0517 00:41:31.187789 1663 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2\" not found" node="ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2" May 17 00:41:31.188259 kubelet[1663]: E0517 00:41:31.188224 1663 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2\" not found" node="ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2" May 17 00:41:32.190766 kubelet[1663]: E0517 00:41:32.190718 1663 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2\" not found" 
node="ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2" May 17 00:41:32.191527 kubelet[1663]: E0517 00:41:32.191338 1663 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2\" not found" node="ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2" May 17 00:41:34.219952 kubelet[1663]: E0517 00:41:34.219874 1663 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2\" not found" node="ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2" May 17 00:41:34.340355 kubelet[1663]: E0517 00:41:34.340174 1663 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2.184029af7c36e319 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2,UID:ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2,},FirstTimestamp:2025-05-17 00:41:29.099739929 +0000 UTC m=+0.768746065,LastTimestamp:2025-05-17 00:41:29.099739929 +0000 UTC m=+0.768746065,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2,}" May 17 00:41:34.407082 kubelet[1663]: I0517 00:41:34.407015 1663 kubelet_node_status.go:78] "Successfully registered node" node="ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2" May 17 00:41:34.442679 kubelet[1663]: I0517 00:41:34.442603 1663 kubelet.go:3194] "Creating a mirror pod for static pod" 
pod="kube-system/kube-apiserver-ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2" May 17 00:41:34.486887 kubelet[1663]: E0517 00:41:34.486155 1663 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2" May 17 00:41:34.486887 kubelet[1663]: I0517 00:41:34.486203 1663 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2" May 17 00:41:34.491499 kubelet[1663]: E0517 00:41:34.491159 1663 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2" May 17 00:41:34.491499 kubelet[1663]: I0517 00:41:34.491202 1663 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2" May 17 00:41:34.505727 kubelet[1663]: E0517 00:41:34.505596 1663 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2" May 17 00:41:35.060854 kubelet[1663]: I0517 00:41:35.060784 1663 apiserver.go:52] "Watching apiserver" May 17 00:41:35.131156 kubelet[1663]: I0517 00:41:35.131104 1663 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 17 00:41:36.324288 systemd[1]: Reloading. 
May 17 00:41:36.452344 /usr/lib/systemd/system-generators/torcx-generator[1950]: time="2025-05-17T00:41:36Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 17 00:41:36.452404 /usr/lib/systemd/system-generators/torcx-generator[1950]: time="2025-05-17T00:41:36Z" level=info msg="torcx already run" May 17 00:41:36.596995 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 17 00:41:36.597024 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 17 00:41:36.626012 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:41:36.824101 systemd[1]: Stopping kubelet.service... May 17 00:41:36.841315 systemd[1]: kubelet.service: Deactivated successfully. May 17 00:41:36.841579 systemd[1]: Stopped kubelet.service. May 17 00:41:36.841707 systemd[1]: kubelet.service: Consumed 1.321s CPU time. May 17 00:41:36.845217 systemd[1]: Starting kubelet.service... May 17 00:41:37.485071 systemd[1]: Started kubelet.service. May 17 00:41:37.592217 kubelet[1999]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:41:37.592864 kubelet[1999]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
May 17 00:41:37.592962 kubelet[1999]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:41:37.593150 kubelet[1999]: I0517 00:41:37.593118 1999 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 17 00:41:37.616665 kubelet[1999]: I0517 00:41:37.612269 1999 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" May 17 00:41:37.616665 kubelet[1999]: I0517 00:41:37.612310 1999 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 17 00:41:37.616665 kubelet[1999]: I0517 00:41:37.612788 1999 server.go:954] "Client rotation is on, will bootstrap in background" May 17 00:41:37.616665 kubelet[1999]: I0517 00:41:37.615444 1999 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 17 00:41:37.632137 kubelet[1999]: I0517 00:41:37.632080 1999 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 00:41:37.633774 sudo[2014]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 17 00:41:37.634817 sudo[2014]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) May 17 00:41:37.642313 kubelet[1999]: E0517 00:41:37.642040 1999 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 17 00:41:37.642313 kubelet[1999]: I0517 00:41:37.642107 1999 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
May 17 00:41:37.652863 kubelet[1999]: I0517 00:41:37.652824 1999 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 17 00:41:37.653327 kubelet[1999]: I0517 00:41:37.653274 1999 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 17 00:41:37.653618 kubelet[1999]: I0517 00:41:37.653331 1999 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","
TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 17 00:41:37.653850 kubelet[1999]: I0517 00:41:37.653664 1999 topology_manager.go:138] "Creating topology manager with none policy" May 17 00:41:37.653850 kubelet[1999]: I0517 00:41:37.653686 1999 container_manager_linux.go:304] "Creating device plugin manager" May 17 00:41:37.653850 kubelet[1999]: I0517 00:41:37.653776 1999 state_mem.go:36] "Initialized new in-memory state store" May 17 00:41:37.654069 kubelet[1999]: I0517 00:41:37.654008 1999 kubelet.go:446] "Attempting to sync node with API server" May 17 00:41:37.660747 kubelet[1999]: I0517 00:41:37.660695 1999 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 17 00:41:37.660996 kubelet[1999]: I0517 00:41:37.660976 1999 kubelet.go:352] "Adding apiserver pod source" May 17 00:41:37.661124 kubelet[1999]: I0517 00:41:37.661107 1999 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 17 00:41:37.674306 kubelet[1999]: I0517 00:41:37.670690 1999 apiserver.go:52] "Watching apiserver" May 17 00:41:37.679675 kubelet[1999]: I0517 00:41:37.677340 1999 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 17 00:41:37.679675 kubelet[1999]: I0517 00:41:37.678092 1999 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 17 00:41:37.679675 kubelet[1999]: I0517 00:41:37.678902 1999 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 17 00:41:37.679675 kubelet[1999]: I0517 00:41:37.678965 1999 server.go:1287] "Started kubelet" May 17 00:41:37.682318 kubelet[1999]: I0517 00:41:37.682255 1999 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 17 00:41:37.684185 kubelet[1999]: I0517 00:41:37.684105 1999 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 17 00:41:37.684442 kubelet[1999]: I0517 00:41:37.684164 1999 server.go:479] 
"Adding debug handlers to kubelet server" May 17 00:41:37.684758 kubelet[1999]: I0517 00:41:37.684732 1999 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 17 00:41:37.692184 kubelet[1999]: I0517 00:41:37.688819 1999 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 17 00:41:37.702612 kubelet[1999]: I0517 00:41:37.702568 1999 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 17 00:41:37.703475 kubelet[1999]: E0517 00:41:37.703415 1999 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 17 00:41:37.704470 kubelet[1999]: I0517 00:41:37.704444 1999 volume_manager.go:297] "Starting Kubelet Volume Manager" May 17 00:41:37.704878 kubelet[1999]: I0517 00:41:37.704843 1999 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 17 00:41:37.705354 kubelet[1999]: I0517 00:41:37.705320 1999 reconciler.go:26] "Reconciler: start to sync state" May 17 00:41:37.711771 kubelet[1999]: I0517 00:41:37.711729 1999 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 17 00:41:37.720587 kubelet[1999]: I0517 00:41:37.720549 1999 factory.go:221] Registration of the containerd container factory successfully May 17 00:41:37.720860 kubelet[1999]: I0517 00:41:37.720838 1999 factory.go:221] Registration of the systemd container factory successfully May 17 00:41:37.728666 kubelet[1999]: I0517 00:41:37.722969 1999 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 17 00:41:37.728666 kubelet[1999]: I0517 00:41:37.725272 1999 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6"
May 17 00:41:37.728666 kubelet[1999]: I0517 00:41:37.725307 1999 status_manager.go:227] "Starting to sync pod status with apiserver"
May 17 00:41:37.728666 kubelet[1999]: I0517 00:41:37.725336 1999 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
May 17 00:41:37.728666 kubelet[1999]: I0517 00:41:37.725350 1999 kubelet.go:2382] "Starting kubelet main sync loop"
May 17 00:41:37.728666 kubelet[1999]: E0517 00:41:37.725423 1999 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 17 00:41:37.828533 kubelet[1999]: E0517 00:41:37.828403 1999 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
May 17 00:41:37.839652 kubelet[1999]: I0517 00:41:37.839584 1999 cpu_manager.go:221] "Starting CPU manager" policy="none"
May 17 00:41:37.839856 kubelet[1999]: I0517 00:41:37.839673 1999 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
May 17 00:41:37.839856 kubelet[1999]: I0517 00:41:37.839702 1999 state_mem.go:36] "Initialized new in-memory state store"
May 17 00:41:37.840039 kubelet[1999]: I0517 00:41:37.839969 1999 state_mem.go:88] "Updated default CPUSet" cpuSet=""
May 17 00:41:37.840039 kubelet[1999]: I0517 00:41:37.839989 1999 state_mem.go:96] "Updated CPUSet assignments" assignments={}
May 17 00:41:37.840039 kubelet[1999]: I0517 00:41:37.840020 1999 policy_none.go:49] "None policy: Start"
May 17 00:41:37.840039 kubelet[1999]: I0517 00:41:37.840036 1999 memory_manager.go:186] "Starting memorymanager" policy="None"
May 17 00:41:37.840265 kubelet[1999]: I0517 00:41:37.840053 1999 state_mem.go:35] "Initializing new in-memory state store"
May 17 00:41:37.840331 kubelet[1999]: I0517 00:41:37.840265 1999 state_mem.go:75] "Updated machine memory state"
May 17 00:41:37.851556 kubelet[1999]: I0517 00:41:37.851515 1999 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 17 00:41:37.852181 kubelet[1999]: I0517 00:41:37.852148 1999 eviction_manager.go:189] "Eviction manager: starting control loop"
May 17 00:41:37.853049 kubelet[1999]: I0517 00:41:37.852989 1999 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 17 00:41:37.859507 kubelet[1999]: I0517 00:41:37.859475 1999 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 17 00:41:37.861207 kubelet[1999]: E0517 00:41:37.861176 1999 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
May 17 00:41:37.967855 kubelet[1999]: I0517 00:41:37.967799 1999 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2"
May 17 00:41:37.981643 kubelet[1999]: I0517 00:41:37.981581 1999 kubelet_node_status.go:124] "Node was previously registered" node="ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2"
May 17 00:41:37.981878 kubelet[1999]: I0517 00:41:37.981710 1999 kubelet_node_status.go:78] "Successfully registered node" node="ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2"
May 17 00:41:38.029527 kubelet[1999]: I0517 00:41:38.029474 1999 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2"
May 17 00:41:38.033116 kubelet[1999]: I0517 00:41:38.030790 1999 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2"
May 17 00:41:38.033116 kubelet[1999]: I0517 00:41:38.032523 1999 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2"
May 17 00:41:38.044434 kubelet[1999]: W0517 00:41:38.044387 1999 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]
May 17 00:41:38.054352 kubelet[1999]: W0517 00:41:38.054307 1999 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]
May 17 00:41:38.062776 kubelet[1999]: W0517 00:41:38.062727 1999 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]
May 17 00:41:38.105439 kubelet[1999]: I0517 00:41:38.105387 1999 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
May 17 00:41:38.108258 kubelet[1999]: I0517 00:41:38.108205 1999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cfcda4494fc1f0a5d97a902bf3d9a906-ca-certs\") pod \"kube-controller-manager-ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2\" (UID: \"cfcda4494fc1f0a5d97a902bf3d9a906\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2"
May 17 00:41:38.108569 kubelet[1999]: I0517 00:41:38.108497 1999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cfcda4494fc1f0a5d97a902bf3d9a906-k8s-certs\") pod \"kube-controller-manager-ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2\" (UID: \"cfcda4494fc1f0a5d97a902bf3d9a906\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2"
May 17 00:41:38.108743 kubelet[1999]: I0517 00:41:38.108610 1999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e361017feca930cc1139115d3bdaf794-ca-certs\") pod \"kube-apiserver-ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2\" (UID: \"e361017feca930cc1139115d3bdaf794\") " pod="kube-system/kube-apiserver-ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2"
May 17 00:41:38.108743 kubelet[1999]: I0517 00:41:38.108691 1999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e361017feca930cc1139115d3bdaf794-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2\" (UID: \"e361017feca930cc1139115d3bdaf794\") " pod="kube-system/kube-apiserver-ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2"
May 17 00:41:38.108896 kubelet[1999]: I0517 00:41:38.108745 1999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/cfcda4494fc1f0a5d97a902bf3d9a906-flexvolume-dir\") pod \"kube-controller-manager-ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2\" (UID: \"cfcda4494fc1f0a5d97a902bf3d9a906\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2"
May 17 00:41:38.108896 kubelet[1999]: I0517 00:41:38.108785 1999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cfcda4494fc1f0a5d97a902bf3d9a906-kubeconfig\") pod \"kube-controller-manager-ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2\" (UID: \"cfcda4494fc1f0a5d97a902bf3d9a906\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2"
May 17 00:41:38.108896 kubelet[1999]: I0517 00:41:38.108852 1999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cfcda4494fc1f0a5d97a902bf3d9a906-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2\" (UID: \"cfcda4494fc1f0a5d97a902bf3d9a906\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2"
May 17 00:41:38.109079 kubelet[1999]: I0517 00:41:38.108910 1999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/367ac3b407a2e2db1510b5bbcb2a9d59-kubeconfig\") pod \"kube-scheduler-ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2\" (UID: \"367ac3b407a2e2db1510b5bbcb2a9d59\") " pod="kube-system/kube-scheduler-ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2"
May 17 00:41:38.109079 kubelet[1999]: I0517 00:41:38.108946 1999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e361017feca930cc1139115d3bdaf794-k8s-certs\") pod \"kube-apiserver-ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2\" (UID: \"e361017feca930cc1139115d3bdaf794\") " pod="kube-system/kube-apiserver-ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2"
May 17 00:41:38.452410 kubelet[1999]: I0517 00:41:38.452208 1999 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2" podStartSLOduration=0.452122704 podStartE2EDuration="452.122704ms" podCreationTimestamp="2025-05-17 00:41:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:41:38.434803728 +0000 UTC m=+0.937375901" watchObservedRunningTime="2025-05-17 00:41:38.452122704 +0000 UTC m=+0.954694870"
May 17 00:41:38.468368 kubelet[1999]: I0517 00:41:38.468286 1999 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2" podStartSLOduration=0.468241543 podStartE2EDuration="468.241543ms" podCreationTimestamp="2025-05-17 00:41:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:41:38.452776795 +0000 UTC m=+0.955348969" watchObservedRunningTime="2025-05-17 00:41:38.468241543 +0000 UTC m=+0.970813721"
May 17 00:41:38.607838 sudo[2014]: pam_unix(sudo:session): session closed for user root
May 17 00:41:38.833245 kubelet[1999]: I0517 00:41:38.833060 1999 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2" podStartSLOduration=0.833034087 podStartE2EDuration="833.034087ms" podCreationTimestamp="2025-05-17 00:41:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:41:38.47003455 +0000 UTC m=+0.972606721" watchObservedRunningTime="2025-05-17 00:41:38.833034087 +0000 UTC m=+1.335606260"
May 17 00:41:40.691604 update_engine[1209]: I0517 00:41:40.690724 1209 update_attempter.cc:509] Updating boot flags...
May 17 00:41:41.065498 sudo[1398]: pam_unix(sudo:session): session closed for user root
May 17 00:41:41.109927 sshd[1389]: pam_unix(sshd:session): session closed for user core
May 17 00:41:41.117269 systemd[1]: sshd@4-10.128.0.28:22-139.178.89.65:55990.service: Deactivated successfully.
May 17 00:41:41.119218 systemd[1]: session-5.scope: Deactivated successfully.
May 17 00:41:41.119519 systemd[1]: session-5.scope: Consumed 7.868s CPU time.
May 17 00:41:41.120939 systemd-logind[1207]: Session 5 logged out. Waiting for processes to exit.
May 17 00:41:41.122822 systemd-logind[1207]: Removed session 5.
May 17 00:41:42.074183 kubelet[1999]: I0517 00:41:42.074139 1999 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
May 17 00:41:42.075493 env[1216]: time="2025-05-17T00:41:42.075433840Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
May 17 00:41:42.076266 kubelet[1999]: I0517 00:41:42.076236 1999 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
May 17 00:41:42.889788 systemd[1]: Created slice kubepods-besteffort-pod7cd4e63f_7484_4b51_9d35_c675eef7c780.slice.
May 17 00:41:42.893924 kubelet[1999]: W0517 00:41:42.893887 1999 reflector.go:569] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2' and this object
May 17 00:41:42.894237 kubelet[1999]: E0517 00:41:42.894204 1999 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2' and this object" logger="UnhandledError"
May 17 00:41:42.894468 kubelet[1999]: I0517 00:41:42.893873 1999 status_manager.go:890] "Failed to get status for pod" podUID="7cd4e63f-7484-4b51-9d35-c675eef7c780" pod="kube-system/cilium-operator-6c4d7847fc-h2gff" err="pods \"cilium-operator-6c4d7847fc-h2gff\" is forbidden: User \"system:node:ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2' and this object"
May 17 00:41:42.894701 kubelet[1999]: W0517 00:41:42.894119 1999 reflector.go:569] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2' and this object
May 17 00:41:42.894874 kubelet[1999]: E0517 00:41:42.894843 1999 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2' and this object" logger="UnhandledError"
May 17 00:41:42.986254 systemd[1]: Created slice kubepods-besteffort-pod9ff77569_1518_406c_b778_f6bcd9120cd7.slice.
May 17 00:41:43.007260 systemd[1]: Created slice kubepods-burstable-pod983a33dd_b9bb_42c0_9687_80d7c4dc09c1.slice.
May 17 00:41:43.045781 kubelet[1999]: I0517 00:41:43.045725 1999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/983a33dd-b9bb-42c0-9687-80d7c4dc09c1-etc-cni-netd\") pod \"cilium-cpsb6\" (UID: \"983a33dd-b9bb-42c0-9687-80d7c4dc09c1\") " pod="kube-system/cilium-cpsb6"
May 17 00:41:43.046022 kubelet[1999]: I0517 00:41:43.045791 1999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/983a33dd-b9bb-42c0-9687-80d7c4dc09c1-host-proc-sys-net\") pod \"cilium-cpsb6\" (UID: \"983a33dd-b9bb-42c0-9687-80d7c4dc09c1\") " pod="kube-system/cilium-cpsb6"
May 17 00:41:43.046022 kubelet[1999]: I0517 00:41:43.045832 1999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/983a33dd-b9bb-42c0-9687-80d7c4dc09c1-hubble-tls\") pod \"cilium-cpsb6\" (UID: \"983a33dd-b9bb-42c0-9687-80d7c4dc09c1\") " pod="kube-system/cilium-cpsb6"
May 17 00:41:43.046022 kubelet[1999]: I0517 00:41:43.045860 1999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xgzv\" (UniqueName: \"kubernetes.io/projected/7cd4e63f-7484-4b51-9d35-c675eef7c780-kube-api-access-4xgzv\") pod \"cilium-operator-6c4d7847fc-h2gff\" (UID: \"7cd4e63f-7484-4b51-9d35-c675eef7c780\") " pod="kube-system/cilium-operator-6c4d7847fc-h2gff"
May 17 00:41:43.046022 kubelet[1999]: I0517 00:41:43.045894 1999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9ff77569-1518-406c-b778-f6bcd9120cd7-xtables-lock\") pod \"kube-proxy-pv5hb\" (UID: \"9ff77569-1518-406c-b778-f6bcd9120cd7\") " pod="kube-system/kube-proxy-pv5hb"
May 17 00:41:43.046022 kubelet[1999]: I0517 00:41:43.045920 1999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9ff77569-1518-406c-b778-f6bcd9120cd7-lib-modules\") pod \"kube-proxy-pv5hb\" (UID: \"9ff77569-1518-406c-b778-f6bcd9120cd7\") " pod="kube-system/kube-proxy-pv5hb"
May 17 00:41:43.046345 kubelet[1999]: I0517 00:41:43.045950 1999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkxtg\" (UniqueName: \"kubernetes.io/projected/9ff77569-1518-406c-b778-f6bcd9120cd7-kube-api-access-lkxtg\") pod \"kube-proxy-pv5hb\" (UID: \"9ff77569-1518-406c-b778-f6bcd9120cd7\") " pod="kube-system/kube-proxy-pv5hb"
May 17 00:41:43.046345 kubelet[1999]: I0517 00:41:43.045977 1999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/983a33dd-b9bb-42c0-9687-80d7c4dc09c1-xtables-lock\") pod \"cilium-cpsb6\" (UID: \"983a33dd-b9bb-42c0-9687-80d7c4dc09c1\") " pod="kube-system/cilium-cpsb6"
May 17 00:41:43.046345 kubelet[1999]: I0517 00:41:43.046005 1999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/983a33dd-b9bb-42c0-9687-80d7c4dc09c1-host-proc-sys-kernel\") pod \"cilium-cpsb6\" (UID: \"983a33dd-b9bb-42c0-9687-80d7c4dc09c1\") " pod="kube-system/cilium-cpsb6"
May 17 00:41:43.046345 kubelet[1999]: I0517 00:41:43.046039 1999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7cd4e63f-7484-4b51-9d35-c675eef7c780-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-h2gff\" (UID: \"7cd4e63f-7484-4b51-9d35-c675eef7c780\") " pod="kube-system/cilium-operator-6c4d7847fc-h2gff"
May 17 00:41:43.046345 kubelet[1999]: I0517 00:41:43.046074 1999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/983a33dd-b9bb-42c0-9687-80d7c4dc09c1-clustermesh-secrets\") pod \"cilium-cpsb6\" (UID: \"983a33dd-b9bb-42c0-9687-80d7c4dc09c1\") " pod="kube-system/cilium-cpsb6"
May 17 00:41:43.046650 kubelet[1999]: I0517 00:41:43.046108 1999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmpxc\" (UniqueName: \"kubernetes.io/projected/983a33dd-b9bb-42c0-9687-80d7c4dc09c1-kube-api-access-pmpxc\") pod \"cilium-cpsb6\" (UID: \"983a33dd-b9bb-42c0-9687-80d7c4dc09c1\") " pod="kube-system/cilium-cpsb6"
May 17 00:41:43.046650 kubelet[1999]: I0517 00:41:43.046140 1999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/983a33dd-b9bb-42c0-9687-80d7c4dc09c1-hostproc\") pod \"cilium-cpsb6\" (UID: \"983a33dd-b9bb-42c0-9687-80d7c4dc09c1\") " pod="kube-system/cilium-cpsb6"
May 17 00:41:43.046650 kubelet[1999]: I0517 00:41:43.046171 1999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/983a33dd-b9bb-42c0-9687-80d7c4dc09c1-cilium-run\") pod \"cilium-cpsb6\" (UID: \"983a33dd-b9bb-42c0-9687-80d7c4dc09c1\") " pod="kube-system/cilium-cpsb6"
May 17 00:41:43.046650 kubelet[1999]: I0517 00:41:43.046200 1999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/983a33dd-b9bb-42c0-9687-80d7c4dc09c1-lib-modules\") pod \"cilium-cpsb6\" (UID: \"983a33dd-b9bb-42c0-9687-80d7c4dc09c1\") " pod="kube-system/cilium-cpsb6"
May 17 00:41:43.046650 kubelet[1999]: I0517 00:41:43.046232 1999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9ff77569-1518-406c-b778-f6bcd9120cd7-kube-proxy\") pod \"kube-proxy-pv5hb\" (UID: \"9ff77569-1518-406c-b778-f6bcd9120cd7\") " pod="kube-system/kube-proxy-pv5hb"
May 17 00:41:43.046650 kubelet[1999]: I0517 00:41:43.046272 1999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/983a33dd-b9bb-42c0-9687-80d7c4dc09c1-cni-path\") pod \"cilium-cpsb6\" (UID: \"983a33dd-b9bb-42c0-9687-80d7c4dc09c1\") " pod="kube-system/cilium-cpsb6"
May 17 00:41:43.047022 kubelet[1999]: I0517 00:41:43.046305 1999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/983a33dd-b9bb-42c0-9687-80d7c4dc09c1-cilium-config-path\") pod \"cilium-cpsb6\" (UID: \"983a33dd-b9bb-42c0-9687-80d7c4dc09c1\") " pod="kube-system/cilium-cpsb6"
May 17 00:41:43.047022 kubelet[1999]: I0517 00:41:43.046352 1999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/983a33dd-b9bb-42c0-9687-80d7c4dc09c1-bpf-maps\") pod \"cilium-cpsb6\" (UID: \"983a33dd-b9bb-42c0-9687-80d7c4dc09c1\") " pod="kube-system/cilium-cpsb6"
May 17 00:41:43.047022 kubelet[1999]: I0517 00:41:43.046382 1999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/983a33dd-b9bb-42c0-9687-80d7c4dc09c1-cilium-cgroup\") pod \"cilium-cpsb6\" (UID: \"983a33dd-b9bb-42c0-9687-80d7c4dc09c1\") " pod="kube-system/cilium-cpsb6"
May 17 00:41:43.148606 kubelet[1999]: I0517 00:41:43.148431 1999 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
May 17 00:41:43.893687 env[1216]: time="2025-05-17T00:41:43.893612127Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pv5hb,Uid:9ff77569-1518-406c-b778-f6bcd9120cd7,Namespace:kube-system,Attempt:0,}"
May 17 00:41:43.918163 env[1216]: time="2025-05-17T00:41:43.918010209Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:41:43.918163 env[1216]: time="2025-05-17T00:41:43.918088003Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:41:43.918163 env[1216]: time="2025-05-17T00:41:43.918109087Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:41:43.918951 env[1216]: time="2025-05-17T00:41:43.918391306Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/29eccbb1afc94d8165174ed69311944c5f75f436f21750d7bae1c6cbbe5b6bea pid=2098 runtime=io.containerd.runc.v2
May 17 00:41:43.943772 systemd[1]: Started cri-containerd-29eccbb1afc94d8165174ed69311944c5f75f436f21750d7bae1c6cbbe5b6bea.scope.
May 17 00:41:43.985115 env[1216]: time="2025-05-17T00:41:43.985054178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pv5hb,Uid:9ff77569-1518-406c-b778-f6bcd9120cd7,Namespace:kube-system,Attempt:0,} returns sandbox id \"29eccbb1afc94d8165174ed69311944c5f75f436f21750d7bae1c6cbbe5b6bea\""
May 17 00:41:43.991855 env[1216]: time="2025-05-17T00:41:43.991784541Z" level=info msg="CreateContainer within sandbox \"29eccbb1afc94d8165174ed69311944c5f75f436f21750d7bae1c6cbbe5b6bea\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
May 17 00:41:44.020372 env[1216]: time="2025-05-17T00:41:44.020263383Z" level=info msg="CreateContainer within sandbox \"29eccbb1afc94d8165174ed69311944c5f75f436f21750d7bae1c6cbbe5b6bea\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9f0764956931e6cebf2be8889fc68096a6f14fdef0533d55fe4a3beeec1eda5e\""
May 17 00:41:44.023322 env[1216]: time="2025-05-17T00:41:44.021793822Z" level=info msg="StartContainer for \"9f0764956931e6cebf2be8889fc68096a6f14fdef0533d55fe4a3beeec1eda5e\""
May 17 00:41:44.055571 systemd[1]: Started cri-containerd-9f0764956931e6cebf2be8889fc68096a6f14fdef0533d55fe4a3beeec1eda5e.scope.
May 17 00:41:44.117172 env[1216]: time="2025-05-17T00:41:44.117058220Z" level=info msg="StartContainer for \"9f0764956931e6cebf2be8889fc68096a6f14fdef0533d55fe4a3beeec1eda5e\" returns successfully"
May 17 00:41:44.148193 kubelet[1999]: E0517 00:41:44.148020 1999 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition
May 17 00:41:44.148193 kubelet[1999]: E0517 00:41:44.148153 1999 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7cd4e63f-7484-4b51-9d35-c675eef7c780-cilium-config-path podName:7cd4e63f-7484-4b51-9d35-c675eef7c780 nodeName:}" failed. No retries permitted until 2025-05-17 00:41:44.648117543 +0000 UTC m=+7.150689714 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/7cd4e63f-7484-4b51-9d35-c675eef7c780-cilium-config-path") pod "cilium-operator-6c4d7847fc-h2gff" (UID: "7cd4e63f-7484-4b51-9d35-c675eef7c780") : failed to sync configmap cache: timed out waiting for the condition
May 17 00:41:44.150774 kubelet[1999]: E0517 00:41:44.150711 1999 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition
May 17 00:41:44.151414 kubelet[1999]: E0517 00:41:44.150859 1999 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/983a33dd-b9bb-42c0-9687-80d7c4dc09c1-cilium-config-path podName:983a33dd-b9bb-42c0-9687-80d7c4dc09c1 nodeName:}" failed. No retries permitted until 2025-05-17 00:41:44.650829839 +0000 UTC m=+7.153402003 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/983a33dd-b9bb-42c0-9687-80d7c4dc09c1-cilium-config-path") pod "cilium-cpsb6" (UID: "983a33dd-b9bb-42c0-9687-80d7c4dc09c1") : failed to sync configmap cache: timed out waiting for the condition
May 17 00:41:44.700435 env[1216]: time="2025-05-17T00:41:44.700373422Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-h2gff,Uid:7cd4e63f-7484-4b51-9d35-c675eef7c780,Namespace:kube-system,Attempt:0,}"
May 17 00:41:44.731971 env[1216]: time="2025-05-17T00:41:44.731834994Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:41:44.731971 env[1216]: time="2025-05-17T00:41:44.731918472Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:41:44.731971 env[1216]: time="2025-05-17T00:41:44.731937133Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:41:44.733159 env[1216]: time="2025-05-17T00:41:44.732953716Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7123f71f60697f95c17f78691a094ffc211757efaeb8908340b0aa9803075c27 pid=2302 runtime=io.containerd.runc.v2
May 17 00:41:44.772405 systemd[1]: run-containerd-runc-k8s.io-7123f71f60697f95c17f78691a094ffc211757efaeb8908340b0aa9803075c27-runc.hzZt6L.mount: Deactivated successfully.
May 17 00:41:44.781865 systemd[1]: Started cri-containerd-7123f71f60697f95c17f78691a094ffc211757efaeb8908340b0aa9803075c27.scope.
May 17 00:41:44.814086 env[1216]: time="2025-05-17T00:41:44.814020598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cpsb6,Uid:983a33dd-b9bb-42c0-9687-80d7c4dc09c1,Namespace:kube-system,Attempt:0,}"
May 17 00:41:44.864744 env[1216]: time="2025-05-17T00:41:44.864057890Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:41:44.864744 env[1216]: time="2025-05-17T00:41:44.864152557Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:41:44.864744 env[1216]: time="2025-05-17T00:41:44.864173379Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:41:44.866673 env[1216]: time="2025-05-17T00:41:44.866537594Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/130c04a67775dc8e4edd1136829e1182fb1606169f9266278e4fd91dda72fc51 pid=2336 runtime=io.containerd.runc.v2
May 17 00:41:44.879154 env[1216]: time="2025-05-17T00:41:44.879091988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-h2gff,Uid:7cd4e63f-7484-4b51-9d35-c675eef7c780,Namespace:kube-system,Attempt:0,} returns sandbox id \"7123f71f60697f95c17f78691a094ffc211757efaeb8908340b0aa9803075c27\""
May 17 00:41:44.883082 env[1216]: time="2025-05-17T00:41:44.883013792Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
May 17 00:41:44.897943 systemd[1]: Started cri-containerd-130c04a67775dc8e4edd1136829e1182fb1606169f9266278e4fd91dda72fc51.scope.
May 17 00:41:44.946667 env[1216]: time="2025-05-17T00:41:44.946047240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cpsb6,Uid:983a33dd-b9bb-42c0-9687-80d7c4dc09c1,Namespace:kube-system,Attempt:0,} returns sandbox id \"130c04a67775dc8e4edd1136829e1182fb1606169f9266278e4fd91dda72fc51\""
May 17 00:41:45.845663 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1580597179.mount: Deactivated successfully.
May 17 00:41:46.752215 env[1216]: time="2025-05-17T00:41:46.752146047Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:41:46.755366 env[1216]: time="2025-05-17T00:41:46.755310057Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:41:46.758031 env[1216]: time="2025-05-17T00:41:46.757978126Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:41:46.758830 env[1216]: time="2025-05-17T00:41:46.758780772Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
May 17 00:41:46.763441 env[1216]: time="2025-05-17T00:41:46.762028822Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
May 17 00:41:46.763441 env[1216]: time="2025-05-17T00:41:46.763249137Z" level=info msg="CreateContainer within sandbox \"7123f71f60697f95c17f78691a094ffc211757efaeb8908340b0aa9803075c27\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
May 17 00:41:46.789642 env[1216]: time="2025-05-17T00:41:46.789522832Z" level=info msg="CreateContainer within sandbox \"7123f71f60697f95c17f78691a094ffc211757efaeb8908340b0aa9803075c27\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"f7d1e6f4b0f931a78f6d2587452bd31ad035c802e299edcd11e048254bdfa5f8\""
May 17 00:41:46.790815 env[1216]: time="2025-05-17T00:41:46.790756176Z" level=info msg="StartContainer for \"f7d1e6f4b0f931a78f6d2587452bd31ad035c802e299edcd11e048254bdfa5f8\""
May 17 00:41:46.830924 systemd[1]: Started cri-containerd-f7d1e6f4b0f931a78f6d2587452bd31ad035c802e299edcd11e048254bdfa5f8.scope.
May 17 00:41:46.899304 env[1216]: time="2025-05-17T00:41:46.899229184Z" level=info msg="StartContainer for \"f7d1e6f4b0f931a78f6d2587452bd31ad035c802e299edcd11e048254bdfa5f8\" returns successfully"
May 17 00:41:48.040082 kubelet[1999]: I0517 00:41:48.039982 1999 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-pv5hb" podStartSLOduration=6.039955986 podStartE2EDuration="6.039955986s" podCreationTimestamp="2025-05-17 00:41:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:41:44.854875561 +0000 UTC m=+7.357447764" watchObservedRunningTime="2025-05-17 00:41:48.039955986 +0000 UTC m=+10.542528155"
May 17 00:41:48.371269 kubelet[1999]: I0517 00:41:48.370597 1999 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-h2gff" podStartSLOduration=4.491714718 podStartE2EDuration="6.370565965s" podCreationTimestamp="2025-05-17 00:41:42 +0000 UTC" firstStartedPulling="2025-05-17 00:41:44.881896435 +0000 UTC m=+7.384468597" lastFinishedPulling="2025-05-17 00:41:46.760747678 +0000 UTC m=+9.263319844" observedRunningTime="2025-05-17 00:41:48.134900485 +0000 UTC m=+10.637472657" watchObservedRunningTime="2025-05-17 00:41:48.370565965 +0000 UTC m=+10.873138163"
May 17 00:41:53.074749 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount656335446.mount: Deactivated successfully.
May 17 00:41:56.607973 env[1216]: time="2025-05-17T00:41:56.607889678Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:41:56.611301 env[1216]: time="2025-05-17T00:41:56.611240646Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:41:56.614010 env[1216]: time="2025-05-17T00:41:56.613954435Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:41:56.614916 env[1216]: time="2025-05-17T00:41:56.614860971Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
May 17 00:41:56.621156 env[1216]: time="2025-05-17T00:41:56.621086445Z" level=info msg="CreateContainer within sandbox \"130c04a67775dc8e4edd1136829e1182fb1606169f9266278e4fd91dda72fc51\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 17 00:41:56.645992 env[1216]: time="2025-05-17T00:41:56.645919689Z" level=info msg="CreateContainer within sandbox \"130c04a67775dc8e4edd1136829e1182fb1606169f9266278e4fd91dda72fc51\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7cda49b9cf1b7ac23ec697df63061d376d173b30dcd1b78c03032606dc31cd16\""
May 17 00:41:56.649293 env[1216]: time="2025-05-17T00:41:56.649185932Z" level=info msg="StartContainer for \"7cda49b9cf1b7ac23ec697df63061d376d173b30dcd1b78c03032606dc31cd16\""
May 17 00:41:56.704482 systemd[1]: Started cri-containerd-7cda49b9cf1b7ac23ec697df63061d376d173b30dcd1b78c03032606dc31cd16.scope.
May 17 00:41:56.753379 env[1216]: time="2025-05-17T00:41:56.753294286Z" level=info msg="StartContainer for \"7cda49b9cf1b7ac23ec697df63061d376d173b30dcd1b78c03032606dc31cd16\" returns successfully"
May 17 00:41:56.769506 systemd[1]: cri-containerd-7cda49b9cf1b7ac23ec697df63061d376d173b30dcd1b78c03032606dc31cd16.scope: Deactivated successfully.
May 17 00:41:57.635084 systemd[1]: run-containerd-runc-k8s.io-7cda49b9cf1b7ac23ec697df63061d376d173b30dcd1b78c03032606dc31cd16-runc.2WLNG2.mount: Deactivated successfully.
May 17 00:41:57.635253 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7cda49b9cf1b7ac23ec697df63061d376d173b30dcd1b78c03032606dc31cd16-rootfs.mount: Deactivated successfully.
May 17 00:41:58.855435 env[1216]: time="2025-05-17T00:41:58.855353772Z" level=info msg="shim disconnected" id=7cda49b9cf1b7ac23ec697df63061d376d173b30dcd1b78c03032606dc31cd16
May 17 00:41:58.855435 env[1216]: time="2025-05-17T00:41:58.855437435Z" level=warning msg="cleaning up after shim disconnected" id=7cda49b9cf1b7ac23ec697df63061d376d173b30dcd1b78c03032606dc31cd16 namespace=k8s.io
May 17 00:41:58.856263 env[1216]: time="2025-05-17T00:41:58.855455917Z" level=info msg="cleaning up dead shim"
May 17 00:41:58.869965 env[1216]: time="2025-05-17T00:41:58.869883831Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:41:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2470 runtime=io.containerd.runc.v2\n"
May 17 00:41:59.074922 env[1216]: time="2025-05-17T00:41:59.074863680Z" level=info msg="CreateContainer within sandbox \"130c04a67775dc8e4edd1136829e1182fb1606169f9266278e4fd91dda72fc51\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 17 00:41:59.117496 env[1216]: time="2025-05-17T00:41:59.116962758Z" level=info msg="CreateContainer within sandbox \"130c04a67775dc8e4edd1136829e1182fb1606169f9266278e4fd91dda72fc51\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"4243fc3fee2ed0b2c7fcfb2a628806849b5c8c4a11e522cea353f11b5ff4fc9f\""
May 17 00:41:59.117389 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2808862076.mount: Deactivated successfully.
May 17 00:41:59.123163 env[1216]: time="2025-05-17T00:41:59.123113560Z" level=info msg="StartContainer for \"4243fc3fee2ed0b2c7fcfb2a628806849b5c8c4a11e522cea353f11b5ff4fc9f\""
May 17 00:41:59.170376 systemd[1]: run-containerd-runc-k8s.io-4243fc3fee2ed0b2c7fcfb2a628806849b5c8c4a11e522cea353f11b5ff4fc9f-runc.KyAqvm.mount: Deactivated successfully.
May 17 00:41:59.177771 systemd[1]: Started cri-containerd-4243fc3fee2ed0b2c7fcfb2a628806849b5c8c4a11e522cea353f11b5ff4fc9f.scope.
May 17 00:41:59.225185 env[1216]: time="2025-05-17T00:41:59.225091572Z" level=info msg="StartContainer for \"4243fc3fee2ed0b2c7fcfb2a628806849b5c8c4a11e522cea353f11b5ff4fc9f\" returns successfully"
May 17 00:41:59.245679 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 17 00:41:59.246160 systemd[1]: Stopped systemd-sysctl.service.
May 17 00:41:59.247191 systemd[1]: Stopping systemd-sysctl.service...
May 17 00:41:59.253286 systemd[1]: Starting systemd-sysctl.service...
May 17 00:41:59.255829 systemd[1]: cri-containerd-4243fc3fee2ed0b2c7fcfb2a628806849b5c8c4a11e522cea353f11b5ff4fc9f.scope: Deactivated successfully.
May 17 00:41:59.268447 systemd[1]: Finished systemd-sysctl.service.
May 17 00:41:59.302001 env[1216]: time="2025-05-17T00:41:59.301893344Z" level=info msg="shim disconnected" id=4243fc3fee2ed0b2c7fcfb2a628806849b5c8c4a11e522cea353f11b5ff4fc9f May 17 00:41:59.302977 env[1216]: time="2025-05-17T00:41:59.302828677Z" level=warning msg="cleaning up after shim disconnected" id=4243fc3fee2ed0b2c7fcfb2a628806849b5c8c4a11e522cea353f11b5ff4fc9f namespace=k8s.io May 17 00:41:59.304794 env[1216]: time="2025-05-17T00:41:59.303200992Z" level=info msg="cleaning up dead shim" May 17 00:41:59.318899 env[1216]: time="2025-05-17T00:41:59.318830503Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:41:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2534 runtime=io.containerd.runc.v2\n" May 17 00:42:00.089281 env[1216]: time="2025-05-17T00:42:00.089214204Z" level=info msg="CreateContainer within sandbox \"130c04a67775dc8e4edd1136829e1182fb1606169f9266278e4fd91dda72fc51\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 17 00:42:00.101643 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4243fc3fee2ed0b2c7fcfb2a628806849b5c8c4a11e522cea353f11b5ff4fc9f-rootfs.mount: Deactivated successfully. May 17 00:42:00.158502 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1174230734.mount: Deactivated successfully. May 17 00:42:00.173302 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2130038709.mount: Deactivated successfully. 
May 17 00:42:00.180431 env[1216]: time="2025-05-17T00:42:00.180373425Z" level=info msg="CreateContainer within sandbox \"130c04a67775dc8e4edd1136829e1182fb1606169f9266278e4fd91dda72fc51\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"593c8d61f498c51a9104eeb2b5e966b0d941e239d82631d189aa2a93310a70d4\"" May 17 00:42:00.181423 env[1216]: time="2025-05-17T00:42:00.181373801Z" level=info msg="StartContainer for \"593c8d61f498c51a9104eeb2b5e966b0d941e239d82631d189aa2a93310a70d4\"" May 17 00:42:00.212393 systemd[1]: Started cri-containerd-593c8d61f498c51a9104eeb2b5e966b0d941e239d82631d189aa2a93310a70d4.scope. May 17 00:42:00.267746 env[1216]: time="2025-05-17T00:42:00.267514024Z" level=info msg="StartContainer for \"593c8d61f498c51a9104eeb2b5e966b0d941e239d82631d189aa2a93310a70d4\" returns successfully" May 17 00:42:00.272150 systemd[1]: cri-containerd-593c8d61f498c51a9104eeb2b5e966b0d941e239d82631d189aa2a93310a70d4.scope: Deactivated successfully. May 17 00:42:00.320761 env[1216]: time="2025-05-17T00:42:00.320670997Z" level=info msg="shim disconnected" id=593c8d61f498c51a9104eeb2b5e966b0d941e239d82631d189aa2a93310a70d4 May 17 00:42:00.320761 env[1216]: time="2025-05-17T00:42:00.320754368Z" level=warning msg="cleaning up after shim disconnected" id=593c8d61f498c51a9104eeb2b5e966b0d941e239d82631d189aa2a93310a70d4 namespace=k8s.io May 17 00:42:00.321194 env[1216]: time="2025-05-17T00:42:00.320775364Z" level=info msg="cleaning up dead shim" May 17 00:42:00.334141 env[1216]: time="2025-05-17T00:42:00.334051450Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:42:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2598 runtime=io.containerd.runc.v2\n" May 17 00:42:01.082928 env[1216]: time="2025-05-17T00:42:01.082869067Z" level=info msg="CreateContainer within sandbox \"130c04a67775dc8e4edd1136829e1182fb1606169f9266278e4fd91dda72fc51\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 17 
00:42:01.124059 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount925245860.mount: Deactivated successfully. May 17 00:42:01.130820 env[1216]: time="2025-05-17T00:42:01.130721543Z" level=info msg="CreateContainer within sandbox \"130c04a67775dc8e4edd1136829e1182fb1606169f9266278e4fd91dda72fc51\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"9fb5829841d3f73ef640449c7ccfa40e966b4115e8c6b63afa1aafeefcbac837\"" May 17 00:42:01.131872 env[1216]: time="2025-05-17T00:42:01.131812198Z" level=info msg="StartContainer for \"9fb5829841d3f73ef640449c7ccfa40e966b4115e8c6b63afa1aafeefcbac837\"" May 17 00:42:01.179560 systemd[1]: Started cri-containerd-9fb5829841d3f73ef640449c7ccfa40e966b4115e8c6b63afa1aafeefcbac837.scope. May 17 00:42:01.228051 systemd[1]: cri-containerd-9fb5829841d3f73ef640449c7ccfa40e966b4115e8c6b63afa1aafeefcbac837.scope: Deactivated successfully. May 17 00:42:01.236575 env[1216]: time="2025-05-17T00:42:01.232128178Z" level=info msg="StartContainer for \"9fb5829841d3f73ef640449c7ccfa40e966b4115e8c6b63afa1aafeefcbac837\" returns successfully" May 17 00:42:01.273876 env[1216]: time="2025-05-17T00:42:01.273806364Z" level=info msg="shim disconnected" id=9fb5829841d3f73ef640449c7ccfa40e966b4115e8c6b63afa1aafeefcbac837 May 17 00:42:01.274219 env[1216]: time="2025-05-17T00:42:01.273878490Z" level=warning msg="cleaning up after shim disconnected" id=9fb5829841d3f73ef640449c7ccfa40e966b4115e8c6b63afa1aafeefcbac837 namespace=k8s.io May 17 00:42:01.274219 env[1216]: time="2025-05-17T00:42:01.273898403Z" level=info msg="cleaning up dead shim" May 17 00:42:01.286694 env[1216]: time="2025-05-17T00:42:01.286502927Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:42:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2655 runtime=io.containerd.runc.v2\n" May 17 00:42:02.090900 env[1216]: time="2025-05-17T00:42:02.090832825Z" level=info msg="CreateContainer within sandbox 
\"130c04a67775dc8e4edd1136829e1182fb1606169f9266278e4fd91dda72fc51\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 17 00:42:02.101191 systemd[1]: run-containerd-runc-k8s.io-9fb5829841d3f73ef640449c7ccfa40e966b4115e8c6b63afa1aafeefcbac837-runc.d8IArY.mount: Deactivated successfully. May 17 00:42:02.101379 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9fb5829841d3f73ef640449c7ccfa40e966b4115e8c6b63afa1aafeefcbac837-rootfs.mount: Deactivated successfully. May 17 00:42:02.138052 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount730623492.mount: Deactivated successfully. May 17 00:42:02.143673 env[1216]: time="2025-05-17T00:42:02.142542537Z" level=info msg="CreateContainer within sandbox \"130c04a67775dc8e4edd1136829e1182fb1606169f9266278e4fd91dda72fc51\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6ec078303da938a2373db331e395e472d61e5d484fd0f503b3a45e4d57f220d3\"" May 17 00:42:02.145664 env[1216]: time="2025-05-17T00:42:02.144673607Z" level=info msg="StartContainer for \"6ec078303da938a2373db331e395e472d61e5d484fd0f503b3a45e4d57f220d3\"" May 17 00:42:02.184550 systemd[1]: Started cri-containerd-6ec078303da938a2373db331e395e472d61e5d484fd0f503b3a45e4d57f220d3.scope. May 17 00:42:02.248147 env[1216]: time="2025-05-17T00:42:02.248075839Z" level=info msg="StartContainer for \"6ec078303da938a2373db331e395e472d61e5d484fd0f503b3a45e4d57f220d3\" returns successfully" May 17 00:42:02.428940 kubelet[1999]: I0517 00:42:02.426272 1999 kubelet_node_status.go:501] "Fast updating node status as it just became ready" May 17 00:42:02.489618 systemd[1]: Created slice kubepods-burstable-pod5dfa033c_e73c_4e3c_a9ad_171fd4228b39.slice. May 17 00:42:02.508556 systemd[1]: Created slice kubepods-burstable-pod4e911b96_2f20_4e4d_965e_4c0fe28ea319.slice. 
May 17 00:42:02.532208 kubelet[1999]: I0517 00:42:02.532149 1999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5dfa033c-e73c-4e3c-a9ad-171fd4228b39-config-volume\") pod \"coredns-668d6bf9bc-qwxkh\" (UID: \"5dfa033c-e73c-4e3c-a9ad-171fd4228b39\") " pod="kube-system/coredns-668d6bf9bc-qwxkh" May 17 00:42:02.532462 kubelet[1999]: I0517 00:42:02.532226 1999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bb7rp\" (UniqueName: \"kubernetes.io/projected/5dfa033c-e73c-4e3c-a9ad-171fd4228b39-kube-api-access-bb7rp\") pod \"coredns-668d6bf9bc-qwxkh\" (UID: \"5dfa033c-e73c-4e3c-a9ad-171fd4228b39\") " pod="kube-system/coredns-668d6bf9bc-qwxkh" May 17 00:42:02.532462 kubelet[1999]: I0517 00:42:02.532263 1999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4e911b96-2f20-4e4d-965e-4c0fe28ea319-config-volume\") pod \"coredns-668d6bf9bc-fv8dl\" (UID: \"4e911b96-2f20-4e4d-965e-4c0fe28ea319\") " pod="kube-system/coredns-668d6bf9bc-fv8dl" May 17 00:42:02.532462 kubelet[1999]: I0517 00:42:02.532298 1999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wl8wm\" (UniqueName: \"kubernetes.io/projected/4e911b96-2f20-4e4d-965e-4c0fe28ea319-kube-api-access-wl8wm\") pod \"coredns-668d6bf9bc-fv8dl\" (UID: \"4e911b96-2f20-4e4d-965e-4c0fe28ea319\") " pod="kube-system/coredns-668d6bf9bc-fv8dl" May 17 00:42:02.803038 env[1216]: time="2025-05-17T00:42:02.802886313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qwxkh,Uid:5dfa033c-e73c-4e3c-a9ad-171fd4228b39,Namespace:kube-system,Attempt:0,}" May 17 00:42:02.818383 env[1216]: time="2025-05-17T00:42:02.818318726Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-fv8dl,Uid:4e911b96-2f20-4e4d-965e-4c0fe28ea319,Namespace:kube-system,Attempt:0,}" May 17 00:42:03.150424 kubelet[1999]: I0517 00:42:03.150331 1999 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-cpsb6" podStartSLOduration=9.480639298 podStartE2EDuration="21.150303262s" podCreationTimestamp="2025-05-17 00:41:42 +0000 UTC" firstStartedPulling="2025-05-17 00:41:44.947786537 +0000 UTC m=+7.450358702" lastFinishedPulling="2025-05-17 00:41:56.617450505 +0000 UTC m=+19.120022666" observedRunningTime="2025-05-17 00:42:03.147054717 +0000 UTC m=+25.649626975" watchObservedRunningTime="2025-05-17 00:42:03.150303262 +0000 UTC m=+25.652875436" May 17 00:42:04.372057 systemd-networkd[1020]: cilium_host: Link UP May 17 00:42:04.374096 systemd-networkd[1020]: cilium_net: Link UP May 17 00:42:04.390679 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready May 17 00:42:04.390868 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready May 17 00:42:04.391300 systemd-networkd[1020]: cilium_net: Gained carrier May 17 00:42:04.392789 systemd-networkd[1020]: cilium_host: Gained carrier May 17 00:42:04.563717 systemd-networkd[1020]: cilium_vxlan: Link UP May 17 00:42:04.563741 systemd-networkd[1020]: cilium_vxlan: Gained carrier May 17 00:42:04.590920 systemd-networkd[1020]: cilium_net: Gained IPv6LL May 17 00:42:04.862685 kernel: NET: Registered PF_ALG protocol family May 17 00:42:05.334945 systemd-networkd[1020]: cilium_host: Gained IPv6LL May 17 00:42:05.839018 systemd-networkd[1020]: lxc_health: Link UP May 17 00:42:05.855578 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready May 17 00:42:05.855364 systemd-networkd[1020]: lxc_health: Gained carrier May 17 00:42:06.039498 systemd-networkd[1020]: cilium_vxlan: Gained IPv6LL May 17 00:42:06.380585 systemd-networkd[1020]: lxc18171f76c5b8: Link UP May 17 00:42:06.391794 kernel: eth0: renamed from tmpad332 
May 17 00:42:06.413313 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc18171f76c5b8: link becomes ready May 17 00:42:06.413839 systemd-networkd[1020]: lxc18171f76c5b8: Gained carrier May 17 00:42:06.427603 systemd-networkd[1020]: lxcf022753f1b61: Link UP May 17 00:42:06.442922 kernel: eth0: renamed from tmpf9af0 May 17 00:42:06.456905 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcf022753f1b61: link becomes ready May 17 00:42:06.457285 systemd-networkd[1020]: lxcf022753f1b61: Gained carrier May 17 00:42:07.574826 systemd-networkd[1020]: lxc_health: Gained IPv6LL May 17 00:42:07.894981 systemd-networkd[1020]: lxcf022753f1b61: Gained IPv6LL May 17 00:42:08.406823 systemd-networkd[1020]: lxc18171f76c5b8: Gained IPv6LL May 17 00:42:09.144657 kubelet[1999]: I0517 00:42:09.144581 1999 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:42:11.686673 env[1216]: time="2025-05-17T00:42:11.685146229Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:42:11.686673 env[1216]: time="2025-05-17T00:42:11.685261674Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:42:11.686673 env[1216]: time="2025-05-17T00:42:11.685302362Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:42:11.686673 env[1216]: time="2025-05-17T00:42:11.685560671Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ad332026996889b628abc4eee5d150c3c06152682b3a762a1a0f78a3d645fd36 pid=3213 runtime=io.containerd.runc.v2 May 17 00:42:11.736661 env[1216]: time="2025-05-17T00:42:11.720261194Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:42:11.736661 env[1216]: time="2025-05-17T00:42:11.720404334Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:42:11.736661 env[1216]: time="2025-05-17T00:42:11.720481623Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:42:11.736661 env[1216]: time="2025-05-17T00:42:11.721294205Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f9af05618539a6215ef337a340a724acce914eeff1740c33e9324719c872d73d pid=3202 runtime=io.containerd.runc.v2 May 17 00:42:11.780042 systemd[1]: Started cri-containerd-ad332026996889b628abc4eee5d150c3c06152682b3a762a1a0f78a3d645fd36.scope. May 17 00:42:11.785073 systemd[1]: Started cri-containerd-f9af05618539a6215ef337a340a724acce914eeff1740c33e9324719c872d73d.scope. 
May 17 00:42:11.896696 env[1216]: time="2025-05-17T00:42:11.896550733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qwxkh,Uid:5dfa033c-e73c-4e3c-a9ad-171fd4228b39,Namespace:kube-system,Attempt:0,} returns sandbox id \"ad332026996889b628abc4eee5d150c3c06152682b3a762a1a0f78a3d645fd36\"" May 17 00:42:11.908059 env[1216]: time="2025-05-17T00:42:11.907999309Z" level=info msg="CreateContainer within sandbox \"ad332026996889b628abc4eee5d150c3c06152682b3a762a1a0f78a3d645fd36\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 17 00:42:11.935665 env[1216]: time="2025-05-17T00:42:11.935582786Z" level=info msg="CreateContainer within sandbox \"ad332026996889b628abc4eee5d150c3c06152682b3a762a1a0f78a3d645fd36\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bb22fadf73b60a818628ae8f664f25e37e70abcf053625edd54dd7444823a891\"" May 17 00:42:11.942529 env[1216]: time="2025-05-17T00:42:11.942381864Z" level=info msg="StartContainer for \"bb22fadf73b60a818628ae8f664f25e37e70abcf053625edd54dd7444823a891\"" May 17 00:42:11.945056 env[1216]: time="2025-05-17T00:42:11.944985356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fv8dl,Uid:4e911b96-2f20-4e4d-965e-4c0fe28ea319,Namespace:kube-system,Attempt:0,} returns sandbox id \"f9af05618539a6215ef337a340a724acce914eeff1740c33e9324719c872d73d\"" May 17 00:42:11.953689 env[1216]: time="2025-05-17T00:42:11.953499824Z" level=info msg="CreateContainer within sandbox \"f9af05618539a6215ef337a340a724acce914eeff1740c33e9324719c872d73d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 17 00:42:11.984420 systemd[1]: Started cri-containerd-bb22fadf73b60a818628ae8f664f25e37e70abcf053625edd54dd7444823a891.scope. 
May 17 00:42:11.987521 env[1216]: time="2025-05-17T00:42:11.987461762Z" level=info msg="CreateContainer within sandbox \"f9af05618539a6215ef337a340a724acce914eeff1740c33e9324719c872d73d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1ab10501c7580a61f69a2645716bdced64d191b784507bf0e10f0b791ea27f22\"" May 17 00:42:11.989492 env[1216]: time="2025-05-17T00:42:11.989447172Z" level=info msg="StartContainer for \"1ab10501c7580a61f69a2645716bdced64d191b784507bf0e10f0b791ea27f22\"" May 17 00:42:12.045160 systemd[1]: Started cri-containerd-1ab10501c7580a61f69a2645716bdced64d191b784507bf0e10f0b791ea27f22.scope. May 17 00:42:12.137819 env[1216]: time="2025-05-17T00:42:12.137757739Z" level=info msg="StartContainer for \"bb22fadf73b60a818628ae8f664f25e37e70abcf053625edd54dd7444823a891\" returns successfully" May 17 00:42:12.197588 env[1216]: time="2025-05-17T00:42:12.197446216Z" level=info msg="StartContainer for \"1ab10501c7580a61f69a2645716bdced64d191b784507bf0e10f0b791ea27f22\" returns successfully" May 17 00:42:12.696827 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1779470348.mount: Deactivated successfully. 
May 17 00:42:13.155060 kubelet[1999]: I0517 00:42:13.154948 1999 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-fv8dl" podStartSLOduration=31.154922467 podStartE2EDuration="31.154922467s" podCreationTimestamp="2025-05-17 00:41:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:42:13.153009649 +0000 UTC m=+35.655581826" watchObservedRunningTime="2025-05-17 00:42:13.154922467 +0000 UTC m=+35.657494641" May 17 00:42:13.211717 kubelet[1999]: I0517 00:42:13.211607 1999 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-qwxkh" podStartSLOduration=31.211579438 podStartE2EDuration="31.211579438s" podCreationTimestamp="2025-05-17 00:41:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:42:13.17857416 +0000 UTC m=+35.681146335" watchObservedRunningTime="2025-05-17 00:42:13.211579438 +0000 UTC m=+35.714151613" May 17 00:42:32.197601 systemd[1]: Started sshd@5-10.128.0.28:22-139.178.89.65:39212.service. May 17 00:42:32.503064 sshd[3365]: Accepted publickey for core from 139.178.89.65 port 39212 ssh2: RSA SHA256:jyE3lnafiBGDGJK6dHnApyF/jgfCnjVgkPORJQqM9Ps May 17 00:42:32.505116 sshd[3365]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:42:32.514484 systemd[1]: Started session-6.scope. May 17 00:42:32.518068 systemd-logind[1207]: New session 6 of user core. May 17 00:42:32.831698 sshd[3365]: pam_unix(sshd:session): session closed for user core May 17 00:42:32.837099 systemd[1]: sshd@5-10.128.0.28:22-139.178.89.65:39212.service: Deactivated successfully. May 17 00:42:32.838370 systemd[1]: session-6.scope: Deactivated successfully. May 17 00:42:32.839513 systemd-logind[1207]: Session 6 logged out. Waiting for processes to exit. 
May 17 00:42:32.841265 systemd-logind[1207]: Removed session 6. May 17 00:42:37.878505 systemd[1]: Started sshd@6-10.128.0.28:22-139.178.89.65:33818.service. May 17 00:42:38.170268 sshd[3380]: Accepted publickey for core from 139.178.89.65 port 33818 ssh2: RSA SHA256:jyE3lnafiBGDGJK6dHnApyF/jgfCnjVgkPORJQqM9Ps May 17 00:42:38.172350 sshd[3380]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:42:38.180746 systemd[1]: Started session-7.scope. May 17 00:42:38.182348 systemd-logind[1207]: New session 7 of user core. May 17 00:42:38.463430 sshd[3380]: pam_unix(sshd:session): session closed for user core May 17 00:42:38.468799 systemd[1]: sshd@6-10.128.0.28:22-139.178.89.65:33818.service: Deactivated successfully. May 17 00:42:38.470206 systemd[1]: session-7.scope: Deactivated successfully. May 17 00:42:38.471385 systemd-logind[1207]: Session 7 logged out. Waiting for processes to exit. May 17 00:42:38.473374 systemd-logind[1207]: Removed session 7. May 17 00:42:43.511790 systemd[1]: Started sshd@7-10.128.0.28:22-139.178.89.65:33828.service. May 17 00:42:43.805641 sshd[3393]: Accepted publickey for core from 139.178.89.65 port 33828 ssh2: RSA SHA256:jyE3lnafiBGDGJK6dHnApyF/jgfCnjVgkPORJQqM9Ps May 17 00:42:43.808125 sshd[3393]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:42:43.815999 systemd[1]: Started session-8.scope. May 17 00:42:43.816749 systemd-logind[1207]: New session 8 of user core. May 17 00:42:44.099591 sshd[3393]: pam_unix(sshd:session): session closed for user core May 17 00:42:44.105338 systemd[1]: sshd@7-10.128.0.28:22-139.178.89.65:33828.service: Deactivated successfully. May 17 00:42:44.106763 systemd[1]: session-8.scope: Deactivated successfully. May 17 00:42:44.107949 systemd-logind[1207]: Session 8 logged out. Waiting for processes to exit. May 17 00:42:44.109974 systemd-logind[1207]: Removed session 8. 
May 17 00:42:45.768754 systemd[1]: Started sshd@8-10.128.0.28:22-46.32.178.46:33020.service. May 17 00:42:47.691565 sshd[3407]: Failed password for root from 46.32.178.46 port 33020 ssh2 May 17 00:42:48.333092 sshd[3407]: Connection closed by authenticating user root 46.32.178.46 port 33020 [preauth] May 17 00:42:48.335023 systemd[1]: sshd@8-10.128.0.28:22-46.32.178.46:33020.service: Deactivated successfully. May 17 00:42:48.826524 systemd[1]: Started sshd@9-10.128.0.28:22-46.32.178.46:33024.service. May 17 00:42:49.148944 systemd[1]: Started sshd@10-10.128.0.28:22-139.178.89.65:47786.service. May 17 00:42:49.438092 sshd[3414]: Accepted publickey for core from 139.178.89.65 port 47786 ssh2: RSA SHA256:jyE3lnafiBGDGJK6dHnApyF/jgfCnjVgkPORJQqM9Ps May 17 00:42:49.440482 sshd[3414]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:42:49.448588 systemd[1]: Started session-9.scope. May 17 00:42:49.449268 systemd-logind[1207]: New session 9 of user core. May 17 00:42:49.764348 sshd[3414]: pam_unix(sshd:session): session closed for user core May 17 00:42:49.770180 systemd[1]: sshd@10-10.128.0.28:22-139.178.89.65:47786.service: Deactivated successfully. May 17 00:42:49.772294 systemd[1]: session-9.scope: Deactivated successfully. May 17 00:42:49.773493 systemd-logind[1207]: Session 9 logged out. Waiting for processes to exit. May 17 00:42:49.776164 systemd-logind[1207]: Removed session 9. May 17 00:42:49.812959 systemd[1]: Started sshd@11-10.128.0.28:22-139.178.89.65:47802.service. May 17 00:42:50.109560 sshd[3427]: Accepted publickey for core from 139.178.89.65 port 47802 ssh2: RSA SHA256:jyE3lnafiBGDGJK6dHnApyF/jgfCnjVgkPORJQqM9Ps May 17 00:42:50.112406 sshd[3427]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:42:50.125699 systemd[1]: Started session-10.scope. May 17 00:42:50.127536 systemd-logind[1207]: New session 10 of user core. 
May 17 00:42:50.346041 sshd[3411]: Failed password for root from 46.32.178.46 port 33024 ssh2 May 17 00:42:50.471280 sshd[3427]: pam_unix(sshd:session): session closed for user core May 17 00:42:50.476916 systemd[1]: sshd@11-10.128.0.28:22-139.178.89.65:47802.service: Deactivated successfully. May 17 00:42:50.478254 systemd[1]: session-10.scope: Deactivated successfully. May 17 00:42:50.479423 systemd-logind[1207]: Session 10 logged out. Waiting for processes to exit. May 17 00:42:50.481118 systemd-logind[1207]: Removed session 10. May 17 00:42:50.519226 systemd[1]: Started sshd@12-10.128.0.28:22-139.178.89.65:47808.service. May 17 00:42:50.640645 sshd[3411]: Connection closed by authenticating user root 46.32.178.46 port 33024 [preauth] May 17 00:42:50.642667 systemd[1]: sshd@9-10.128.0.28:22-46.32.178.46:33024.service: Deactivated successfully. May 17 00:42:50.811066 sshd[3437]: Accepted publickey for core from 139.178.89.65 port 47808 ssh2: RSA SHA256:jyE3lnafiBGDGJK6dHnApyF/jgfCnjVgkPORJQqM9Ps May 17 00:42:50.813518 sshd[3437]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:42:50.822509 systemd[1]: Started session-11.scope. May 17 00:42:50.823803 systemd-logind[1207]: New session 11 of user core. May 17 00:42:51.063824 systemd[1]: Started sshd@13-10.128.0.28:22-46.32.178.46:45530.service. May 17 00:42:51.115892 sshd[3437]: pam_unix(sshd:session): session closed for user core May 17 00:42:51.121073 systemd[1]: sshd@12-10.128.0.28:22-139.178.89.65:47808.service: Deactivated successfully. May 17 00:42:51.123201 systemd-logind[1207]: Session 11 logged out. Waiting for processes to exit. May 17 00:42:51.123402 systemd[1]: session-11.scope: Deactivated successfully. May 17 00:42:51.125687 systemd-logind[1207]: Removed session 11. 
May 17 00:42:53.127266 sshd[3449]: Failed password for root from 46.32.178.46 port 45530 ssh2 May 17 00:42:53.465157 sshd[3449]: Connection closed by authenticating user root 46.32.178.46 port 45530 [preauth] May 17 00:42:53.467163 systemd[1]: sshd@13-10.128.0.28:22-46.32.178.46:45530.service: Deactivated successfully. May 17 00:42:53.654519 systemd[1]: Started sshd@14-10.128.0.28:22-46.32.178.46:45534.service. May 17 00:42:55.537147 sshd[3455]: Failed password for root from 46.32.178.46 port 45534 ssh2 May 17 00:42:56.052001 sshd[3455]: Connection closed by authenticating user root 46.32.178.46 port 45534 [preauth] May 17 00:42:56.053929 systemd[1]: sshd@14-10.128.0.28:22-46.32.178.46:45534.service: Deactivated successfully. May 17 00:42:56.163345 systemd[1]: Started sshd@15-10.128.0.28:22-139.178.89.65:47816.service. May 17 00:42:56.264366 systemd[1]: Started sshd@16-10.128.0.28:22-46.32.178.46:45538.service. May 17 00:42:56.461274 sshd[3459]: Accepted publickey for core from 139.178.89.65 port 47816 ssh2: RSA SHA256:jyE3lnafiBGDGJK6dHnApyF/jgfCnjVgkPORJQqM9Ps May 17 00:42:56.463697 sshd[3459]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:42:56.470735 systemd-logind[1207]: New session 12 of user core. May 17 00:42:56.471743 systemd[1]: Started session-12.scope. May 17 00:42:56.757287 sshd[3459]: pam_unix(sshd:session): session closed for user core May 17 00:42:56.762925 systemd-logind[1207]: Session 12 logged out. Waiting for processes to exit. May 17 00:42:56.763212 systemd[1]: sshd@15-10.128.0.28:22-139.178.89.65:47816.service: Deactivated successfully. May 17 00:42:56.764655 systemd[1]: session-12.scope: Deactivated successfully. May 17 00:42:56.766292 systemd-logind[1207]: Removed session 12. 
May 17 00:42:58.645818 sshd[3462]: Failed password for root from 46.32.178.46 port 45538 ssh2 May 17 00:42:58.954546 sshd[3462]: Connection closed by authenticating user root 46.32.178.46 port 45538 [preauth] May 17 00:42:58.956659 systemd[1]: sshd@16-10.128.0.28:22-46.32.178.46:45538.service: Deactivated successfully. May 17 00:42:59.171410 systemd[1]: Started sshd@17-10.128.0.28:22-46.32.178.46:45554.service. May 17 00:43:01.535676 sshd[3476]: Failed password for root from 46.32.178.46 port 45554 ssh2 May 17 00:43:01.767464 sshd[3476]: Connection closed by authenticating user root 46.32.178.46 port 45554 [preauth] May 17 00:43:01.769483 systemd[1]: sshd@17-10.128.0.28:22-46.32.178.46:45554.service: Deactivated successfully. May 17 00:43:01.807244 systemd[1]: Started sshd@18-10.128.0.28:22-139.178.89.65:56262.service. May 17 00:43:02.101879 sshd[3480]: Accepted publickey for core from 139.178.89.65 port 56262 ssh2: RSA SHA256:jyE3lnafiBGDGJK6dHnApyF/jgfCnjVgkPORJQqM9Ps May 17 00:43:02.104064 sshd[3480]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:43:02.110755 systemd[1]: Started sshd@19-10.128.0.28:22-46.32.178.46:33706.service. May 17 00:43:02.120877 systemd[1]: Started session-13.scope. May 17 00:43:02.122897 systemd-logind[1207]: New session 13 of user core. May 17 00:43:02.409348 sshd[3480]: pam_unix(sshd:session): session closed for user core May 17 00:43:02.414911 systemd[1]: sshd@18-10.128.0.28:22-139.178.89.65:56262.service: Deactivated successfully. May 17 00:43:02.416481 systemd[1]: session-13.scope: Deactivated successfully. May 17 00:43:02.417796 systemd-logind[1207]: Session 13 logged out. Waiting for processes to exit. May 17 00:43:02.419444 systemd-logind[1207]: Removed session 13. May 17 00:43:02.460154 systemd[1]: Started sshd@20-10.128.0.28:22-139.178.89.65:56270.service. 
May 17 00:43:02.759481 sshd[3495]: Accepted publickey for core from 139.178.89.65 port 56270 ssh2: RSA SHA256:jyE3lnafiBGDGJK6dHnApyF/jgfCnjVgkPORJQqM9Ps May 17 00:43:02.761491 sshd[3495]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:43:02.769772 systemd[1]: Started session-14.scope. May 17 00:43:02.770414 systemd-logind[1207]: New session 14 of user core. May 17 00:43:03.130700 sshd[3495]: pam_unix(sshd:session): session closed for user core May 17 00:43:03.136861 systemd-logind[1207]: Session 14 logged out. Waiting for processes to exit. May 17 00:43:03.137456 systemd[1]: sshd@20-10.128.0.28:22-139.178.89.65:56270.service: Deactivated successfully. May 17 00:43:03.138882 systemd[1]: session-14.scope: Deactivated successfully. May 17 00:43:03.141895 systemd-logind[1207]: Removed session 14. May 17 00:43:03.177556 systemd[1]: Started sshd@21-10.128.0.28:22-139.178.89.65:56272.service. May 17 00:43:03.473960 sshd[3505]: Accepted publickey for core from 139.178.89.65 port 56272 ssh2: RSA SHA256:jyE3lnafiBGDGJK6dHnApyF/jgfCnjVgkPORJQqM9Ps May 17 00:43:03.475958 sshd[3505]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:43:03.486438 systemd[1]: Started session-15.scope. May 17 00:43:03.487460 systemd-logind[1207]: New session 15 of user core. May 17 00:43:04.130718 sshd[3483]: Failed password for root from 46.32.178.46 port 33706 ssh2 May 17 00:43:04.515602 sshd[3505]: pam_unix(sshd:session): session closed for user core May 17 00:43:04.522853 systemd[1]: sshd@21-10.128.0.28:22-139.178.89.65:56272.service: Deactivated successfully. May 17 00:43:04.524197 systemd[1]: session-15.scope: Deactivated successfully. May 17 00:43:04.524925 systemd-logind[1207]: Session 15 logged out. Waiting for processes to exit. May 17 00:43:04.526947 systemd-logind[1207]: Removed session 15. 
May 17 00:43:04.528976 sshd[3483]: Connection closed by authenticating user root 46.32.178.46 port 33706 [preauth] May 17 00:43:04.531914 systemd[1]: sshd@19-10.128.0.28:22-46.32.178.46:33706.service: Deactivated successfully. May 17 00:43:04.565403 systemd[1]: Started sshd@22-10.128.0.28:22-139.178.89.65:56282.service. May 17 00:43:04.863073 systemd[1]: Started sshd@23-10.128.0.28:22-46.32.178.46:33708.service. May 17 00:43:04.875302 sshd[3523]: Accepted publickey for core from 139.178.89.65 port 56282 ssh2: RSA SHA256:jyE3lnafiBGDGJK6dHnApyF/jgfCnjVgkPORJQqM9Ps May 17 00:43:04.877114 sshd[3523]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:43:04.885965 systemd[1]: Started session-16.scope. May 17 00:43:04.886894 systemd-logind[1207]: New session 16 of user core. May 17 00:43:05.346917 sshd[3523]: pam_unix(sshd:session): session closed for user core May 17 00:43:05.352518 systemd-logind[1207]: Session 16 logged out. Waiting for processes to exit. May 17 00:43:05.355113 systemd[1]: sshd@22-10.128.0.28:22-139.178.89.65:56282.service: Deactivated successfully. May 17 00:43:05.356442 systemd[1]: session-16.scope: Deactivated successfully. May 17 00:43:05.359947 systemd-logind[1207]: Removed session 16. May 17 00:43:05.396399 systemd[1]: Started sshd@24-10.128.0.28:22-139.178.89.65:56298.service. May 17 00:43:05.702297 sshd[3536]: Accepted publickey for core from 139.178.89.65 port 56298 ssh2: RSA SHA256:jyE3lnafiBGDGJK6dHnApyF/jgfCnjVgkPORJQqM9Ps May 17 00:43:05.705137 sshd[3536]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:43:05.713490 systemd[1]: Started session-17.scope. May 17 00:43:05.714502 systemd-logind[1207]: New session 17 of user core. May 17 00:43:06.002705 sshd[3536]: pam_unix(sshd:session): session closed for user core May 17 00:43:06.007990 systemd[1]: sshd@24-10.128.0.28:22-139.178.89.65:56298.service: Deactivated successfully. 
May 17 00:43:06.009353 systemd[1]: session-17.scope: Deactivated successfully. May 17 00:43:06.010515 systemd-logind[1207]: Session 17 logged out. Waiting for processes to exit. May 17 00:43:06.012236 systemd-logind[1207]: Removed session 17. May 17 00:43:07.427350 sshd[3526]: Failed password for root from 46.32.178.46 port 33708 ssh2 May 17 00:43:07.680797 sshd[3526]: Connection closed by authenticating user root 46.32.178.46 port 33708 [preauth] May 17 00:43:07.682816 systemd[1]: sshd@23-10.128.0.28:22-46.32.178.46:33708.service: Deactivated successfully. May 17 00:43:08.256539 systemd[1]: Started sshd@25-10.128.0.28:22-46.32.178.46:33714.service. May 17 00:43:09.975127 sshd[3549]: Failed password for root from 46.32.178.46 port 33714 ssh2 May 17 00:43:10.395473 sshd[3549]: Connection closed by authenticating user root 46.32.178.46 port 33714 [preauth] May 17 00:43:10.397571 systemd[1]: sshd@25-10.128.0.28:22-46.32.178.46:33714.service: Deactivated successfully. May 17 00:43:10.659856 systemd[1]: Started sshd@26-10.128.0.28:22-46.32.178.46:33716.service. May 17 00:43:11.048621 systemd[1]: Started sshd@27-10.128.0.28:22-139.178.89.65:54518.service. May 17 00:43:11.337902 sshd[3558]: Accepted publickey for core from 139.178.89.65 port 54518 ssh2: RSA SHA256:jyE3lnafiBGDGJK6dHnApyF/jgfCnjVgkPORJQqM9Ps May 17 00:43:11.340016 sshd[3558]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:43:11.347151 systemd-logind[1207]: New session 18 of user core. May 17 00:43:11.348020 systemd[1]: Started session-18.scope. May 17 00:43:11.634523 sshd[3558]: pam_unix(sshd:session): session closed for user core May 17 00:43:11.641061 systemd-logind[1207]: Session 18 logged out. Waiting for processes to exit. May 17 00:43:11.641382 systemd[1]: sshd@27-10.128.0.28:22-139.178.89.65:54518.service: Deactivated successfully. May 17 00:43:11.642835 systemd[1]: session-18.scope: Deactivated successfully. 
May 17 00:43:11.644453 systemd-logind[1207]: Removed session 18. May 17 00:43:12.735654 sshd[3553]: Failed password for root from 46.32.178.46 port 33716 ssh2 May 17 00:43:13.516567 sshd[3553]: Connection closed by authenticating user root 46.32.178.46 port 33716 [preauth] May 17 00:43:13.518288 systemd[1]: sshd@26-10.128.0.28:22-46.32.178.46:33716.service: Deactivated successfully. May 17 00:43:14.025372 systemd[1]: Started sshd@28-10.128.0.28:22-46.32.178.46:35266.service. May 17 00:43:15.255006 sshd[3571]: Failed password for root from 46.32.178.46 port 35266 ssh2 May 17 00:43:15.484498 sshd[3571]: Connection closed by authenticating user root 46.32.178.46 port 35266 [preauth] May 17 00:43:15.486569 systemd[1]: sshd@28-10.128.0.28:22-46.32.178.46:35266.service: Deactivated successfully. May 17 00:43:15.706049 systemd[1]: Started sshd@29-10.128.0.28:22-46.32.178.46:35270.service. May 17 00:43:16.682085 systemd[1]: Started sshd@30-10.128.0.28:22-139.178.89.65:57894.service. May 17 00:43:16.974195 sshd[3580]: Accepted publickey for core from 139.178.89.65 port 57894 ssh2: RSA SHA256:jyE3lnafiBGDGJK6dHnApyF/jgfCnjVgkPORJQqM9Ps May 17 00:43:16.976253 sshd[3580]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:43:16.983466 systemd-logind[1207]: New session 19 of user core. May 17 00:43:16.984314 systemd[1]: Started session-19.scope. May 17 00:43:17.265959 sshd[3580]: pam_unix(sshd:session): session closed for user core May 17 00:43:17.271993 systemd[1]: sshd@30-10.128.0.28:22-139.178.89.65:57894.service: Deactivated successfully. May 17 00:43:17.273347 systemd[1]: session-19.scope: Deactivated successfully. May 17 00:43:17.274564 systemd-logind[1207]: Session 19 logged out. Waiting for processes to exit. May 17 00:43:17.276115 systemd-logind[1207]: Removed session 19. 
May 17 00:43:17.541335 sshd[3577]: Failed password for root from 46.32.178.46 port 35270 ssh2 May 17 00:43:17.976806 sshd[3577]: Connection closed by authenticating user root 46.32.178.46 port 35270 [preauth] May 17 00:43:17.979064 systemd[1]: sshd@29-10.128.0.28:22-46.32.178.46:35270.service: Deactivated successfully. May 17 00:43:18.185239 systemd[1]: Started sshd@31-10.128.0.28:22-46.32.178.46:35278.service. May 17 00:43:20.073117 sshd[3593]: Failed password for root from 46.32.178.46 port 35278 ssh2 May 17 00:43:20.519233 sshd[3593]: Connection closed by authenticating user root 46.32.178.46 port 35278 [preauth] May 17 00:43:20.521135 systemd[1]: sshd@31-10.128.0.28:22-46.32.178.46:35278.service: Deactivated successfully. May 17 00:43:20.782987 systemd[1]: Started sshd@32-10.128.0.28:22-46.32.178.46:41432.service. May 17 00:43:22.317087 systemd[1]: Started sshd@33-10.128.0.28:22-139.178.89.65:57898.service. May 17 00:43:22.611224 sshd[3600]: Accepted publickey for core from 139.178.89.65 port 57898 ssh2: RSA SHA256:jyE3lnafiBGDGJK6dHnApyF/jgfCnjVgkPORJQqM9Ps May 17 00:43:22.613459 sshd[3600]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:43:22.620685 systemd-logind[1207]: New session 20 of user core. May 17 00:43:22.621352 systemd[1]: Started session-20.scope. May 17 00:43:22.662054 sshd[3597]: Failed password for root from 46.32.178.46 port 41432 ssh2 May 17 00:43:22.907516 sshd[3600]: pam_unix(sshd:session): session closed for user core May 17 00:43:22.912978 systemd[1]: sshd@33-10.128.0.28:22-139.178.89.65:57898.service: Deactivated successfully. May 17 00:43:22.914366 systemd[1]: session-20.scope: Deactivated successfully. May 17 00:43:22.915437 systemd-logind[1207]: Session 20 logged out. Waiting for processes to exit. May 17 00:43:22.916894 systemd-logind[1207]: Removed session 20. 
May 17 00:43:22.920881 sshd[3597]: Connection closed by authenticating user root 46.32.178.46 port 41432 [preauth] May 17 00:43:22.923040 systemd[1]: sshd@32-10.128.0.28:22-46.32.178.46:41432.service: Deactivated successfully. May 17 00:43:22.955995 systemd[1]: Started sshd@34-10.128.0.28:22-139.178.89.65:57908.service. May 17 00:43:23.252252 sshd[3613]: Accepted publickey for core from 139.178.89.65 port 57908 ssh2: RSA SHA256:jyE3lnafiBGDGJK6dHnApyF/jgfCnjVgkPORJQqM9Ps May 17 00:43:23.254888 sshd[3613]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:43:23.262854 systemd[1]: Started session-21.scope. May 17 00:43:23.264314 systemd-logind[1207]: New session 21 of user core. May 17 00:43:23.554525 systemd[1]: Started sshd@35-10.128.0.28:22-46.32.178.46:41436.service. May 17 00:43:25.127144 env[1216]: time="2025-05-17T00:43:25.127076492Z" level=info msg="StopContainer for \"f7d1e6f4b0f931a78f6d2587452bd31ad035c802e299edcd11e048254bdfa5f8\" with timeout 30 (s)" May 17 00:43:25.128582 env[1216]: time="2025-05-17T00:43:25.128518551Z" level=info msg="Stop container \"f7d1e6f4b0f931a78f6d2587452bd31ad035c802e299edcd11e048254bdfa5f8\" with signal terminated" May 17 00:43:25.147812 systemd[1]: cri-containerd-f7d1e6f4b0f931a78f6d2587452bd31ad035c802e299edcd11e048254bdfa5f8.scope: Deactivated successfully. May 17 00:43:25.186570 systemd[1]: run-containerd-runc-k8s.io-6ec078303da938a2373db331e395e472d61e5d484fd0f503b3a45e4d57f220d3-runc.ItlToA.mount: Deactivated successfully. 
May 17 00:43:25.205103 env[1216]: time="2025-05-17T00:43:25.204982047Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 17 00:43:25.218286 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f7d1e6f4b0f931a78f6d2587452bd31ad035c802e299edcd11e048254bdfa5f8-rootfs.mount: Deactivated successfully. May 17 00:43:25.224367 env[1216]: time="2025-05-17T00:43:25.224297743Z" level=info msg="StopContainer for \"6ec078303da938a2373db331e395e472d61e5d484fd0f503b3a45e4d57f220d3\" with timeout 2 (s)" May 17 00:43:25.225550 env[1216]: time="2025-05-17T00:43:25.225502147Z" level=info msg="Stop container \"6ec078303da938a2373db331e395e472d61e5d484fd0f503b3a45e4d57f220d3\" with signal terminated" May 17 00:43:25.242121 env[1216]: time="2025-05-17T00:43:25.242045929Z" level=info msg="shim disconnected" id=f7d1e6f4b0f931a78f6d2587452bd31ad035c802e299edcd11e048254bdfa5f8 May 17 00:43:25.243050 env[1216]: time="2025-05-17T00:43:25.242870340Z" level=warning msg="cleaning up after shim disconnected" id=f7d1e6f4b0f931a78f6d2587452bd31ad035c802e299edcd11e048254bdfa5f8 namespace=k8s.io May 17 00:43:25.243455 env[1216]: time="2025-05-17T00:43:25.243423299Z" level=info msg="cleaning up dead shim" May 17 00:43:25.250733 systemd-networkd[1020]: lxc_health: Link DOWN May 17 00:43:25.250758 systemd-networkd[1020]: lxc_health: Lost carrier May 17 00:43:25.276702 env[1216]: time="2025-05-17T00:43:25.276560738Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:43:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3669 runtime=io.containerd.runc.v2\n" May 17 00:43:25.279297 systemd[1]: cri-containerd-6ec078303da938a2373db331e395e472d61e5d484fd0f503b3a45e4d57f220d3.scope: Deactivated successfully. 
May 17 00:43:25.279705 systemd[1]: cri-containerd-6ec078303da938a2373db331e395e472d61e5d484fd0f503b3a45e4d57f220d3.scope: Consumed 9.917s CPU time. May 17 00:43:25.282685 env[1216]: time="2025-05-17T00:43:25.282601756Z" level=info msg="StopContainer for \"f7d1e6f4b0f931a78f6d2587452bd31ad035c802e299edcd11e048254bdfa5f8\" returns successfully" May 17 00:43:25.283543 env[1216]: time="2025-05-17T00:43:25.283444818Z" level=info msg="StopPodSandbox for \"7123f71f60697f95c17f78691a094ffc211757efaeb8908340b0aa9803075c27\"" May 17 00:43:25.284132 env[1216]: time="2025-05-17T00:43:25.283581496Z" level=info msg="Container to stop \"f7d1e6f4b0f931a78f6d2587452bd31ad035c802e299edcd11e048254bdfa5f8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:43:25.290267 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7123f71f60697f95c17f78691a094ffc211757efaeb8908340b0aa9803075c27-shm.mount: Deactivated successfully. May 17 00:43:25.304452 systemd[1]: cri-containerd-7123f71f60697f95c17f78691a094ffc211757efaeb8908340b0aa9803075c27.scope: Deactivated successfully. May 17 00:43:25.358256 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7123f71f60697f95c17f78691a094ffc211757efaeb8908340b0aa9803075c27-rootfs.mount: Deactivated successfully. 
May 17 00:43:25.365524 env[1216]: time="2025-05-17T00:43:25.365453816Z" level=info msg="shim disconnected" id=7123f71f60697f95c17f78691a094ffc211757efaeb8908340b0aa9803075c27 May 17 00:43:25.368824 sshd[3622]: Failed password for root from 46.32.178.46 port 41436 ssh2 May 17 00:43:25.369506 env[1216]: time="2025-05-17T00:43:25.369457414Z" level=warning msg="cleaning up after shim disconnected" id=7123f71f60697f95c17f78691a094ffc211757efaeb8908340b0aa9803075c27 namespace=k8s.io May 17 00:43:25.371909 env[1216]: time="2025-05-17T00:43:25.371847286Z" level=info msg="cleaning up dead shim" May 17 00:43:25.372645 env[1216]: time="2025-05-17T00:43:25.372572270Z" level=info msg="shim disconnected" id=6ec078303da938a2373db331e395e472d61e5d484fd0f503b3a45e4d57f220d3 May 17 00:43:25.372899 env[1216]: time="2025-05-17T00:43:25.372852220Z" level=warning msg="cleaning up after shim disconnected" id=6ec078303da938a2373db331e395e472d61e5d484fd0f503b3a45e4d57f220d3 namespace=k8s.io May 17 00:43:25.373071 env[1216]: time="2025-05-17T00:43:25.373043234Z" level=info msg="cleaning up dead shim" May 17 00:43:25.395367 env[1216]: time="2025-05-17T00:43:25.392509756Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:43:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3719 runtime=io.containerd.runc.v2\n" May 17 00:43:25.395367 env[1216]: time="2025-05-17T00:43:25.393124881Z" level=info msg="TearDown network for sandbox \"7123f71f60697f95c17f78691a094ffc211757efaeb8908340b0aa9803075c27\" successfully" May 17 00:43:25.395367 env[1216]: time="2025-05-17T00:43:25.393165293Z" level=info msg="StopPodSandbox for \"7123f71f60697f95c17f78691a094ffc211757efaeb8908340b0aa9803075c27\" returns successfully" May 17 00:43:25.398141 env[1216]: time="2025-05-17T00:43:25.398095244Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:43:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3720 runtime=io.containerd.runc.v2\n" May 17 00:43:25.401850 env[1216]: 
time="2025-05-17T00:43:25.401798423Z" level=info msg="StopContainer for \"6ec078303da938a2373db331e395e472d61e5d484fd0f503b3a45e4d57f220d3\" returns successfully" May 17 00:43:25.402571 env[1216]: time="2025-05-17T00:43:25.402524280Z" level=info msg="StopPodSandbox for \"130c04a67775dc8e4edd1136829e1182fb1606169f9266278e4fd91dda72fc51\"" May 17 00:43:25.402775 env[1216]: time="2025-05-17T00:43:25.402616131Z" level=info msg="Container to stop \"7cda49b9cf1b7ac23ec697df63061d376d173b30dcd1b78c03032606dc31cd16\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:43:25.402775 env[1216]: time="2025-05-17T00:43:25.402685887Z" level=info msg="Container to stop \"4243fc3fee2ed0b2c7fcfb2a628806849b5c8c4a11e522cea353f11b5ff4fc9f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:43:25.402775 env[1216]: time="2025-05-17T00:43:25.402708259Z" level=info msg="Container to stop \"593c8d61f498c51a9104eeb2b5e966b0d941e239d82631d189aa2a93310a70d4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:43:25.402775 env[1216]: time="2025-05-17T00:43:25.402729524Z" level=info msg="Container to stop \"6ec078303da938a2373db331e395e472d61e5d484fd0f503b3a45e4d57f220d3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:43:25.402775 env[1216]: time="2025-05-17T00:43:25.402749053Z" level=info msg="Container to stop \"9fb5829841d3f73ef640449c7ccfa40e966b4115e8c6b63afa1aafeefcbac837\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:43:25.414955 systemd[1]: cri-containerd-130c04a67775dc8e4edd1136829e1182fb1606169f9266278e4fd91dda72fc51.scope: Deactivated successfully. 
May 17 00:43:25.458308 env[1216]: time="2025-05-17T00:43:25.458224447Z" level=info msg="shim disconnected" id=130c04a67775dc8e4edd1136829e1182fb1606169f9266278e4fd91dda72fc51 May 17 00:43:25.458308 env[1216]: time="2025-05-17T00:43:25.458306130Z" level=warning msg="cleaning up after shim disconnected" id=130c04a67775dc8e4edd1136829e1182fb1606169f9266278e4fd91dda72fc51 namespace=k8s.io May 17 00:43:25.458790 env[1216]: time="2025-05-17T00:43:25.458322818Z" level=info msg="cleaning up dead shim" May 17 00:43:25.474972 env[1216]: time="2025-05-17T00:43:25.474912445Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:43:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3765 runtime=io.containerd.runc.v2\n" May 17 00:43:25.475737 env[1216]: time="2025-05-17T00:43:25.475683481Z" level=info msg="TearDown network for sandbox \"130c04a67775dc8e4edd1136829e1182fb1606169f9266278e4fd91dda72fc51\" successfully" May 17 00:43:25.475933 env[1216]: time="2025-05-17T00:43:25.475899034Z" level=info msg="StopPodSandbox for \"130c04a67775dc8e4edd1136829e1182fb1606169f9266278e4fd91dda72fc51\" returns successfully" May 17 00:43:25.572563 kubelet[1999]: I0517 00:43:25.572498 1999 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/983a33dd-b9bb-42c0-9687-80d7c4dc09c1-cilium-run\") pod \"983a33dd-b9bb-42c0-9687-80d7c4dc09c1\" (UID: \"983a33dd-b9bb-42c0-9687-80d7c4dc09c1\") " May 17 00:43:25.573258 kubelet[1999]: I0517 00:43:25.572581 1999 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/983a33dd-b9bb-42c0-9687-80d7c4dc09c1-cilium-config-path\") pod \"983a33dd-b9bb-42c0-9687-80d7c4dc09c1\" (UID: \"983a33dd-b9bb-42c0-9687-80d7c4dc09c1\") " May 17 00:43:25.573258 kubelet[1999]: I0517 00:43:25.572616 1999 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" 
(UniqueName: \"kubernetes.io/host-path/983a33dd-b9bb-42c0-9687-80d7c4dc09c1-cilium-cgroup\") pod \"983a33dd-b9bb-42c0-9687-80d7c4dc09c1\" (UID: \"983a33dd-b9bb-42c0-9687-80d7c4dc09c1\") " May 17 00:43:25.573258 kubelet[1999]: I0517 00:43:25.572684 1999 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmpxc\" (UniqueName: \"kubernetes.io/projected/983a33dd-b9bb-42c0-9687-80d7c4dc09c1-kube-api-access-pmpxc\") pod \"983a33dd-b9bb-42c0-9687-80d7c4dc09c1\" (UID: \"983a33dd-b9bb-42c0-9687-80d7c4dc09c1\") " May 17 00:43:25.573258 kubelet[1999]: I0517 00:43:25.572718 1999 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/983a33dd-b9bb-42c0-9687-80d7c4dc09c1-hostproc\") pod \"983a33dd-b9bb-42c0-9687-80d7c4dc09c1\" (UID: \"983a33dd-b9bb-42c0-9687-80d7c4dc09c1\") " May 17 00:43:25.573258 kubelet[1999]: I0517 00:43:25.572783 1999 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/983a33dd-b9bb-42c0-9687-80d7c4dc09c1-clustermesh-secrets\") pod \"983a33dd-b9bb-42c0-9687-80d7c4dc09c1\" (UID: \"983a33dd-b9bb-42c0-9687-80d7c4dc09c1\") " May 17 00:43:25.573258 kubelet[1999]: I0517 00:43:25.572817 1999 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/983a33dd-b9bb-42c0-9687-80d7c4dc09c1-xtables-lock\") pod \"983a33dd-b9bb-42c0-9687-80d7c4dc09c1\" (UID: \"983a33dd-b9bb-42c0-9687-80d7c4dc09c1\") " May 17 00:43:25.573771 kubelet[1999]: I0517 00:43:25.572846 1999 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/983a33dd-b9bb-42c0-9687-80d7c4dc09c1-lib-modules\") pod \"983a33dd-b9bb-42c0-9687-80d7c4dc09c1\" (UID: \"983a33dd-b9bb-42c0-9687-80d7c4dc09c1\") " May 17 00:43:25.573771 kubelet[1999]: I0517 
00:43:25.572878 1999 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/983a33dd-b9bb-42c0-9687-80d7c4dc09c1-hubble-tls\") pod \"983a33dd-b9bb-42c0-9687-80d7c4dc09c1\" (UID: \"983a33dd-b9bb-42c0-9687-80d7c4dc09c1\") " May 17 00:43:25.573771 kubelet[1999]: I0517 00:43:25.572909 1999 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4xgzv\" (UniqueName: \"kubernetes.io/projected/7cd4e63f-7484-4b51-9d35-c675eef7c780-kube-api-access-4xgzv\") pod \"7cd4e63f-7484-4b51-9d35-c675eef7c780\" (UID: \"7cd4e63f-7484-4b51-9d35-c675eef7c780\") " May 17 00:43:25.573771 kubelet[1999]: I0517 00:43:25.572947 1999 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/983a33dd-b9bb-42c0-9687-80d7c4dc09c1-etc-cni-netd\") pod \"983a33dd-b9bb-42c0-9687-80d7c4dc09c1\" (UID: \"983a33dd-b9bb-42c0-9687-80d7c4dc09c1\") " May 17 00:43:25.573771 kubelet[1999]: I0517 00:43:25.572980 1999 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/983a33dd-b9bb-42c0-9687-80d7c4dc09c1-host-proc-sys-net\") pod \"983a33dd-b9bb-42c0-9687-80d7c4dc09c1\" (UID: \"983a33dd-b9bb-42c0-9687-80d7c4dc09c1\") " May 17 00:43:25.573771 kubelet[1999]: I0517 00:43:25.573011 1999 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/983a33dd-b9bb-42c0-9687-80d7c4dc09c1-bpf-maps\") pod \"983a33dd-b9bb-42c0-9687-80d7c4dc09c1\" (UID: \"983a33dd-b9bb-42c0-9687-80d7c4dc09c1\") " May 17 00:43:25.574207 kubelet[1999]: I0517 00:43:25.573044 1999 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/983a33dd-b9bb-42c0-9687-80d7c4dc09c1-cni-path\") pod \"983a33dd-b9bb-42c0-9687-80d7c4dc09c1\" 
(UID: \"983a33dd-b9bb-42c0-9687-80d7c4dc09c1\") " May 17 00:43:25.574207 kubelet[1999]: I0517 00:43:25.573075 1999 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/983a33dd-b9bb-42c0-9687-80d7c4dc09c1-host-proc-sys-kernel\") pod \"983a33dd-b9bb-42c0-9687-80d7c4dc09c1\" (UID: \"983a33dd-b9bb-42c0-9687-80d7c4dc09c1\") " May 17 00:43:25.574207 kubelet[1999]: I0517 00:43:25.573134 1999 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7cd4e63f-7484-4b51-9d35-c675eef7c780-cilium-config-path\") pod \"7cd4e63f-7484-4b51-9d35-c675eef7c780\" (UID: \"7cd4e63f-7484-4b51-9d35-c675eef7c780\") " May 17 00:43:25.574599 kubelet[1999]: I0517 00:43:25.574535 1999 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/983a33dd-b9bb-42c0-9687-80d7c4dc09c1-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "983a33dd-b9bb-42c0-9687-80d7c4dc09c1" (UID: "983a33dd-b9bb-42c0-9687-80d7c4dc09c1"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:43:25.574876 kubelet[1999]: I0517 00:43:25.574820 1999 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/983a33dd-b9bb-42c0-9687-80d7c4dc09c1-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "983a33dd-b9bb-42c0-9687-80d7c4dc09c1" (UID: "983a33dd-b9bb-42c0-9687-80d7c4dc09c1"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:43:25.577169 kubelet[1999]: I0517 00:43:25.577119 1999 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7cd4e63f-7484-4b51-9d35-c675eef7c780-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7cd4e63f-7484-4b51-9d35-c675eef7c780" (UID: "7cd4e63f-7484-4b51-9d35-c675eef7c780"). 
InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 17 00:43:25.579331 kubelet[1999]: I0517 00:43:25.579286 1999 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/983a33dd-b9bb-42c0-9687-80d7c4dc09c1-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "983a33dd-b9bb-42c0-9687-80d7c4dc09c1" (UID: "983a33dd-b9bb-42c0-9687-80d7c4dc09c1"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 17 00:43:25.579571 kubelet[1999]: I0517 00:43:25.579537 1999 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/983a33dd-b9bb-42c0-9687-80d7c4dc09c1-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "983a33dd-b9bb-42c0-9687-80d7c4dc09c1" (UID: "983a33dd-b9bb-42c0-9687-80d7c4dc09c1"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:43:25.582577 kubelet[1999]: I0517 00:43:25.582525 1999 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/983a33dd-b9bb-42c0-9687-80d7c4dc09c1-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "983a33dd-b9bb-42c0-9687-80d7c4dc09c1" (UID: "983a33dd-b9bb-42c0-9687-80d7c4dc09c1"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 17 00:43:25.585272 kubelet[1999]: I0517 00:43:25.585224 1999 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/983a33dd-b9bb-42c0-9687-80d7c4dc09c1-kube-api-access-pmpxc" (OuterVolumeSpecName: "kube-api-access-pmpxc") pod "983a33dd-b9bb-42c0-9687-80d7c4dc09c1" (UID: "983a33dd-b9bb-42c0-9687-80d7c4dc09c1"). InnerVolumeSpecName "kube-api-access-pmpxc". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" May 17 00:43:25.585513 kubelet[1999]: I0517 00:43:25.585482 1999 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/983a33dd-b9bb-42c0-9687-80d7c4dc09c1-hostproc" (OuterVolumeSpecName: "hostproc") pod "983a33dd-b9bb-42c0-9687-80d7c4dc09c1" (UID: "983a33dd-b9bb-42c0-9687-80d7c4dc09c1"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:43:25.587744 kubelet[1999]: I0517 00:43:25.587697 1999 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cd4e63f-7484-4b51-9d35-c675eef7c780-kube-api-access-4xgzv" (OuterVolumeSpecName: "kube-api-access-4xgzv") pod "7cd4e63f-7484-4b51-9d35-c675eef7c780" (UID: "7cd4e63f-7484-4b51-9d35-c675eef7c780"). InnerVolumeSpecName "kube-api-access-4xgzv". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 17 00:43:25.587874 kubelet[1999]: I0517 00:43:25.587769 1999 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/983a33dd-b9bb-42c0-9687-80d7c4dc09c1-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "983a33dd-b9bb-42c0-9687-80d7c4dc09c1" (UID: "983a33dd-b9bb-42c0-9687-80d7c4dc09c1"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:43:25.587874 kubelet[1999]: I0517 00:43:25.587802 1999 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/983a33dd-b9bb-42c0-9687-80d7c4dc09c1-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "983a33dd-b9bb-42c0-9687-80d7c4dc09c1" (UID: "983a33dd-b9bb-42c0-9687-80d7c4dc09c1"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:43:25.587874 kubelet[1999]: I0517 00:43:25.587830 1999 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/983a33dd-b9bb-42c0-9687-80d7c4dc09c1-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "983a33dd-b9bb-42c0-9687-80d7c4dc09c1" (UID: "983a33dd-b9bb-42c0-9687-80d7c4dc09c1"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:43:25.587874 kubelet[1999]: I0517 00:43:25.587855 1999 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/983a33dd-b9bb-42c0-9687-80d7c4dc09c1-cni-path" (OuterVolumeSpecName: "cni-path") pod "983a33dd-b9bb-42c0-9687-80d7c4dc09c1" (UID: "983a33dd-b9bb-42c0-9687-80d7c4dc09c1"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:43:25.588176 kubelet[1999]: I0517 00:43:25.587881 1999 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/983a33dd-b9bb-42c0-9687-80d7c4dc09c1-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "983a33dd-b9bb-42c0-9687-80d7c4dc09c1" (UID: "983a33dd-b9bb-42c0-9687-80d7c4dc09c1"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:43:25.588176 kubelet[1999]: I0517 00:43:25.587916 1999 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/983a33dd-b9bb-42c0-9687-80d7c4dc09c1-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "983a33dd-b9bb-42c0-9687-80d7c4dc09c1" (UID: "983a33dd-b9bb-42c0-9687-80d7c4dc09c1"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:43:25.590069 kubelet[1999]: I0517 00:43:25.590021 1999 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/983a33dd-b9bb-42c0-9687-80d7c4dc09c1-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "983a33dd-b9bb-42c0-9687-80d7c4dc09c1" (UID: "983a33dd-b9bb-42c0-9687-80d7c4dc09c1"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 17 00:43:25.674377 kubelet[1999]: I0517 00:43:25.673934 1999 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/983a33dd-b9bb-42c0-9687-80d7c4dc09c1-host-proc-sys-kernel\") on node \"ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2\" DevicePath \"\"" May 17 00:43:25.674377 kubelet[1999]: I0517 00:43:25.673991 1999 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7cd4e63f-7484-4b51-9d35-c675eef7c780-cilium-config-path\") on node \"ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2\" DevicePath \"\"" May 17 00:43:25.674377 kubelet[1999]: I0517 00:43:25.674017 1999 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/983a33dd-b9bb-42c0-9687-80d7c4dc09c1-cilium-config-path\") on node \"ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2\" DevicePath \"\"" May 17 00:43:25.674377 kubelet[1999]: I0517 00:43:25.674037 1999 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/983a33dd-b9bb-42c0-9687-80d7c4dc09c1-cilium-cgroup\") on node \"ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2\" DevicePath \"\"" May 17 00:43:25.674377 kubelet[1999]: I0517 00:43:25.674089 1999 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/983a33dd-b9bb-42c0-9687-80d7c4dc09c1-cilium-run\") on node \"ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2\" DevicePath \"\"" May 17 00:43:25.674377 kubelet[1999]: I0517 00:43:25.674118 1999 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pmpxc\" (UniqueName: \"kubernetes.io/projected/983a33dd-b9bb-42c0-9687-80d7c4dc09c1-kube-api-access-pmpxc\") on node \"ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2\" DevicePath \"\"" May 17 00:43:25.674377 kubelet[1999]: I0517 00:43:25.674140 1999 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/983a33dd-b9bb-42c0-9687-80d7c4dc09c1-hostproc\") on node \"ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2\" DevicePath \"\"" May 17 00:43:25.674981 kubelet[1999]: I0517 00:43:25.674159 1999 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/983a33dd-b9bb-42c0-9687-80d7c4dc09c1-clustermesh-secrets\") on node \"ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2\" DevicePath \"\"" May 17 00:43:25.674981 kubelet[1999]: I0517 00:43:25.674179 1999 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/983a33dd-b9bb-42c0-9687-80d7c4dc09c1-hubble-tls\") on node \"ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2\" DevicePath \"\"" May 17 00:43:25.674981 kubelet[1999]: I0517 00:43:25.674195 1999 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4xgzv\" (UniqueName: \"kubernetes.io/projected/7cd4e63f-7484-4b51-9d35-c675eef7c780-kube-api-access-4xgzv\") on node \"ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2\" DevicePath \"\"" May 17 00:43:25.674981 kubelet[1999]: I0517 00:43:25.674211 1999 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/983a33dd-b9bb-42c0-9687-80d7c4dc09c1-xtables-lock\") on node 
\"ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2\" DevicePath \"\"" May 17 00:43:25.674981 kubelet[1999]: I0517 00:43:25.674229 1999 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/983a33dd-b9bb-42c0-9687-80d7c4dc09c1-lib-modules\") on node \"ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2\" DevicePath \"\"" May 17 00:43:25.674981 kubelet[1999]: I0517 00:43:25.674246 1999 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/983a33dd-b9bb-42c0-9687-80d7c4dc09c1-etc-cni-netd\") on node \"ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2\" DevicePath \"\"" May 17 00:43:25.674981 kubelet[1999]: I0517 00:43:25.674264 1999 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/983a33dd-b9bb-42c0-9687-80d7c4dc09c1-host-proc-sys-net\") on node \"ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2\" DevicePath \"\"" May 17 00:43:25.675444 kubelet[1999]: I0517 00:43:25.674282 1999 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/983a33dd-b9bb-42c0-9687-80d7c4dc09c1-cni-path\") on node \"ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2\" DevicePath \"\"" May 17 00:43:25.675444 kubelet[1999]: I0517 00:43:25.674298 1999 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/983a33dd-b9bb-42c0-9687-80d7c4dc09c1-bpf-maps\") on node \"ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2\" DevicePath \"\"" May 17 00:43:25.737008 systemd[1]: Removed slice kubepods-besteffort-pod7cd4e63f_7484_4b51_9d35_c675eef7c780.slice. May 17 00:43:25.741197 systemd[1]: Removed slice kubepods-burstable-pod983a33dd_b9bb_42c0_9687_80d7c4dc09c1.slice. May 17 00:43:25.741401 systemd[1]: kubepods-burstable-pod983a33dd_b9bb_42c0_9687_80d7c4dc09c1.slice: Consumed 10.097s CPU time. 
May 17 00:43:25.922446 sshd[3622]: Connection closed by authenticating user root 46.32.178.46 port 41436 [preauth] May 17 00:43:25.924433 systemd[1]: sshd@35-10.128.0.28:22-46.32.178.46:41436.service: Deactivated successfully. May 17 00:43:26.160756 systemd[1]: Started sshd@36-10.128.0.28:22-46.32.178.46:41440.service. May 17 00:43:26.167128 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6ec078303da938a2373db331e395e472d61e5d484fd0f503b3a45e4d57f220d3-rootfs.mount: Deactivated successfully. May 17 00:43:26.167571 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-130c04a67775dc8e4edd1136829e1182fb1606169f9266278e4fd91dda72fc51-rootfs.mount: Deactivated successfully. May 17 00:43:26.167936 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-130c04a67775dc8e4edd1136829e1182fb1606169f9266278e4fd91dda72fc51-shm.mount: Deactivated successfully. May 17 00:43:26.168260 systemd[1]: var-lib-kubelet-pods-7cd4e63f\x2d7484\x2d4b51\x2d9d35\x2dc675eef7c780-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4xgzv.mount: Deactivated successfully. May 17 00:43:26.168531 systemd[1]: var-lib-kubelet-pods-983a33dd\x2db9bb\x2d42c0\x2d9687\x2d80d7c4dc09c1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpmpxc.mount: Deactivated successfully. May 17 00:43:26.168869 systemd[1]: var-lib-kubelet-pods-983a33dd\x2db9bb\x2d42c0\x2d9687\x2d80d7c4dc09c1-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 17 00:43:26.169173 systemd[1]: var-lib-kubelet-pods-983a33dd\x2db9bb\x2d42c0\x2d9687\x2d80d7c4dc09c1-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
May 17 00:43:26.326714 kubelet[1999]: I0517 00:43:26.326527 1999 scope.go:117] "RemoveContainer" containerID="f7d1e6f4b0f931a78f6d2587452bd31ad035c802e299edcd11e048254bdfa5f8" May 17 00:43:26.333111 env[1216]: time="2025-05-17T00:43:26.332977542Z" level=info msg="RemoveContainer for \"f7d1e6f4b0f931a78f6d2587452bd31ad035c802e299edcd11e048254bdfa5f8\"" May 17 00:43:26.348378 env[1216]: time="2025-05-17T00:43:26.348286545Z" level=info msg="RemoveContainer for \"f7d1e6f4b0f931a78f6d2587452bd31ad035c802e299edcd11e048254bdfa5f8\" returns successfully" May 17 00:43:26.349829 kubelet[1999]: I0517 00:43:26.349753 1999 scope.go:117] "RemoveContainer" containerID="6ec078303da938a2373db331e395e472d61e5d484fd0f503b3a45e4d57f220d3" May 17 00:43:26.354039 env[1216]: time="2025-05-17T00:43:26.353934836Z" level=info msg="RemoveContainer for \"6ec078303da938a2373db331e395e472d61e5d484fd0f503b3a45e4d57f220d3\"" May 17 00:43:26.360906 env[1216]: time="2025-05-17T00:43:26.360834384Z" level=info msg="RemoveContainer for \"6ec078303da938a2373db331e395e472d61e5d484fd0f503b3a45e4d57f220d3\" returns successfully" May 17 00:43:26.361414 kubelet[1999]: I0517 00:43:26.361381 1999 scope.go:117] "RemoveContainer" containerID="9fb5829841d3f73ef640449c7ccfa40e966b4115e8c6b63afa1aafeefcbac837" May 17 00:43:26.366808 env[1216]: time="2025-05-17T00:43:26.366753949Z" level=info msg="RemoveContainer for \"9fb5829841d3f73ef640449c7ccfa40e966b4115e8c6b63afa1aafeefcbac837\"" May 17 00:43:26.374005 env[1216]: time="2025-05-17T00:43:26.373939858Z" level=info msg="RemoveContainer for \"9fb5829841d3f73ef640449c7ccfa40e966b4115e8c6b63afa1aafeefcbac837\" returns successfully" May 17 00:43:26.374267 kubelet[1999]: I0517 00:43:26.374234 1999 scope.go:117] "RemoveContainer" containerID="593c8d61f498c51a9104eeb2b5e966b0d941e239d82631d189aa2a93310a70d4" May 17 00:43:26.376048 env[1216]: time="2025-05-17T00:43:26.375995870Z" level=info msg="RemoveContainer for 
\"593c8d61f498c51a9104eeb2b5e966b0d941e239d82631d189aa2a93310a70d4\"" May 17 00:43:26.381375 env[1216]: time="2025-05-17T00:43:26.381316792Z" level=info msg="RemoveContainer for \"593c8d61f498c51a9104eeb2b5e966b0d941e239d82631d189aa2a93310a70d4\" returns successfully" May 17 00:43:26.381653 kubelet[1999]: I0517 00:43:26.381589 1999 scope.go:117] "RemoveContainer" containerID="4243fc3fee2ed0b2c7fcfb2a628806849b5c8c4a11e522cea353f11b5ff4fc9f" May 17 00:43:26.383349 env[1216]: time="2025-05-17T00:43:26.383288508Z" level=info msg="RemoveContainer for \"4243fc3fee2ed0b2c7fcfb2a628806849b5c8c4a11e522cea353f11b5ff4fc9f\"" May 17 00:43:26.388319 env[1216]: time="2025-05-17T00:43:26.388257892Z" level=info msg="RemoveContainer for \"4243fc3fee2ed0b2c7fcfb2a628806849b5c8c4a11e522cea353f11b5ff4fc9f\" returns successfully" May 17 00:43:26.388574 kubelet[1999]: I0517 00:43:26.388538 1999 scope.go:117] "RemoveContainer" containerID="7cda49b9cf1b7ac23ec697df63061d376d173b30dcd1b78c03032606dc31cd16" May 17 00:43:26.390284 env[1216]: time="2025-05-17T00:43:26.390213672Z" level=info msg="RemoveContainer for \"7cda49b9cf1b7ac23ec697df63061d376d173b30dcd1b78c03032606dc31cd16\"" May 17 00:43:26.395385 env[1216]: time="2025-05-17T00:43:26.395319756Z" level=info msg="RemoveContainer for \"7cda49b9cf1b7ac23ec697df63061d376d173b30dcd1b78c03032606dc31cd16\" returns successfully" May 17 00:43:26.395772 kubelet[1999]: I0517 00:43:26.395723 1999 scope.go:117] "RemoveContainer" containerID="6ec078303da938a2373db331e395e472d61e5d484fd0f503b3a45e4d57f220d3" May 17 00:43:26.396197 env[1216]: time="2025-05-17T00:43:26.396098254Z" level=error msg="ContainerStatus for \"6ec078303da938a2373db331e395e472d61e5d484fd0f503b3a45e4d57f220d3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6ec078303da938a2373db331e395e472d61e5d484fd0f503b3a45e4d57f220d3\": not found" May 17 00:43:26.396485 kubelet[1999]: E0517 00:43:26.396352 1999 log.go:32] "ContainerStatus 
from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6ec078303da938a2373db331e395e472d61e5d484fd0f503b3a45e4d57f220d3\": not found" containerID="6ec078303da938a2373db331e395e472d61e5d484fd0f503b3a45e4d57f220d3" May 17 00:43:26.396587 kubelet[1999]: I0517 00:43:26.396466 1999 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6ec078303da938a2373db331e395e472d61e5d484fd0f503b3a45e4d57f220d3"} err="failed to get container status \"6ec078303da938a2373db331e395e472d61e5d484fd0f503b3a45e4d57f220d3\": rpc error: code = NotFound desc = an error occurred when try to find container \"6ec078303da938a2373db331e395e472d61e5d484fd0f503b3a45e4d57f220d3\": not found" May 17 00:43:26.396761 kubelet[1999]: I0517 00:43:26.396588 1999 scope.go:117] "RemoveContainer" containerID="9fb5829841d3f73ef640449c7ccfa40e966b4115e8c6b63afa1aafeefcbac837" May 17 00:43:26.397045 env[1216]: time="2025-05-17T00:43:26.396931325Z" level=error msg="ContainerStatus for \"9fb5829841d3f73ef640449c7ccfa40e966b4115e8c6b63afa1aafeefcbac837\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9fb5829841d3f73ef640449c7ccfa40e966b4115e8c6b63afa1aafeefcbac837\": not found" May 17 00:43:26.397229 kubelet[1999]: E0517 00:43:26.397159 1999 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9fb5829841d3f73ef640449c7ccfa40e966b4115e8c6b63afa1aafeefcbac837\": not found" containerID="9fb5829841d3f73ef640449c7ccfa40e966b4115e8c6b63afa1aafeefcbac837" May 17 00:43:26.397229 kubelet[1999]: I0517 00:43:26.397198 1999 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9fb5829841d3f73ef640449c7ccfa40e966b4115e8c6b63afa1aafeefcbac837"} err="failed to get container status 
\"9fb5829841d3f73ef640449c7ccfa40e966b4115e8c6b63afa1aafeefcbac837\": rpc error: code = NotFound desc = an error occurred when try to find container \"9fb5829841d3f73ef640449c7ccfa40e966b4115e8c6b63afa1aafeefcbac837\": not found" May 17 00:43:26.397664 kubelet[1999]: I0517 00:43:26.397235 1999 scope.go:117] "RemoveContainer" containerID="593c8d61f498c51a9104eeb2b5e966b0d941e239d82631d189aa2a93310a70d4" May 17 00:43:26.397782 env[1216]: time="2025-05-17T00:43:26.397503139Z" level=error msg="ContainerStatus for \"593c8d61f498c51a9104eeb2b5e966b0d941e239d82631d189aa2a93310a70d4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"593c8d61f498c51a9104eeb2b5e966b0d941e239d82631d189aa2a93310a70d4\": not found" May 17 00:43:26.397857 kubelet[1999]: E0517 00:43:26.397739 1999 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"593c8d61f498c51a9104eeb2b5e966b0d941e239d82631d189aa2a93310a70d4\": not found" containerID="593c8d61f498c51a9104eeb2b5e966b0d941e239d82631d189aa2a93310a70d4" May 17 00:43:26.397857 kubelet[1999]: I0517 00:43:26.397774 1999 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"593c8d61f498c51a9104eeb2b5e966b0d941e239d82631d189aa2a93310a70d4"} err="failed to get container status \"593c8d61f498c51a9104eeb2b5e966b0d941e239d82631d189aa2a93310a70d4\": rpc error: code = NotFound desc = an error occurred when try to find container \"593c8d61f498c51a9104eeb2b5e966b0d941e239d82631d189aa2a93310a70d4\": not found" May 17 00:43:26.397857 kubelet[1999]: I0517 00:43:26.397797 1999 scope.go:117] "RemoveContainer" containerID="4243fc3fee2ed0b2c7fcfb2a628806849b5c8c4a11e522cea353f11b5ff4fc9f" May 17 00:43:26.398149 env[1216]: time="2025-05-17T00:43:26.398059108Z" level=error msg="ContainerStatus for \"4243fc3fee2ed0b2c7fcfb2a628806849b5c8c4a11e522cea353f11b5ff4fc9f\" failed" error="rpc 
error: code = NotFound desc = an error occurred when try to find container \"4243fc3fee2ed0b2c7fcfb2a628806849b5c8c4a11e522cea353f11b5ff4fc9f\": not found" May 17 00:43:26.398299 kubelet[1999]: E0517 00:43:26.398254 1999 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4243fc3fee2ed0b2c7fcfb2a628806849b5c8c4a11e522cea353f11b5ff4fc9f\": not found" containerID="4243fc3fee2ed0b2c7fcfb2a628806849b5c8c4a11e522cea353f11b5ff4fc9f" May 17 00:43:26.398408 kubelet[1999]: I0517 00:43:26.398301 1999 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4243fc3fee2ed0b2c7fcfb2a628806849b5c8c4a11e522cea353f11b5ff4fc9f"} err="failed to get container status \"4243fc3fee2ed0b2c7fcfb2a628806849b5c8c4a11e522cea353f11b5ff4fc9f\": rpc error: code = NotFound desc = an error occurred when try to find container \"4243fc3fee2ed0b2c7fcfb2a628806849b5c8c4a11e522cea353f11b5ff4fc9f\": not found" May 17 00:43:26.398408 kubelet[1999]: I0517 00:43:26.398329 1999 scope.go:117] "RemoveContainer" containerID="7cda49b9cf1b7ac23ec697df63061d376d173b30dcd1b78c03032606dc31cd16" May 17 00:43:26.398659 env[1216]: time="2025-05-17T00:43:26.398562196Z" level=error msg="ContainerStatus for \"7cda49b9cf1b7ac23ec697df63061d376d173b30dcd1b78c03032606dc31cd16\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7cda49b9cf1b7ac23ec697df63061d376d173b30dcd1b78c03032606dc31cd16\": not found" May 17 00:43:26.398823 kubelet[1999]: E0517 00:43:26.398790 1999 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7cda49b9cf1b7ac23ec697df63061d376d173b30dcd1b78c03032606dc31cd16\": not found" containerID="7cda49b9cf1b7ac23ec697df63061d376d173b30dcd1b78c03032606dc31cd16" May 17 00:43:26.398936 kubelet[1999]: I0517 00:43:26.398840 1999 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7cda49b9cf1b7ac23ec697df63061d376d173b30dcd1b78c03032606dc31cd16"} err="failed to get container status \"7cda49b9cf1b7ac23ec697df63061d376d173b30dcd1b78c03032606dc31cd16\": rpc error: code = NotFound desc = an error occurred when try to find container \"7cda49b9cf1b7ac23ec697df63061d376d173b30dcd1b78c03032606dc31cd16\": not found" May 17 00:43:27.077450 sshd[3613]: pam_unix(sshd:session): session closed for user core May 17 00:43:27.082753 systemd[1]: sshd@34-10.128.0.28:22-139.178.89.65:57908.service: Deactivated successfully. May 17 00:43:27.084194 systemd[1]: session-21.scope: Deactivated successfully. May 17 00:43:27.084642 systemd[1]: session-21.scope: Consumed 1.047s CPU time. May 17 00:43:27.086685 systemd-logind[1207]: Session 21 logged out. Waiting for processes to exit. May 17 00:43:27.088228 systemd-logind[1207]: Removed session 21. May 17 00:43:27.125986 systemd[1]: Started sshd@37-10.128.0.28:22-139.178.89.65:53180.service. May 17 00:43:27.415507 sshd[3791]: Accepted publickey for core from 139.178.89.65 port 53180 ssh2: RSA SHA256:jyE3lnafiBGDGJK6dHnApyF/jgfCnjVgkPORJQqM9Ps May 17 00:43:27.418142 sshd[3791]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:43:27.425747 systemd-logind[1207]: New session 22 of user core. May 17 00:43:27.426488 systemd[1]: Started session-22.scope. 
May 17 00:43:27.730874 kubelet[1999]: I0517 00:43:27.730718 1999 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7cd4e63f-7484-4b51-9d35-c675eef7c780" path="/var/lib/kubelet/pods/7cd4e63f-7484-4b51-9d35-c675eef7c780/volumes" May 17 00:43:27.731938 kubelet[1999]: I0517 00:43:27.731891 1999 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="983a33dd-b9bb-42c0-9687-80d7c4dc09c1" path="/var/lib/kubelet/pods/983a33dd-b9bb-42c0-9687-80d7c4dc09c1/volumes" May 17 00:43:27.939530 kubelet[1999]: E0517 00:43:27.939484 1999 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 17 00:43:28.044481 sshd[3787]: Failed password for root from 46.32.178.46 port 41440 ssh2 May 17 00:43:28.335714 sshd[3787]: Connection closed by authenticating user root 46.32.178.46 port 41440 [preauth] May 17 00:43:28.336672 systemd[1]: sshd@36-10.128.0.28:22-46.32.178.46:41440.service: Deactivated successfully. May 17 00:43:28.650356 systemd[1]: Started sshd@38-10.128.0.28:22-46.32.178.46:41448.service. May 17 00:43:28.728664 kubelet[1999]: I0517 00:43:28.727707 1999 memory_manager.go:355] "RemoveStaleState removing state" podUID="983a33dd-b9bb-42c0-9687-80d7c4dc09c1" containerName="cilium-agent" May 17 00:43:28.728664 kubelet[1999]: I0517 00:43:28.727752 1999 memory_manager.go:355] "RemoveStaleState removing state" podUID="7cd4e63f-7484-4b51-9d35-c675eef7c780" containerName="cilium-operator" May 17 00:43:28.734964 sshd[3791]: pam_unix(sshd:session): session closed for user core May 17 00:43:28.741468 systemd[1]: sshd@37-10.128.0.28:22-139.178.89.65:53180.service: Deactivated successfully. May 17 00:43:28.742944 systemd[1]: session-22.scope: Deactivated successfully. May 17 00:43:28.743191 systemd[1]: session-22.scope: Consumed 1.044s CPU time. May 17 00:43:28.747157 systemd-logind[1207]: Session 22 logged out. 
Waiting for processes to exit. May 17 00:43:28.751866 systemd-logind[1207]: Removed session 22. May 17 00:43:28.763154 systemd[1]: Created slice kubepods-burstable-pode4c09d53_027b_4dc7_a610_10f4a2dc87b5.slice. May 17 00:43:28.782529 systemd[1]: Started sshd@39-10.128.0.28:22-139.178.89.65:53188.service. May 17 00:43:28.897374 kubelet[1999]: I0517 00:43:28.897315 1999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e4c09d53-027b-4dc7-a610-10f4a2dc87b5-clustermesh-secrets\") pod \"cilium-msvjf\" (UID: \"e4c09d53-027b-4dc7-a610-10f4a2dc87b5\") " pod="kube-system/cilium-msvjf" May 17 00:43:28.898036 kubelet[1999]: I0517 00:43:28.897990 1999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e4c09d53-027b-4dc7-a610-10f4a2dc87b5-host-proc-sys-net\") pod \"cilium-msvjf\" (UID: \"e4c09d53-027b-4dc7-a610-10f4a2dc87b5\") " pod="kube-system/cilium-msvjf" May 17 00:43:28.898290 kubelet[1999]: I0517 00:43:28.898267 1999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e4c09d53-027b-4dc7-a610-10f4a2dc87b5-host-proc-sys-kernel\") pod \"cilium-msvjf\" (UID: \"e4c09d53-027b-4dc7-a610-10f4a2dc87b5\") " pod="kube-system/cilium-msvjf" May 17 00:43:28.898475 kubelet[1999]: I0517 00:43:28.898452 1999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e4c09d53-027b-4dc7-a610-10f4a2dc87b5-cilium-run\") pod \"cilium-msvjf\" (UID: \"e4c09d53-027b-4dc7-a610-10f4a2dc87b5\") " pod="kube-system/cilium-msvjf" May 17 00:43:28.898619 kubelet[1999]: I0517 00:43:28.898600 1999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" 
(UniqueName: \"kubernetes.io/host-path/e4c09d53-027b-4dc7-a610-10f4a2dc87b5-cni-path\") pod \"cilium-msvjf\" (UID: \"e4c09d53-027b-4dc7-a610-10f4a2dc87b5\") " pod="kube-system/cilium-msvjf" May 17 00:43:28.898800 kubelet[1999]: I0517 00:43:28.898770 1999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e4c09d53-027b-4dc7-a610-10f4a2dc87b5-cilium-ipsec-secrets\") pod \"cilium-msvjf\" (UID: \"e4c09d53-027b-4dc7-a610-10f4a2dc87b5\") " pod="kube-system/cilium-msvjf" May 17 00:43:28.898954 kubelet[1999]: I0517 00:43:28.898935 1999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e4c09d53-027b-4dc7-a610-10f4a2dc87b5-bpf-maps\") pod \"cilium-msvjf\" (UID: \"e4c09d53-027b-4dc7-a610-10f4a2dc87b5\") " pod="kube-system/cilium-msvjf" May 17 00:43:28.899115 kubelet[1999]: I0517 00:43:28.899086 1999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e4c09d53-027b-4dc7-a610-10f4a2dc87b5-hostproc\") pod \"cilium-msvjf\" (UID: \"e4c09d53-027b-4dc7-a610-10f4a2dc87b5\") " pod="kube-system/cilium-msvjf" May 17 00:43:28.899255 kubelet[1999]: I0517 00:43:28.899236 1999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e4c09d53-027b-4dc7-a610-10f4a2dc87b5-hubble-tls\") pod \"cilium-msvjf\" (UID: \"e4c09d53-027b-4dc7-a610-10f4a2dc87b5\") " pod="kube-system/cilium-msvjf" May 17 00:43:28.899403 kubelet[1999]: I0517 00:43:28.899384 1999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9xbf\" (UniqueName: \"kubernetes.io/projected/e4c09d53-027b-4dc7-a610-10f4a2dc87b5-kube-api-access-h9xbf\") pod \"cilium-msvjf\" (UID: 
\"e4c09d53-027b-4dc7-a610-10f4a2dc87b5\") " pod="kube-system/cilium-msvjf" May 17 00:43:28.899542 kubelet[1999]: I0517 00:43:28.899522 1999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e4c09d53-027b-4dc7-a610-10f4a2dc87b5-lib-modules\") pod \"cilium-msvjf\" (UID: \"e4c09d53-027b-4dc7-a610-10f4a2dc87b5\") " pod="kube-system/cilium-msvjf" May 17 00:43:28.899758 kubelet[1999]: I0517 00:43:28.899721 1999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e4c09d53-027b-4dc7-a610-10f4a2dc87b5-cilium-cgroup\") pod \"cilium-msvjf\" (UID: \"e4c09d53-027b-4dc7-a610-10f4a2dc87b5\") " pod="kube-system/cilium-msvjf" May 17 00:43:28.899875 kubelet[1999]: I0517 00:43:28.899776 1999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e4c09d53-027b-4dc7-a610-10f4a2dc87b5-xtables-lock\") pod \"cilium-msvjf\" (UID: \"e4c09d53-027b-4dc7-a610-10f4a2dc87b5\") " pod="kube-system/cilium-msvjf" May 17 00:43:28.899875 kubelet[1999]: I0517 00:43:28.899810 1999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e4c09d53-027b-4dc7-a610-10f4a2dc87b5-cilium-config-path\") pod \"cilium-msvjf\" (UID: \"e4c09d53-027b-4dc7-a610-10f4a2dc87b5\") " pod="kube-system/cilium-msvjf" May 17 00:43:28.899875 kubelet[1999]: I0517 00:43:28.899849 1999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e4c09d53-027b-4dc7-a610-10f4a2dc87b5-etc-cni-netd\") pod \"cilium-msvjf\" (UID: \"e4c09d53-027b-4dc7-a610-10f4a2dc87b5\") " pod="kube-system/cilium-msvjf" May 17 00:43:29.087231 env[1216]: 
time="2025-05-17T00:43:29.086943634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-msvjf,Uid:e4c09d53-027b-4dc7-a610-10f4a2dc87b5,Namespace:kube-system,Attempt:0,}" May 17 00:43:29.115346 env[1216]: time="2025-05-17T00:43:29.115234014Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:43:29.115602 env[1216]: time="2025-05-17T00:43:29.115295785Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:43:29.115602 env[1216]: time="2025-05-17T00:43:29.115330328Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:43:29.115602 env[1216]: time="2025-05-17T00:43:29.115553291Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ce63efc1a458a86b60bfc8c9169dc17cbaaccb1b203d1066e40e83fd4ac70b59 pid=3819 runtime=io.containerd.runc.v2 May 17 00:43:29.127689 sshd[3805]: Accepted publickey for core from 139.178.89.65 port 53188 ssh2: RSA SHA256:jyE3lnafiBGDGJK6dHnApyF/jgfCnjVgkPORJQqM9Ps May 17 00:43:29.128785 sshd[3805]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:43:29.139087 systemd[1]: Started session-23.scope. May 17 00:43:29.141692 systemd-logind[1207]: New session 23 of user core. May 17 00:43:29.157717 systemd[1]: Started cri-containerd-ce63efc1a458a86b60bfc8c9169dc17cbaaccb1b203d1066e40e83fd4ac70b59.scope. 
May 17 00:43:29.204414 env[1216]: time="2025-05-17T00:43:29.204350190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-msvjf,Uid:e4c09d53-027b-4dc7-a610-10f4a2dc87b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"ce63efc1a458a86b60bfc8c9169dc17cbaaccb1b203d1066e40e83fd4ac70b59\"" May 17 00:43:29.210978 env[1216]: time="2025-05-17T00:43:29.210756717Z" level=info msg="CreateContainer within sandbox \"ce63efc1a458a86b60bfc8c9169dc17cbaaccb1b203d1066e40e83fd4ac70b59\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 17 00:43:29.228672 env[1216]: time="2025-05-17T00:43:29.228586292Z" level=info msg="CreateContainer within sandbox \"ce63efc1a458a86b60bfc8c9169dc17cbaaccb1b203d1066e40e83fd4ac70b59\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"13fb52e5621f5e88dd44cec201c70d27171af694e4bc70a4aded11c888a3053b\"" May 17 00:43:29.230151 env[1216]: time="2025-05-17T00:43:29.230071073Z" level=info msg="StartContainer for \"13fb52e5621f5e88dd44cec201c70d27171af694e4bc70a4aded11c888a3053b\"" May 17 00:43:29.256748 systemd[1]: Started cri-containerd-13fb52e5621f5e88dd44cec201c70d27171af694e4bc70a4aded11c888a3053b.scope. May 17 00:43:29.275132 systemd[1]: cri-containerd-13fb52e5621f5e88dd44cec201c70d27171af694e4bc70a4aded11c888a3053b.scope: Deactivated successfully. 
May 17 00:43:29.314587 env[1216]: time="2025-05-17T00:43:29.314508066Z" level=info msg="shim disconnected" id=13fb52e5621f5e88dd44cec201c70d27171af694e4bc70a4aded11c888a3053b May 17 00:43:29.315224 env[1216]: time="2025-05-17T00:43:29.315170700Z" level=warning msg="cleaning up after shim disconnected" id=13fb52e5621f5e88dd44cec201c70d27171af694e4bc70a4aded11c888a3053b namespace=k8s.io May 17 00:43:29.315647 env[1216]: time="2025-05-17T00:43:29.315602007Z" level=info msg="cleaning up dead shim" May 17 00:43:29.330651 env[1216]: time="2025-05-17T00:43:29.330561664Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:43:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3884 runtime=io.containerd.runc.v2\ntime=\"2025-05-17T00:43:29Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/13fb52e5621f5e88dd44cec201c70d27171af694e4bc70a4aded11c888a3053b/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" May 17 00:43:29.331563 env[1216]: time="2025-05-17T00:43:29.331332139Z" level=error msg="copy shim log" error="read /proc/self/fd/40: file already closed" May 17 00:43:29.332522 env[1216]: time="2025-05-17T00:43:29.332419062Z" level=error msg="Failed to pipe stderr of container \"13fb52e5621f5e88dd44cec201c70d27171af694e4bc70a4aded11c888a3053b\"" error="reading from a closed fifo" May 17 00:43:29.332801 env[1216]: time="2025-05-17T00:43:29.332419856Z" level=error msg="Failed to pipe stdout of container \"13fb52e5621f5e88dd44cec201c70d27171af694e4bc70a4aded11c888a3053b\"" error="reading from a closed fifo" May 17 00:43:29.335180 env[1216]: time="2025-05-17T00:43:29.335078387Z" level=error msg="StartContainer for \"13fb52e5621f5e88dd44cec201c70d27171af694e4bc70a4aded11c888a3053b\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: 
write /proc/self/attr/keycreate: invalid argument: unknown" May 17 00:43:29.335658 kubelet[1999]: E0517 00:43:29.335487 1999 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="13fb52e5621f5e88dd44cec201c70d27171af694e4bc70a4aded11c888a3053b" May 17 00:43:29.337384 kubelet[1999]: E0517 00:43:29.335956 1999 kuberuntime_manager.go:1341] "Unhandled Error" err=< May 17 00:43:29.337384 kubelet[1999]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; May 17 00:43:29.337384 kubelet[1999]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; May 17 00:43:29.337384 kubelet[1999]: rm /hostbin/cilium-mount May 17 00:43:29.337698 kubelet[1999]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h9xbf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-msvjf_kube-system(e4c09d53-027b-4dc7-a610-10f4a2dc87b5): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown May 17 00:43:29.337698 kubelet[1999]: > logger="UnhandledError" May 17 00:43:29.339391 kubelet[1999]: E0517 00:43:29.339296 1999 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-msvjf" podUID="e4c09d53-027b-4dc7-a610-10f4a2dc87b5" May 17 00:43:29.374607 env[1216]: time="2025-05-17T00:43:29.374550142Z" level=info msg="CreateContainer within sandbox \"ce63efc1a458a86b60bfc8c9169dc17cbaaccb1b203d1066e40e83fd4ac70b59\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" May 17 00:43:29.402522 env[1216]: time="2025-05-17T00:43:29.402453507Z" level=info msg="CreateContainer within sandbox \"ce63efc1a458a86b60bfc8c9169dc17cbaaccb1b203d1066e40e83fd4ac70b59\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"eb1a0dfba90e6041f4ad6dc179279dd62164f8f9faa9453018c8f120301e131e\"" May 17 00:43:29.405760 env[1216]: time="2025-05-17T00:43:29.405490354Z" level=info msg="StartContainer for \"eb1a0dfba90e6041f4ad6dc179279dd62164f8f9faa9453018c8f120301e131e\"" May 17 00:43:29.447552 systemd[1]: Started cri-containerd-eb1a0dfba90e6041f4ad6dc179279dd62164f8f9faa9453018c8f120301e131e.scope. May 17 00:43:29.481439 systemd[1]: cri-containerd-eb1a0dfba90e6041f4ad6dc179279dd62164f8f9faa9453018c8f120301e131e.scope: Deactivated successfully. 
May 17 00:43:29.498654 env[1216]: time="2025-05-17T00:43:29.498550081Z" level=info msg="shim disconnected" id=eb1a0dfba90e6041f4ad6dc179279dd62164f8f9faa9453018c8f120301e131e May 17 00:43:29.498654 env[1216]: time="2025-05-17T00:43:29.498651103Z" level=warning msg="cleaning up after shim disconnected" id=eb1a0dfba90e6041f4ad6dc179279dd62164f8f9faa9453018c8f120301e131e namespace=k8s.io May 17 00:43:29.499102 env[1216]: time="2025-05-17T00:43:29.498671141Z" level=info msg="cleaning up dead shim" May 17 00:43:29.525808 env[1216]: time="2025-05-17T00:43:29.525614255Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:43:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3920 runtime=io.containerd.runc.v2\ntime=\"2025-05-17T00:43:29Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/eb1a0dfba90e6041f4ad6dc179279dd62164f8f9faa9453018c8f120301e131e/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" May 17 00:43:29.527655 env[1216]: time="2025-05-17T00:43:29.526128261Z" level=error msg="copy shim log" error="read /proc/self/fd/40: file already closed" May 17 00:43:29.530063 env[1216]: time="2025-05-17T00:43:29.529984743Z" level=error msg="Failed to pipe stdout of container \"eb1a0dfba90e6041f4ad6dc179279dd62164f8f9faa9453018c8f120301e131e\"" error="reading from a closed fifo" May 17 00:43:29.530228 env[1216]: time="2025-05-17T00:43:29.530132759Z" level=error msg="Failed to pipe stderr of container \"eb1a0dfba90e6041f4ad6dc179279dd62164f8f9faa9453018c8f120301e131e\"" error="reading from a closed fifo" May 17 00:43:29.534994 sshd[3805]: pam_unix(sshd:session): session closed for user core May 17 00:43:29.541836 systemd[1]: sshd@39-10.128.0.28:22-139.178.89.65:53188.service: Deactivated successfully. May 17 00:43:29.543163 systemd[1]: session-23.scope: Deactivated successfully. May 17 00:43:29.546322 systemd-logind[1207]: Session 23 logged out. 
Waiting for processes to exit. May 17 00:43:29.551969 systemd-logind[1207]: Removed session 23. May 17 00:43:29.555156 env[1216]: time="2025-05-17T00:43:29.555056009Z" level=error msg="StartContainer for \"eb1a0dfba90e6041f4ad6dc179279dd62164f8f9faa9453018c8f120301e131e\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" May 17 00:43:29.556707 kubelet[1999]: E0517 00:43:29.555716 1999 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="eb1a0dfba90e6041f4ad6dc179279dd62164f8f9faa9453018c8f120301e131e" May 17 00:43:29.556707 kubelet[1999]: E0517 00:43:29.555962 1999 kuberuntime_manager.go:1341] "Unhandled Error" err=< May 17 00:43:29.556707 kubelet[1999]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; May 17 00:43:29.556707 kubelet[1999]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; May 17 00:43:29.556707 kubelet[1999]: rm /hostbin/cilium-mount May 17 00:43:29.556707 kubelet[1999]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h9xbf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-msvjf_kube-system(e4c09d53-027b-4dc7-a610-10f4a2dc87b5): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown May 17 00:43:29.556707 kubelet[1999]: > logger="UnhandledError" May 17 00:43:29.557903 kubelet[1999]: E0517 00:43:29.557818 1999 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-msvjf" podUID="e4c09d53-027b-4dc7-a610-10f4a2dc87b5" May 17 00:43:29.581563 systemd[1]: Started sshd@40-10.128.0.28:22-139.178.89.65:53204.service. May 17 00:43:29.618271 kubelet[1999]: I0517 00:43:29.618095 1999 setters.go:602] "Node became not ready" node="ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-17T00:43:29Z","lastTransitionTime":"2025-05-17T00:43:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} May 17 00:43:29.885432 sshd[3934]: Accepted publickey for core from 139.178.89.65 port 53204 ssh2: RSA SHA256:jyE3lnafiBGDGJK6dHnApyF/jgfCnjVgkPORJQqM9Ps May 17 00:43:29.887463 sshd[3934]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:43:29.897301 systemd[1]: Started session-24.scope. May 17 00:43:29.898335 systemd-logind[1207]: New session 24 of user core. 
May 17 00:43:30.356348 kubelet[1999]: I0517 00:43:30.356303 1999 scope.go:117] "RemoveContainer" containerID="13fb52e5621f5e88dd44cec201c70d27171af694e4bc70a4aded11c888a3053b" May 17 00:43:30.358024 env[1216]: time="2025-05-17T00:43:30.357973450Z" level=info msg="StopPodSandbox for \"ce63efc1a458a86b60bfc8c9169dc17cbaaccb1b203d1066e40e83fd4ac70b59\"" May 17 00:43:30.359394 env[1216]: time="2025-05-17T00:43:30.359302779Z" level=info msg="Container to stop \"eb1a0dfba90e6041f4ad6dc179279dd62164f8f9faa9453018c8f120301e131e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:43:30.359653 env[1216]: time="2025-05-17T00:43:30.359588876Z" level=info msg="Container to stop \"13fb52e5621f5e88dd44cec201c70d27171af694e4bc70a4aded11c888a3053b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:43:30.367387 env[1216]: time="2025-05-17T00:43:30.359195640Z" level=info msg="RemoveContainer for \"13fb52e5621f5e88dd44cec201c70d27171af694e4bc70a4aded11c888a3053b\"" May 17 00:43:30.367387 env[1216]: time="2025-05-17T00:43:30.365213816Z" level=info msg="RemoveContainer for \"13fb52e5621f5e88dd44cec201c70d27171af694e4bc70a4aded11c888a3053b\" returns successfully" May 17 00:43:30.367417 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ce63efc1a458a86b60bfc8c9169dc17cbaaccb1b203d1066e40e83fd4ac70b59-shm.mount: Deactivated successfully. May 17 00:43:30.380958 systemd[1]: cri-containerd-ce63efc1a458a86b60bfc8c9169dc17cbaaccb1b203d1066e40e83fd4ac70b59.scope: Deactivated successfully. May 17 00:43:30.426257 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ce63efc1a458a86b60bfc8c9169dc17cbaaccb1b203d1066e40e83fd4ac70b59-rootfs.mount: Deactivated successfully. 
May 17 00:43:30.436011 env[1216]: time="2025-05-17T00:43:30.435942532Z" level=info msg="shim disconnected" id=ce63efc1a458a86b60bfc8c9169dc17cbaaccb1b203d1066e40e83fd4ac70b59 May 17 00:43:30.436011 env[1216]: time="2025-05-17T00:43:30.436012781Z" level=warning msg="cleaning up after shim disconnected" id=ce63efc1a458a86b60bfc8c9169dc17cbaaccb1b203d1066e40e83fd4ac70b59 namespace=k8s.io May 17 00:43:30.436456 env[1216]: time="2025-05-17T00:43:30.436030387Z" level=info msg="cleaning up dead shim" May 17 00:43:30.450260 env[1216]: time="2025-05-17T00:43:30.450182349Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:43:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3962 runtime=io.containerd.runc.v2\n" May 17 00:43:30.450901 env[1216]: time="2025-05-17T00:43:30.450863955Z" level=info msg="TearDown network for sandbox \"ce63efc1a458a86b60bfc8c9169dc17cbaaccb1b203d1066e40e83fd4ac70b59\" successfully" May 17 00:43:30.451067 env[1216]: time="2025-05-17T00:43:30.451028662Z" level=info msg="StopPodSandbox for \"ce63efc1a458a86b60bfc8c9169dc17cbaaccb1b203d1066e40e83fd4ac70b59\" returns successfully" May 17 00:43:30.623288 sshd[3801]: Failed password for root from 46.32.178.46 port 41448 ssh2 May 17 00:43:30.628099 kubelet[1999]: I0517 00:43:30.628043 1999 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e4c09d53-027b-4dc7-a610-10f4a2dc87b5-host-proc-sys-net\") pod \"e4c09d53-027b-4dc7-a610-10f4a2dc87b5\" (UID: \"e4c09d53-027b-4dc7-a610-10f4a2dc87b5\") " May 17 00:43:30.628099 kubelet[1999]: I0517 00:43:30.628107 1999 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e4c09d53-027b-4dc7-a610-10f4a2dc87b5-bpf-maps\") pod \"e4c09d53-027b-4dc7-a610-10f4a2dc87b5\" (UID: \"e4c09d53-027b-4dc7-a610-10f4a2dc87b5\") " May 17 00:43:30.628447 kubelet[1999]: I0517 00:43:30.628147 1999 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e4c09d53-027b-4dc7-a610-10f4a2dc87b5-cilium-cgroup\") pod \"e4c09d53-027b-4dc7-a610-10f4a2dc87b5\" (UID: \"e4c09d53-027b-4dc7-a610-10f4a2dc87b5\") " May 17 00:43:30.628447 kubelet[1999]: I0517 00:43:30.628176 1999 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e4c09d53-027b-4dc7-a610-10f4a2dc87b5-host-proc-sys-kernel\") pod \"e4c09d53-027b-4dc7-a610-10f4a2dc87b5\" (UID: \"e4c09d53-027b-4dc7-a610-10f4a2dc87b5\") " May 17 00:43:30.628447 kubelet[1999]: I0517 00:43:30.628219 1999 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e4c09d53-027b-4dc7-a610-10f4a2dc87b5-hubble-tls\") pod \"e4c09d53-027b-4dc7-a610-10f4a2dc87b5\" (UID: \"e4c09d53-027b-4dc7-a610-10f4a2dc87b5\") " May 17 00:43:30.628447 kubelet[1999]: I0517 00:43:30.628249 1999 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e4c09d53-027b-4dc7-a610-10f4a2dc87b5-lib-modules\") pod \"e4c09d53-027b-4dc7-a610-10f4a2dc87b5\" (UID: \"e4c09d53-027b-4dc7-a610-10f4a2dc87b5\") " May 17 00:43:30.628447 kubelet[1999]: I0517 00:43:30.628310 1999 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e4c09d53-027b-4dc7-a610-10f4a2dc87b5-cilium-ipsec-secrets\") pod \"e4c09d53-027b-4dc7-a610-10f4a2dc87b5\" (UID: \"e4c09d53-027b-4dc7-a610-10f4a2dc87b5\") " May 17 00:43:30.628447 kubelet[1999]: I0517 00:43:30.628340 1999 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e4c09d53-027b-4dc7-a610-10f4a2dc87b5-etc-cni-netd\") pod \"e4c09d53-027b-4dc7-a610-10f4a2dc87b5\" (UID: 
\"e4c09d53-027b-4dc7-a610-10f4a2dc87b5\") " May 17 00:43:30.628447 kubelet[1999]: I0517 00:43:30.628374 1999 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e4c09d53-027b-4dc7-a610-10f4a2dc87b5-cilium-config-path\") pod \"e4c09d53-027b-4dc7-a610-10f4a2dc87b5\" (UID: \"e4c09d53-027b-4dc7-a610-10f4a2dc87b5\") " May 17 00:43:30.628447 kubelet[1999]: I0517 00:43:30.628411 1999 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e4c09d53-027b-4dc7-a610-10f4a2dc87b5-cni-path\") pod \"e4c09d53-027b-4dc7-a610-10f4a2dc87b5\" (UID: \"e4c09d53-027b-4dc7-a610-10f4a2dc87b5\") " May 17 00:43:30.628447 kubelet[1999]: I0517 00:43:30.628444 1999 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e4c09d53-027b-4dc7-a610-10f4a2dc87b5-cilium-run\") pod \"e4c09d53-027b-4dc7-a610-10f4a2dc87b5\" (UID: \"e4c09d53-027b-4dc7-a610-10f4a2dc87b5\") " May 17 00:43:30.629114 kubelet[1999]: I0517 00:43:30.628475 1999 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e4c09d53-027b-4dc7-a610-10f4a2dc87b5-hostproc\") pod \"e4c09d53-027b-4dc7-a610-10f4a2dc87b5\" (UID: \"e4c09d53-027b-4dc7-a610-10f4a2dc87b5\") " May 17 00:43:30.629114 kubelet[1999]: I0517 00:43:30.628516 1999 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e4c09d53-027b-4dc7-a610-10f4a2dc87b5-clustermesh-secrets\") pod \"e4c09d53-027b-4dc7-a610-10f4a2dc87b5\" (UID: \"e4c09d53-027b-4dc7-a610-10f4a2dc87b5\") " May 17 00:43:30.629114 kubelet[1999]: I0517 00:43:30.628549 1999 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9xbf\" (UniqueName: 
\"kubernetes.io/projected/e4c09d53-027b-4dc7-a610-10f4a2dc87b5-kube-api-access-h9xbf\") pod \"e4c09d53-027b-4dc7-a610-10f4a2dc87b5\" (UID: \"e4c09d53-027b-4dc7-a610-10f4a2dc87b5\") " May 17 00:43:30.629114 kubelet[1999]: I0517 00:43:30.628581 1999 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e4c09d53-027b-4dc7-a610-10f4a2dc87b5-xtables-lock\") pod \"e4c09d53-027b-4dc7-a610-10f4a2dc87b5\" (UID: \"e4c09d53-027b-4dc7-a610-10f4a2dc87b5\") " May 17 00:43:30.629114 kubelet[1999]: I0517 00:43:30.628729 1999 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e4c09d53-027b-4dc7-a610-10f4a2dc87b5-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "e4c09d53-027b-4dc7-a610-10f4a2dc87b5" (UID: "e4c09d53-027b-4dc7-a610-10f4a2dc87b5"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:43:30.629114 kubelet[1999]: I0517 00:43:30.628775 1999 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e4c09d53-027b-4dc7-a610-10f4a2dc87b5-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "e4c09d53-027b-4dc7-a610-10f4a2dc87b5" (UID: "e4c09d53-027b-4dc7-a610-10f4a2dc87b5"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:43:30.629114 kubelet[1999]: I0517 00:43:30.628805 1999 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e4c09d53-027b-4dc7-a610-10f4a2dc87b5-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "e4c09d53-027b-4dc7-a610-10f4a2dc87b5" (UID: "e4c09d53-027b-4dc7-a610-10f4a2dc87b5"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:43:30.629114 kubelet[1999]: I0517 00:43:30.628830 1999 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e4c09d53-027b-4dc7-a610-10f4a2dc87b5-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "e4c09d53-027b-4dc7-a610-10f4a2dc87b5" (UID: "e4c09d53-027b-4dc7-a610-10f4a2dc87b5"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:43:30.629114 kubelet[1999]: I0517 00:43:30.628861 1999 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e4c09d53-027b-4dc7-a610-10f4a2dc87b5-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "e4c09d53-027b-4dc7-a610-10f4a2dc87b5" (UID: "e4c09d53-027b-4dc7-a610-10f4a2dc87b5"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:43:30.630072 kubelet[1999]: I0517 00:43:30.629989 1999 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e4c09d53-027b-4dc7-a610-10f4a2dc87b5-cni-path" (OuterVolumeSpecName: "cni-path") pod "e4c09d53-027b-4dc7-a610-10f4a2dc87b5" (UID: "e4c09d53-027b-4dc7-a610-10f4a2dc87b5"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:43:30.630268 kubelet[1999]: I0517 00:43:30.630072 1999 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e4c09d53-027b-4dc7-a610-10f4a2dc87b5-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "e4c09d53-027b-4dc7-a610-10f4a2dc87b5" (UID: "e4c09d53-027b-4dc7-a610-10f4a2dc87b5"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:43:30.630440 kubelet[1999]: I0517 00:43:30.630096 1999 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e4c09d53-027b-4dc7-a610-10f4a2dc87b5-hostproc" (OuterVolumeSpecName: "hostproc") pod "e4c09d53-027b-4dc7-a610-10f4a2dc87b5" (UID: "e4c09d53-027b-4dc7-a610-10f4a2dc87b5"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:43:30.630667 kubelet[1999]: I0517 00:43:30.630604 1999 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e4c09d53-027b-4dc7-a610-10f4a2dc87b5-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e4c09d53-027b-4dc7-a610-10f4a2dc87b5" (UID: "e4c09d53-027b-4dc7-a610-10f4a2dc87b5"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:43:30.635958 kubelet[1999]: I0517 00:43:30.631719 1999 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e4c09d53-027b-4dc7-a610-10f4a2dc87b5-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "e4c09d53-027b-4dc7-a610-10f4a2dc87b5" (UID: "e4c09d53-027b-4dc7-a610-10f4a2dc87b5"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:43:30.636204 kubelet[1999]: I0517 00:43:30.635866 1999 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4c09d53-027b-4dc7-a610-10f4a2dc87b5-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e4c09d53-027b-4dc7-a610-10f4a2dc87b5" (UID: "e4c09d53-027b-4dc7-a610-10f4a2dc87b5"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 17 00:43:30.642722 systemd[1]: var-lib-kubelet-pods-e4c09d53\x2d027b\x2d4dc7\x2da610\x2d10f4a2dc87b5-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
May 17 00:43:30.645899 kubelet[1999]: I0517 00:43:30.645846 1999 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4c09d53-027b-4dc7-a610-10f4a2dc87b5-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "e4c09d53-027b-4dc7-a610-10f4a2dc87b5" (UID: "e4c09d53-027b-4dc7-a610-10f4a2dc87b5"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 17 00:43:30.651558 kubelet[1999]: I0517 00:43:30.650764 1999 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4c09d53-027b-4dc7-a610-10f4a2dc87b5-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "e4c09d53-027b-4dc7-a610-10f4a2dc87b5" (UID: "e4c09d53-027b-4dc7-a610-10f4a2dc87b5"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 17 00:43:30.652245 kubelet[1999]: I0517 00:43:30.652203 1999 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4c09d53-027b-4dc7-a610-10f4a2dc87b5-kube-api-access-h9xbf" (OuterVolumeSpecName: "kube-api-access-h9xbf") pod "e4c09d53-027b-4dc7-a610-10f4a2dc87b5" (UID: "e4c09d53-027b-4dc7-a610-10f4a2dc87b5"). InnerVolumeSpecName "kube-api-access-h9xbf". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 17 00:43:30.655040 systemd[1]: var-lib-kubelet-pods-e4c09d53\x2d027b\x2d4dc7\x2da610\x2d10f4a2dc87b5-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. May 17 00:43:30.659520 kubelet[1999]: I0517 00:43:30.659452 1999 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4c09d53-027b-4dc7-a610-10f4a2dc87b5-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "e4c09d53-027b-4dc7-a610-10f4a2dc87b5" (UID: "e4c09d53-027b-4dc7-a610-10f4a2dc87b5"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" May 17 00:43:30.729616 kubelet[1999]: I0517 00:43:30.729534 1999 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e4c09d53-027b-4dc7-a610-10f4a2dc87b5-cilium-run\") on node \"ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2\" DevicePath \"\"" May 17 00:43:30.729616 kubelet[1999]: I0517 00:43:30.729592 1999 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e4c09d53-027b-4dc7-a610-10f4a2dc87b5-hostproc\") on node \"ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2\" DevicePath \"\"" May 17 00:43:30.729616 kubelet[1999]: I0517 00:43:30.729619 1999 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e4c09d53-027b-4dc7-a610-10f4a2dc87b5-clustermesh-secrets\") on node \"ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2\" DevicePath \"\"" May 17 00:43:30.730054 kubelet[1999]: I0517 00:43:30.729679 1999 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-h9xbf\" (UniqueName: \"kubernetes.io/projected/e4c09d53-027b-4dc7-a610-10f4a2dc87b5-kube-api-access-h9xbf\") on node \"ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2\" DevicePath \"\"" May 17 00:43:30.730054 kubelet[1999]: I0517 00:43:30.729700 1999 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e4c09d53-027b-4dc7-a610-10f4a2dc87b5-xtables-lock\") on node \"ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2\" DevicePath \"\"" May 17 00:43:30.730054 kubelet[1999]: I0517 00:43:30.729717 1999 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e4c09d53-027b-4dc7-a610-10f4a2dc87b5-host-proc-sys-net\") on node \"ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2\" DevicePath \"\"" May 17 00:43:30.730054 kubelet[1999]: I0517 
00:43:30.729735 1999 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e4c09d53-027b-4dc7-a610-10f4a2dc87b5-bpf-maps\") on node \"ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2\" DevicePath \"\"" May 17 00:43:30.730054 kubelet[1999]: I0517 00:43:30.729757 1999 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e4c09d53-027b-4dc7-a610-10f4a2dc87b5-cilium-cgroup\") on node \"ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2\" DevicePath \"\"" May 17 00:43:30.730054 kubelet[1999]: I0517 00:43:30.729777 1999 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e4c09d53-027b-4dc7-a610-10f4a2dc87b5-host-proc-sys-kernel\") on node \"ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2\" DevicePath \"\"" May 17 00:43:30.730054 kubelet[1999]: I0517 00:43:30.729794 1999 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e4c09d53-027b-4dc7-a610-10f4a2dc87b5-hubble-tls\") on node \"ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2\" DevicePath \"\"" May 17 00:43:30.730054 kubelet[1999]: I0517 00:43:30.729816 1999 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e4c09d53-027b-4dc7-a610-10f4a2dc87b5-lib-modules\") on node \"ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2\" DevicePath \"\"" May 17 00:43:30.730054 kubelet[1999]: I0517 00:43:30.729836 1999 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e4c09d53-027b-4dc7-a610-10f4a2dc87b5-cilium-ipsec-secrets\") on node \"ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2\" DevicePath \"\"" May 17 00:43:30.730054 kubelet[1999]: I0517 00:43:30.729855 1999 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/e4c09d53-027b-4dc7-a610-10f4a2dc87b5-etc-cni-netd\") on node \"ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2\" DevicePath \"\"" May 17 00:43:30.730054 kubelet[1999]: I0517 00:43:30.729874 1999 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e4c09d53-027b-4dc7-a610-10f4a2dc87b5-cilium-config-path\") on node \"ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2\" DevicePath \"\"" May 17 00:43:30.730054 kubelet[1999]: I0517 00:43:30.729892 1999 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e4c09d53-027b-4dc7-a610-10f4a2dc87b5-cni-path\") on node \"ci-3510-3-7-nightly-20250516-2100-838e0870b6e0ed707fc2\" DevicePath \"\"" May 17 00:43:31.019150 systemd[1]: var-lib-kubelet-pods-e4c09d53\x2d027b\x2d4dc7\x2da610\x2d10f4a2dc87b5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dh9xbf.mount: Deactivated successfully. May 17 00:43:31.019356 systemd[1]: var-lib-kubelet-pods-e4c09d53\x2d027b\x2d4dc7\x2da610\x2d10f4a2dc87b5-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 17 00:43:31.293568 sshd[3801]: Connection closed by authenticating user root 46.32.178.46 port 41448 [preauth] May 17 00:43:31.295825 systemd[1]: sshd@38-10.128.0.28:22-46.32.178.46:41448.service: Deactivated successfully. May 17 00:43:31.360951 kubelet[1999]: I0517 00:43:31.360911 1999 scope.go:117] "RemoveContainer" containerID="eb1a0dfba90e6041f4ad6dc179279dd62164f8f9faa9453018c8f120301e131e" May 17 00:43:31.368063 env[1216]: time="2025-05-17T00:43:31.367390511Z" level=info msg="RemoveContainer for \"eb1a0dfba90e6041f4ad6dc179279dd62164f8f9faa9453018c8f120301e131e\"" May 17 00:43:31.372523 systemd[1]: Removed slice kubepods-burstable-pode4c09d53_027b_4dc7_a610_10f4a2dc87b5.slice. 
May 17 00:43:31.377368 env[1216]: time="2025-05-17T00:43:31.377067286Z" level=info msg="RemoveContainer for \"eb1a0dfba90e6041f4ad6dc179279dd62164f8f9faa9453018c8f120301e131e\" returns successfully" May 17 00:43:31.423937 kubelet[1999]: I0517 00:43:31.423867 1999 memory_manager.go:355] "RemoveStaleState removing state" podUID="e4c09d53-027b-4dc7-a610-10f4a2dc87b5" containerName="mount-cgroup" May 17 00:43:31.424234 kubelet[1999]: I0517 00:43:31.424208 1999 memory_manager.go:355] "RemoveStaleState removing state" podUID="e4c09d53-027b-4dc7-a610-10f4a2dc87b5" containerName="mount-cgroup" May 17 00:43:31.435543 systemd[1]: Created slice kubepods-burstable-pod124c7e7d_d56f_447c_bee2_39aad57e8f90.slice. May 17 00:43:31.534257 kubelet[1999]: I0517 00:43:31.534181 1999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/124c7e7d-d56f-447c-bee2-39aad57e8f90-bpf-maps\") pod \"cilium-j7db9\" (UID: \"124c7e7d-d56f-447c-bee2-39aad57e8f90\") " pod="kube-system/cilium-j7db9" May 17 00:43:31.534257 kubelet[1999]: I0517 00:43:31.534253 1999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/124c7e7d-d56f-447c-bee2-39aad57e8f90-cni-path\") pod \"cilium-j7db9\" (UID: \"124c7e7d-d56f-447c-bee2-39aad57e8f90\") " pod="kube-system/cilium-j7db9" May 17 00:43:31.534576 kubelet[1999]: I0517 00:43:31.534286 1999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/124c7e7d-d56f-447c-bee2-39aad57e8f90-etc-cni-netd\") pod \"cilium-j7db9\" (UID: \"124c7e7d-d56f-447c-bee2-39aad57e8f90\") " pod="kube-system/cilium-j7db9" May 17 00:43:31.534576 kubelet[1999]: I0517 00:43:31.534315 1999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" 
(UniqueName: \"kubernetes.io/configmap/124c7e7d-d56f-447c-bee2-39aad57e8f90-cilium-config-path\") pod \"cilium-j7db9\" (UID: \"124c7e7d-d56f-447c-bee2-39aad57e8f90\") " pod="kube-system/cilium-j7db9" May 17 00:43:31.534576 kubelet[1999]: I0517 00:43:31.534348 1999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/124c7e7d-d56f-447c-bee2-39aad57e8f90-host-proc-sys-kernel\") pod \"cilium-j7db9\" (UID: \"124c7e7d-d56f-447c-bee2-39aad57e8f90\") " pod="kube-system/cilium-j7db9" May 17 00:43:31.534576 kubelet[1999]: I0517 00:43:31.534381 1999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/124c7e7d-d56f-447c-bee2-39aad57e8f90-lib-modules\") pod \"cilium-j7db9\" (UID: \"124c7e7d-d56f-447c-bee2-39aad57e8f90\") " pod="kube-system/cilium-j7db9" May 17 00:43:31.534576 kubelet[1999]: I0517 00:43:31.534410 1999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/124c7e7d-d56f-447c-bee2-39aad57e8f90-cilium-ipsec-secrets\") pod \"cilium-j7db9\" (UID: \"124c7e7d-d56f-447c-bee2-39aad57e8f90\") " pod="kube-system/cilium-j7db9" May 17 00:43:31.534576 kubelet[1999]: I0517 00:43:31.534442 1999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/124c7e7d-d56f-447c-bee2-39aad57e8f90-cilium-cgroup\") pod \"cilium-j7db9\" (UID: \"124c7e7d-d56f-447c-bee2-39aad57e8f90\") " pod="kube-system/cilium-j7db9" May 17 00:43:31.534576 kubelet[1999]: I0517 00:43:31.534472 1999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/124c7e7d-d56f-447c-bee2-39aad57e8f90-xtables-lock\") pod 
\"cilium-j7db9\" (UID: \"124c7e7d-d56f-447c-bee2-39aad57e8f90\") " pod="kube-system/cilium-j7db9" May 17 00:43:31.534576 kubelet[1999]: I0517 00:43:31.534503 1999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/124c7e7d-d56f-447c-bee2-39aad57e8f90-hostproc\") pod \"cilium-j7db9\" (UID: \"124c7e7d-d56f-447c-bee2-39aad57e8f90\") " pod="kube-system/cilium-j7db9" May 17 00:43:31.534576 kubelet[1999]: I0517 00:43:31.534539 1999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llfgn\" (UniqueName: \"kubernetes.io/projected/124c7e7d-d56f-447c-bee2-39aad57e8f90-kube-api-access-llfgn\") pod \"cilium-j7db9\" (UID: \"124c7e7d-d56f-447c-bee2-39aad57e8f90\") " pod="kube-system/cilium-j7db9" May 17 00:43:31.534576 kubelet[1999]: I0517 00:43:31.534572 1999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/124c7e7d-d56f-447c-bee2-39aad57e8f90-cilium-run\") pod \"cilium-j7db9\" (UID: \"124c7e7d-d56f-447c-bee2-39aad57e8f90\") " pod="kube-system/cilium-j7db9" May 17 00:43:31.535106 kubelet[1999]: I0517 00:43:31.534607 1999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/124c7e7d-d56f-447c-bee2-39aad57e8f90-hubble-tls\") pod \"cilium-j7db9\" (UID: \"124c7e7d-d56f-447c-bee2-39aad57e8f90\") " pod="kube-system/cilium-j7db9" May 17 00:43:31.535106 kubelet[1999]: I0517 00:43:31.534669 1999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/124c7e7d-d56f-447c-bee2-39aad57e8f90-clustermesh-secrets\") pod \"cilium-j7db9\" (UID: \"124c7e7d-d56f-447c-bee2-39aad57e8f90\") " pod="kube-system/cilium-j7db9" May 17 00:43:31.535106 
kubelet[1999]: I0517 00:43:31.534701 1999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/124c7e7d-d56f-447c-bee2-39aad57e8f90-host-proc-sys-net\") pod \"cilium-j7db9\" (UID: \"124c7e7d-d56f-447c-bee2-39aad57e8f90\") " pod="kube-system/cilium-j7db9" May 17 00:43:31.668518 systemd[1]: Started sshd@41-10.128.0.28:22-46.32.178.46:43038.service. May 17 00:43:31.729347 kubelet[1999]: I0517 00:43:31.729308 1999 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4c09d53-027b-4dc7-a610-10f4a2dc87b5" path="/var/lib/kubelet/pods/e4c09d53-027b-4dc7-a610-10f4a2dc87b5/volumes" May 17 00:43:31.745365 env[1216]: time="2025-05-17T00:43:31.745296118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-j7db9,Uid:124c7e7d-d56f-447c-bee2-39aad57e8f90,Namespace:kube-system,Attempt:0,}" May 17 00:43:31.770047 env[1216]: time="2025-05-17T00:43:31.769901066Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:43:31.770309 env[1216]: time="2025-05-17T00:43:31.769994596Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:43:31.770309 env[1216]: time="2025-05-17T00:43:31.770015097Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:43:31.770530 env[1216]: time="2025-05-17T00:43:31.770327972Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2ddf38260e09027cbde139f0d78b1dcf496ac05a7e7fa1852a85eaab0dec670f pid=3996 runtime=io.containerd.runc.v2 May 17 00:43:31.795166 systemd[1]: Started cri-containerd-2ddf38260e09027cbde139f0d78b1dcf496ac05a7e7fa1852a85eaab0dec670f.scope. 
May 17 00:43:31.830965 env[1216]: time="2025-05-17T00:43:31.830453925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-j7db9,Uid:124c7e7d-d56f-447c-bee2-39aad57e8f90,Namespace:kube-system,Attempt:0,} returns sandbox id \"2ddf38260e09027cbde139f0d78b1dcf496ac05a7e7fa1852a85eaab0dec670f\"" May 17 00:43:31.835139 env[1216]: time="2025-05-17T00:43:31.835065817Z" level=info msg="CreateContainer within sandbox \"2ddf38260e09027cbde139f0d78b1dcf496ac05a7e7fa1852a85eaab0dec670f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 17 00:43:31.853563 env[1216]: time="2025-05-17T00:43:31.853483702Z" level=info msg="CreateContainer within sandbox \"2ddf38260e09027cbde139f0d78b1dcf496ac05a7e7fa1852a85eaab0dec670f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e6266f828a5983f289e8e9c86fe7ae4c7cd3a10ef4ba02535982651281913825\"" May 17 00:43:31.856684 env[1216]: time="2025-05-17T00:43:31.856387228Z" level=info msg="StartContainer for \"e6266f828a5983f289e8e9c86fe7ae4c7cd3a10ef4ba02535982651281913825\"" May 17 00:43:31.886029 systemd[1]: Started cri-containerd-e6266f828a5983f289e8e9c86fe7ae4c7cd3a10ef4ba02535982651281913825.scope. May 17 00:43:31.944740 env[1216]: time="2025-05-17T00:43:31.944566331Z" level=info msg="StartContainer for \"e6266f828a5983f289e8e9c86fe7ae4c7cd3a10ef4ba02535982651281913825\" returns successfully" May 17 00:43:31.965470 systemd[1]: cri-containerd-e6266f828a5983f289e8e9c86fe7ae4c7cd3a10ef4ba02535982651281913825.scope: Deactivated successfully. 
May 17 00:43:32.005051 env[1216]: time="2025-05-17T00:43:32.004981415Z" level=info msg="shim disconnected" id=e6266f828a5983f289e8e9c86fe7ae4c7cd3a10ef4ba02535982651281913825 May 17 00:43:32.005051 env[1216]: time="2025-05-17T00:43:32.005050813Z" level=warning msg="cleaning up after shim disconnected" id=e6266f828a5983f289e8e9c86fe7ae4c7cd3a10ef4ba02535982651281913825 namespace=k8s.io May 17 00:43:32.005531 env[1216]: time="2025-05-17T00:43:32.005067766Z" level=info msg="cleaning up dead shim" May 17 00:43:32.028020 env[1216]: time="2025-05-17T00:43:32.027963525Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:43:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4079 runtime=io.containerd.runc.v2\n" May 17 00:43:32.369513 env[1216]: time="2025-05-17T00:43:32.369439973Z" level=info msg="CreateContainer within sandbox \"2ddf38260e09027cbde139f0d78b1dcf496ac05a7e7fa1852a85eaab0dec670f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 17 00:43:32.397436 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3402803573.mount: Deactivated successfully. 
May 17 00:43:32.405473 env[1216]: time="2025-05-17T00:43:32.405395879Z" level=info msg="CreateContainer within sandbox \"2ddf38260e09027cbde139f0d78b1dcf496ac05a7e7fa1852a85eaab0dec670f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"719470a20b5b965b88da35af7aba192b69f58719a72e367acf43434e18786a9d\"" May 17 00:43:32.406792 env[1216]: time="2025-05-17T00:43:32.406741893Z" level=info msg="StartContainer for \"719470a20b5b965b88da35af7aba192b69f58719a72e367acf43434e18786a9d\"" May 17 00:43:32.427020 kubelet[1999]: W0517 00:43:32.426840 1999 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode4c09d53_027b_4dc7_a610_10f4a2dc87b5.slice/cri-containerd-13fb52e5621f5e88dd44cec201c70d27171af694e4bc70a4aded11c888a3053b.scope WatchSource:0}: container "13fb52e5621f5e88dd44cec201c70d27171af694e4bc70a4aded11c888a3053b" in namespace "k8s.io": not found May 17 00:43:32.460431 systemd[1]: Started cri-containerd-719470a20b5b965b88da35af7aba192b69f58719a72e367acf43434e18786a9d.scope. May 17 00:43:32.529821 env[1216]: time="2025-05-17T00:43:32.529753024Z" level=info msg="StartContainer for \"719470a20b5b965b88da35af7aba192b69f58719a72e367acf43434e18786a9d\" returns successfully" May 17 00:43:32.542955 systemd[1]: cri-containerd-719470a20b5b965b88da35af7aba192b69f58719a72e367acf43434e18786a9d.scope: Deactivated successfully. 
May 17 00:43:32.579287 env[1216]: time="2025-05-17T00:43:32.579202852Z" level=info msg="shim disconnected" id=719470a20b5b965b88da35af7aba192b69f58719a72e367acf43434e18786a9d May 17 00:43:32.579287 env[1216]: time="2025-05-17T00:43:32.579266275Z" level=warning msg="cleaning up after shim disconnected" id=719470a20b5b965b88da35af7aba192b69f58719a72e367acf43434e18786a9d namespace=k8s.io May 17 00:43:32.579287 env[1216]: time="2025-05-17T00:43:32.579288161Z" level=info msg="cleaning up dead shim" May 17 00:43:32.594360 env[1216]: time="2025-05-17T00:43:32.594291172Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:43:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4143 runtime=io.containerd.runc.v2\n" May 17 00:43:32.941424 kubelet[1999]: E0517 00:43:32.941358 1999 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 17 00:43:33.020195 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-719470a20b5b965b88da35af7aba192b69f58719a72e367acf43434e18786a9d-rootfs.mount: Deactivated successfully. May 17 00:43:33.145058 sshd[3986]: Failed password for root from 46.32.178.46 port 43038 ssh2 May 17 00:43:33.382212 env[1216]: time="2025-05-17T00:43:33.382122201Z" level=info msg="CreateContainer within sandbox \"2ddf38260e09027cbde139f0d78b1dcf496ac05a7e7fa1852a85eaab0dec670f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 17 00:43:33.410490 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3999622179.mount: Deactivated successfully. 
May 17 00:43:33.423033 env[1216]: time="2025-05-17T00:43:33.422953461Z" level=info msg="CreateContainer within sandbox \"2ddf38260e09027cbde139f0d78b1dcf496ac05a7e7fa1852a85eaab0dec670f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"21405c94cccd7f0e2a41b9aa2c053a302e9426e24bdc3b647a1e1c7ee2d62203\"" May 17 00:43:33.426401 env[1216]: time="2025-05-17T00:43:33.424113521Z" level=info msg="StartContainer for \"21405c94cccd7f0e2a41b9aa2c053a302e9426e24bdc3b647a1e1c7ee2d62203\"" May 17 00:43:33.467120 systemd[1]: Started cri-containerd-21405c94cccd7f0e2a41b9aa2c053a302e9426e24bdc3b647a1e1c7ee2d62203.scope. May 17 00:43:33.528063 env[1216]: time="2025-05-17T00:43:33.527987218Z" level=info msg="StartContainer for \"21405c94cccd7f0e2a41b9aa2c053a302e9426e24bdc3b647a1e1c7ee2d62203\" returns successfully" May 17 00:43:33.535834 systemd[1]: cri-containerd-21405c94cccd7f0e2a41b9aa2c053a302e9426e24bdc3b647a1e1c7ee2d62203.scope: Deactivated successfully. May 17 00:43:33.557910 sshd[3986]: Connection closed by authenticating user root 46.32.178.46 port 43038 [preauth] May 17 00:43:33.558606 systemd[1]: sshd@41-10.128.0.28:22-46.32.178.46:43038.service: Deactivated successfully. 
May 17 00:43:33.596876 env[1216]: time="2025-05-17T00:43:33.596802056Z" level=info msg="shim disconnected" id=21405c94cccd7f0e2a41b9aa2c053a302e9426e24bdc3b647a1e1c7ee2d62203 May 17 00:43:33.596876 env[1216]: time="2025-05-17T00:43:33.596877081Z" level=warning msg="cleaning up after shim disconnected" id=21405c94cccd7f0e2a41b9aa2c053a302e9426e24bdc3b647a1e1c7ee2d62203 namespace=k8s.io May 17 00:43:33.597300 env[1216]: time="2025-05-17T00:43:33.596894199Z" level=info msg="cleaning up dead shim" May 17 00:43:33.611378 env[1216]: time="2025-05-17T00:43:33.611305192Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:43:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4203 runtime=io.containerd.runc.v2\n" May 17 00:43:33.727315 kubelet[1999]: E0517 00:43:33.726762 1999 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-qwxkh" podUID="5dfa033c-e73c-4e3c-a9ad-171fd4228b39" May 17 00:43:33.895611 systemd[1]: Started sshd@42-10.128.0.28:22-46.32.178.46:43052.service. May 17 00:43:34.021199 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-21405c94cccd7f0e2a41b9aa2c053a302e9426e24bdc3b647a1e1c7ee2d62203-rootfs.mount: Deactivated successfully. 
May 17 00:43:34.385963 env[1216]: time="2025-05-17T00:43:34.385904169Z" level=info msg="CreateContainer within sandbox \"2ddf38260e09027cbde139f0d78b1dcf496ac05a7e7fa1852a85eaab0dec670f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 17 00:43:34.424771 env[1216]: time="2025-05-17T00:43:34.424692993Z" level=info msg="CreateContainer within sandbox \"2ddf38260e09027cbde139f0d78b1dcf496ac05a7e7fa1852a85eaab0dec670f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b10e924c2da5dc39feb1a0100d8e61b52afab55c6b842918ee1bf4a30cee68b8\"" May 17 00:43:34.425758 env[1216]: time="2025-05-17T00:43:34.425715109Z" level=info msg="StartContainer for \"b10e924c2da5dc39feb1a0100d8e61b52afab55c6b842918ee1bf4a30cee68b8\"" May 17 00:43:34.469372 systemd[1]: Started cri-containerd-b10e924c2da5dc39feb1a0100d8e61b52afab55c6b842918ee1bf4a30cee68b8.scope. May 17 00:43:34.513313 systemd[1]: cri-containerd-b10e924c2da5dc39feb1a0100d8e61b52afab55c6b842918ee1bf4a30cee68b8.scope: Deactivated successfully. 
May 17 00:43:34.516601 env[1216]: time="2025-05-17T00:43:34.516467576Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod124c7e7d_d56f_447c_bee2_39aad57e8f90.slice/cri-containerd-b10e924c2da5dc39feb1a0100d8e61b52afab55c6b842918ee1bf4a30cee68b8.scope/memory.events\": no such file or directory" May 17 00:43:34.522128 env[1216]: time="2025-05-17T00:43:34.522060100Z" level=info msg="StartContainer for \"b10e924c2da5dc39feb1a0100d8e61b52afab55c6b842918ee1bf4a30cee68b8\" returns successfully" May 17 00:43:34.557334 env[1216]: time="2025-05-17T00:43:34.557256964Z" level=info msg="shim disconnected" id=b10e924c2da5dc39feb1a0100d8e61b52afab55c6b842918ee1bf4a30cee68b8 May 17 00:43:34.557761 env[1216]: time="2025-05-17T00:43:34.557337272Z" level=warning msg="cleaning up after shim disconnected" id=b10e924c2da5dc39feb1a0100d8e61b52afab55c6b842918ee1bf4a30cee68b8 namespace=k8s.io May 17 00:43:34.557761 env[1216]: time="2025-05-17T00:43:34.557355218Z" level=info msg="cleaning up dead shim" May 17 00:43:34.573089 env[1216]: time="2025-05-17T00:43:34.573012240Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:43:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4261 runtime=io.containerd.runc.v2\n" May 17 00:43:35.020154 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b10e924c2da5dc39feb1a0100d8e61b52afab55c6b842918ee1bf4a30cee68b8-rootfs.mount: Deactivated successfully. May 17 00:43:35.392702 env[1216]: time="2025-05-17T00:43:35.392596366Z" level=info msg="CreateContainer within sandbox \"2ddf38260e09027cbde139f0d78b1dcf496ac05a7e7fa1852a85eaab0dec670f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 17 00:43:35.423654 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2080443309.mount: Deactivated successfully. 
May 17 00:43:35.444331 env[1216]: time="2025-05-17T00:43:35.444236645Z" level=info msg="CreateContainer within sandbox \"2ddf38260e09027cbde139f0d78b1dcf496ac05a7e7fa1852a85eaab0dec670f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"24053554b1219ca574cf31101cb50acb3293558b3a48c755c21d41d5b08e7762\"" May 17 00:43:35.445989 env[1216]: time="2025-05-17T00:43:35.445897097Z" level=info msg="StartContainer for \"24053554b1219ca574cf31101cb50acb3293558b3a48c755c21d41d5b08e7762\"" May 17 00:43:35.490285 systemd[1]: Started cri-containerd-24053554b1219ca574cf31101cb50acb3293558b3a48c755c21d41d5b08e7762.scope. May 17 00:43:35.617523 env[1216]: time="2025-05-17T00:43:35.617438105Z" level=info msg="StartContainer for \"24053554b1219ca574cf31101cb50acb3293558b3a48c755c21d41d5b08e7762\" returns successfully" May 17 00:43:35.627126 kubelet[1999]: W0517 00:43:35.626920 1999 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod124c7e7d_d56f_447c_bee2_39aad57e8f90.slice/cri-containerd-e6266f828a5983f289e8e9c86fe7ae4c7cd3a10ef4ba02535982651281913825.scope WatchSource:0}: task e6266f828a5983f289e8e9c86fe7ae4c7cd3a10ef4ba02535982651281913825 not found: not found May 17 00:43:35.728141 kubelet[1999]: E0517 00:43:35.727963 1999 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-qwxkh" podUID="5dfa033c-e73c-4e3c-a9ad-171fd4228b39" May 17 00:43:36.194292 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) May 17 00:43:36.552430 sshd[4218]: Failed password for root from 46.32.178.46 port 43052 ssh2 May 17 00:43:36.984136 sshd[4218]: Connection closed by authenticating user root 46.32.178.46 port 43052 [preauth] May 17 00:43:36.986072 systemd[1]: 
sshd@42-10.128.0.28:22-46.32.178.46:43052.service: Deactivated successfully. May 17 00:43:37.437324 systemd[1]: Started sshd@43-10.128.0.28:22-46.32.178.46:43060.service. May 17 00:43:37.714338 env[1216]: time="2025-05-17T00:43:37.714184455Z" level=info msg="StopPodSandbox for \"7123f71f60697f95c17f78691a094ffc211757efaeb8908340b0aa9803075c27\"" May 17 00:43:37.714984 env[1216]: time="2025-05-17T00:43:37.714329848Z" level=info msg="TearDown network for sandbox \"7123f71f60697f95c17f78691a094ffc211757efaeb8908340b0aa9803075c27\" successfully" May 17 00:43:37.714984 env[1216]: time="2025-05-17T00:43:37.714388855Z" level=info msg="StopPodSandbox for \"7123f71f60697f95c17f78691a094ffc211757efaeb8908340b0aa9803075c27\" returns successfully" May 17 00:43:37.715126 env[1216]: time="2025-05-17T00:43:37.715015571Z" level=info msg="RemovePodSandbox for \"7123f71f60697f95c17f78691a094ffc211757efaeb8908340b0aa9803075c27\"" May 17 00:43:37.715126 env[1216]: time="2025-05-17T00:43:37.715055880Z" level=info msg="Forcibly stopping sandbox \"7123f71f60697f95c17f78691a094ffc211757efaeb8908340b0aa9803075c27\"" May 17 00:43:37.715256 env[1216]: time="2025-05-17T00:43:37.715205698Z" level=info msg="TearDown network for sandbox \"7123f71f60697f95c17f78691a094ffc211757efaeb8908340b0aa9803075c27\" successfully" May 17 00:43:37.724427 env[1216]: time="2025-05-17T00:43:37.724124796Z" level=info msg="RemovePodSandbox \"7123f71f60697f95c17f78691a094ffc211757efaeb8908340b0aa9803075c27\" returns successfully" May 17 00:43:37.725151 env[1216]: time="2025-05-17T00:43:37.725096307Z" level=info msg="StopPodSandbox for \"ce63efc1a458a86b60bfc8c9169dc17cbaaccb1b203d1066e40e83fd4ac70b59\"" May 17 00:43:37.725315 env[1216]: time="2025-05-17T00:43:37.725247410Z" level=info msg="TearDown network for sandbox \"ce63efc1a458a86b60bfc8c9169dc17cbaaccb1b203d1066e40e83fd4ac70b59\" successfully" May 17 00:43:37.725415 env[1216]: time="2025-05-17T00:43:37.725306739Z" level=info msg="StopPodSandbox for 
\"ce63efc1a458a86b60bfc8c9169dc17cbaaccb1b203d1066e40e83fd4ac70b59\" returns successfully" May 17 00:43:37.726920 env[1216]: time="2025-05-17T00:43:37.726861918Z" level=info msg="RemovePodSandbox for \"ce63efc1a458a86b60bfc8c9169dc17cbaaccb1b203d1066e40e83fd4ac70b59\"" May 17 00:43:37.727064 env[1216]: time="2025-05-17T00:43:37.726918575Z" level=info msg="Forcibly stopping sandbox \"ce63efc1a458a86b60bfc8c9169dc17cbaaccb1b203d1066e40e83fd4ac70b59\"" May 17 00:43:37.727064 env[1216]: time="2025-05-17T00:43:37.727039672Z" level=info msg="TearDown network for sandbox \"ce63efc1a458a86b60bfc8c9169dc17cbaaccb1b203d1066e40e83fd4ac70b59\" successfully" May 17 00:43:37.731388 kubelet[1999]: E0517 00:43:37.731214 1999 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-qwxkh" podUID="5dfa033c-e73c-4e3c-a9ad-171fd4228b39" May 17 00:43:37.741165 env[1216]: time="2025-05-17T00:43:37.741087760Z" level=info msg="RemovePodSandbox \"ce63efc1a458a86b60bfc8c9169dc17cbaaccb1b203d1066e40e83fd4ac70b59\" returns successfully" May 17 00:43:37.741979 env[1216]: time="2025-05-17T00:43:37.741924278Z" level=info msg="StopPodSandbox for \"130c04a67775dc8e4edd1136829e1182fb1606169f9266278e4fd91dda72fc51\"" May 17 00:43:37.742144 env[1216]: time="2025-05-17T00:43:37.742065478Z" level=info msg="TearDown network for sandbox \"130c04a67775dc8e4edd1136829e1182fb1606169f9266278e4fd91dda72fc51\" successfully" May 17 00:43:37.742232 env[1216]: time="2025-05-17T00:43:37.742148666Z" level=info msg="StopPodSandbox for \"130c04a67775dc8e4edd1136829e1182fb1606169f9266278e4fd91dda72fc51\" returns successfully" May 17 00:43:37.742693 env[1216]: time="2025-05-17T00:43:37.742648538Z" level=info msg="RemovePodSandbox for \"130c04a67775dc8e4edd1136829e1182fb1606169f9266278e4fd91dda72fc51\"" May 17 
00:43:37.742807 env[1216]: time="2025-05-17T00:43:37.742702170Z" level=info msg="Forcibly stopping sandbox \"130c04a67775dc8e4edd1136829e1182fb1606169f9266278e4fd91dda72fc51\"" May 17 00:43:37.742893 env[1216]: time="2025-05-17T00:43:37.742817895Z" level=info msg="TearDown network for sandbox \"130c04a67775dc8e4edd1136829e1182fb1606169f9266278e4fd91dda72fc51\" successfully" May 17 00:43:37.753710 env[1216]: time="2025-05-17T00:43:37.753607472Z" level=info msg="RemovePodSandbox \"130c04a67775dc8e4edd1136829e1182fb1606169f9266278e4fd91dda72fc51\" returns successfully" May 17 00:43:38.621965 systemd[1]: run-containerd-runc-k8s.io-24053554b1219ca574cf31101cb50acb3293558b3a48c755c21d41d5b08e7762-runc.3xb60K.mount: Deactivated successfully. May 17 00:43:38.738442 kubelet[1999]: W0517 00:43:38.736448 1999 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod124c7e7d_d56f_447c_bee2_39aad57e8f90.slice/cri-containerd-719470a20b5b965b88da35af7aba192b69f58719a72e367acf43434e18786a9d.scope WatchSource:0}: task 719470a20b5b965b88da35af7aba192b69f58719a72e367acf43434e18786a9d not found: not found May 17 00:43:39.618518 sshd[4411]: Failed password for root from 46.32.178.46 port 43060 ssh2 May 17 00:43:39.755047 systemd-networkd[1020]: lxc_health: Link UP May 17 00:43:39.778595 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready May 17 00:43:39.779866 systemd-networkd[1020]: lxc_health: Gained carrier May 17 00:43:39.829606 kubelet[1999]: I0517 00:43:39.829514 1999 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-j7db9" podStartSLOduration=8.829466359 podStartE2EDuration="8.829466359s" podCreationTimestamp="2025-05-17 00:43:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:43:36.441355978 +0000 UTC m=+118.943928179" 
watchObservedRunningTime="2025-05-17 00:43:39.829466359 +0000 UTC m=+122.332038532" May 17 00:43:40.027333 sshd[4411]: Connection closed by authenticating user root 46.32.178.46 port 43060 [preauth] May 17 00:43:40.030233 systemd[1]: sshd@43-10.128.0.28:22-46.32.178.46:43060.service: Deactivated successfully. May 17 00:43:40.606981 systemd[1]: Started sshd@44-10.128.0.28:22-46.32.178.46:43064.service. May 17 00:43:41.718958 systemd-networkd[1020]: lxc_health: Gained IPv6LL May 17 00:43:41.858012 kubelet[1999]: W0517 00:43:41.857958 1999 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod124c7e7d_d56f_447c_bee2_39aad57e8f90.slice/cri-containerd-21405c94cccd7f0e2a41b9aa2c053a302e9426e24bdc3b647a1e1c7ee2d62203.scope WatchSource:0}: task 21405c94cccd7f0e2a41b9aa2c053a302e9426e24bdc3b647a1e1c7ee2d62203 not found: not found May 17 00:43:42.132453 sshd[4852]: Failed password for root from 46.32.178.46 port 43064 ssh2 May 17 00:43:42.527749 sshd[4852]: Connection closed by authenticating user root 46.32.178.46 port 43064 [preauth] May 17 00:43:42.528572 systemd[1]: sshd@44-10.128.0.28:22-46.32.178.46:43064.service: Deactivated successfully. May 17 00:43:42.897802 systemd[1]: Started sshd@45-10.128.0.28:22-46.32.178.46:43276.service. May 17 00:43:43.168857 systemd[1]: run-containerd-runc-k8s.io-24053554b1219ca574cf31101cb50acb3293558b3a48c755c21d41d5b08e7762-runc.Ngnoyk.mount: Deactivated successfully. 
May 17 00:43:44.972389 kubelet[1999]: W0517 00:43:44.972334 1999 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod124c7e7d_d56f_447c_bee2_39aad57e8f90.slice/cri-containerd-b10e924c2da5dc39feb1a0100d8e61b52afab55c6b842918ee1bf4a30cee68b8.scope WatchSource:0}: task b10e924c2da5dc39feb1a0100d8e61b52afab55c6b842918ee1bf4a30cee68b8 not found: not found May 17 00:43:45.506119 systemd[1]: run-containerd-runc-k8s.io-24053554b1219ca574cf31101cb50acb3293558b3a48c755c21d41d5b08e7762-runc.Q7tguo.mount: Deactivated successfully. May 17 00:43:45.613469 sshd[4886]: Failed password for root from 46.32.178.46 port 43276 ssh2 May 17 00:43:45.673946 sshd[3934]: pam_unix(sshd:session): session closed for user core May 17 00:43:45.680832 systemd[1]: sshd@40-10.128.0.28:22-139.178.89.65:53204.service: Deactivated successfully. May 17 00:43:45.682118 systemd[1]: session-24.scope: Deactivated successfully. May 17 00:43:45.684661 systemd-logind[1207]: Session 24 logged out. Waiting for processes to exit. May 17 00:43:45.686963 systemd-logind[1207]: Removed session 24. May 17 00:43:45.995108 sshd[4886]: Connection closed by authenticating user root 46.32.178.46 port 43276 [preauth] May 17 00:43:45.995994 systemd[1]: sshd@45-10.128.0.28:22-46.32.178.46:43276.service: Deactivated successfully. May 17 00:43:46.493996 systemd[1]: Started sshd@46-10.128.0.28:22-46.32.178.46:43286.service. May 17 00:43:48.110387 sshd[4942]: Failed password for root from 46.32.178.46 port 43286 ssh2 May 17 00:43:48.408306 sshd[4942]: Connection closed by authenticating user root 46.32.178.46 port 43286 [preauth]