Mar 17 18:45:23.143220 kernel: Linux version 5.15.179-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Mar 17 17:12:34 -00 2025
Mar 17 18:45:23.143266 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=249ccd113f901380672c0d31e18f792e8e0344094c0e39eedc449f039418b31a
Mar 17 18:45:23.143284 kernel: BIOS-provided physical RAM map:
Mar 17 18:45:23.143298 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved
Mar 17 18:45:23.143310 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable
Mar 17 18:45:23.143322 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved
Mar 17 18:45:23.143342 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable
Mar 17 18:45:23.143355 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved
Mar 17 18:45:23.143368 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bd277fff] usable
Mar 17 18:45:23.143381 kernel: BIOS-e820: [mem 0x00000000bd278000-0x00000000bd281fff] ACPI data
Mar 17 18:45:23.143395 kernel: BIOS-e820: [mem 0x00000000bd282000-0x00000000bf8ecfff] usable
Mar 17 18:45:23.143408 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bfb6cfff] reserved
Mar 17 18:45:23.143421 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data
Mar 17 18:45:23.143435 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS
Mar 17 18:45:23.143457 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable
Mar 17 18:45:23.143470 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved
Mar 17 18:45:23.143484 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable
Mar 17 18:45:23.143498 kernel: NX (Execute Disable) protection: active
Mar 17 18:45:23.143670 kernel: efi: EFI v2.70 by EDK II
Mar 17 18:45:23.143693 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000 RNG=0xbfb73018 TPMEventLog=0xbd278018
Mar 17 18:45:23.143707 kernel: random: crng init done
Mar 17 18:45:23.143721 kernel: SMBIOS 2.4 present.
Mar 17 18:45:23.143741 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/12/2025
Mar 17 18:45:23.143755 kernel: Hypervisor detected: KVM
Mar 17 18:45:23.143778 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 17 18:45:23.143791 kernel: kvm-clock: cpu 0, msr 21219a001, primary cpu clock
Mar 17 18:45:23.143804 kernel: kvm-clock: using sched offset of 13261271336 cycles
Mar 17 18:45:23.143819 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 17 18:45:23.143833 kernel: tsc: Detected 2299.998 MHz processor
Mar 17 18:45:23.143847 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 17 18:45:23.143860 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 17 18:45:23.143874 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000
Mar 17 18:45:23.143892 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 17 18:45:23.143907 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000
Mar 17 18:45:23.143921 kernel: Using GB pages for direct mapping
Mar 17 18:45:23.143935 kernel: Secure boot disabled
Mar 17 18:45:23.143950 kernel: ACPI: Early table checksum verification disabled
Mar 17 18:45:23.143964 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google)
Mar 17 18:45:23.143978 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013)
Mar 17 18:45:23.143994 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001)
Mar 17 18:45:23.144017 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001)
Mar 17 18:45:23.144032 kernel: ACPI: FACS 0x00000000BFBF2000 000040
Mar 17 18:45:23.144048 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20240322)
Mar 17 18:45:23.144062 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001)
Mar 17 18:45:23.144078 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001)
Mar 17 18:45:23.144093 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001)
Mar 17 18:45:23.144112 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001)
Mar 17 18:45:23.144128 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001)
Mar 17 18:45:23.144144 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3]
Mar 17 18:45:23.144160 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63]
Mar 17 18:45:23.144175 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f]
Mar 17 18:45:23.144190 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315]
Mar 17 18:45:23.144206 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033]
Mar 17 18:45:23.144221 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7]
Mar 17 18:45:23.144237 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075]
Mar 17 18:45:23.144256 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f]
Mar 17 18:45:23.144271 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027]
Mar 17 18:45:23.144286 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Mar 17 18:45:23.144302 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Mar 17 18:45:23.144317 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Mar 17 18:45:23.144332 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff]
Mar 17 18:45:23.144348 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff]
Mar 17 18:45:23.144364 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff]
Mar 17 18:45:23.144381 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff]
Mar 17 18:45:23.144400 kernel: NODE_DATA(0) allocated [mem 0x21fffa000-0x21fffffff]
Mar 17 18:45:23.144416 kernel: Zone ranges:
Mar 17 18:45:23.144431 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 17 18:45:23.144447 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Mar 17 18:45:23.144464 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff]
Mar 17 18:45:23.144480 kernel: Movable zone start for each node
Mar 17 18:45:23.144497 kernel: Early memory node ranges
Mar 17 18:45:23.144531 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff]
Mar 17 18:45:23.144547 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff]
Mar 17 18:45:23.144566 kernel: node 0: [mem 0x0000000000100000-0x00000000bd277fff]
Mar 17 18:45:23.144581 kernel: node 0: [mem 0x00000000bd282000-0x00000000bf8ecfff]
Mar 17 18:45:23.144595 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff]
Mar 17 18:45:23.146833 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff]
Mar 17 18:45:23.146859 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff]
Mar 17 18:45:23.146876 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 17 18:45:23.146891 kernel: On node 0, zone DMA: 11 pages in unavailable ranges
Mar 17 18:45:23.146907 kernel: On node 0, zone DMA: 104 pages in unavailable ranges
Mar 17 18:45:23.146922 kernel: On node 0, zone DMA32: 10 pages in unavailable ranges
Mar 17 18:45:23.146944 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Mar 17 18:45:23.146960 kernel: On node 0, zone Normal: 32 pages in unavailable ranges
Mar 17 18:45:23.146977 kernel: ACPI: PM-Timer IO Port: 0xb008
Mar 17 18:45:23.146992 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 17 18:45:23.147009 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 17 18:45:23.147024 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 17 18:45:23.147040 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 17 18:45:23.147056 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 17 18:45:23.147071 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 17 18:45:23.147091 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 17 18:45:23.147106 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Mar 17 18:45:23.147122 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Mar 17 18:45:23.147138 kernel: Booting paravirtualized kernel on KVM
Mar 17 18:45:23.147155 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 17 18:45:23.147171 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Mar 17 18:45:23.147186 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576
Mar 17 18:45:23.147200 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152
Mar 17 18:45:23.147214 kernel: pcpu-alloc: [0] 0 1
Mar 17 18:45:23.147232 kernel: kvm-guest: PV spinlocks enabled
Mar 17 18:45:23.147247 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 17 18:45:23.147263 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932270
Mar 17 18:45:23.147278 kernel: Policy zone: Normal
Mar 17 18:45:23.147297 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=249ccd113f901380672c0d31e18f792e8e0344094c0e39eedc449f039418b31a
Mar 17 18:45:23.147314 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Mar 17 18:45:23.147329 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Mar 17 18:45:23.147344 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 17 18:45:23.147360 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 17 18:45:23.147380 kernel: Memory: 7515412K/7860544K available (12294K kernel code, 2278K rwdata, 13724K rodata, 47472K init, 4108K bss, 344872K reserved, 0K cma-reserved)
Mar 17 18:45:23.147397 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 17 18:45:23.147413 kernel: Kernel/User page tables isolation: enabled
Mar 17 18:45:23.147429 kernel: ftrace: allocating 34580 entries in 136 pages
Mar 17 18:45:23.147446 kernel: ftrace: allocated 136 pages with 2 groups
Mar 17 18:45:23.147462 kernel: rcu: Hierarchical RCU implementation.
Mar 17 18:45:23.147479 kernel: rcu: RCU event tracing is enabled.
Mar 17 18:45:23.147495 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 17 18:45:23.147531 kernel: Rude variant of Tasks RCU enabled.
Mar 17 18:45:23.147560 kernel: Tracing variant of Tasks RCU enabled.
Mar 17 18:45:23.147577 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 17 18:45:23.147620 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 17 18:45:23.147640 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Mar 17 18:45:23.147657 kernel: Console: colour dummy device 80x25
Mar 17 18:45:23.147673 kernel: printk: console [ttyS0] enabled
Mar 17 18:45:23.147690 kernel: ACPI: Core revision 20210730
Mar 17 18:45:23.147706 kernel: APIC: Switch to symmetric I/O mode setup
Mar 17 18:45:23.147724 kernel: x2apic enabled
Mar 17 18:45:23.147746 kernel: Switched APIC routing to physical x2apic.
Mar 17 18:45:23.147772 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1
Mar 17 18:45:23.147790 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Mar 17 18:45:23.147809 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998)
Mar 17 18:45:23.147826 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
Mar 17 18:45:23.147843 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
Mar 17 18:45:23.147860 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 17 18:45:23.147880 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Mar 17 18:45:23.147898 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Mar 17 18:45:23.147916 kernel: Spectre V2 : Mitigation: IBRS
Mar 17 18:45:23.147933 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Mar 17 18:45:23.147951 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Mar 17 18:45:23.147968 kernel: RETBleed: Mitigation: IBRS
Mar 17 18:45:23.147986 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Mar 17 18:45:23.148003 kernel: Spectre V2 : User space: Mitigation: STIBP via seccomp and prctl
Mar 17 18:45:23.148020 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Mar 17 18:45:23.148041 kernel: MDS: Mitigation: Clear CPU buffers
Mar 17 18:45:23.148058 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 17 18:45:23.148076 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 17 18:45:23.148094 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 17 18:45:23.148112 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 17 18:45:23.148130 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 17 18:45:23.148147 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Mar 17 18:45:23.148166 kernel: Freeing SMP alternatives memory: 32K
Mar 17 18:45:23.148183 kernel: pid_max: default: 32768 minimum: 301
Mar 17 18:45:23.148205 kernel: LSM: Security Framework initializing
Mar 17 18:45:23.148223 kernel: SELinux: Initializing.
Mar 17 18:45:23.148241 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Mar 17 18:45:23.148258 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Mar 17 18:45:23.148276 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0)
Mar 17 18:45:23.148294 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only.
Mar 17 18:45:23.148312 kernel: signal: max sigframe size: 1776
Mar 17 18:45:23.148329 kernel: rcu: Hierarchical SRCU implementation.
Mar 17 18:45:23.148347 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 17 18:45:23.148369 kernel: smp: Bringing up secondary CPUs ...
Mar 17 18:45:23.148387 kernel: x86: Booting SMP configuration:
Mar 17 18:45:23.148404 kernel: .... node #0, CPUs: #1
Mar 17 18:45:23.148423 kernel: kvm-clock: cpu 1, msr 21219a041, secondary cpu clock
Mar 17 18:45:23.148442 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Mar 17 18:45:23.148461 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Mar 17 18:45:23.148478 kernel: smp: Brought up 1 node, 2 CPUs
Mar 17 18:45:23.148496 kernel: smpboot: Max logical packages: 1
Mar 17 18:45:23.153972 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Mar 17 18:45:23.154001 kernel: devtmpfs: initialized
Mar 17 18:45:23.154018 kernel: x86/mm: Memory block size: 128MB
Mar 17 18:45:23.154036 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes)
Mar 17 18:45:23.154052 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 17 18:45:23.154068 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 17 18:45:23.154084 kernel: pinctrl core: initialized pinctrl subsystem
Mar 17 18:45:23.154100 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 17 18:45:23.154117 kernel: audit: initializing netlink subsys (disabled)
Mar 17 18:45:23.154141 kernel: audit: type=2000 audit(1742237123.354:1): state=initialized audit_enabled=0 res=1
Mar 17 18:45:23.154157 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 17 18:45:23.154173 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 17 18:45:23.154190 kernel: cpuidle: using governor menu
Mar 17 18:45:23.154207 kernel: ACPI: bus type PCI registered
Mar 17 18:45:23.154224 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 17 18:45:23.154240 kernel: dca service started, version 1.12.1
Mar 17 18:45:23.154257 kernel: PCI: Using configuration type 1 for base access
Mar 17 18:45:23.154274 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 17 18:45:23.154295 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Mar 17 18:45:23.154311 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Mar 17 18:45:23.154327 kernel: ACPI: Added _OSI(Module Device)
Mar 17 18:45:23.154344 kernel: ACPI: Added _OSI(Processor Device)
Mar 17 18:45:23.154360 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Mar 17 18:45:23.154375 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 17 18:45:23.154392 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Mar 17 18:45:23.154409 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Mar 17 18:45:23.154426 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Mar 17 18:45:23.154446 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Mar 17 18:45:23.154463 kernel: ACPI: Interpreter enabled
Mar 17 18:45:23.154479 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 17 18:45:23.154496 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 17 18:45:23.154559 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 17 18:45:23.154580 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Mar 17 18:45:23.154597 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 17 18:45:23.154863 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Mar 17 18:45:23.155045 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
Mar 17 18:45:23.155068 kernel: PCI host bridge to bus 0000:00
Mar 17 18:45:23.155235 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 17 18:45:23.155389 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 17 18:45:23.162465 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 17 18:45:23.162778 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window]
Mar 17 18:45:23.162928 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 17 18:45:23.163111 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Mar 17 18:45:23.163295 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100
Mar 17 18:45:23.163474 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Mar 17 18:45:23.163660 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Mar 17 18:45:23.163857 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000
Mar 17 18:45:23.164027 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Mar 17 18:45:23.164203 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f]
Mar 17 18:45:23.164387 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Mar 17 18:45:23.164574 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f]
Mar 17 18:45:23.164745 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f]
Mar 17 18:45:23.164930 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00
Mar 17 18:45:23.165098 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f]
Mar 17 18:45:23.165268 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f]
Mar 17 18:45:23.165297 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 17 18:45:23.165316 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 17 18:45:23.165334 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 17 18:45:23.165352 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 17 18:45:23.165371 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Mar 17 18:45:23.165389 kernel: iommu: Default domain type: Translated
Mar 17 18:45:23.165407 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 17 18:45:23.165425 kernel: vgaarb: loaded
Mar 17 18:45:23.165444 kernel: pps_core: LinuxPPS API ver. 1 registered
Mar 17 18:45:23.165465 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Mar 17 18:45:23.165483 kernel: PTP clock support registered
Mar 17 18:45:23.165501 kernel: Registered efivars operations
Mar 17 18:45:23.166595 kernel: PCI: Using ACPI for IRQ routing
Mar 17 18:45:23.166623 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 17 18:45:23.166642 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff]
Mar 17 18:45:23.166659 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff]
Mar 17 18:45:23.166728 kernel: e820: reserve RAM buffer [mem 0xbd278000-0xbfffffff]
Mar 17 18:45:23.166746 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff]
Mar 17 18:45:23.166779 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff]
Mar 17 18:45:23.166796 kernel: clocksource: Switched to clocksource kvm-clock
Mar 17 18:45:23.166813 kernel: VFS: Disk quotas dquot_6.6.0
Mar 17 18:45:23.166830 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 17 18:45:23.166847 kernel: pnp: PnP ACPI init
Mar 17 18:45:23.166863 kernel: pnp: PnP ACPI: found 7 devices
Mar 17 18:45:23.166879 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 17 18:45:23.166896 kernel: NET: Registered PF_INET protocol family
Mar 17 18:45:23.166912 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Mar 17 18:45:23.166932 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Mar 17 18:45:23.166948 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 17 18:45:23.166964 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 17 18:45:23.166980 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Mar 17 18:45:23.166996 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Mar 17 18:45:23.167012 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Mar 17 18:45:23.167029 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Mar 17 18:45:23.167045 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 17 18:45:23.167066 kernel: NET: Registered PF_XDP protocol family
Mar 17 18:45:23.167256 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 17 18:45:23.167412 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 17 18:45:23.176814 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 17 18:45:23.176971 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window]
Mar 17 18:45:23.177152 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Mar 17 18:45:23.177179 kernel: PCI: CLS 0 bytes, default 64
Mar 17 18:45:23.177206 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Mar 17 18:45:23.177225 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB)
Mar 17 18:45:23.177243 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Mar 17 18:45:23.177262 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Mar 17 18:45:23.177279 kernel: clocksource: Switched to clocksource tsc
Mar 17 18:45:23.177298 kernel: Initialise system trusted keyrings
Mar 17 18:45:23.177316 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Mar 17 18:45:23.177334 kernel: Key type asymmetric registered
Mar 17 18:45:23.177351 kernel: Asymmetric key parser 'x509' registered
Mar 17 18:45:23.177371 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Mar 17 18:45:23.177389 kernel: io scheduler mq-deadline registered
Mar 17 18:45:23.177406 kernel: io scheduler kyber registered
Mar 17 18:45:23.177424 kernel: io scheduler bfq registered
Mar 17 18:45:23.177442 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 17 18:45:23.177461 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Mar 17 18:45:23.177659 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
Mar 17 18:45:23.181017 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Mar 17 18:45:23.181242 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
Mar 17 18:45:23.181275 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Mar 17 18:45:23.181440 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
Mar 17 18:45:23.181462 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 17 18:45:23.181479 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 17 18:45:23.181495 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Mar 17 18:45:23.181527 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A
Mar 17 18:45:23.181544 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A
Mar 17 18:45:23.181711 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0)
Mar 17 18:45:23.181739 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 17 18:45:23.181755 kernel: i8042: Warning: Keylock active
Mar 17 18:45:23.181777 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 17 18:45:23.181794 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 17 18:45:23.181957 kernel: rtc_cmos 00:00: RTC can wake from S4
Mar 17 18:45:23.182110 kernel: rtc_cmos 00:00: registered as rtc0
Mar 17 18:45:23.182266 kernel: rtc_cmos 00:00: setting system clock to 2025-03-17T18:45:22 UTC (1742237122)
Mar 17 18:45:23.182418 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Mar 17 18:45:23.182444 kernel: intel_pstate: CPU model not supported
Mar 17 18:45:23.182461 kernel: pstore: Registered efi as persistent store backend
Mar 17 18:45:23.182477 kernel: NET: Registered PF_INET6 protocol family
Mar 17 18:45:23.182492 kernel: Segment Routing with IPv6
Mar 17 18:45:23.182507 kernel: In-situ OAM (IOAM) with IPv6
Mar 17 18:45:23.184366 kernel: NET: Registered PF_PACKET protocol family
Mar 17 18:45:23.184388 kernel: Key type dns_resolver registered
Mar 17 18:45:23.184406 kernel: IPI shorthand broadcast: enabled
Mar 17 18:45:23.184425 kernel: sched_clock: Marking stable (734918794, 155718586)->(913318859, -22681479)
Mar 17 18:45:23.184449 kernel: registered taskstats version 1
Mar 17 18:45:23.184467 kernel: Loading compiled-in X.509 certificates
Mar 17 18:45:23.184485 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 17 18:45:23.184503 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.179-flatcar: d5b956bbabb2d386c0246a969032c0de9eaa8220'
Mar 17 18:45:23.184596 kernel: Key type .fscrypt registered
Mar 17 18:45:23.184612 kernel: Key type fscrypt-provisioning registered
Mar 17 18:45:23.184627 kernel: pstore: Using crash dump compression: deflate
Mar 17 18:45:23.184643 kernel: ima: Allocated hash algorithm: sha1
Mar 17 18:45:23.184661 kernel: ima: No architecture policies found
Mar 17 18:45:23.184684 kernel: clk: Disabling unused clocks
Mar 17 18:45:23.184700 kernel: Freeing unused kernel image (initmem) memory: 47472K
Mar 17 18:45:23.184717 kernel: Write protecting the kernel read-only data: 28672k
Mar 17 18:45:23.184733 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Mar 17 18:45:23.184748 kernel: Freeing unused kernel image (rodata/data gap) memory: 612K
Mar 17 18:45:23.184774 kernel: Run /init as init process
Mar 17 18:45:23.184793 kernel: with arguments:
Mar 17 18:45:23.184812 kernel: /init
Mar 17 18:45:23.184829 kernel: with environment:
Mar 17 18:45:23.184847 kernel: HOME=/
Mar 17 18:45:23.184862 kernel: TERM=linux
Mar 17 18:45:23.195753 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Mar 17 18:45:23.195816 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Mar 17 18:45:23.195839 systemd[1]: Detected virtualization kvm.
Mar 17 18:45:23.195860 systemd[1]: Detected architecture x86-64.
Mar 17 18:45:23.195878 systemd[1]: Running in initrd.
Mar 17 18:45:23.195901 systemd[1]: No hostname configured, using default hostname.
Mar 17 18:45:23.195916 systemd[1]: Hostname set to .
Mar 17 18:45:23.195933 systemd[1]: Initializing machine ID from VM UUID.
Mar 17 18:45:23.195950 systemd[1]: Queued start job for default target initrd.target.
Mar 17 18:45:23.195965 systemd[1]: Started systemd-ask-password-console.path.
Mar 17 18:45:23.195983 systemd[1]: Reached target cryptsetup.target.
Mar 17 18:45:23.196001 systemd[1]: Reached target paths.target.
Mar 17 18:45:23.196018 systemd[1]: Reached target slices.target.
Mar 17 18:45:23.196040 systemd[1]: Reached target swap.target.
Mar 17 18:45:23.196059 systemd[1]: Reached target timers.target.
Mar 17 18:45:23.196078 systemd[1]: Listening on iscsid.socket.
Mar 17 18:45:23.196097 systemd[1]: Listening on iscsiuio.socket.
Mar 17 18:45:23.196115 systemd[1]: Listening on systemd-journald-audit.socket.
Mar 17 18:45:23.196134 systemd[1]: Listening on systemd-journald-dev-log.socket.
Mar 17 18:45:23.196152 systemd[1]: Listening on systemd-journald.socket.
Mar 17 18:45:23.196170 systemd[1]: Listening on systemd-networkd.socket.
Mar 17 18:45:23.196193 systemd[1]: Listening on systemd-udevd-control.socket.
Mar 17 18:45:23.196212 systemd[1]: Listening on systemd-udevd-kernel.socket.
Mar 17 18:45:23.196248 systemd[1]: Reached target sockets.target.
Mar 17 18:45:23.196271 systemd[1]: Starting kmod-static-nodes.service...
Mar 17 18:45:23.196290 systemd[1]: Finished network-cleanup.service.
Mar 17 18:45:23.196309 systemd[1]: Starting systemd-fsck-usr.service...
Mar 17 18:45:23.196329 systemd[1]: Starting systemd-journald.service...
Mar 17 18:45:23.196351 systemd[1]: Starting systemd-modules-load.service...
Mar 17 18:45:23.196371 systemd[1]: Starting systemd-resolved.service...
Mar 17 18:45:23.196390 systemd[1]: Starting systemd-vconsole-setup.service...
Mar 17 18:45:23.196409 systemd[1]: Finished kmod-static-nodes.service.
Mar 17 18:45:23.196429 kernel: audit: type=1130 audit(1742237123.147:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:23.196448 systemd[1]: Finished systemd-fsck-usr.service.
Mar 17 18:45:23.196468 kernel: audit: type=1130 audit(1742237123.154:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:23.196487 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Mar 17 18:45:23.196550 systemd[1]: Finished systemd-vconsole-setup.service.
Mar 17 18:45:23.196569 kernel: audit: type=1130 audit(1742237123.176:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:23.196588 systemd[1]: Starting dracut-cmdline-ask.service...
Mar 17 18:45:23.196606 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Mar 17 18:45:23.196626 kernel: audit: type=1130 audit(1742237123.185:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:23.196652 systemd-journald[189]: Journal started
Mar 17 18:45:23.196766 systemd-journald[189]: Runtime Journal (/run/log/journal/0109e7617388cb92ba28a205503daa68) is 8.0M, max 148.8M, 140.8M free.
Mar 17 18:45:23.202653 systemd[1]: Started systemd-journald.service.
Mar 17 18:45:23.202724 kernel: audit: type=1130 audit(1742237123.196:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:23.147000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:23.154000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:23.176000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:23.185000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:23.196000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:23.183433 systemd-modules-load[190]: Inserted module 'overlay'
Mar 17 18:45:23.217089 systemd[1]: Finished dracut-cmdline-ask.service.
Mar 17 18:45:23.219000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:23.222827 systemd[1]: Starting dracut-cmdline.service...
Mar 17 18:45:23.231133 kernel: audit: type=1130 audit(1742237123.219:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:23.255600 systemd-resolved[191]: Positive Trust Anchors:
Mar 17 18:45:23.255987 systemd-resolved[191]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 17 18:45:23.256046 systemd-resolved[191]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Mar 17 18:45:23.272271 dracut-cmdline[205]: dracut-dracut-053
Mar 17 18:45:23.272271 dracut-cmdline[205]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=249ccd113f901380672c0d31e18f792e8e0344094c0e39eedc449f039418b31a
Mar 17 18:45:23.285663 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 17 18:45:23.263316 systemd-resolved[191]: Defaulting to hostname 'linux'.
Mar 17 18:45:23.265712 systemd[1]: Started systemd-resolved.service.
Mar 17 18:45:23.290667 kernel: Bridge firewalling registered Mar 17 18:45:23.287414 systemd-modules-load[190]: Inserted module 'br_netfilter' Mar 17 18:45:23.303000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:45:23.304820 systemd[1]: Reached target nss-lookup.target. Mar 17 18:45:23.316668 kernel: audit: type=1130 audit(1742237123.303:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:45:23.323545 kernel: SCSI subsystem initialized Mar 17 18:45:23.341871 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Mar 17 18:45:23.341952 kernel: device-mapper: uevent: version 1.0.3 Mar 17 18:45:23.341976 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Mar 17 18:45:23.348412 systemd-modules-load[190]: Inserted module 'dm_multipath' Mar 17 18:45:23.349761 systemd[1]: Finished systemd-modules-load.service. Mar 17 18:45:23.359000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:45:23.367564 kernel: audit: type=1130 audit(1742237123.359:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:45:23.361959 systemd[1]: Starting systemd-sysctl.service... Mar 17 18:45:23.376541 kernel: Loading iSCSI transport class v2.0-870. Mar 17 18:45:23.380289 systemd[1]: Finished systemd-sysctl.service. 
Mar 17 18:45:23.391676 kernel: audit: type=1130 audit(1742237123.382:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:45:23.382000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:45:23.401544 kernel: iscsi: registered transport (tcp) Mar 17 18:45:23.429028 kernel: iscsi: registered transport (qla4xxx) Mar 17 18:45:23.429126 kernel: QLogic iSCSI HBA Driver Mar 17 18:45:23.474686 systemd[1]: Finished dracut-cmdline.service. Mar 17 18:45:23.473000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:45:23.476419 systemd[1]: Starting dracut-pre-udev.service... Mar 17 18:45:23.534600 kernel: raid6: avx2x4 gen() 18224 MB/s Mar 17 18:45:23.551566 kernel: raid6: avx2x4 xor() 7691 MB/s Mar 17 18:45:23.568565 kernel: raid6: avx2x2 gen() 18272 MB/s Mar 17 18:45:23.585565 kernel: raid6: avx2x2 xor() 18580 MB/s Mar 17 18:45:23.602557 kernel: raid6: avx2x1 gen() 14282 MB/s Mar 17 18:45:23.619565 kernel: raid6: avx2x1 xor() 16125 MB/s Mar 17 18:45:23.636560 kernel: raid6: sse2x4 gen() 11036 MB/s Mar 17 18:45:23.653559 kernel: raid6: sse2x4 xor() 6574 MB/s Mar 17 18:45:23.670554 kernel: raid6: sse2x2 gen() 11998 MB/s Mar 17 18:45:23.689583 kernel: raid6: sse2x2 xor() 7405 MB/s Mar 17 18:45:23.707619 kernel: raid6: sse2x1 gen() 8663 MB/s Mar 17 18:45:23.725885 kernel: raid6: sse2x1 xor() 5088 MB/s Mar 17 18:45:23.726012 kernel: raid6: using algorithm avx2x2 gen() 18272 MB/s Mar 17 18:45:23.726036 kernel: raid6: .... 
xor() 18580 MB/s, rmw enabled Mar 17 18:45:23.727213 kernel: raid6: using avx2x2 recovery algorithm Mar 17 18:45:23.743549 kernel: xor: automatically using best checksumming function avx Mar 17 18:45:23.853568 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Mar 17 18:45:23.865602 systemd[1]: Finished dracut-pre-udev.service. Mar 17 18:45:23.864000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:45:23.864000 audit: BPF prog-id=7 op=LOAD Mar 17 18:45:23.865000 audit: BPF prog-id=8 op=LOAD Mar 17 18:45:23.867234 systemd[1]: Starting systemd-udevd.service... Mar 17 18:45:23.884768 systemd-udevd[389]: Using default interface naming scheme 'v252'. Mar 17 18:45:23.892877 systemd[1]: Started systemd-udevd.service. Mar 17 18:45:23.900000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:45:23.902853 systemd[1]: Starting dracut-pre-trigger.service... Mar 17 18:45:23.919533 dracut-pre-trigger[394]: rd.md=0: removing MD RAID activation Mar 17 18:45:23.958312 systemd[1]: Finished dracut-pre-trigger.service. Mar 17 18:45:23.965000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:45:23.967848 systemd[1]: Starting systemd-udev-trigger.service... Mar 17 18:45:24.038109 systemd[1]: Finished systemd-udev-trigger.service. Mar 17 18:45:24.045000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:45:24.120544 kernel: scsi host0: Virtio SCSI HBA Mar 17 18:45:24.137545 kernel: cryptd: max_cpu_qlen set to 1000 Mar 17 18:45:24.178075 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Mar 17 18:45:24.207597 kernel: AVX2 version of gcm_enc/dec engaged. Mar 17 18:45:24.224578 kernel: AES CTR mode by8 optimization enabled Mar 17 18:45:24.285228 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB) Mar 17 18:45:24.342989 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Mar 17 18:45:24.343241 kernel: sd 0:0:1:0: [sda] Write Protect is off Mar 17 18:45:24.343462 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Mar 17 18:45:24.343717 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Mar 17 18:45:24.343927 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 17 18:45:24.343951 kernel: GPT:17805311 != 25165823 Mar 17 18:45:24.343972 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 17 18:45:24.343993 kernel: GPT:17805311 != 25165823 Mar 17 18:45:24.344024 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 17 18:45:24.344044 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 17 18:45:24.344066 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Mar 17 18:45:24.397370 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Mar 17 18:45:24.419875 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (432) Mar 17 18:45:24.414415 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Mar 17 18:45:24.429732 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Mar 17 18:45:24.459762 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Mar 17 18:45:24.479811 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Mar 17 18:45:24.481113 systemd[1]: Starting disk-uuid.service... 
Mar 17 18:45:24.515700 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 17 18:45:24.515948 disk-uuid[507]: Primary Header is updated. Mar 17 18:45:24.515948 disk-uuid[507]: Secondary Entries is updated. Mar 17 18:45:24.515948 disk-uuid[507]: Secondary Header is updated. Mar 17 18:45:25.549607 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 17 18:45:25.549692 disk-uuid[508]: The operation has completed successfully. Mar 17 18:45:25.623397 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 17 18:45:25.629000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:45:25.629000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:45:25.623580 systemd[1]: Finished disk-uuid.service. Mar 17 18:45:25.637860 systemd[1]: Starting verity-setup.service... Mar 17 18:45:25.669559 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Mar 17 18:45:25.762366 systemd[1]: Found device dev-mapper-usr.device. Mar 17 18:45:25.764951 systemd[1]: Mounting sysusr-usr.mount... Mar 17 18:45:25.794000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:45:25.787081 systemd[1]: Finished verity-setup.service. Mar 17 18:45:25.874566 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Mar 17 18:45:25.875098 systemd[1]: Mounted sysusr-usr.mount. Mar 17 18:45:25.875588 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. 
Mar 17 18:45:25.924697 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Mar 17 18:45:25.924742 kernel: BTRFS info (device sda6): using free space tree Mar 17 18:45:25.924775 kernel: BTRFS info (device sda6): has skinny extents Mar 17 18:45:25.876608 systemd[1]: Starting ignition-setup.service... Mar 17 18:45:25.945692 kernel: BTRFS info (device sda6): enabling ssd optimizations Mar 17 18:45:25.889783 systemd[1]: Starting parse-ip-for-networkd.service... Mar 17 18:45:25.956631 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 17 18:45:25.980111 systemd[1]: Finished ignition-setup.service. Mar 17 18:45:25.978000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:45:25.981981 systemd[1]: Starting ignition-fetch-offline.service... Mar 17 18:45:26.024529 systemd[1]: Finished parse-ip-for-networkd.service. Mar 17 18:45:26.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:45:26.032000 audit: BPF prog-id=9 op=LOAD Mar 17 18:45:26.034861 systemd[1]: Starting systemd-networkd.service... Mar 17 18:45:26.071789 systemd-networkd[683]: lo: Link UP Mar 17 18:45:26.080000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:45:26.071802 systemd-networkd[683]: lo: Gained carrier Mar 17 18:45:26.072639 systemd-networkd[683]: Enumeration completed Mar 17 18:45:26.072797 systemd[1]: Started systemd-networkd.service. Mar 17 18:45:26.073415 systemd-networkd[683]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Mar 17 18:45:26.075681 systemd-networkd[683]: eth0: Link UP Mar 17 18:45:26.075689 systemd-networkd[683]: eth0: Gained carrier Mar 17 18:45:26.082049 systemd[1]: Reached target network.target. Mar 17 18:45:26.088745 systemd-networkd[683]: eth0: DHCPv4 address 10.128.0.78/32, gateway 10.128.0.1 acquired from 169.254.169.254 Mar 17 18:45:26.112161 systemd[1]: Starting iscsiuio.service... Mar 17 18:45:26.122226 systemd[1]: Started iscsiuio.service. Mar 17 18:45:26.192000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:45:26.195474 systemd[1]: Starting iscsid.service... Mar 17 18:45:26.203963 systemd[1]: Started iscsid.service. Mar 17 18:45:26.216925 iscsid[692]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Mar 17 18:45:26.216925 iscsid[692]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Mar 17 18:45:26.216925 iscsid[692]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Mar 17 18:45:26.216925 iscsid[692]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Mar 17 18:45:26.216925 iscsid[692]: If using hardware iscsi like qla4xxx this message can be ignored. Mar 17 18:45:26.216925 iscsid[692]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Mar 17 18:45:26.216925 iscsid[692]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Mar 17 18:45:26.222000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Mar 17 18:45:26.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:45:26.313000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:45:26.225388 systemd[1]: Starting dracut-initqueue.service... Mar 17 18:45:26.269943 ignition[646]: Ignition 2.14.0 Mar 17 18:45:26.246700 systemd[1]: Finished dracut-initqueue.service. Mar 17 18:45:26.269958 ignition[646]: Stage: fetch-offline Mar 17 18:45:26.299097 systemd[1]: Finished ignition-fetch-offline.service. Mar 17 18:45:26.270073 ignition[646]: reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 18:45:26.314977 systemd[1]: Reached target remote-fs-pre.target. Mar 17 18:45:26.438000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:45:26.270117 ignition[646]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Mar 17 18:45:26.333839 systemd[1]: Reached target remote-cryptsetup.target. Mar 17 18:45:26.456000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:45:26.288887 ignition[646]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Mar 17 18:45:26.333957 systemd[1]: Reached target remote-fs.target. Mar 17 18:45:26.289100 ignition[646]: parsed url from cmdline: "" Mar 17 18:45:26.353130 systemd[1]: Starting dracut-pre-mount.service... 
Mar 17 18:45:26.289108 ignition[646]: no config URL provided Mar 17 18:45:26.376881 systemd[1]: Starting ignition-fetch.service... Mar 17 18:45:26.289117 ignition[646]: reading system config file "/usr/lib/ignition/user.ign" Mar 17 18:45:26.519000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:45:26.392552 systemd[1]: Finished dracut-pre-mount.service. Mar 17 18:45:26.289129 ignition[646]: no config at "/usr/lib/ignition/user.ign" Mar 17 18:45:26.419617 unknown[707]: fetched base config from "system" Mar 17 18:45:26.289139 ignition[646]: failed to fetch config: resource requires networking Mar 17 18:45:26.568000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:45:26.419631 unknown[707]: fetched base config from "system" Mar 17 18:45:26.289436 ignition[646]: Ignition finished successfully Mar 17 18:45:26.419644 unknown[707]: fetched user config from "gcp" Mar 17 18:45:26.389632 ignition[707]: Ignition 2.14.0 Mar 17 18:45:26.440238 systemd[1]: Finished ignition-fetch.service. Mar 17 18:45:26.389645 ignition[707]: Stage: fetch Mar 17 18:45:26.459174 systemd[1]: Starting ignition-kargs.service... Mar 17 18:45:26.389794 ignition[707]: reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 18:45:26.500240 systemd[1]: Finished ignition-kargs.service. Mar 17 18:45:26.389827 ignition[707]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Mar 17 18:45:26.522415 systemd[1]: Starting ignition-disks.service... 
Mar 17 18:45:26.400050 ignition[707]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Mar 17 18:45:26.555066 systemd[1]: Finished ignition-disks.service. Mar 17 18:45:26.400293 ignition[707]: parsed url from cmdline: "" Mar 17 18:45:26.570168 systemd[1]: Reached target initrd-root-device.target. Mar 17 18:45:26.400301 ignition[707]: no config URL provided Mar 17 18:45:26.588761 systemd[1]: Reached target local-fs-pre.target. Mar 17 18:45:26.400311 ignition[707]: reading system config file "/usr/lib/ignition/user.ign" Mar 17 18:45:26.601802 systemd[1]: Reached target local-fs.target. Mar 17 18:45:26.400330 ignition[707]: no config at "/usr/lib/ignition/user.ign" Mar 17 18:45:26.614782 systemd[1]: Reached target sysinit.target. Mar 17 18:45:26.400489 ignition[707]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Mar 17 18:45:26.614938 systemd[1]: Reached target basic.target. Mar 17 18:45:26.408770 ignition[707]: GET result: OK Mar 17 18:45:26.636049 systemd[1]: Starting systemd-fsck-root.service... 
Mar 17 18:45:26.408890 ignition[707]: parsing config with SHA512: 22b118e25ada13413f7bf5df7c9e13679ba30ab88e41151b12907cb404970ceeeb0f1fcc6ebedacd69296e27c6e382dcada485c5eb94975c5f78447cdac0312b Mar 17 18:45:26.421044 ignition[707]: fetch: fetch complete Mar 17 18:45:26.421052 ignition[707]: fetch: fetch passed Mar 17 18:45:26.421127 ignition[707]: Ignition finished successfully Mar 17 18:45:26.473559 ignition[713]: Ignition 2.14.0 Mar 17 18:45:26.473567 ignition[713]: Stage: kargs Mar 17 18:45:26.473706 ignition[713]: reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 18:45:26.473740 ignition[713]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Mar 17 18:45:26.482900 ignition[713]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Mar 17 18:45:26.484652 ignition[713]: kargs: kargs passed Mar 17 18:45:26.484724 ignition[713]: Ignition finished successfully Mar 17 18:45:26.534164 ignition[719]: Ignition 2.14.0 Mar 17 18:45:26.534173 ignition[719]: Stage: disks Mar 17 18:45:26.534316 ignition[719]: reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 18:45:26.534347 ignition[719]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Mar 17 18:45:26.542318 ignition[719]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Mar 17 18:45:26.543815 ignition[719]: disks: disks passed Mar 17 18:45:26.543876 ignition[719]: Ignition finished successfully Mar 17 18:45:26.677226 systemd-fsck[727]: ROOT: clean, 623/1628000 files, 124059/1617920 blocks Mar 17 18:45:26.860626 systemd[1]: Finished systemd-fsck-root.service. Mar 17 18:45:26.859000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:45:26.862004 systemd[1]: Mounting sysroot.mount... Mar 17 18:45:26.896868 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Mar 17 18:45:26.891004 systemd[1]: Mounted sysroot.mount. Mar 17 18:45:26.905002 systemd[1]: Reached target initrd-root-fs.target. Mar 17 18:45:26.923992 systemd[1]: Mounting sysroot-usr.mount... Mar 17 18:45:26.936424 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Mar 17 18:45:26.936499 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 17 18:45:26.936566 systemd[1]: Reached target ignition-diskful.target. Mar 17 18:45:27.024763 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (733) Mar 17 18:45:27.024811 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Mar 17 18:45:27.024835 kernel: BTRFS info (device sda6): using free space tree Mar 17 18:45:27.024858 kernel: BTRFS info (device sda6): has skinny extents Mar 17 18:45:26.952149 systemd[1]: Mounted sysroot-usr.mount. Mar 17 18:45:27.043256 kernel: BTRFS info (device sda6): enabling ssd optimizations Mar 17 18:45:26.975885 systemd[1]: Mounting sysroot-usr-share-oem.mount... Mar 17 18:45:27.051750 initrd-setup-root[754]: cut: /sysroot/etc/passwd: No such file or directory Mar 17 18:45:27.001188 systemd[1]: Starting initrd-setup-root.service... Mar 17 18:45:27.077804 initrd-setup-root[764]: cut: /sysroot/etc/group: No such file or directory Mar 17 18:45:27.054015 systemd[1]: Mounted sysroot-usr-share-oem.mount. Mar 17 18:45:27.096794 initrd-setup-root[772]: cut: /sysroot/etc/shadow: No such file or directory Mar 17 18:45:27.106721 initrd-setup-root[780]: cut: /sysroot/etc/gshadow: No such file or directory Mar 17 18:45:27.148779 systemd[1]: Finished initrd-setup-root.service. 
Mar 17 18:45:27.188755 kernel: kauditd_printk_skb: 23 callbacks suppressed Mar 17 18:45:27.188800 kernel: audit: type=1130 audit(1742237127.147:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:45:27.147000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:45:27.150420 systemd[1]: Starting ignition-mount.service... Mar 17 18:45:27.196951 systemd[1]: Starting sysroot-boot.service... Mar 17 18:45:27.211025 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Mar 17 18:45:27.211150 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Mar 17 18:45:27.234685 ignition[799]: INFO : Ignition 2.14.0 Mar 17 18:45:27.234685 ignition[799]: INFO : Stage: mount Mar 17 18:45:27.234685 ignition[799]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 18:45:27.234685 ignition[799]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Mar 17 18:45:27.288005 kernel: audit: type=1130 audit(1742237127.255:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:45:27.255000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:45:27.241213 systemd[1]: Finished sysroot-boot.service. 
Mar 17 18:45:27.339756 kernel: audit: type=1130 audit(1742237127.311:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:45:27.311000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:45:27.339887 ignition[799]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Mar 17 18:45:27.339887 ignition[799]: INFO : mount: mount passed Mar 17 18:45:27.339887 ignition[799]: INFO : Ignition finished successfully Mar 17 18:45:27.400700 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (808) Mar 17 18:45:27.400739 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Mar 17 18:45:27.400764 kernel: BTRFS info (device sda6): using free space tree Mar 17 18:45:27.400779 kernel: BTRFS info (device sda6): has skinny extents Mar 17 18:45:27.257177 systemd[1]: Finished ignition-mount.service. Mar 17 18:45:27.414869 kernel: BTRFS info (device sda6): enabling ssd optimizations Mar 17 18:45:27.315139 systemd[1]: Starting ignition-files.service... Mar 17 18:45:27.351228 systemd[1]: Mounting sysroot-usr-share-oem.mount... Mar 17 18:45:27.424858 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Mar 17 18:45:27.454709 ignition[827]: INFO : Ignition 2.14.0 Mar 17 18:45:27.454709 ignition[827]: INFO : Stage: files Mar 17 18:45:27.454709 ignition[827]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 18:45:27.454709 ignition[827]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Mar 17 18:45:27.482149 unknown[827]: wrote ssh authorized keys file for user: core Mar 17 18:45:27.508705 ignition[827]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Mar 17 18:45:27.508705 ignition[827]: DEBUG : files: compiled without relabeling support, skipping Mar 17 18:45:27.508705 ignition[827]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 17 18:45:27.508705 ignition[827]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 17 18:45:27.508705 ignition[827]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 17 18:45:27.508705 ignition[827]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 17 18:45:27.508705 ignition[827]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 17 18:45:27.508705 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/hosts" Mar 17 18:45:27.508705 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(3): oem config not found in "/usr/share/oem", looking on oem partition Mar 17 18:45:27.508705 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2348572105" Mar 17 18:45:27.508705 ignition[827]: CRITICAL : files: createFilesystemsFiles: createFiles: op(3): op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2348572105": device or resource busy Mar 17 18:45:27.508705 
ignition[827]: ERROR : files: createFilesystemsFiles: createFiles: op(3): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2348572105", trying btrfs: device or resource busy
Mar 17 18:45:27.508705 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2348572105"
Mar 17 18:45:27.508705 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2348572105"
Mar 17 18:45:27.508705 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(6): [started] unmounting "/mnt/oem2348572105"
Mar 17 18:45:27.508705 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(6): [finished] unmounting "/mnt/oem2348572105"
Mar 17 18:45:27.508705 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/hosts"
Mar 17 18:45:27.508705 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Mar 17 18:45:27.775757 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Mar 17 18:45:27.775757 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK
Mar 17 18:45:27.753726 systemd-networkd[683]: eth0: Gained IPv6LL
Mar 17 18:45:27.811774 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Mar 17 18:45:27.811774 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 17 18:45:27.811774 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Mar 17 18:45:28.116998 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK
Mar 17 18:45:28.290410 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 17 18:45:28.306695 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/profile.d/google-cloud-sdk.sh"
Mar 17 18:45:28.306695 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(9): oem config not found in "/usr/share/oem", looking on oem partition
Mar 17 18:45:28.306695 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(9): op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2872522304"
Mar 17 18:45:28.306695 ignition[827]: CRITICAL : files: createFilesystemsFiles: createFiles: op(9): op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2872522304": device or resource busy
Mar 17 18:45:28.306695 ignition[827]: ERROR : files: createFilesystemsFiles: createFiles: op(9): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2872522304", trying btrfs: device or resource busy
Mar 17 18:45:28.306695 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(9): op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2872522304"
Mar 17 18:45:28.306695 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(9): op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2872522304"
Mar 17 18:45:28.306695 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(9): op(c): [started] unmounting "/mnt/oem2872522304"
Mar 17 18:45:28.306695 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(9): op(c): [finished] unmounting "/mnt/oem2872522304"
Mar 17 18:45:28.306695 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/profile.d/google-cloud-sdk.sh"
Mar 17 18:45:28.306695 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/install.sh"
Mar 17 18:45:28.306695 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/install.sh"
Mar 17 18:45:28.306695 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 17 18:45:28.306695 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 17 18:45:28.306695 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 17 18:45:28.556819 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 17 18:45:28.556819 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 17 18:45:28.556819 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 17 18:45:28.556819 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(11): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 17 18:45:28.556819 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(11): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 17 18:45:28.556819 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(12): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Mar 17 18:45:28.556819 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(12): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Mar 17 18:45:28.556819 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(13): [started] writing file "/sysroot/etc/systemd/system/oem-gce-enable-oslogin.service"
Mar 17 18:45:28.556819 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(13): oem config not found in "/usr/share/oem", looking on oem partition
Mar 17 18:45:28.556819 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(14): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem781022055"
Mar 17 18:45:28.556819 ignition[827]: CRITICAL : files: createFilesystemsFiles: createFiles: op(13): op(14): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem781022055": device or resource busy
Mar 17 18:45:28.556819 ignition[827]: ERROR : files: createFilesystemsFiles: createFiles: op(13): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem781022055", trying btrfs: device or resource busy
Mar 17 18:45:28.556819 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(15): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem781022055"
Mar 17 18:45:28.556819 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(15): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem781022055"
Mar 17 18:45:28.324352 systemd[1]: mnt-oem781022055.mount: Deactivated successfully.
Mar 17 18:45:28.819722 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(16): [started] unmounting "/mnt/oem781022055"
Mar 17 18:45:28.819722 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(16): [finished] unmounting "/mnt/oem781022055"
Mar 17 18:45:28.819722 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(13): [finished] writing file "/sysroot/etc/systemd/system/oem-gce-enable-oslogin.service"
Mar 17 18:45:28.819722 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(17): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Mar 17 18:45:28.819722 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(17): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Mar 17 18:45:28.819722 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(17): GET result: OK
Mar 17 18:45:28.987929 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(17): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Mar 17 18:45:29.006700 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(18): [started] writing file "/sysroot/etc/systemd/system/oem-gce.service"
Mar 17 18:45:29.006700 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(18): oem config not found in "/usr/share/oem", looking on oem partition
Mar 17 18:45:29.006700 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(18): op(19): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4189921590"
Mar 17 18:45:29.006700 ignition[827]: CRITICAL : files: createFilesystemsFiles: createFiles: op(18): op(19): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4189921590": device or resource busy
Mar 17 18:45:29.006700 ignition[827]: ERROR : files: createFilesystemsFiles: createFiles: op(18): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem4189921590", trying btrfs: device or resource busy
Mar 17 18:45:29.006700 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(18): op(1a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4189921590"
Mar 17 18:45:29.006700 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(18): op(1a): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4189921590"
Mar 17 18:45:29.006700 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(18): op(1b): [started] unmounting "/mnt/oem4189921590"
Mar 17 18:45:29.006700 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(18): op(1b): [finished] unmounting "/mnt/oem4189921590"
Mar 17 18:45:29.006700 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(18): [finished] writing file "/sysroot/etc/systemd/system/oem-gce.service"
Mar 17 18:45:29.006700 ignition[827]: INFO : files: op(1c): [started] processing unit "coreos-metadata-sshkeys@.service"
Mar 17 18:45:29.006700 ignition[827]: INFO : files: op(1c): [finished] processing unit "coreos-metadata-sshkeys@.service"
Mar 17 18:45:29.006700 ignition[827]: INFO : files: op(1d): [started] processing unit "oem-gce.service"
Mar 17 18:45:29.006700 ignition[827]: INFO : files: op(1d): [finished] processing unit "oem-gce.service"
Mar 17 18:45:29.006700 ignition[827]: INFO : files: op(1e): [started] processing unit "oem-gce-enable-oslogin.service"
Mar 17 18:45:29.006700 ignition[827]: INFO : files: op(1e): [finished] processing unit "oem-gce-enable-oslogin.service"
Mar 17 18:45:29.006700 ignition[827]: INFO : files: op(1f): [started] processing unit "prepare-helm.service"
Mar 17 18:45:29.443922 kernel: audit: type=1130 audit(1742237129.022:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:29.443967 kernel: audit: type=1130 audit(1742237129.133:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:29.443984 kernel: audit: type=1130 audit(1742237129.203:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:29.443999 kernel: audit: type=1131 audit(1742237129.203:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:29.444020 kernel: audit: type=1130 audit(1742237129.312:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:29.444035 kernel: audit: type=1131 audit(1742237129.312:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:29.022000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:29.133000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:29.203000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:29.203000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:29.312000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:29.312000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:29.444264 ignition[827]: INFO : files: op(1f): op(20): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 17 18:45:29.444264 ignition[827]: INFO : files: op(1f): op(20): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 17 18:45:29.444264 ignition[827]: INFO : files: op(1f): [finished] processing unit "prepare-helm.service"
Mar 17 18:45:29.444264 ignition[827]: INFO : files: op(21): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Mar 17 18:45:29.444264 ignition[827]: INFO : files: op(21): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Mar 17 18:45:29.444264 ignition[827]: INFO : files: op(22): [started] setting preset to enabled for "oem-gce.service"
Mar 17 18:45:29.444264 ignition[827]: INFO : files: op(22): [finished] setting preset to enabled for "oem-gce.service"
Mar 17 18:45:29.444264 ignition[827]: INFO : files: op(23): [started] setting preset to enabled for "oem-gce-enable-oslogin.service"
Mar 17 18:45:29.444264 ignition[827]: INFO : files: op(23): [finished] setting preset to enabled for "oem-gce-enable-oslogin.service"
Mar 17 18:45:29.444264 ignition[827]: INFO : files: op(24): [started] setting preset to enabled for "prepare-helm.service"
Mar 17 18:45:29.444264 ignition[827]: INFO : files: op(24): [finished] setting preset to enabled for "prepare-helm.service"
Mar 17 18:45:29.444264 ignition[827]: INFO : files: createResultFile: createFiles: op(25): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 17 18:45:29.444264 ignition[827]: INFO : files: createResultFile: createFiles: op(25): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 17 18:45:29.444264 ignition[827]: INFO : files: files passed
Mar 17 18:45:29.444264 ignition[827]: INFO : Ignition finished successfully
Mar 17 18:45:29.791857 kernel: audit: type=1130 audit(1742237129.459:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:29.459000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:29.591000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:29.008896 systemd[1]: mnt-oem4189921590.mount: Deactivated successfully.
Mar 17 18:45:29.023384 systemd[1]: Finished ignition-files.service.
Mar 17 18:45:29.822831 initrd-setup-root-after-ignition[850]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 18:45:29.829000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:29.034885 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Mar 17 18:45:29.089751 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Mar 17 18:45:29.869000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:29.091017 systemd[1]: Starting ignition-quench.service...
Mar 17 18:45:29.896000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:29.110342 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Mar 17 18:45:29.911000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:29.135413 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 17 18:45:29.135591 systemd[1]: Finished ignition-quench.service.
Mar 17 18:45:29.205157 systemd[1]: Reached target ignition-complete.target.
Mar 17 18:45:29.948755 ignition[865]: INFO : Ignition 2.14.0
Mar 17 18:45:29.948755 ignition[865]: INFO : Stage: umount
Mar 17 18:45:29.948755 ignition[865]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Mar 17 18:45:29.948755 ignition[865]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6
Mar 17 18:45:29.948755 ignition[865]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Mar 17 18:45:29.948755 ignition[865]: INFO : umount: umount passed
Mar 17 18:45:29.948755 ignition[865]: INFO : Ignition finished successfully
Mar 17 18:45:29.961000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:30.011000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:30.028000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:29.262640 systemd[1]: Starting initrd-parse-etc.service...
Mar 17 18:45:30.056000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:29.303455 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 17 18:45:30.071000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:29.303599 systemd[1]: Finished initrd-parse-etc.service.
Mar 17 18:45:30.088000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:29.314057 systemd[1]: Reached target initrd-fs.target.
Mar 17 18:45:30.103000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:29.384773 systemd[1]: Reached target initrd.target.
Mar 17 18:45:30.119000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:29.401877 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Mar 17 18:45:29.403111 systemd[1]: Starting dracut-pre-pivot.service...
Mar 17 18:45:30.154000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:29.426248 systemd[1]: Finished dracut-pre-pivot.service.
Mar 17 18:45:29.462482 systemd[1]: Starting initrd-cleanup.service...
Mar 17 18:45:29.513044 systemd[1]: Stopped target nss-lookup.target.
Mar 17 18:45:29.531002 systemd[1]: Stopped target remote-cryptsetup.target.
Mar 17 18:45:29.542183 systemd[1]: Stopped target timers.target.
Mar 17 18:45:29.570097 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 17 18:45:29.570330 systemd[1]: Stopped dracut-pre-pivot.service.
Mar 17 18:45:29.593236 systemd[1]: Stopped target initrd.target.
Mar 17 18:45:29.612966 systemd[1]: Stopped target basic.target.
Mar 17 18:45:29.632998 systemd[1]: Stopped target ignition-complete.target.
Mar 17 18:45:30.271000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:29.656023 systemd[1]: Stopped target ignition-diskful.target.
Mar 17 18:45:30.287000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:29.679012 systemd[1]: Stopped target initrd-root-device.target.
Mar 17 18:45:29.701154 systemd[1]: Stopped target remote-fs.target.
Mar 17 18:45:29.723129 systemd[1]: Stopped target remote-fs-pre.target.
Mar 17 18:45:30.334000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:29.746121 systemd[1]: Stopped target sysinit.target.
Mar 17 18:45:30.352000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:29.769057 systemd[1]: Stopped target local-fs.target.
Mar 17 18:45:29.784074 systemd[1]: Stopped target local-fs-pre.target.
Mar 17 18:45:30.366000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:30.366000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:30.366000 audit: BPF prog-id=6 op=UNLOAD
Mar 17 18:45:29.800113 systemd[1]: Stopped target swap.target.
Mar 17 18:45:29.815979 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 17 18:45:29.816191 systemd[1]: Stopped dracut-pre-mount.service.
Mar 17 18:45:30.419000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:29.831295 systemd[1]: Stopped target cryptsetup.target.
Mar 17 18:45:30.434000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:29.852993 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 17 18:45:30.450000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:29.853201 systemd[1]: Stopped dracut-initqueue.service.
Mar 17 18:45:29.871115 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 17 18:45:29.871304 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Mar 17 18:45:30.490000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:29.898230 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 17 18:45:29.898452 systemd[1]: Stopped ignition-files.service.
Mar 17 18:45:29.914620 systemd[1]: Stopping ignition-mount.service...
Mar 17 18:45:30.537000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:29.955960 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 17 18:45:30.553000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:29.956359 systemd[1]: Stopped kmod-static-nodes.service.
Mar 17 18:45:30.572000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:29.964687 systemd[1]: Stopping sysroot-boot.service...
Mar 17 18:45:29.992857 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 17 18:45:29.993133 systemd[1]: Stopped systemd-udev-trigger.service.
Mar 17 18:45:30.613000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:30.013453 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 17 18:45:30.629000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:30.013730 systemd[1]: Stopped dracut-pre-trigger.service.
Mar 17 18:45:30.647000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:30.647000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:30.034509 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 17 18:45:30.035837 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 17 18:45:30.035957 systemd[1]: Stopped ignition-mount.service.
Mar 17 18:45:30.058603 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 17 18:45:30.058724 systemd[1]: Stopped sysroot-boot.service.
Mar 17 18:45:30.073546 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 17 18:45:30.732402 systemd-journald[189]: Received SIGTERM from PID 1 (systemd).
Mar 17 18:45:30.073702 systemd[1]: Stopped ignition-disks.service.
Mar 17 18:45:30.739738 iscsid[692]: iscsid shutting down.
Mar 17 18:45:30.089815 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 17 18:45:30.089904 systemd[1]: Stopped ignition-kargs.service.
Mar 17 18:45:30.104918 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 17 18:45:30.104988 systemd[1]: Stopped ignition-fetch.service.
Mar 17 18:45:30.120984 systemd[1]: Stopped target network.target.
Mar 17 18:45:30.135855 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 17 18:45:30.135968 systemd[1]: Stopped ignition-fetch-offline.service.
Mar 17 18:45:30.155994 systemd[1]: Stopped target paths.target.
Mar 17 18:45:30.170834 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 17 18:45:30.176681 systemd[1]: Stopped systemd-ask-password-console.path.
Mar 17 18:45:30.177850 systemd[1]: Stopped target slices.target.
Mar 17 18:45:30.191937 systemd[1]: Stopped target sockets.target.
Mar 17 18:45:30.218911 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 17 18:45:30.218957 systemd[1]: Closed iscsid.socket.
Mar 17 18:45:30.225974 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 17 18:45:30.226020 systemd[1]: Closed iscsiuio.socket.
Mar 17 18:45:30.252916 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 17 18:45:30.253002 systemd[1]: Stopped ignition-setup.service.
Mar 17 18:45:30.272951 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 17 18:45:30.273027 systemd[1]: Stopped initrd-setup-root.service.
Mar 17 18:45:30.289080 systemd[1]: Stopping systemd-networkd.service...
Mar 17 18:45:30.292629 systemd-networkd[683]: eth0: DHCPv6 lease lost
Mar 17 18:45:30.304941 systemd[1]: Stopping systemd-resolved.service...
Mar 17 18:45:30.320372 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 17 18:45:30.320506 systemd[1]: Stopped systemd-resolved.service.
Mar 17 18:45:30.336626 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 17 18:45:30.336765 systemd[1]: Stopped systemd-networkd.service.
Mar 17 18:45:30.354763 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 17 18:45:30.354893 systemd[1]: Finished initrd-cleanup.service.
Mar 17 18:45:30.369022 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 17 18:45:30.369068 systemd[1]: Closed systemd-networkd.socket.
Mar 17 18:45:30.390938 systemd[1]: Stopping network-cleanup.service...
Mar 17 18:45:30.404697 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 17 18:45:30.404838 systemd[1]: Stopped parse-ip-for-networkd.service.
Mar 17 18:45:30.421051 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 17 18:45:30.421149 systemd[1]: Stopped systemd-sysctl.service.
Mar 17 18:45:30.436045 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 17 18:45:30.436120 systemd[1]: Stopped systemd-modules-load.service.
Mar 17 18:45:30.451992 systemd[1]: Stopping systemd-udevd.service...
Mar 17 18:45:30.468808 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 17 18:45:30.469720 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 17 18:45:30.469887 systemd[1]: Stopped systemd-udevd.service.
Mar 17 18:45:30.494205 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 17 18:45:30.494316 systemd[1]: Closed systemd-udevd-control.socket.
Mar 17 18:45:30.507822 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 17 18:45:30.507889 systemd[1]: Closed systemd-udevd-kernel.socket.
Mar 17 18:45:30.523772 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 17 18:45:30.523871 systemd[1]: Stopped dracut-pre-udev.service.
Mar 17 18:45:30.538886 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 17 18:45:30.538964 systemd[1]: Stopped dracut-cmdline.service.
Mar 17 18:45:30.554904 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 17 18:45:30.554994 systemd[1]: Stopped dracut-cmdline-ask.service.
Mar 17 18:45:30.575302 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Mar 17 18:45:30.595884 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 17 18:45:30.596020 systemd[1]: Stopped systemd-vconsole-setup.service.
Mar 17 18:45:30.615485 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 17 18:45:30.615647 systemd[1]: Stopped network-cleanup.service.
Mar 17 18:45:30.631213 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 17 18:45:30.631328 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Mar 17 18:45:30.649294 systemd[1]: Reached target initrd-switch-root.target.
Mar 17 18:45:30.667143 systemd[1]: Starting initrd-switch-root.service...
Mar 17 18:45:30.698736 systemd[1]: Switching root.
Mar 17 18:45:30.750064 systemd-journald[189]: Journal stopped
Mar 17 18:45:35.615908 kernel: SELinux: Class mctp_socket not defined in policy.
Mar 17 18:45:35.616034 kernel: SELinux: Class anon_inode not defined in policy.
Mar 17 18:45:35.616068 kernel: SELinux: the above unknown classes and permissions will be allowed
Mar 17 18:45:35.616117 kernel: SELinux: policy capability network_peer_controls=1
Mar 17 18:45:35.616142 kernel: SELinux: policy capability open_perms=1
Mar 17 18:45:35.616166 kernel: SELinux: policy capability extended_socket_class=1
Mar 17 18:45:35.616188 kernel: SELinux: policy capability always_check_network=0
Mar 17 18:45:35.616209 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 17 18:45:35.616231 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 17 18:45:35.616258 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 17 18:45:35.616278 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 17 18:45:35.616301 systemd[1]: Successfully loaded SELinux policy in 115.633ms.
Mar 17 18:45:35.616337 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.870ms.
Mar 17 18:45:35.616362 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Mar 17 18:45:35.616386 systemd[1]: Detected virtualization kvm.
Mar 17 18:45:35.616570 systemd[1]: Detected architecture x86-64.
Mar 17 18:45:35.616597 systemd[1]: Detected first boot.
Mar 17 18:45:35.616631 systemd[1]: Initializing machine ID from VM UUID.
Mar 17 18:45:35.616671 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Mar 17 18:45:35.616695 systemd[1]: Populated /etc with preset unit settings.
Mar 17 18:45:35.616722 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Mar 17 18:45:35.616751 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Mar 17 18:45:35.616778 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 18:45:35.616807 kernel: kauditd_printk_skb: 49 callbacks suppressed
Mar 17 18:45:35.616829 kernel: audit: type=1334 audit(1742237134.667:86): prog-id=12 op=LOAD
Mar 17 18:45:35.616854 kernel: audit: type=1334 audit(1742237134.667:87): prog-id=3 op=UNLOAD
Mar 17 18:45:35.616878 kernel: audit: type=1334 audit(1742237134.667:88): prog-id=13 op=LOAD
Mar 17 18:45:35.616900 kernel: audit: type=1334 audit(1742237134.667:89): prog-id=14 op=LOAD
Mar 17 18:45:35.616923 kernel: audit: type=1334 audit(1742237134.667:90): prog-id=4 op=UNLOAD
Mar 17 18:45:35.616945 kernel: audit: type=1334 audit(1742237134.667:91): prog-id=5 op=UNLOAD
Mar 17 18:45:35.616966 systemd[1]: iscsiuio.service: Deactivated successfully.
Mar 17 18:45:35.616987 kernel: audit: type=1334 audit(1742237134.668:92): prog-id=15 op=LOAD
Mar 17 18:45:35.617008 kernel: audit: type=1334 audit(1742237134.668:93): prog-id=12 op=UNLOAD
Mar 17 18:45:35.617032 kernel: audit: type=1334 audit(1742237134.668:94): prog-id=16 op=LOAD
Mar 17 18:45:35.617052 systemd[1]: Stopped iscsiuio.service.
Mar 17 18:45:35.617083 kernel: audit: type=1334 audit(1742237134.668:95): prog-id=17 op=LOAD
Mar 17 18:45:35.617106 systemd[1]: iscsid.service: Deactivated successfully.
Mar 17 18:45:35.617131 systemd[1]: Stopped iscsid.service.
Mar 17 18:45:35.617158 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 17 18:45:35.617182 systemd[1]: Stopped initrd-switch-root.service.
Mar 17 18:45:35.617214 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 17 18:45:35.617247 systemd[1]: Created slice system-addon\x2dconfig.slice.
Mar 17 18:45:35.617273 systemd[1]: Created slice system-addon\x2drun.slice.
Mar 17 18:45:35.617297 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
Mar 17 18:45:35.617323 systemd[1]: Created slice system-getty.slice.
Mar 17 18:45:35.617347 systemd[1]: Created slice system-modprobe.slice.
Mar 17 18:45:35.617372 systemd[1]: Created slice system-serial\x2dgetty.slice.
Mar 17 18:45:35.617397 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Mar 17 18:45:35.617422 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Mar 17 18:45:35.617451 systemd[1]: Created slice user.slice.
Mar 17 18:45:35.617475 systemd[1]: Started systemd-ask-password-console.path.
Mar 17 18:45:35.617498 systemd[1]: Started systemd-ask-password-wall.path.
Mar 17 18:45:35.617557 systemd[1]: Set up automount boot.automount.
Mar 17 18:45:35.617580 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Mar 17 18:45:35.617603 systemd[1]: Stopped target initrd-switch-root.target.
Mar 17 18:45:35.617626 systemd[1]: Stopped target initrd-fs.target.
Mar 17 18:45:35.617660 systemd[1]: Stopped target initrd-root-fs.target.
Mar 17 18:45:35.617684 systemd[1]: Reached target integritysetup.target.
Mar 17 18:45:35.617713 systemd[1]: Reached target remote-cryptsetup.target.
Mar 17 18:45:35.617739 systemd[1]: Reached target remote-fs.target.
Mar 17 18:45:35.617763 systemd[1]: Reached target slices.target.
Mar 17 18:45:35.617789 systemd[1]: Reached target swap.target.
Mar 17 18:45:35.617935 systemd[1]: Reached target torcx.target.
Mar 17 18:45:35.617970 systemd[1]: Reached target veritysetup.target.
Mar 17 18:45:35.617997 systemd[1]: Listening on systemd-coredump.socket.
Mar 17 18:45:35.618023 systemd[1]: Listening on systemd-initctl.socket.
Mar 17 18:45:35.618049 systemd[1]: Listening on systemd-networkd.socket.
Mar 17 18:45:35.618075 systemd[1]: Listening on systemd-udevd-control.socket.
Mar 17 18:45:35.618111 systemd[1]: Listening on systemd-udevd-kernel.socket.
Mar 17 18:45:35.618136 systemd[1]: Listening on systemd-userdbd.socket.
Mar 17 18:45:35.618162 systemd[1]: Mounting dev-hugepages.mount...
Mar 17 18:45:35.618186 systemd[1]: Mounting dev-mqueue.mount...
Mar 17 18:45:35.618208 systemd[1]: Mounting media.mount...
Mar 17 18:45:35.618230 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 18:45:35.618253 systemd[1]: Mounting sys-kernel-debug.mount...
Mar 17 18:45:35.618276 systemd[1]: Mounting sys-kernel-tracing.mount...
Mar 17 18:45:35.618298 systemd[1]: Mounting tmp.mount...
Mar 17 18:45:35.618328 systemd[1]: Starting flatcar-tmpfiles.service...
Mar 17 18:45:35.618355 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Mar 17 18:45:35.618378 systemd[1]: Starting kmod-static-nodes.service...
Mar 17 18:45:35.618401 systemd[1]: Starting modprobe@configfs.service...
Mar 17 18:45:35.618424 systemd[1]: Starting modprobe@dm_mod.service...
Mar 17 18:45:35.618447 systemd[1]: Starting modprobe@drm.service...
Mar 17 18:45:35.618471 systemd[1]: Starting modprobe@efi_pstore.service...
Mar 17 18:45:35.618494 systemd[1]: Starting modprobe@fuse.service...
Mar 17 18:45:35.618545 systemd[1]: Starting modprobe@loop.service...
Mar 17 18:45:35.618574 kernel: fuse: init (API version 7.34)
Mar 17 18:45:35.618600 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 17 18:45:35.618623 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 17 18:45:35.618646 kernel: loop: module loaded
Mar 17 18:45:35.618678 systemd[1]: Stopped systemd-fsck-root.service.
Mar 17 18:45:35.618702 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 17 18:45:35.618726 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 17 18:45:35.618749 systemd[1]: Stopped systemd-journald.service.
Mar 17 18:45:35.618772 systemd[1]: Starting systemd-journald.service...
Mar 17 18:45:35.618800 systemd[1]: Starting systemd-modules-load.service...
Mar 17 18:45:35.618823 systemd[1]: Starting systemd-network-generator.service...
Mar 17 18:45:35.618854 systemd-journald[989]: Journal started
Mar 17 18:45:35.618963 systemd-journald[989]: Runtime Journal (/run/log/journal/0109e7617388cb92ba28a205503daa68) is 8.0M, max 148.8M, 140.8M free.
Mar 17 18:45:30.749000 audit: BPF prog-id=9 op=UNLOAD
Mar 17 18:45:31.077000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 17 18:45:31.244000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Mar 17 18:45:31.244000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Mar 17 18:45:31.244000 audit: BPF prog-id=10 op=LOAD
Mar 17 18:45:31.244000 audit: BPF prog-id=10 op=UNLOAD
Mar 17 18:45:31.244000 audit: BPF prog-id=11 op=LOAD
Mar 17 18:45:31.244000 audit: BPF prog-id=11 op=UNLOAD
Mar 17 18:45:31.408000 audit[898]: AVC avc: denied { associate } for pid=898 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Mar 17 18:45:31.408000 audit[898]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c00014d8ac a1=c0000cede0 a2=c0000d70c0 a3=32 items=0 ppid=881 pid=898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:45:31.408000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Mar 17 18:45:31.418000 audit[898]: AVC avc: denied { associate } for pid=898 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Mar 17 18:45:31.418000 audit[898]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c00014d985 a2=1ed a3=0 items=2 ppid=881 pid=898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:45:31.418000 audit: CWD cwd="/"
Mar 17 18:45:31.418000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:45:31.418000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:45:31.418000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Mar 17 18:45:34.667000 audit: BPF prog-id=12 op=LOAD
Mar 17 18:45:34.667000 audit: BPF prog-id=3 op=UNLOAD
Mar 17 18:45:34.667000 audit: BPF prog-id=13 op=LOAD
Mar 17 18:45:34.667000 audit: BPF prog-id=14 op=LOAD
Mar 17 18:45:34.667000 audit: BPF prog-id=4 op=UNLOAD
Mar 17 18:45:34.667000 audit: BPF prog-id=5 op=UNLOAD
Mar 17 18:45:34.668000 audit: BPF prog-id=15 op=LOAD
Mar 17 18:45:34.668000 audit: BPF prog-id=12 op=UNLOAD
Mar 17 18:45:34.668000 audit: BPF prog-id=16 op=LOAD
Mar 17 18:45:34.668000 audit: BPF prog-id=17 op=LOAD
Mar 17 18:45:34.668000 audit: BPF prog-id=13 op=UNLOAD
Mar 17 18:45:34.668000 audit: BPF prog-id=14 op=UNLOAD
Mar 17 18:45:34.681000 audit: BPF prog-id=18 op=LOAD
Mar 17 18:45:34.681000 audit: BPF prog-id=15 op=UNLOAD
Mar 17 18:45:34.688000 audit: BPF prog-id=19 op=LOAD
Mar 17 18:45:34.695000 audit: BPF prog-id=20 op=LOAD
Mar 17 18:45:34.695000 audit: BPF prog-id=16 op=UNLOAD
Mar 17 18:45:34.695000 audit: BPF prog-id=17 op=UNLOAD
Mar 17 18:45:34.695000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:34.756000 audit: BPF prog-id=18 op=UNLOAD
Mar 17 18:45:34.763000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:34.782000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:34.805000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:34.805000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:35.532000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:35.554000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:35.568000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:35.568000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:35.569000 audit: BPF prog-id=21 op=LOAD
Mar 17 18:45:35.569000 audit: BPF prog-id=22 op=LOAD
Mar 17 18:45:35.569000 audit: BPF prog-id=23 op=LOAD
Mar 17 18:45:35.569000 audit: BPF prog-id=19 op=UNLOAD
Mar 17 18:45:35.569000 audit: BPF prog-id=20 op=UNLOAD
Mar 17 18:45:35.611000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Mar 17 18:45:35.611000 audit[989]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffeecda8e60 a2=4000 a3=7ffeecda8efc items=0 ppid=1 pid=989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:45:35.611000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Mar 17 18:45:34.665636 systemd[1]: Queued start job for default target multi-user.target.
Mar 17 18:45:31.404975 /usr/lib/systemd/system-generators/torcx-generator[898]: time="2025-03-17T18:45:31Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
Mar 17 18:45:34.665653 systemd[1]: Unnecessary job was removed for dev-sda6.device.
Mar 17 18:45:31.405968 /usr/lib/systemd/system-generators/torcx-generator[898]: time="2025-03-17T18:45:31Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Mar 17 18:45:34.698225 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 17 18:45:31.406004 /usr/lib/systemd/system-generators/torcx-generator[898]: time="2025-03-17T18:45:31Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Mar 17 18:45:31.406059 /usr/lib/systemd/system-generators/torcx-generator[898]: time="2025-03-17T18:45:31Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Mar 17 18:45:31.406080 /usr/lib/systemd/system-generators/torcx-generator[898]: time="2025-03-17T18:45:31Z" level=debug msg="skipped missing lower profile" missing profile=oem
Mar 17 18:45:31.406145 /usr/lib/systemd/system-generators/torcx-generator[898]: time="2025-03-17T18:45:31Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Mar 17 18:45:31.406170 /usr/lib/systemd/system-generators/torcx-generator[898]: time="2025-03-17T18:45:31Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Mar 17 18:45:31.406507 /usr/lib/systemd/system-generators/torcx-generator[898]: time="2025-03-17T18:45:31Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Mar 17 18:45:31.406615 /usr/lib/systemd/system-generators/torcx-generator[898]: time="2025-03-17T18:45:31Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Mar 17 18:45:31.406642 /usr/lib/systemd/system-generators/torcx-generator[898]: time="2025-03-17T18:45:31Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Mar 17 18:45:31.408121 /usr/lib/systemd/system-generators/torcx-generator[898]: time="2025-03-17T18:45:31Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Mar 17 18:45:31.408189 /usr/lib/systemd/system-generators/torcx-generator[898]: time="2025-03-17T18:45:31Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Mar 17 18:45:31.408227 /usr/lib/systemd/system-generators/torcx-generator[898]: time="2025-03-17T18:45:31Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.7: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.7
Mar 17 18:45:31.408256 /usr/lib/systemd/system-generators/torcx-generator[898]: time="2025-03-17T18:45:31Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Mar 17 18:45:31.408289 /usr/lib/systemd/system-generators/torcx-generator[898]: time="2025-03-17T18:45:31Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.7: no such file or directory" path=/var/lib/torcx/store/3510.3.7
Mar 17 18:45:31.408326 /usr/lib/systemd/system-generators/torcx-generator[898]: time="2025-03-17T18:45:31Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Mar 17 18:45:34.014034 /usr/lib/systemd/system-generators/torcx-generator[898]: time="2025-03-17T18:45:34Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Mar 17 18:45:34.014344 /usr/lib/systemd/system-generators/torcx-generator[898]: time="2025-03-17T18:45:34Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Mar 17 18:45:34.014559 /usr/lib/systemd/system-generators/torcx-generator[898]: time="2025-03-17T18:45:34Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Mar 17 18:45:34.014871 /usr/lib/systemd/system-generators/torcx-generator[898]: time="2025-03-17T18:45:34Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Mar 17 18:45:34.014933 /usr/lib/systemd/system-generators/torcx-generator[898]: time="2025-03-17T18:45:34Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Mar 17 18:45:34.015007 /usr/lib/systemd/system-generators/torcx-generator[898]: time="2025-03-17T18:45:34Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Mar 17 18:45:35.629589 systemd[1]: Starting systemd-remount-fs.service...
Mar 17 18:45:35.643782 systemd[1]: Starting systemd-udev-trigger.service...
Mar 17 18:45:35.662890 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 17 18:45:35.663127 systemd[1]: Stopped verity-setup.service.
Mar 17 18:45:35.668000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:35.682862 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 18:45:35.691577 systemd[1]: Started systemd-journald.service.
Mar 17 18:45:35.698000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:35.701062 systemd[1]: Mounted dev-hugepages.mount.
Mar 17 18:45:35.708973 systemd[1]: Mounted dev-mqueue.mount.
Mar 17 18:45:35.716993 systemd[1]: Mounted media.mount.
Mar 17 18:45:35.724962 systemd[1]: Mounted sys-kernel-debug.mount.
Mar 17 18:45:35.733972 systemd[1]: Mounted sys-kernel-tracing.mount.
Mar 17 18:45:35.742984 systemd[1]: Mounted tmp.mount.
Mar 17 18:45:35.750094 systemd[1]: Finished flatcar-tmpfiles.service.
Mar 17 18:45:35.757000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:35.759165 systemd[1]: Finished kmod-static-nodes.service.
Mar 17 18:45:35.766000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:35.768297 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 17 18:45:35.768569 systemd[1]: Finished modprobe@configfs.service.
Mar 17 18:45:35.775000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:35.775000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:35.777271 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 18:45:35.777527 systemd[1]: Finished modprobe@dm_mod.service.
Mar 17 18:45:35.785000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:35.785000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:35.787259 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 17 18:45:35.787484 systemd[1]: Finished modprobe@drm.service.
Mar 17 18:45:35.795000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:35.795000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:35.797286 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 18:45:35.797660 systemd[1]: Finished modprobe@efi_pstore.service.
Mar 17 18:45:35.805000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:35.805000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:35.807371 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 17 18:45:35.807644 systemd[1]: Finished modprobe@fuse.service.
Mar 17 18:45:35.814000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:35.814000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:35.816476 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 18:45:35.816803 systemd[1]: Finished modprobe@loop.service.
Mar 17 18:45:35.824000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:35.824000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:35.826276 systemd[1]: Finished systemd-modules-load.service.
Mar 17 18:45:35.834000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:35.836200 systemd[1]: Finished systemd-network-generator.service.
Mar 17 18:45:35.848000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:35.850301 systemd[1]: Finished systemd-remount-fs.service.
Mar 17 18:45:35.858000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:35.860225 systemd[1]: Finished systemd-udev-trigger.service.
Mar 17 18:45:35.869000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:35.871846 systemd[1]: Reached target network-pre.target.
Mar 17 18:45:35.882451 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Mar 17 18:45:35.893431 systemd[1]: Mounting sys-kernel-config.mount...
Mar 17 18:45:35.900696 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 17 18:45:35.904286 systemd[1]: Starting systemd-hwdb-update.service...
Mar 17 18:45:35.913501 systemd[1]: Starting systemd-journal-flush.service...
Mar 17 18:45:35.921574 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 17 18:45:35.924142 systemd-journald[989]: Time spent on flushing to /var/log/journal/0109e7617388cb92ba28a205503daa68 is 64.146ms for 1161 entries.
Mar 17 18:45:35.924142 systemd-journald[989]: System Journal (/var/log/journal/0109e7617388cb92ba28a205503daa68) is 8.0M, max 584.8M, 576.8M free.
Mar 17 18:45:36.020205 systemd-journald[989]: Received client request to flush runtime journal.
Mar 17 18:45:35.999000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:35.925827 systemd[1]: Starting systemd-random-seed.service...
Mar 17 18:45:35.939782 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Mar 17 18:45:36.020000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:35.941829 systemd[1]: Starting systemd-sysctl.service...
Mar 17 18:45:35.954205 systemd[1]: Starting systemd-sysusers.service...
Mar 17 18:45:35.963675 systemd[1]: Starting systemd-udev-settle.service...
Mar 17 18:45:35.974109 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Mar 17 18:45:35.982974 systemd[1]: Mounted sys-kernel-config.mount.
Mar 17 18:45:35.992067 systemd[1]: Finished systemd-random-seed.service.
Mar 17 18:45:36.004259 systemd[1]: Reached target first-boot-complete.target.
Mar 17 18:45:36.013287 systemd[1]: Finished systemd-sysctl.service.
Mar 17 18:45:36.022617 systemd[1]: Finished systemd-journal-flush.service.
Mar 17 18:45:36.029000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:36.031665 udevadm[1003]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Mar 17 18:45:36.036607 systemd[1]: Finished systemd-sysusers.service.
Mar 17 18:45:36.043000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:36.650616 systemd[1]: Finished systemd-hwdb-update.service.
Mar 17 18:45:36.658000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:36.659000 audit: BPF prog-id=24 op=LOAD
Mar 17 18:45:36.659000 audit: BPF prog-id=25 op=LOAD
Mar 17 18:45:36.659000 audit: BPF prog-id=7 op=UNLOAD
Mar 17 18:45:36.659000 audit: BPF prog-id=8 op=UNLOAD
Mar 17 18:45:36.661809 systemd[1]: Starting systemd-udevd.service...
Mar 17 18:45:36.685685 systemd-udevd[1006]: Using default interface naming scheme 'v252'.
Mar 17 18:45:36.737415 systemd[1]: Started systemd-udevd.service.
Mar 17 18:45:36.744000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:36.746000 audit: BPF prog-id=26 op=LOAD
Mar 17 18:45:36.748960 systemd[1]: Starting systemd-networkd.service...
Mar 17 18:45:36.765000 audit: BPF prog-id=27 op=LOAD
Mar 17 18:45:36.765000 audit: BPF prog-id=28 op=LOAD
Mar 17 18:45:36.765000 audit: BPF prog-id=29 op=LOAD
Mar 17 18:45:36.768407 systemd[1]: Starting systemd-userdbd.service...
Mar 17 18:45:36.836752 systemd[1]: Started systemd-userdbd.service.
Mar 17 18:45:36.844000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:36.856965 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped.
Mar 17 18:45:36.963621 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Mar 17 18:45:36.970087 systemd-networkd[1021]: lo: Link UP
Mar 17 18:45:36.970101 systemd-networkd[1021]: lo: Gained carrier
Mar 17 18:45:36.970886 systemd-networkd[1021]: Enumeration completed
Mar 17 18:45:36.971049 systemd[1]: Started systemd-networkd.service.
Mar 17 18:45:36.971451 systemd-networkd[1021]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 17 18:45:36.973962 systemd-networkd[1021]: eth0: Link UP
Mar 17 18:45:36.973980 systemd-networkd[1021]: eth0: Gained carrier
Mar 17 18:45:36.979000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:36.984765 systemd-networkd[1021]: eth0: DHCPv4 address 10.128.0.78/32, gateway 10.128.0.1 acquired from 169.254.169.254
Mar 17 18:45:36.993541 kernel: ACPI: button: Power Button [PWRF]
Mar 17 18:45:36.986000 audit[1028]: AVC avc: denied { confidentiality } for pid=1028 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Mar 17 18:45:36.986000 audit[1028]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=5591625940f0 a1=338ac a2=7fcadc9acbc5 a3=5 items=110 ppid=1006 pid=1028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:45:36.986000 audit: CWD cwd="/"
Mar 17 18:45:36.986000 audit: PATH item=0 name=(null) inode=1042 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:45:36.986000 audit: PATH item=1 name=(null) inode=14625 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:45:36.986000 audit: PATH item=2 name=(null) inode=14625 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:45:36.986000 audit: PATH item=3 name=(null) inode=14626 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:45:36.986000 audit: PATH item=4 name=(null) inode=14625 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:45:36.986000 audit: PATH item=5 name=(null) inode=14627 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:45:36.986000 audit: PATH item=6 name=(null) inode=14625 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:45:36.986000 audit: PATH item=7 name=(null) inode=14628 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:45:36.986000 audit: PATH item=8 name=(null) inode=14628 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:45:36.986000 audit: PATH item=9 name=(null) inode=14629 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:45:36.986000 audit: PATH item=10 name=(null) inode=14628 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:45:36.986000 audit: PATH item=11 name=(null) inode=14630 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:45:36.986000 audit: PATH item=12 name=(null) inode=14628 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:45:36.986000 audit: PATH item=13 name=(null) inode=14631 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:45:36.986000 audit: PATH item=14 name=(null) inode=14628 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:45:36.986000 audit: PATH item=15 name=(null) inode=14632 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:45:36.986000 audit: PATH item=16 name=(null) inode=14628 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:45:36.986000 audit: PATH item=17 name=(null) inode=14633 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:45:36.986000 audit: PATH item=18 name=(null) inode=14625 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:45:36.986000 audit: PATH item=19 name=(null) inode=14634 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:45:36.986000 audit: PATH item=20 name=(null) inode=14634 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:45:36.986000 audit: PATH item=21 name=(null) inode=14635 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:45:36.986000 audit: PATH item=22 name=(null) inode=14634 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:45:36.986000 audit: PATH item=23 name=(null) inode=14636 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:45:36.986000 audit: PATH item=24 name=(null) inode=14634 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:45:36.986000 audit: PATH item=25 name=(null) inode=14637 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:45:36.986000 audit: PATH item=26 name=(null) inode=14634 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:45:36.986000 audit: PATH item=27 name=(null) inode=14638 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:45:36.986000 audit: PATH item=28 name=(null) inode=14634 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:45:36.986000 audit: PATH item=29 name=(null) inode=14639 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:45:36.986000 audit: PATH item=30 name=(null) inode=14625 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:45:36.986000 audit: PATH item=31 name=(null) inode=14640 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:45:36.986000 audit: PATH item=32 name=(null) inode=14640 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Mar 17 18:45:36.986000 audit: PATH item=33 name=(null) inode=14641 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:45:36.986000 audit: PATH item=34 name=(null) inode=14640 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:45:36.986000 audit: PATH item=35 name=(null) inode=14642 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:45:36.986000 audit: PATH item=36 name=(null) inode=14640 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:45:36.986000 audit: PATH item=37 name=(null) inode=14643 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:45:36.986000 audit: PATH item=38 name=(null) inode=14640 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:45:36.986000 audit: PATH item=39 name=(null) inode=14644 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:45:36.986000 audit: PATH item=40 name=(null) inode=14640 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:45:36.986000 audit: PATH item=41 name=(null) inode=14645 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 
18:45:36.986000 audit: PATH item=42 name=(null) inode=14625 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:45:36.986000 audit: PATH item=43 name=(null) inode=14646 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:45:36.986000 audit: PATH item=44 name=(null) inode=14646 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:45:36.986000 audit: PATH item=45 name=(null) inode=14647 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:45:36.986000 audit: PATH item=46 name=(null) inode=14646 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:45:36.986000 audit: PATH item=47 name=(null) inode=14648 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:45:36.986000 audit: PATH item=48 name=(null) inode=14646 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:45:36.986000 audit: PATH item=49 name=(null) inode=14649 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:45:36.986000 audit: PATH item=50 name=(null) inode=14646 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:45:36.986000 audit: PATH item=51 
name=(null) inode=14650 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:45:36.986000 audit: PATH item=52 name=(null) inode=14646 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:45:36.986000 audit: PATH item=53 name=(null) inode=14651 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:45:36.986000 audit: PATH item=54 name=(null) inode=1042 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:45:36.986000 audit: PATH item=55 name=(null) inode=14652 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:45:36.986000 audit: PATH item=56 name=(null) inode=14652 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:45:36.986000 audit: PATH item=57 name=(null) inode=14653 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:45:36.986000 audit: PATH item=58 name=(null) inode=14652 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:45:36.986000 audit: PATH item=59 name=(null) inode=14654 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:45:36.986000 audit: PATH item=60 name=(null) inode=14652 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:45:36.986000 audit: PATH item=61 name=(null) inode=14655 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:45:36.986000 audit: PATH item=62 name=(null) inode=14655 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:45:36.986000 audit: PATH item=63 name=(null) inode=14656 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:45:36.986000 audit: PATH item=64 name=(null) inode=14655 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:45:36.986000 audit: PATH item=65 name=(null) inode=14657 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:45:36.986000 audit: PATH item=66 name=(null) inode=14655 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:45:36.986000 audit: PATH item=67 name=(null) inode=14658 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:45:36.986000 audit: PATH item=68 name=(null) inode=14655 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:45:36.986000 audit: PATH item=69 name=(null) inode=14659 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:45:36.986000 audit: PATH item=70 name=(null) inode=14655 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:45:36.986000 audit: PATH item=71 name=(null) inode=14660 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:45:36.986000 audit: PATH item=72 name=(null) inode=14652 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:45:36.986000 audit: PATH item=73 name=(null) inode=14661 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:45:36.986000 audit: PATH item=74 name=(null) inode=14661 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:45:36.986000 audit: PATH item=75 name=(null) inode=14662 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:45:36.986000 audit: PATH item=76 name=(null) inode=14661 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:45:36.986000 audit: PATH item=77 name=(null) inode=14663 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:45:36.986000 audit: PATH item=78 name=(null) inode=14661 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:45:36.986000 audit: PATH item=79 name=(null) inode=14664 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:45:36.986000 audit: PATH item=80 name=(null) inode=14661 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:45:36.986000 audit: PATH item=81 name=(null) inode=14665 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:45:36.986000 audit: PATH item=82 name=(null) inode=14661 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:45:36.986000 audit: PATH item=83 name=(null) inode=14666 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:45:36.986000 audit: PATH item=84 name=(null) inode=14652 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:45:36.986000 audit: PATH item=85 name=(null) inode=14667 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:45:36.986000 audit: PATH item=86 name=(null) inode=14667 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:45:36.986000 audit: PATH item=87 name=(null) inode=14668 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Mar 17 18:45:36.986000 audit: PATH item=88 name=(null) inode=14667 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:45:36.986000 audit: PATH item=89 name=(null) inode=14669 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:45:36.986000 audit: PATH item=90 name=(null) inode=14667 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:45:36.986000 audit: PATH item=91 name=(null) inode=14670 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:45:36.986000 audit: PATH item=92 name=(null) inode=14667 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:45:36.986000 audit: PATH item=93 name=(null) inode=14671 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:45:36.986000 audit: PATH item=94 name=(null) inode=14667 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:45:36.986000 audit: PATH item=95 name=(null) inode=14672 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:45:36.986000 audit: PATH item=96 name=(null) inode=14652 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 
18:45:36.986000 audit: PATH item=97 name=(null) inode=14673 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:45:36.986000 audit: PATH item=98 name=(null) inode=14673 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:45:36.986000 audit: PATH item=99 name=(null) inode=14674 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:45:36.986000 audit: PATH item=100 name=(null) inode=14673 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:45:36.986000 audit: PATH item=101 name=(null) inode=14675 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:45:36.986000 audit: PATH item=102 name=(null) inode=14673 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:45:36.986000 audit: PATH item=103 name=(null) inode=14676 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:45:36.986000 audit: PATH item=104 name=(null) inode=14673 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:45:36.986000 audit: PATH item=105 name=(null) inode=14677 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:45:36.986000 audit: PATH item=106 
name=(null) inode=14673 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:45:36.986000 audit: PATH item=107 name=(null) inode=14678 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:45:36.986000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:45:36.986000 audit: PATH item=109 name=(null) inode=14679 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:45:36.986000 audit: PROCTITLE proctitle="(udev-worker)" Mar 17 18:45:37.030306 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3 Mar 17 18:45:37.038549 kernel: ACPI: button: Sleep Button [SLPF] Mar 17 18:45:37.074581 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Mar 17 18:45:37.122474 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Mar 17 18:45:37.122577 kernel: EDAC MC: Ver: 3.0.0 Mar 17 18:45:37.134553 kernel: mousedev: PS/2 mouse device common for all mice Mar 17 18:45:37.151461 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Mar 17 18:45:37.165300 systemd[1]: Finished systemd-udev-settle.service. Mar 17 18:45:37.172000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:45:37.175569 systemd[1]: Starting lvm2-activation-early.service... Mar 17 18:45:37.207101 lvm[1043]: WARNING: Failed to connect to lvmetad. 
Falling back to device scanning. Mar 17 18:45:37.243086 systemd[1]: Finished lvm2-activation-early.service. Mar 17 18:45:37.250000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:45:37.252002 systemd[1]: Reached target cryptsetup.target. Mar 17 18:45:37.263076 systemd[1]: Starting lvm2-activation.service... Mar 17 18:45:37.268561 lvm[1044]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 17 18:45:37.301999 systemd[1]: Finished lvm2-activation.service. Mar 17 18:45:37.310000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:45:37.312071 systemd[1]: Reached target local-fs-pre.target. Mar 17 18:45:37.320805 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 17 18:45:37.320871 systemd[1]: Reached target local-fs.target. Mar 17 18:45:37.329753 systemd[1]: Reached target machines.target. Mar 17 18:45:37.340657 systemd[1]: Starting ldconfig.service... Mar 17 18:45:37.350968 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Mar 17 18:45:37.351097 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 18:45:37.353303 systemd[1]: Starting systemd-boot-update.service... Mar 17 18:45:37.363557 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Mar 17 18:45:37.376131 systemd[1]: Starting systemd-machine-id-commit.service... Mar 17 18:45:37.378790 systemd[1]: Starting systemd-sysext.service... 
Mar 17 18:45:37.379865 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1046 (bootctl) Mar 17 18:45:37.382791 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Mar 17 18:45:37.411000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:45:37.412408 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Mar 17 18:45:37.416679 systemd[1]: Unmounting usr-share-oem.mount... Mar 17 18:45:37.426612 systemd[1]: usr-share-oem.mount: Deactivated successfully. Mar 17 18:45:37.426898 systemd[1]: Unmounted usr-share-oem.mount. Mar 17 18:45:37.452567 kernel: loop0: detected capacity change from 0 to 210664 Mar 17 18:45:37.543185 systemd-fsck[1055]: fsck.fat 4.2 (2021-01-31) Mar 17 18:45:37.543185 systemd-fsck[1055]: /dev/sda1: 789 files, 119299/258078 clusters Mar 17 18:45:37.546206 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Mar 17 18:45:37.554000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:45:37.557986 systemd[1]: Mounting boot.mount... Mar 17 18:45:37.608066 systemd[1]: Mounted boot.mount. Mar 17 18:45:37.632378 systemd[1]: Finished systemd-boot-update.service. Mar 17 18:45:37.639000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:45:37.777637 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 17 18:45:37.778368 systemd[1]: Finished systemd-machine-id-commit.service. 
Mar 17 18:45:37.785000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:37.811568 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 17 18:45:37.839666 kernel: loop1: detected capacity change from 0 to 210664
Mar 17 18:45:37.864242 (sd-sysext)[1062]: Using extensions 'kubernetes'.
Mar 17 18:45:37.865364 (sd-sysext)[1062]: Merged extensions into '/usr'.
Mar 17 18:45:37.893094 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 18:45:37.896036 systemd[1]: Mounting usr-share-oem.mount...
Mar 17 18:45:37.904055 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Mar 17 18:45:37.906272 systemd[1]: Starting modprobe@dm_mod.service...
Mar 17 18:45:37.916777 systemd[1]: Starting modprobe@efi_pstore.service...
Mar 17 18:45:37.925565 systemd[1]: Starting modprobe@loop.service...
Mar 17 18:45:37.932831 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Mar 17 18:45:37.933119 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Mar 17 18:45:37.933334 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 18:45:37.938078 systemd[1]: Mounted usr-share-oem.mount.
Mar 17 18:45:37.946606 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 18:45:37.946833 systemd[1]: Finished modprobe@dm_mod.service.
Mar 17 18:45:37.954000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:37.954000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:37.956557 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 18:45:37.956785 systemd[1]: Finished modprobe@efi_pstore.service.
Mar 17 18:45:37.965000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:37.965000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:37.967477 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 18:45:37.967705 systemd[1]: Finished modprobe@loop.service.
Mar 17 18:45:37.976000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:37.976000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:37.978948 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 17 18:45:37.979173 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Mar 17 18:45:37.983318 systemd[1]: Finished systemd-sysext.service.
Mar 17 18:45:37.988000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:37.993043 systemd[1]: Starting ensure-sysext.service...
Mar 17 18:45:38.002821 systemd[1]: Starting systemd-tmpfiles-setup.service...
Mar 17 18:45:38.016082 systemd[1]: Reloading.
Mar 17 18:45:38.049355 ldconfig[1045]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 17 18:45:38.061657 systemd-tmpfiles[1069]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Mar 17 18:45:38.071773 systemd-tmpfiles[1069]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 17 18:45:38.085788 systemd-tmpfiles[1069]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 17 18:45:38.114294 /usr/lib/systemd/system-generators/torcx-generator[1088]: time="2025-03-17T18:45:38Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
Mar 17 18:45:38.119219 /usr/lib/systemd/system-generators/torcx-generator[1088]: time="2025-03-17T18:45:38Z" level=info msg="torcx already run"
Mar 17 18:45:38.301051 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Mar 17 18:45:38.301088 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Mar 17 18:45:38.341616 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 18:45:38.377673 systemd-networkd[1021]: eth0: Gained IPv6LL
Mar 17 18:45:38.426000 audit: BPF prog-id=30 op=LOAD
Mar 17 18:45:38.426000 audit: BPF prog-id=31 op=LOAD
Mar 17 18:45:38.426000 audit: BPF prog-id=24 op=UNLOAD
Mar 17 18:45:38.426000 audit: BPF prog-id=25 op=UNLOAD
Mar 17 18:45:38.427000 audit: BPF prog-id=32 op=LOAD
Mar 17 18:45:38.427000 audit: BPF prog-id=27 op=UNLOAD
Mar 17 18:45:38.427000 audit: BPF prog-id=33 op=LOAD
Mar 17 18:45:38.427000 audit: BPF prog-id=34 op=LOAD
Mar 17 18:45:38.427000 audit: BPF prog-id=28 op=UNLOAD
Mar 17 18:45:38.427000 audit: BPF prog-id=29 op=UNLOAD
Mar 17 18:45:38.431000 audit: BPF prog-id=35 op=LOAD
Mar 17 18:45:38.431000 audit: BPF prog-id=26 op=UNLOAD
Mar 17 18:45:38.431000 audit: BPF prog-id=36 op=LOAD
Mar 17 18:45:38.431000 audit: BPF prog-id=21 op=UNLOAD
Mar 17 18:45:38.432000 audit: BPF prog-id=37 op=LOAD
Mar 17 18:45:38.432000 audit: BPF prog-id=38 op=LOAD
Mar 17 18:45:38.432000 audit: BPF prog-id=22 op=UNLOAD
Mar 17 18:45:38.432000 audit: BPF prog-id=23 op=UNLOAD
Mar 17 18:45:38.440349 systemd[1]: Finished ldconfig.service.
Mar 17 18:45:38.446000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:38.449667 systemd[1]: Finished systemd-tmpfiles-setup.service.
Mar 17 18:45:38.456000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:38.464355 systemd[1]: Starting audit-rules.service...
Mar 17 18:45:38.473577 systemd[1]: Starting clean-ca-certificates.service...
Mar 17 18:45:38.484206 systemd[1]: Starting oem-gce-enable-oslogin.service...
Mar 17 18:45:38.494678 systemd[1]: Starting systemd-journal-catalog-update.service...
Mar 17 18:45:38.502000 audit: BPF prog-id=39 op=LOAD
Mar 17 18:45:38.506095 systemd[1]: Starting systemd-resolved.service...
Mar 17 18:45:38.513000 audit: BPF prog-id=40 op=LOAD
Mar 17 18:45:38.516924 systemd[1]: Starting systemd-timesyncd.service...
Mar 17 18:45:38.526110 systemd[1]: Starting systemd-update-utmp.service...
Mar 17 18:45:38.535600 systemd[1]: Finished clean-ca-certificates.service.
Mar 17 18:45:38.538000 audit[1156]: SYSTEM_BOOT pid=1156 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:38.542000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:38.544571 systemd[1]: oem-gce-enable-oslogin.service: Deactivated successfully.
Mar 17 18:45:38.544858 systemd[1]: Finished oem-gce-enable-oslogin.service.
Mar 17 18:45:38.552000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=oem-gce-enable-oslogin comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:38.552000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=oem-gce-enable-oslogin comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:45:38.566274 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 18:45:38.566839 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Mar 17 18:45:38.567000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Mar 17 18:45:38.567000 audit[1163]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffcc835b8f0 a2=420 a3=0 items=0 ppid=1133 pid=1163 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:45:38.567000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Mar 17 18:45:38.570594 augenrules[1163]: No rules
Mar 17 18:45:38.570432 systemd[1]: Starting modprobe@dm_mod.service...
Mar 17 18:45:38.580299 systemd[1]: Starting modprobe@efi_pstore.service...
Mar 17 18:45:38.590172 systemd[1]: Starting modprobe@loop.service...
Mar 17 18:45:38.600190 systemd[1]: Starting oem-gce-enable-oslogin.service...
Mar 17 18:45:38.608764 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Mar 17 18:45:38.609171 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Mar 17 18:45:38.609503 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 17 18:45:38.609789 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 18:45:38.613338 systemd[1]: Finished audit-rules.service.
Mar 17 18:45:38.614782 enable-oslogin[1171]: /etc/pam.d/sshd already exists. Not enabling OS Login
Mar 17 18:45:38.621728 systemd[1]: Finished systemd-journal-catalog-update.service.
Mar 17 18:45:38.632725 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 18:45:38.632951 systemd[1]: Finished modprobe@dm_mod.service.
Mar 17 18:45:38.642709 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 18:45:38.642927 systemd[1]: Finished modprobe@efi_pstore.service.
Mar 17 18:45:38.652660 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 18:45:38.652883 systemd[1]: Finished modprobe@loop.service.
Mar 17 18:45:38.662771 systemd[1]: oem-gce-enable-oslogin.service: Deactivated successfully.
Mar 17 18:45:38.663018 systemd[1]: Finished oem-gce-enable-oslogin.service.
Mar 17 18:45:38.673023 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 17 18:45:38.673344 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Mar 17 18:45:38.676816 systemd[1]: Starting systemd-update-done.service...
Mar 17 18:45:38.685123 systemd[1]: Finished systemd-update-utmp.service.
Mar 17 18:45:38.702734 systemd-resolved[1147]: Positive Trust Anchors:
Mar 17 18:45:38.703129 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 18:45:38.703337 systemd-resolved[1147]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 17 18:45:38.704309 systemd-resolved[1147]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Mar 17 18:45:38.704488 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Mar 17 18:45:40.170445 systemd[1]: Starting modprobe@dm_mod.service...
Mar 17 18:45:40.170862 systemd-timesyncd[1152]: Contacted time server 169.254.169.254:123 (169.254.169.254).
Mar 17 18:45:40.171385 systemd-timesyncd[1152]: Initial clock synchronization to Mon 2025-03-17 18:45:40.170331 UTC.
Mar 17 18:45:40.179930 systemd[1]: Starting modprobe@drm.service...
Mar 17 18:45:40.190187 systemd[1]: Starting modprobe@efi_pstore.service...
Mar 17 18:45:40.200180 systemd[1]: Starting modprobe@loop.service...
Mar 17 18:45:40.209115 systemd-resolved[1147]: Defaulting to hostname 'linux'.
Mar 17 18:45:40.209996 systemd[1]: Starting oem-gce-enable-oslogin.service...
Mar 17 18:45:40.216249 enable-oslogin[1178]: /etc/pam.d/sshd already exists. Not enabling OS Login
Mar 17 18:45:40.218995 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Mar 17 18:45:40.219312 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Mar 17 18:45:40.221462 systemd[1]: Starting systemd-networkd-wait-online.service...
Mar 17 18:45:40.229910 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 17 18:45:40.230172 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 18:45:40.232232 systemd[1]: Started systemd-timesyncd.service.
Mar 17 18:45:40.242289 systemd[1]: Started systemd-resolved.service.
Mar 17 18:45:40.251851 systemd[1]: Finished systemd-update-done.service.
Mar 17 18:45:40.261629 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 18:45:40.261893 systemd[1]: Finished modprobe@dm_mod.service.
Mar 17 18:45:40.272116 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 17 18:45:40.272423 systemd[1]: Finished modprobe@drm.service.
Mar 17 18:45:40.281764 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 18:45:40.282013 systemd[1]: Finished modprobe@efi_pstore.service.
Mar 17 18:45:40.291619 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 18:45:40.291878 systemd[1]: Finished modprobe@loop.service.
Mar 17 18:45:40.301551 systemd[1]: oem-gce-enable-oslogin.service: Deactivated successfully.
Mar 17 18:45:40.301825 systemd[1]: Finished oem-gce-enable-oslogin.service.
Mar 17 18:45:40.311806 systemd[1]: Finished systemd-networkd-wait-online.service.
Mar 17 18:45:40.323344 systemd[1]: Reached target network.target.
Mar 17 18:45:40.331980 systemd[1]: Reached target network-online.target.
Mar 17 18:45:40.340973 systemd[1]: Reached target nss-lookup.target.
Mar 17 18:45:40.349888 systemd[1]: Reached target time-set.target.
Mar 17 18:45:40.358998 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 17 18:45:40.359075 systemd[1]: Reached target sysinit.target.
Mar 17 18:45:40.368067 systemd[1]: Started motdgen.path.
Mar 17 18:45:40.375013 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Mar 17 18:45:40.385294 systemd[1]: Started logrotate.timer.
Mar 17 18:45:40.393169 systemd[1]: Started mdadm.timer.
Mar 17 18:45:40.399924 systemd[1]: Started systemd-tmpfiles-clean.timer.
Mar 17 18:45:40.408958 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 17 18:45:40.409038 systemd[1]: Reached target paths.target.
Mar 17 18:45:40.415982 systemd[1]: Reached target timers.target.
Mar 17 18:45:40.423668 systemd[1]: Listening on dbus.socket.
Mar 17 18:45:40.432398 systemd[1]: Starting docker.socket...
Mar 17 18:45:40.444824 systemd[1]: Listening on sshd.socket.
Mar 17 18:45:40.452066 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Mar 17 18:45:40.452178 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Mar 17 18:45:40.453228 systemd[1]: Finished ensure-sysext.service.
Mar 17 18:45:40.462203 systemd[1]: Listening on docker.socket.
Mar 17 18:45:40.471544 systemd[1]: Reached target sockets.target.
Mar 17 18:45:40.479901 systemd[1]: Reached target basic.target.
Mar 17 18:45:40.486989 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Mar 17 18:45:40.487036 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Mar 17 18:45:40.488833 systemd[1]: Starting containerd.service...
Mar 17 18:45:40.497503 systemd[1]: Starting coreos-metadata-sshkeys@core.service...
Mar 17 18:45:40.508903 systemd[1]: Starting dbus.service...
Mar 17 18:45:40.516727 systemd[1]: Starting enable-oem-cloudinit.service...
Mar 17 18:45:40.527256 systemd[1]: Starting extend-filesystems.service...
Mar 17 18:45:40.534712 jq[1185]: false
Mar 17 18:45:40.534896 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Mar 17 18:45:40.537382 systemd[1]: Starting kubelet.service...
Mar 17 18:45:40.546837 systemd[1]: Starting motdgen.service...
Mar 17 18:45:40.560029 systemd[1]: Starting oem-gce.service...
Mar 17 18:45:40.572243 extend-filesystems[1186]: Found loop1
Mar 17 18:45:40.573477 systemd[1]: Starting prepare-helm.service...
Mar 17 18:45:40.597545 extend-filesystems[1186]: Found sda
Mar 17 18:45:40.597545 extend-filesystems[1186]: Found sda1
Mar 17 18:45:40.597545 extend-filesystems[1186]: Found sda2
Mar 17 18:45:40.597545 extend-filesystems[1186]: Found sda3
Mar 17 18:45:40.597545 extend-filesystems[1186]: Found usr
Mar 17 18:45:40.597545 extend-filesystems[1186]: Found sda4
Mar 17 18:45:40.597545 extend-filesystems[1186]: Found sda6
Mar 17 18:45:40.597545 extend-filesystems[1186]: Found sda7
Mar 17 18:45:40.597545 extend-filesystems[1186]: Found sda9
Mar 17 18:45:40.597545 extend-filesystems[1186]: Checking size of /dev/sda9
Mar 17 18:45:40.585304 systemd[1]: Starting ssh-key-proc-cmdline.service...
Mar 17 18:45:40.712696 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks
Mar 17 18:45:40.712749 extend-filesystems[1186]: Resized partition /dev/sda9
Mar 17 18:45:40.607159 systemd[1]: Starting sshd-keygen.service...
Mar 17 18:45:40.727275 extend-filesystems[1218]: resize2fs 1.46.5 (30-Dec-2021)
Mar 17 18:45:40.626553 systemd[1]: Starting systemd-logind.service...
Mar 17 18:45:40.629077 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Mar 17 18:45:40.629808 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionSecurity=!tpm2).
Mar 17 18:45:40.632199 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 17 18:45:40.737453 jq[1211]: true
Mar 17 18:45:40.633796 systemd[1]: Starting update-engine.service...
Mar 17 18:45:40.642635 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Mar 17 18:45:40.654441 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 17 18:45:40.654832 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Mar 17 18:45:40.661802 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 17 18:45:40.741816 jq[1219]: true
Mar 17 18:45:40.662119 systemd[1]: Finished ssh-key-proc-cmdline.service.
Mar 17 18:45:40.742255 mkfs.ext4[1222]: mke2fs 1.46.5 (30-Dec-2021)
Mar 17 18:45:40.742255 mkfs.ext4[1222]: Discarding device blocks: done
Mar 17 18:45:40.742255 mkfs.ext4[1222]: Creating filesystem with 262144 4k blocks and 65536 inodes
Mar 17 18:45:40.742255 mkfs.ext4[1222]: Filesystem UUID: cb271e88-6fd5-40dd-9f8e-eaa961871f6f
Mar 17 18:45:40.742255 mkfs.ext4[1222]: Superblock backups stored on blocks:
Mar 17 18:45:40.742255 mkfs.ext4[1222]: 32768, 98304, 163840, 229376
Mar 17 18:45:40.742255 mkfs.ext4[1222]: Allocating group tables: done
Mar 17 18:45:40.742255 mkfs.ext4[1222]: Writing inode tables: done
Mar 17 18:45:40.742255 mkfs.ext4[1222]: Creating journal (8192 blocks): done
Mar 17 18:45:40.742255 mkfs.ext4[1222]: Writing superblocks and filesystem accounting information: done
Mar 17 18:45:40.700404 systemd[1]: motdgen.service: Deactivated successfully.
Mar 17 18:45:40.700710 systemd[1]: Finished motdgen.service.
Mar 17 18:45:40.746186 dbus-daemon[1184]: [system] SELinux support is enabled
Mar 17 18:45:40.747180 systemd[1]: Started dbus.service.
Mar 17 18:45:40.751226 dbus-daemon[1184]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1021 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Mar 17 18:45:40.759211 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 17 18:45:40.759310 systemd[1]: Reached target system-config.target.
Mar 17 18:45:40.771106 umount[1232]: umount: /var/lib/flatcar-oem-gce.img: not mounted.
Mar 17 18:45:40.767992 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 17 18:45:40.768042 systemd[1]: Reached target user-config.target.
Mar 17 18:45:40.780431 dbus-daemon[1184]: [system] Successfully activated service 'org.freedesktop.systemd1'
Mar 17 18:45:40.787979 systemd[1]: Starting systemd-hostnamed.service...
Mar 17 18:45:40.803539 kernel: loop2: detected capacity change from 0 to 2097152
Mar 17 18:45:40.825750 tar[1217]: linux-amd64/helm
Mar 17 18:45:40.848716 kernel: EXT4-fs (sda9): resized filesystem to 2538491
Mar 17 18:45:40.918708 kernel: EXT4-fs (loop2): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Mar 17 18:45:40.921715 update_engine[1209]: I0317 18:45:40.921623 1209 main.cc:92] Flatcar Update Engine starting
Mar 17 18:45:40.930983 systemd[1]: Started update-engine.service.
Mar 17 18:45:40.931521 update_engine[1209]: I0317 18:45:40.931483 1209 update_check_scheduler.cc:74] Next update check in 7m37s
Mar 17 18:45:40.940522 extend-filesystems[1218]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Mar 17 18:45:40.940522 extend-filesystems[1218]: old_desc_blocks = 1, new_desc_blocks = 2
Mar 17 18:45:40.940522 extend-filesystems[1218]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long.
Mar 17 18:45:40.984289 extend-filesystems[1186]: Resized filesystem in /dev/sda9
Mar 17 18:45:41.017894 bash[1251]: Updated "/home/core/.ssh/authorized_keys"
Mar 17 18:45:40.944173 systemd[1]: Started locksmithd.service.
Mar 17 18:45:40.953057 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 17 18:45:40.953558 systemd[1]: Finished extend-filesystems.service.
Mar 17 18:45:40.992440 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Mar 17 18:45:41.047827 env[1221]: time="2025-03-17T18:45:41.047756023Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Mar 17 18:45:41.126024 systemd-logind[1207]: Watching system buttons on /dev/input/event1 (Power Button)
Mar 17 18:45:41.126071 systemd-logind[1207]: Watching system buttons on /dev/input/event2 (Sleep Button)
Mar 17 18:45:41.126104 systemd-logind[1207]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 17 18:45:41.127343 systemd-logind[1207]: New seat seat0.
Mar 17 18:45:41.134697 systemd[1]: Started systemd-logind.service.
Mar 17 18:45:41.277818 env[1221]: time="2025-03-17T18:45:41.277602136Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Mar 17 18:45:41.277979 env[1221]: time="2025-03-17T18:45:41.277908959Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Mar 17 18:45:41.285545 env[1221]: time="2025-03-17T18:45:41.285469560Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.179-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Mar 17 18:45:41.285545 env[1221]: time="2025-03-17T18:45:41.285543570Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Mar 17 18:45:41.285959 env[1221]: time="2025-03-17T18:45:41.285921564Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 18:45:41.286033 env[1221]: time="2025-03-17T18:45:41.285960259Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Mar 17 18:45:41.286033 env[1221]: time="2025-03-17T18:45:41.285984094Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Mar 17 18:45:41.286033 env[1221]: time="2025-03-17T18:45:41.286002406Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Mar 17 18:45:41.286171 env[1221]: time="2025-03-17T18:45:41.286125206Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Mar 17 18:45:41.286489 env[1221]: time="2025-03-17T18:45:41.286450334Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Mar 17 18:45:41.294401 env[1221]: time="2025-03-17T18:45:41.294332236Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 18:45:41.294401 env[1221]: time="2025-03-17T18:45:41.294398525Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Mar 17 18:45:41.294624 env[1221]: time="2025-03-17T18:45:41.294562771Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Mar 17 18:45:41.294624 env[1221]: time="2025-03-17T18:45:41.294590616Z" level=info msg="metadata content store policy set" policy=shared
Mar 17 18:45:41.308812 env[1221]: time="2025-03-17T18:45:41.308735284Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Mar 17 18:45:41.308984 env[1221]: time="2025-03-17T18:45:41.308822738Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Mar 17 18:45:41.308984 env[1221]: time="2025-03-17T18:45:41.308861600Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Mar 17 18:45:41.309087 env[1221]: time="2025-03-17T18:45:41.308989078Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Mar 17 18:45:41.309087 env[1221]: time="2025-03-17T18:45:41.309017456Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Mar 17 18:45:41.309087 env[1221]: time="2025-03-17T18:45:41.309068126Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Mar 17 18:45:41.309241 env[1221]: time="2025-03-17T18:45:41.309094054Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Mar 17 18:45:41.309241 env[1221]: time="2025-03-17T18:45:41.309120173Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Mar 17 18:45:41.309241 env[1221]: time="2025-03-17T18:45:41.309143437Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Mar 17 18:45:41.309241 env[1221]: time="2025-03-17T18:45:41.309167109Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Mar 17 18:45:41.309241 env[1221]: time="2025-03-17T18:45:41.309211268Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Mar 17 18:45:41.309241 env[1221]: time="2025-03-17T18:45:41.309237232Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Mar 17 18:45:41.309485 env[1221]: time="2025-03-17T18:45:41.309445870Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Mar 17 18:45:41.309634 env[1221]: time="2025-03-17T18:45:41.309601063Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Mar 17 18:45:41.311282 env[1221]: time="2025-03-17T18:45:41.310274600Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Mar 17 18:45:41.311282 env[1221]: time="2025-03-17T18:45:41.310337237Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Mar 17 18:45:41.311282 env[1221]: time="2025-03-17T18:45:41.310364148Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Mar 17 18:45:41.311282 env[1221]: time="2025-03-17T18:45:41.310455119Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Mar 17 18:45:41.311282 env[1221]: time="2025-03-17T18:45:41.310488859Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Mar 17 18:45:41.311282 env[1221]: time="2025-03-17T18:45:41.310608190Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Mar 17 18:45:41.311282 env[1221]: time="2025-03-17T18:45:41.310629662Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Mar 17 18:45:41.311282 env[1221]: time="2025-03-17T18:45:41.310652307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Mar 17 18:45:41.311282 env[1221]: time="2025-03-17T18:45:41.310674161Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Mar 17 18:45:41.311282 env[1221]: time="2025-03-17T18:45:41.310720977Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Mar 17 18:45:41.311282 env[1221]: time="2025-03-17T18:45:41.310743460Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Mar 17 18:45:41.311282 env[1221]: time="2025-03-17T18:45:41.310769579Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Mar 17 18:45:41.311282 env[1221]: time="2025-03-17T18:45:41.310963261Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Mar 17 18:45:41.311282 env[1221]: time="2025-03-17T18:45:41.310989237Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Mar 17 18:45:41.311282 env[1221]: time="2025-03-17T18:45:41.311012184Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Mar 17 18:45:41.312016 env[1221]: time="2025-03-17T18:45:41.311034620Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Mar 17 18:45:41.312016 env[1221]: time="2025-03-17T18:45:41.311061651Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Mar 17 18:45:41.312016 env[1221]: time="2025-03-17T18:45:41.311083123Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Mar 17 18:45:41.312016 env[1221]: time="2025-03-17T18:45:41.311112547Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Mar 17 18:45:41.312016 env[1221]: time="2025-03-17T18:45:41.311161337Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Mar 17 18:45:41.312469 env[1221]: time="2025-03-17T18:45:41.312372765Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Mar 17 18:45:41.315957 env[1221]: time="2025-03-17T18:45:41.312493879Z" level=info msg="Connect containerd service"
Mar 17 18:45:41.315957 env[1221]: time="2025-03-17T18:45:41.312555370Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Mar 17 18:45:41.318486 env[1221]: time="2025-03-17T18:45:41.318426463Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 17 18:45:41.318905 env[1221]: time="2025-03-17T18:45:41.318869101Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Mar 17 18:45:41.319014 env[1221]: time="2025-03-17T18:45:41.318956552Z" level=info msg=serving... address=/run/containerd/containerd.sock
Mar 17 18:45:41.319139 systemd[1]: Started containerd.service.
Mar 17 18:45:41.319626 env[1221]: time="2025-03-17T18:45:41.319309878Z" level=info msg="containerd successfully booted in 0.446732s"
Mar 17 18:45:41.321595 env[1221]: time="2025-03-17T18:45:41.321538432Z" level=info msg="Start subscribing containerd event"
Mar 17 18:45:41.321714 env[1221]: time="2025-03-17T18:45:41.321637039Z" level=info msg="Start recovering state"
Mar 17 18:45:41.339756 coreos-metadata[1183]: Mar 17 18:45:41.339 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1
Mar 17 18:45:41.342603 env[1221]: time="2025-03-17T18:45:41.342555366Z" level=info msg="Start event monitor"
Mar 17 18:45:41.342774 env[1221]: time="2025-03-17T18:45:41.342624127Z" level=info msg="Start snapshots syncer"
Mar 17 18:45:41.342774 env[1221]: time="2025-03-17T18:45:41.342644227Z" level=info msg="Start cni network conf syncer for default"
Mar 17 18:45:41.342774 env[1221]: time="2025-03-17T18:45:41.342662464Z" level=info msg="Start streaming server"
Mar 17 18:45:41.345344 coreos-metadata[1183]: Mar 17 18:45:41.345 INFO Fetch failed with 404: resource not found
Mar 17 18:45:41.345344 coreos-metadata[1183]: Mar 17 18:45:41.345 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1
Mar 17 18:45:41.346953 coreos-metadata[1183]: Mar 17 18:45:41.346 INFO Fetch successful
Mar 17 18:45:41.346953 coreos-metadata[1183]: Mar 17 18:45:41.346 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1
Mar 17 18:45:41.348209 coreos-metadata[1183]: Mar 17 18:45:41.348 INFO Fetch failed with 404: resource not found
Mar 17 18:45:41.348209 coreos-metadata[1183]: Mar 17 18:45:41.348 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1
Mar 17 18:45:41.349228 coreos-metadata[1183]: Mar 17 18:45:41.349 INFO Fetch failed with 404: resource not found
Mar 17 18:45:41.349228 coreos-metadata[1183]: Mar 17 18:45:41.349 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1
Mar 17 18:45:41.350855 coreos-metadata[1183]: Mar 17 18:45:41.350 INFO Fetch successful
Mar 17 18:45:41.353638 unknown[1183]: wrote ssh authorized keys file for user: core
Mar 17 18:45:41.383219 update-ssh-keys[1260]: Updated "/home/core/.ssh/authorized_keys"
Mar 17 18:45:41.384061 systemd[1]: Finished coreos-metadata-sshkeys@core.service.
Mar 17 18:45:41.388674 dbus-daemon[1184]: [system] Successfully activated service 'org.freedesktop.hostname1'
Mar 17 18:45:41.389468 dbus-daemon[1184]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1236 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Mar 17 18:45:41.394485 systemd[1]: Started systemd-hostnamed.service.
Mar 17 18:45:41.407960 systemd[1]: Starting polkit.service...
Mar 17 18:45:41.459477 polkitd[1262]: Started polkitd version 121
Mar 17 18:45:41.485511 polkitd[1262]: Loading rules from directory /etc/polkit-1/rules.d
Mar 17 18:45:41.485629 polkitd[1262]: Loading rules from directory /usr/share/polkit-1/rules.d
Mar 17 18:45:41.490583 polkitd[1262]: Finished loading, compiling and executing 2 rules
Mar 17 18:45:41.491370 dbus-daemon[1184]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Mar 17 18:45:41.491612 systemd[1]: Started polkit.service.
Mar 17 18:45:41.492052 polkitd[1262]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Mar 17 18:45:41.530530 systemd-hostnamed[1236]: Hostname set to (transient)
Mar 17 18:45:41.533717 systemd-resolved[1147]: System hostname changed to 'ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal'.
Mar 17 18:45:42.729412 tar[1217]: linux-amd64/LICENSE Mar 17 18:45:42.730009 tar[1217]: linux-amd64/README.md Mar 17 18:45:42.747691 systemd[1]: Finished prepare-helm.service. Mar 17 18:45:42.977667 sshd_keygen[1223]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 17 18:45:42.995108 systemd[1]: Started kubelet.service. Mar 17 18:45:43.061309 systemd[1]: Finished sshd-keygen.service. Mar 17 18:45:43.071423 systemd[1]: Starting issuegen.service... Mar 17 18:45:43.082318 systemd[1]: issuegen.service: Deactivated successfully. Mar 17 18:45:43.082578 systemd[1]: Finished issuegen.service. Mar 17 18:45:43.095288 systemd[1]: Starting systemd-user-sessions.service... Mar 17 18:45:43.118320 systemd[1]: Finished systemd-user-sessions.service. Mar 17 18:45:43.129639 systemd[1]: Started getty@tty1.service. Mar 17 18:45:43.141488 systemd[1]: Started serial-getty@ttyS0.service. Mar 17 18:45:43.150289 systemd[1]: Reached target getty.target. Mar 17 18:45:43.255726 locksmithd[1253]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 17 18:45:44.134816 kubelet[1287]: E0317 18:45:44.134751 1287 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 18:45:44.137621 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 18:45:44.137912 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 18:45:44.138370 systemd[1]: kubelet.service: Consumed 1.429s CPU time. Mar 17 18:45:46.737538 systemd[1]: var-lib-flatcar\x2doem\x2dgce.mount: Deactivated successfully. 
Mar 17 18:45:48.783734 kernel: loop2: detected capacity change from 0 to 2097152 Mar 17 18:45:48.810416 systemd-nspawn[1307]: Spawning container oem-gce on /var/lib/flatcar-oem-gce.img. Mar 17 18:45:48.810416 systemd-nspawn[1307]: Press ^] three times within 1s to kill container. Mar 17 18:45:48.827735 kernel: EXT4-fs (loop2): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Mar 17 18:45:48.913206 systemd[1]: Started oem-gce.service. Mar 17 18:45:48.913716 systemd[1]: Reached target multi-user.target. Mar 17 18:45:48.916428 systemd[1]: Starting systemd-update-utmp-runlevel.service... Mar 17 18:45:48.927896 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Mar 17 18:45:48.928190 systemd[1]: Finished systemd-update-utmp-runlevel.service. Mar 17 18:45:48.928443 systemd[1]: Startup finished in 1.052s (kernel) + 8.108s (initrd) + 16.526s (userspace) = 25.688s. Mar 17 18:45:48.987164 systemd-nspawn[1307]: + '[' -e /etc/default/instance_configs.cfg.template ']' Mar 17 18:45:48.987559 systemd-nspawn[1307]: + echo -e '[InstanceSetup]\nset_host_keys = false' Mar 17 18:45:48.987559 systemd-nspawn[1307]: + /usr/bin/google_instance_setup Mar 17 18:45:49.705844 instance-setup[1313]: INFO Running google_set_multiqueue. Mar 17 18:45:49.723213 instance-setup[1313]: INFO Set channels for eth0 to 2. Mar 17 18:45:49.727256 instance-setup[1313]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1. Mar 17 18:45:49.728921 instance-setup[1313]: INFO /proc/irq/31/smp_affinity_list: real affinity 0 Mar 17 18:45:49.729454 instance-setup[1313]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1. Mar 17 18:45:49.731076 instance-setup[1313]: INFO /proc/irq/32/smp_affinity_list: real affinity 0 Mar 17 18:45:49.731504 instance-setup[1313]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1. 
Mar 17 18:45:49.732936 instance-setup[1313]: INFO /proc/irq/33/smp_affinity_list: real affinity 1 Mar 17 18:45:49.733425 instance-setup[1313]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1. Mar 17 18:45:49.734974 instance-setup[1313]: INFO /proc/irq/34/smp_affinity_list: real affinity 1 Mar 17 18:45:49.747150 instance-setup[1313]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Mar 17 18:45:49.747586 instance-setup[1313]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Mar 17 18:45:49.784603 systemd[1]: Created slice system-sshd.slice. Mar 17 18:45:49.788307 systemd[1]: Started sshd@0-10.128.0.78:22-139.178.89.65:52240.service. Mar 17 18:45:49.801422 systemd-nspawn[1307]: + /usr/bin/google_metadata_script_runner --script-type startup Mar 17 18:45:50.127374 sshd[1345]: Accepted publickey for core from 139.178.89.65 port 52240 ssh2: RSA SHA256:MwXPWHAmIHbbkjBOl9game4w0Y2Rjfi7lGZx9rtQRJk Mar 17 18:45:50.131598 sshd[1345]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:45:50.155019 systemd[1]: Created slice user-500.slice. Mar 17 18:45:50.157240 systemd[1]: Starting user-runtime-dir@500.service... Mar 17 18:45:50.175430 systemd-logind[1207]: New session 1 of user core. Mar 17 18:45:50.184615 systemd[1]: Finished user-runtime-dir@500.service. Mar 17 18:45:50.188543 systemd[1]: Starting user@500.service... Mar 17 18:45:50.208046 (systemd)[1351]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:45:50.210141 startup-script[1346]: INFO Starting startup scripts. Mar 17 18:45:50.228343 startup-script[1346]: INFO No startup scripts found in metadata. Mar 17 18:45:50.228509 startup-script[1346]: INFO Finished running startup scripts. 
Mar 17 18:45:50.289047 systemd-nspawn[1307]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Mar 17 18:45:50.289595 systemd-nspawn[1307]: + daemon_pids=() Mar 17 18:45:50.289595 systemd-nspawn[1307]: + for d in accounts clock_skew network Mar 17 18:45:50.289595 systemd-nspawn[1307]: + daemon_pids+=($!) Mar 17 18:45:50.289853 systemd-nspawn[1307]: + for d in accounts clock_skew network Mar 17 18:45:50.290077 systemd-nspawn[1307]: + daemon_pids+=($!) Mar 17 18:45:50.290201 systemd-nspawn[1307]: + for d in accounts clock_skew network Mar 17 18:45:50.290537 systemd-nspawn[1307]: + daemon_pids+=($!) Mar 17 18:45:50.290713 systemd-nspawn[1307]: + NOTIFY_SOCKET=/run/systemd/notify Mar 17 18:45:50.290808 systemd-nspawn[1307]: + /usr/bin/systemd-notify --ready Mar 17 18:45:50.291411 systemd-nspawn[1307]: + /usr/bin/google_network_daemon Mar 17 18:45:50.292713 systemd-nspawn[1307]: + /usr/bin/google_clock_skew_daemon Mar 17 18:45:50.301622 systemd-nspawn[1307]: + /usr/bin/google_accounts_daemon Mar 17 18:45:50.356132 systemd-nspawn[1307]: + wait -n 36 37 38 Mar 17 18:45:50.379016 systemd[1351]: Queued start job for default target default.target. Mar 17 18:45:50.379986 systemd[1351]: Reached target paths.target. Mar 17 18:45:50.380023 systemd[1351]: Reached target sockets.target. Mar 17 18:45:50.380049 systemd[1351]: Reached target timers.target. Mar 17 18:45:50.380074 systemd[1351]: Reached target basic.target. Mar 17 18:45:50.380252 systemd[1]: Started user@500.service. Mar 17 18:45:50.381906 systemd[1]: Started session-1.scope. Mar 17 18:45:50.382616 systemd[1351]: Reached target default.target. Mar 17 18:45:50.382715 systemd[1351]: Startup finished in 160ms. Mar 17 18:45:50.619882 systemd[1]: Started sshd@1-10.128.0.78:22-139.178.89.65:52244.service. 
Mar 17 18:45:50.928942 sshd[1364]: Accepted publickey for core from 139.178.89.65 port 52244 ssh2: RSA SHA256:MwXPWHAmIHbbkjBOl9game4w0Y2Rjfi7lGZx9rtQRJk Mar 17 18:45:50.930080 sshd[1364]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:45:50.938788 systemd[1]: Started session-2.scope. Mar 17 18:45:50.941386 systemd-logind[1207]: New session 2 of user core. Mar 17 18:45:51.078422 google-clock-skew[1358]: INFO Starting Google Clock Skew daemon. Mar 17 18:45:51.102335 google-clock-skew[1358]: INFO Clock drift token has changed: 0. Mar 17 18:45:51.118913 systemd-nspawn[1307]: hwclock: Cannot access the Hardware Clock via any known method. Mar 17 18:45:51.119272 systemd-nspawn[1307]: hwclock: Use the --verbose option to see the details of our search for an access method. Mar 17 18:45:51.120368 google-clock-skew[1358]: WARNING Failed to sync system time with hardware clock. Mar 17 18:45:51.147831 sshd[1364]: pam_unix(sshd:session): session closed for user core Mar 17 18:45:51.152633 systemd[1]: sshd@1-10.128.0.78:22-139.178.89.65:52244.service: Deactivated successfully. Mar 17 18:45:51.154013 systemd[1]: session-2.scope: Deactivated successfully. Mar 17 18:45:51.157490 systemd-logind[1207]: Session 2 logged out. Waiting for processes to exit. Mar 17 18:45:51.159550 systemd-logind[1207]: Removed session 2. Mar 17 18:45:51.195219 systemd[1]: Started sshd@2-10.128.0.78:22-139.178.89.65:55354.service. Mar 17 18:45:51.252565 groupadd[1379]: group added to /etc/group: name=google-sudoers, GID=1000 Mar 17 18:45:51.256293 groupadd[1379]: group added to /etc/gshadow: name=google-sudoers Mar 17 18:45:51.264568 groupadd[1379]: new group: name=google-sudoers, GID=1000 Mar 17 18:45:51.280246 google-networking[1359]: INFO Starting Google Networking daemon. Mar 17 18:45:51.282324 google-accounts[1357]: INFO Starting Google Accounts daemon. Mar 17 18:45:51.321213 google-accounts[1357]: WARNING OS Login not installed. 
Mar 17 18:45:51.322301 google-accounts[1357]: INFO Creating a new user account for 0. Mar 17 18:45:51.327330 systemd-nspawn[1307]: useradd: invalid user name '0': use --badname to ignore Mar 17 18:45:51.328584 google-accounts[1357]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. Mar 17 18:45:51.499851 sshd[1377]: Accepted publickey for core from 139.178.89.65 port 55354 ssh2: RSA SHA256:MwXPWHAmIHbbkjBOl9game4w0Y2Rjfi7lGZx9rtQRJk Mar 17 18:45:51.502362 sshd[1377]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:45:51.510527 systemd[1]: Started session-3.scope. Mar 17 18:45:51.511198 systemd-logind[1207]: New session 3 of user core. Mar 17 18:45:51.708053 sshd[1377]: pam_unix(sshd:session): session closed for user core Mar 17 18:45:51.712793 systemd-logind[1207]: Session 3 logged out. Waiting for processes to exit. Mar 17 18:45:51.713096 systemd[1]: sshd@2-10.128.0.78:22-139.178.89.65:55354.service: Deactivated successfully. Mar 17 18:45:51.714254 systemd[1]: session-3.scope: Deactivated successfully. Mar 17 18:45:51.715361 systemd-logind[1207]: Removed session 3. Mar 17 18:45:51.754183 systemd[1]: Started sshd@3-10.128.0.78:22-139.178.89.65:55364.service. Mar 17 18:45:52.045337 sshd[1394]: Accepted publickey for core from 139.178.89.65 port 55364 ssh2: RSA SHA256:MwXPWHAmIHbbkjBOl9game4w0Y2Rjfi7lGZx9rtQRJk Mar 17 18:45:52.047172 sshd[1394]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:45:52.052762 systemd-logind[1207]: New session 4 of user core. Mar 17 18:45:52.054261 systemd[1]: Started session-4.scope. Mar 17 18:45:52.259840 sshd[1394]: pam_unix(sshd:session): session closed for user core Mar 17 18:45:52.263982 systemd[1]: sshd@3-10.128.0.78:22-139.178.89.65:55364.service: Deactivated successfully. Mar 17 18:45:52.265080 systemd[1]: session-4.scope: Deactivated successfully. 
Mar 17 18:45:52.265952 systemd-logind[1207]: Session 4 logged out. Waiting for processes to exit. Mar 17 18:45:52.267347 systemd-logind[1207]: Removed session 4. Mar 17 18:45:52.305627 systemd[1]: Started sshd@4-10.128.0.78:22-139.178.89.65:55366.service. Mar 17 18:45:52.594195 sshd[1400]: Accepted publickey for core from 139.178.89.65 port 55366 ssh2: RSA SHA256:MwXPWHAmIHbbkjBOl9game4w0Y2Rjfi7lGZx9rtQRJk Mar 17 18:45:52.596275 sshd[1400]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:45:52.602777 systemd-logind[1207]: New session 5 of user core. Mar 17 18:45:52.603361 systemd[1]: Started session-5.scope. Mar 17 18:45:52.795983 sudo[1403]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 17 18:45:52.796435 sudo[1403]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Mar 17 18:45:52.832596 systemd[1]: Starting docker.service... Mar 17 18:45:52.882776 env[1413]: time="2025-03-17T18:45:52.882701942Z" level=info msg="Starting up" Mar 17 18:45:52.884744 env[1413]: time="2025-03-17T18:45:52.884667754Z" level=info msg="parsed scheme: \"unix\"" module=grpc Mar 17 18:45:52.884744 env[1413]: time="2025-03-17T18:45:52.884729896Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Mar 17 18:45:52.884915 env[1413]: time="2025-03-17T18:45:52.884768933Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Mar 17 18:45:52.884915 env[1413]: time="2025-03-17T18:45:52.884788249Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Mar 17 18:45:52.887560 env[1413]: time="2025-03-17T18:45:52.887507661Z" level=info msg="parsed scheme: \"unix\"" module=grpc Mar 17 18:45:52.887560 env[1413]: time="2025-03-17T18:45:52.887532937Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Mar 17 18:45:52.887560 env[1413]: 
time="2025-03-17T18:45:52.887556667Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Mar 17 18:45:52.887818 env[1413]: time="2025-03-17T18:45:52.887570820Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Mar 17 18:45:52.925847 env[1413]: time="2025-03-17T18:45:52.925788764Z" level=info msg="Loading containers: start." Mar 17 18:45:53.108720 kernel: Initializing XFRM netlink socket Mar 17 18:45:53.155440 env[1413]: time="2025-03-17T18:45:53.155286005Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Mar 17 18:45:53.245927 systemd-networkd[1021]: docker0: Link UP Mar 17 18:45:53.266981 env[1413]: time="2025-03-17T18:45:53.266911116Z" level=info msg="Loading containers: done." Mar 17 18:45:53.288017 env[1413]: time="2025-03-17T18:45:53.287942754Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 17 18:45:53.288322 env[1413]: time="2025-03-17T18:45:53.288271810Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Mar 17 18:45:53.288467 env[1413]: time="2025-03-17T18:45:53.288436171Z" level=info msg="Daemon has completed initialization" Mar 17 18:45:53.309352 systemd[1]: Started docker.service. Mar 17 18:45:53.322044 env[1413]: time="2025-03-17T18:45:53.321957455Z" level=info msg="API listen on /run/docker.sock" Mar 17 18:45:54.359116 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 17 18:45:54.359466 systemd[1]: Stopped kubelet.service. Mar 17 18:45:54.359538 systemd[1]: kubelet.service: Consumed 1.429s CPU time. Mar 17 18:45:54.364132 systemd[1]: Starting kubelet.service... 
Mar 17 18:45:54.606421 env[1221]: time="2025-03-17T18:45:54.606343347Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\"" Mar 17 18:45:54.633536 systemd[1]: Started kubelet.service. Mar 17 18:45:54.727996 kubelet[1545]: E0317 18:45:54.727927 1545 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 18:45:54.732448 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 18:45:54.732702 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 18:45:55.129053 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3278921107.mount: Deactivated successfully. Mar 17 18:45:57.020828 env[1221]: time="2025-03-17T18:45:57.020745428Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:45:57.023619 env[1221]: time="2025-03-17T18:45:57.023564424Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4db5a05c271eac8f5da2f95895ea1ccb9a38f48db3135ba3bdfe35941a396ea8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:45:57.026520 env[1221]: time="2025-03-17T18:45:57.026463565Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:45:57.030146 env[1221]: time="2025-03-17T18:45:57.030076300Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:77c54346965036acc7ac95c3200597ede36db9246179248dde21c1a3ecc1caf0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 
18:45:57.031805 env[1221]: time="2025-03-17T18:45:57.031738279Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\" returns image reference \"sha256:4db5a05c271eac8f5da2f95895ea1ccb9a38f48db3135ba3bdfe35941a396ea8\"" Mar 17 18:45:57.049139 env[1221]: time="2025-03-17T18:45:57.049077835Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\"" Mar 17 18:45:58.829326 env[1221]: time="2025-03-17T18:45:58.829239397Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:45:58.833108 env[1221]: time="2025-03-17T18:45:58.833049365Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:de1025c2d496829d3250130380737609ffcdd10a4dce6f2dcd03f23a85a15e6a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:45:58.835637 env[1221]: time="2025-03-17T18:45:58.835575176Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:45:58.838216 env[1221]: time="2025-03-17T18:45:58.838162920Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:d8874f3fb45591ecdac67a3035c730808f18b3ab13147495c7d77eb1960d4f6f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:45:58.839773 env[1221]: time="2025-03-17T18:45:58.839705138Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\" returns image reference \"sha256:de1025c2d496829d3250130380737609ffcdd10a4dce6f2dcd03f23a85a15e6a\"" Mar 17 18:45:58.855912 env[1221]: time="2025-03-17T18:45:58.855860749Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\"" Mar 17 18:46:00.073254 env[1221]: time="2025-03-17T18:46:00.073172515Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:46:00.076708 env[1221]: time="2025-03-17T18:46:00.076630292Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:11492f0faf138e933cadd6f533f03e401da9a35e53711e833f18afa6b185b2b7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:46:00.079891 env[1221]: time="2025-03-17T18:46:00.079811550Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:46:00.083418 env[1221]: time="2025-03-17T18:46:00.083349148Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c699f8c97ae7ec819c8bd878d3db104ba72fc440d810d9030e09286b696017b5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:46:00.084965 env[1221]: time="2025-03-17T18:46:00.084900521Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\" returns image reference \"sha256:11492f0faf138e933cadd6f533f03e401da9a35e53711e833f18afa6b185b2b7\"" Mar 17 18:46:00.104031 env[1221]: time="2025-03-17T18:46:00.103953261Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\"" Mar 17 18:46:01.349017 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount766196418.mount: Deactivated successfully. 
Mar 17 18:46:02.095008 env[1221]: time="2025-03-17T18:46:02.094885624Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:46:02.097955 env[1221]: time="2025-03-17T18:46:02.097830844Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:01045f200a8856c3f5ccfa7be03d72274f1f16fc7a047659e709d603d5c019dc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:46:02.100711 env[1221]: time="2025-03-17T18:46:02.100620323Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:46:02.102946 env[1221]: time="2025-03-17T18:46:02.102871817Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:ea4da798040a18ed3f302e8d5f67307c7275a2a53bcf3d51bcec223acda84a55,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:46:02.103449 env[1221]: time="2025-03-17T18:46:02.103389451Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\" returns image reference \"sha256:01045f200a8856c3f5ccfa7be03d72274f1f16fc7a047659e709d603d5c019dc\"" Mar 17 18:46:02.118602 env[1221]: time="2025-03-17T18:46:02.118525295Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Mar 17 18:46:02.477821 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1887638203.mount: Deactivated successfully. 
Mar 17 18:46:03.619702 env[1221]: time="2025-03-17T18:46:03.619607981Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:46:03.622353 env[1221]: time="2025-03-17T18:46:03.622284601Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:46:03.625076 env[1221]: time="2025-03-17T18:46:03.625011979Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:46:03.627446 env[1221]: time="2025-03-17T18:46:03.627390646Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:46:03.628596 env[1221]: time="2025-03-17T18:46:03.628540407Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Mar 17 18:46:03.643286 env[1221]: time="2025-03-17T18:46:03.643169631Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Mar 17 18:46:04.011087 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3030262824.mount: Deactivated successfully. 
Mar 17 18:46:04.021460 env[1221]: time="2025-03-17T18:46:04.021385391Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:46:04.024157 env[1221]: time="2025-03-17T18:46:04.024087886Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:46:04.027710 env[1221]: time="2025-03-17T18:46:04.027602146Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:46:04.033616 env[1221]: time="2025-03-17T18:46:04.033551024Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Mar 17 18:46:04.034615 env[1221]: time="2025-03-17T18:46:04.034467861Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:46:04.053811 env[1221]: time="2025-03-17T18:46:04.053759169Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Mar 17 18:46:04.455105 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount264525012.mount: Deactivated successfully. Mar 17 18:46:04.859166 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 17 18:46:04.859508 systemd[1]: Stopped kubelet.service. Mar 17 18:46:04.862662 systemd[1]: Starting kubelet.service... Mar 17 18:46:05.116603 systemd[1]: Started kubelet.service. 
Mar 17 18:46:05.211073 kubelet[1588]: E0317 18:46:05.211018 1588 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 18:46:05.215337 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 18:46:05.215568 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 18:46:07.317717 env[1221]: time="2025-03-17T18:46:07.317610787Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:46:07.321132 env[1221]: time="2025-03-17T18:46:07.321054300Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:46:07.323835 env[1221]: time="2025-03-17T18:46:07.323778918Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:46:07.326593 env[1221]: time="2025-03-17T18:46:07.326529369Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:46:07.327710 env[1221]: time="2025-03-17T18:46:07.327639808Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Mar 17 18:46:11.070457 systemd[1]: Stopped kubelet.service. Mar 17 18:46:11.074907 systemd[1]: Starting kubelet.service... 
Mar 17 18:46:11.115204 systemd[1]: Reloading. Mar 17 18:46:11.282357 /usr/lib/systemd/system-generators/torcx-generator[1678]: time="2025-03-17T18:46:11Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Mar 17 18:46:11.283884 /usr/lib/systemd/system-generators/torcx-generator[1678]: time="2025-03-17T18:46:11Z" level=info msg="torcx already run" Mar 17 18:46:11.403228 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Mar 17 18:46:11.403257 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Mar 17 18:46:11.427406 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 18:46:11.569314 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Mar 17 18:46:11.583640 systemd[1]: Started kubelet.service. Mar 17 18:46:11.594819 systemd[1]: Stopping kubelet.service... Mar 17 18:46:11.595421 systemd[1]: kubelet.service: Deactivated successfully. Mar 17 18:46:11.596019 systemd[1]: Stopped kubelet.service. Mar 17 18:46:11.599515 systemd[1]: Starting kubelet.service... Mar 17 18:46:11.791196 systemd[1]: Started kubelet.service. Mar 17 18:46:11.870016 kubelet[1736]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 17 18:46:11.870510 kubelet[1736]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 17 18:46:11.870577 kubelet[1736]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 18:46:11.870793 kubelet[1736]: I0317 18:46:11.870748 1736 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 17 18:46:12.311973 kubelet[1736]: I0317 18:46:12.311902 1736 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Mar 17 18:46:12.311973 kubelet[1736]: I0317 18:46:12.311954 1736 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 17 18:46:12.312616 kubelet[1736]: I0317 18:46:12.312495 1736 server.go:927] "Client rotation is on, will bootstrap in background" Mar 17 18:46:12.350648 kubelet[1736]: I0317 18:46:12.350112 1736 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 18:46:12.352162 kubelet[1736]: E0317 18:46:12.352120 1736 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.128.0.78:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.128.0.78:6443: connect: connection refused Mar 17 18:46:12.374939 kubelet[1736]: I0317 18:46:12.374884 1736 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 17 18:46:12.375302 kubelet[1736]: I0317 18:46:12.375261 1736 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 18:46:12.375578 kubelet[1736]: I0317 18:46:12.375304 1736 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Mar 17 18:46:12.375800 kubelet[1736]: I0317 18:46:12.375585 1736 topology_manager.go:138] "Creating 
topology manager with none policy" Mar 17 18:46:12.375800 kubelet[1736]: I0317 18:46:12.375606 1736 container_manager_linux.go:301] "Creating device plugin manager" Mar 17 18:46:12.375800 kubelet[1736]: I0317 18:46:12.375791 1736 state_mem.go:36] "Initialized new in-memory state store" Mar 17 18:46:12.377383 kubelet[1736]: I0317 18:46:12.377349 1736 kubelet.go:400] "Attempting to sync node with API server" Mar 17 18:46:12.377383 kubelet[1736]: I0317 18:46:12.377381 1736 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 18:46:12.377569 kubelet[1736]: I0317 18:46:12.377418 1736 kubelet.go:312] "Adding apiserver pod source" Mar 17 18:46:12.377569 kubelet[1736]: I0317 18:46:12.377442 1736 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 17 18:46:12.389122 kubelet[1736]: W0317 18:46:12.388895 1736 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.78:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.78:6443: connect: connection refused Mar 17 18:46:12.389122 kubelet[1736]: E0317 18:46:12.389032 1736 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.128.0.78:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.78:6443: connect: connection refused Mar 17 18:46:12.389393 kubelet[1736]: W0317 18:46:12.389215 1736 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.78:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.78:6443: connect: connection refused Mar 17 18:46:12.389393 kubelet[1736]: E0317 18:46:12.389279 1736 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
"https://10.128.0.78:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.78:6443: connect: connection refused Mar 17 18:46:12.389513 kubelet[1736]: I0317 18:46:12.389400 1736 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Mar 17 18:46:12.399045 kubelet[1736]: I0317 18:46:12.398993 1736 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 18:46:12.399369 kubelet[1736]: W0317 18:46:12.399349 1736 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 17 18:46:12.400647 kubelet[1736]: I0317 18:46:12.400585 1736 server.go:1264] "Started kubelet" Mar 17 18:46:12.401946 kubelet[1736]: I0317 18:46:12.401893 1736 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 18:46:12.403422 kubelet[1736]: I0317 18:46:12.403375 1736 server.go:455] "Adding debug handlers to kubelet server" Mar 17 18:46:12.420455 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Mar 17 18:46:12.421524 kubelet[1736]: I0317 18:46:12.420774 1736 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 18:46:12.421980 kubelet[1736]: I0317 18:46:12.421906 1736 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 18:46:12.422429 kubelet[1736]: I0317 18:46:12.422405 1736 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 18:46:12.424945 kubelet[1736]: E0317 18:46:12.424782 1736 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.78:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.78:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal.182dab7a3f6a8926 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal,UID:ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal,},FirstTimestamp:2025-03-17 18:46:12.400556326 +0000 UTC m=+0.589377342,LastTimestamp:2025-03-17 18:46:12.400556326 +0000 UTC m=+0.589377342,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal,}" Mar 17 18:46:12.430471 kubelet[1736]: I0317 18:46:12.430431 1736 volume_manager.go:291] "Starting Kubelet Volume Manager" Mar 17 18:46:12.433538 kubelet[1736]: E0317 18:46:12.433485 1736 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.128.0.78:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.78:6443: connect: connection refused" interval="200ms" Mar 17 18:46:12.434082 kubelet[1736]: I0317 18:46:12.434043 1736 factory.go:221] Registration of the systemd container factory successfully Mar 17 18:46:12.434367 kubelet[1736]: I0317 18:46:12.434338 1736 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 18:46:12.435770 kubelet[1736]: I0317 18:46:12.435643 1736 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 17 18:46:12.435770 kubelet[1736]: I0317 18:46:12.435748 1736 reconciler.go:26] "Reconciler: start to sync state" Mar 17 18:46:12.438215 kubelet[1736]: I0317 18:46:12.438180 1736 factory.go:221] Registration of the containerd container factory successfully Mar 17 18:46:12.441514 kubelet[1736]: W0317 18:46:12.441448 1736 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.78:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.78:6443: connect: connection refused Mar 17 18:46:12.441734 kubelet[1736]: E0317 18:46:12.441714 1736 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.128.0.78:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.78:6443: connect: connection refused Mar 17 18:46:12.460960 kubelet[1736]: E0317 18:46:12.460910 1736 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 17 18:46:12.463992 kubelet[1736]: I0317 18:46:12.463939 1736 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 17 18:46:12.474311 kubelet[1736]: I0317 18:46:12.474275 1736 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 17 18:46:12.474500 kubelet[1736]: I0317 18:46:12.474487 1736 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 17 18:46:12.474589 kubelet[1736]: I0317 18:46:12.474579 1736 kubelet.go:2337] "Starting kubelet main sync loop" Mar 17 18:46:12.474760 kubelet[1736]: E0317 18:46:12.474733 1736 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 17 18:46:12.480487 kubelet[1736]: W0317 18:46:12.479912 1736 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.78:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.78:6443: connect: connection refused Mar 17 18:46:12.480487 kubelet[1736]: E0317 18:46:12.479992 1736 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.128.0.78:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.78:6443: connect: connection refused Mar 17 18:46:12.483288 kubelet[1736]: I0317 18:46:12.483244 1736 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 17 18:46:12.483288 kubelet[1736]: I0317 18:46:12.483268 1736 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 17 18:46:12.483288 kubelet[1736]: I0317 18:46:12.483296 1736 state_mem.go:36] "Initialized new in-memory state store" Mar 17 18:46:12.486822 kubelet[1736]: I0317 18:46:12.486773 1736 policy_none.go:49] "None policy: Start" Mar 17 18:46:12.487744 
kubelet[1736]: I0317 18:46:12.487718 1736 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 17 18:46:12.487883 kubelet[1736]: I0317 18:46:12.487758 1736 state_mem.go:35] "Initializing new in-memory state store" Mar 17 18:46:12.501348 systemd[1]: Created slice kubepods.slice. Mar 17 18:46:12.508518 systemd[1]: Created slice kubepods-besteffort.slice. Mar 17 18:46:12.519195 systemd[1]: Created slice kubepods-burstable.slice. Mar 17 18:46:12.521632 kubelet[1736]: I0317 18:46:12.521597 1736 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 18:46:12.521937 kubelet[1736]: I0317 18:46:12.521871 1736 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 18:46:12.522075 kubelet[1736]: I0317 18:46:12.522049 1736 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 18:46:12.528648 kubelet[1736]: E0317 18:46:12.528608 1736 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal\" not found" Mar 17 18:46:12.540868 kubelet[1736]: I0317 18:46:12.540836 1736 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal" Mar 17 18:46:12.541698 kubelet[1736]: E0317 18:46:12.541650 1736 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.78:6443/api/v1/nodes\": dial tcp 10.128.0.78:6443: connect: connection refused" node="ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal" Mar 17 18:46:12.575614 kubelet[1736]: I0317 18:46:12.575112 1736 topology_manager.go:215] "Topology Admit Handler" podUID="014415e314806c598284956477dd0de6" podNamespace="kube-system" podName="kube-apiserver-ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal" Mar 17 18:46:12.590599 kubelet[1736]: I0317 18:46:12.590540 
1736 topology_manager.go:215] "Topology Admit Handler" podUID="a954f8e8eff2206c6e54c0a6b495e88b" podNamespace="kube-system" podName="kube-controller-manager-ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal" Mar 17 18:46:12.595956 kubelet[1736]: I0317 18:46:12.595892 1736 topology_manager.go:215] "Topology Admit Handler" podUID="b3e1492b7f7adb8556a7aeb47d8a1cc5" podNamespace="kube-system" podName="kube-scheduler-ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal" Mar 17 18:46:12.604404 systemd[1]: Created slice kubepods-burstable-pod014415e314806c598284956477dd0de6.slice. Mar 17 18:46:12.614714 systemd[1]: Created slice kubepods-burstable-poda954f8e8eff2206c6e54c0a6b495e88b.slice. Mar 17 18:46:12.625998 systemd[1]: Created slice kubepods-burstable-podb3e1492b7f7adb8556a7aeb47d8a1cc5.slice. Mar 17 18:46:12.635291 kubelet[1736]: E0317 18:46:12.635212 1736 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.78:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.78:6443: connect: connection refused" interval="400ms" Mar 17 18:46:12.636696 kubelet[1736]: I0317 18:46:12.636645 1736 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a954f8e8eff2206c6e54c0a6b495e88b-ca-certs\") pod \"kube-controller-manager-ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal\" (UID: \"a954f8e8eff2206c6e54c0a6b495e88b\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal" Mar 17 18:46:12.636858 kubelet[1736]: I0317 18:46:12.636728 1736 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a954f8e8eff2206c6e54c0a6b495e88b-flexvolume-dir\") pod 
\"kube-controller-manager-ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal\" (UID: \"a954f8e8eff2206c6e54c0a6b495e88b\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal" Mar 17 18:46:12.636858 kubelet[1736]: I0317 18:46:12.636765 1736 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a954f8e8eff2206c6e54c0a6b495e88b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal\" (UID: \"a954f8e8eff2206c6e54c0a6b495e88b\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal" Mar 17 18:46:12.636858 kubelet[1736]: I0317 18:46:12.636795 1736 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/014415e314806c598284956477dd0de6-ca-certs\") pod \"kube-apiserver-ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal\" (UID: \"014415e314806c598284956477dd0de6\") " pod="kube-system/kube-apiserver-ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal" Mar 17 18:46:12.636858 kubelet[1736]: I0317 18:46:12.636842 1736 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/014415e314806c598284956477dd0de6-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal\" (UID: \"014415e314806c598284956477dd0de6\") " pod="kube-system/kube-apiserver-ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal" Mar 17 18:46:12.637098 kubelet[1736]: I0317 18:46:12.636869 1736 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/a954f8e8eff2206c6e54c0a6b495e88b-k8s-certs\") pod \"kube-controller-manager-ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal\" (UID: \"a954f8e8eff2206c6e54c0a6b495e88b\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal" Mar 17 18:46:12.637098 kubelet[1736]: I0317 18:46:12.636899 1736 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a954f8e8eff2206c6e54c0a6b495e88b-kubeconfig\") pod \"kube-controller-manager-ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal\" (UID: \"a954f8e8eff2206c6e54c0a6b495e88b\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal" Mar 17 18:46:12.637098 kubelet[1736]: I0317 18:46:12.636933 1736 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b3e1492b7f7adb8556a7aeb47d8a1cc5-kubeconfig\") pod \"kube-scheduler-ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal\" (UID: \"b3e1492b7f7adb8556a7aeb47d8a1cc5\") " pod="kube-system/kube-scheduler-ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal" Mar 17 18:46:12.637098 kubelet[1736]: I0317 18:46:12.636963 1736 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/014415e314806c598284956477dd0de6-k8s-certs\") pod \"kube-apiserver-ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal\" (UID: \"014415e314806c598284956477dd0de6\") " pod="kube-system/kube-apiserver-ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal" Mar 17 18:46:12.751672 kubelet[1736]: I0317 18:46:12.751633 1736 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal" Mar 17 18:46:12.752211 kubelet[1736]: 
E0317 18:46:12.752152 1736 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.78:6443/api/v1/nodes\": dial tcp 10.128.0.78:6443: connect: connection refused" node="ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal" Mar 17 18:46:12.913201 env[1221]: time="2025-03-17T18:46:12.913130033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal,Uid:014415e314806c598284956477dd0de6,Namespace:kube-system,Attempt:0,}" Mar 17 18:46:12.919823 env[1221]: time="2025-03-17T18:46:12.919755791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal,Uid:a954f8e8eff2206c6e54c0a6b495e88b,Namespace:kube-system,Attempt:0,}" Mar 17 18:46:12.931908 env[1221]: time="2025-03-17T18:46:12.931845470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal,Uid:b3e1492b7f7adb8556a7aeb47d8a1cc5,Namespace:kube-system,Attempt:0,}" Mar 17 18:46:13.035924 kubelet[1736]: E0317 18:46:13.035838 1736 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.78:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.78:6443: connect: connection refused" interval="800ms" Mar 17 18:46:13.159930 kubelet[1736]: I0317 18:46:13.159288 1736 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal" Mar 17 18:46:13.159930 kubelet[1736]: E0317 18:46:13.159881 1736 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.78:6443/api/v1/nodes\": dial tcp 10.128.0.78:6443: connect: connection refused" node="ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal" Mar 17 
18:46:13.284565 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1150774636.mount: Deactivated successfully. Mar 17 18:46:13.296274 env[1221]: time="2025-03-17T18:46:13.296191885Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:46:13.299379 env[1221]: time="2025-03-17T18:46:13.299316394Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:46:13.300791 env[1221]: time="2025-03-17T18:46:13.300734801Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:46:13.301770 env[1221]: time="2025-03-17T18:46:13.301728405Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:46:13.306276 env[1221]: time="2025-03-17T18:46:13.306198786Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:46:13.307570 env[1221]: time="2025-03-17T18:46:13.307520752Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:46:13.308853 env[1221]: time="2025-03-17T18:46:13.308757754Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:46:13.309891 env[1221]: time="2025-03-17T18:46:13.309849488Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:46:13.313502 env[1221]: time="2025-03-17T18:46:13.313443423Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:46:13.314580 env[1221]: time="2025-03-17T18:46:13.314538632Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:46:13.318486 env[1221]: time="2025-03-17T18:46:13.318426730Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:46:13.321233 env[1221]: time="2025-03-17T18:46:13.321173238Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:46:13.367618 env[1221]: time="2025-03-17T18:46:13.367516940Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:46:13.367876 env[1221]: time="2025-03-17T18:46:13.367635233Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:46:13.367876 env[1221]: time="2025-03-17T18:46:13.367707618Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:46:13.368017 env[1221]: time="2025-03-17T18:46:13.367949695Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/534ebc204f6f2b5d4aedb10d5128b1191ea5bcb02df8ff77dd25578458abfe8b pid=1775 runtime=io.containerd.runc.v2 Mar 17 18:46:13.379669 env[1221]: time="2025-03-17T18:46:13.379561864Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:46:13.379996 env[1221]: time="2025-03-17T18:46:13.379952686Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:46:13.380192 env[1221]: time="2025-03-17T18:46:13.380152166Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:46:13.380547 env[1221]: time="2025-03-17T18:46:13.380498503Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/311c4eeb1981b79ab29fc7ba589f7b806b8fd54fe67c75d10bb103de7d3baab5 pid=1797 runtime=io.containerd.runc.v2 Mar 17 18:46:13.390005 env[1221]: time="2025-03-17T18:46:13.389879015Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:46:13.390247 env[1221]: time="2025-03-17T18:46:13.390046474Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:46:13.390247 env[1221]: time="2025-03-17T18:46:13.390121895Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:46:13.390648 env[1221]: time="2025-03-17T18:46:13.390446908Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f528c4e645f4f579b4add43e0fce192b65984d44875bf0b425972724b670e1a3 pid=1809 runtime=io.containerd.runc.v2 Mar 17 18:46:13.409955 systemd[1]: Started cri-containerd-534ebc204f6f2b5d4aedb10d5128b1191ea5bcb02df8ff77dd25578458abfe8b.scope. Mar 17 18:46:13.426319 systemd[1]: Started cri-containerd-311c4eeb1981b79ab29fc7ba589f7b806b8fd54fe67c75d10bb103de7d3baab5.scope. Mar 17 18:46:13.450158 systemd[1]: Started cri-containerd-f528c4e645f4f579b4add43e0fce192b65984d44875bf0b425972724b670e1a3.scope. Mar 17 18:46:13.511497 kubelet[1736]: W0317 18:46:13.511339 1736 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.78:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.78:6443: connect: connection refused Mar 17 18:46:13.511497 kubelet[1736]: E0317 18:46:13.511417 1736 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.128.0.78:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.78:6443: connect: connection refused Mar 17 18:46:13.543736 env[1221]: time="2025-03-17T18:46:13.541936302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal,Uid:014415e314806c598284956477dd0de6,Namespace:kube-system,Attempt:0,} returns sandbox id \"311c4eeb1981b79ab29fc7ba589f7b806b8fd54fe67c75d10bb103de7d3baab5\"" Mar 17 18:46:13.547102 env[1221]: time="2025-03-17T18:46:13.547048908Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal,Uid:a954f8e8eff2206c6e54c0a6b495e88b,Namespace:kube-system,Attempt:0,} returns sandbox id \"f528c4e645f4f579b4add43e0fce192b65984d44875bf0b425972724b670e1a3\"" Mar 17 18:46:13.548647 kubelet[1736]: E0317 18:46:13.548035 1736 kubelet_pods.go:513] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-21291" Mar 17 18:46:13.553635 kubelet[1736]: E0317 18:46:13.553386 1736 kubelet_pods.go:513] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-3510-3-7-0a30c20bb7dd1d16f611.c.flat" Mar 17 18:46:13.556470 env[1221]: time="2025-03-17T18:46:13.556417676Z" level=info msg="CreateContainer within sandbox \"311c4eeb1981b79ab29fc7ba589f7b806b8fd54fe67c75d10bb103de7d3baab5\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 17 18:46:13.558611 env[1221]: time="2025-03-17T18:46:13.558561116Z" level=info msg="CreateContainer within sandbox \"f528c4e645f4f579b4add43e0fce192b65984d44875bf0b425972724b670e1a3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 17 18:46:13.595779 env[1221]: time="2025-03-17T18:46:13.595722536Z" level=info msg="CreateContainer within sandbox \"f528c4e645f4f579b4add43e0fce192b65984d44875bf0b425972724b670e1a3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2fc326da36e415f2d4dbb4d69a53c217cdb6ca0b14e9c1e569aa98f1101ccbd1\"" Mar 17 18:46:13.597046 env[1221]: time="2025-03-17T18:46:13.596984003Z" level=info msg="StartContainer for \"2fc326da36e415f2d4dbb4d69a53c217cdb6ca0b14e9c1e569aa98f1101ccbd1\"" Mar 17 18:46:13.599188 env[1221]: 
time="2025-03-17T18:46:13.599139316Z" level=info msg="CreateContainer within sandbox \"311c4eeb1981b79ab29fc7ba589f7b806b8fd54fe67c75d10bb103de7d3baab5\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d0d7daa90c73dbfc13b0df25f38fefbe5de27857a06ac2c12edafb5ba45c1c1c\"" Mar 17 18:46:13.600146 env[1221]: time="2025-03-17T18:46:13.600109973Z" level=info msg="StartContainer for \"d0d7daa90c73dbfc13b0df25f38fefbe5de27857a06ac2c12edafb5ba45c1c1c\"" Mar 17 18:46:13.609227 env[1221]: time="2025-03-17T18:46:13.609165073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal,Uid:b3e1492b7f7adb8556a7aeb47d8a1cc5,Namespace:kube-system,Attempt:0,} returns sandbox id \"534ebc204f6f2b5d4aedb10d5128b1191ea5bcb02df8ff77dd25578458abfe8b\"" Mar 17 18:46:13.612752 kubelet[1736]: E0317 18:46:13.612227 1736 kubelet_pods.go:513] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-21291" Mar 17 18:46:13.613873 env[1221]: time="2025-03-17T18:46:13.613824322Z" level=info msg="CreateContainer within sandbox \"534ebc204f6f2b5d4aedb10d5128b1191ea5bcb02df8ff77dd25578458abfe8b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 17 18:46:13.635386 env[1221]: time="2025-03-17T18:46:13.635317907Z" level=info msg="CreateContainer within sandbox \"534ebc204f6f2b5d4aedb10d5128b1191ea5bcb02df8ff77dd25578458abfe8b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d237e6f3fa2e534e9a6d5615c55aece107e5f88c73c5d5b7af7d4f094f1a3c7f\"" Mar 17 18:46:13.636613 env[1221]: time="2025-03-17T18:46:13.636550103Z" level=info msg="StartContainer for \"d237e6f3fa2e534e9a6d5615c55aece107e5f88c73c5d5b7af7d4f094f1a3c7f\"" Mar 17 18:46:13.639306 kubelet[1736]: W0317 18:46:13.639145 1736 
reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.78:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.78:6443: connect: connection refused Mar 17 18:46:13.639306 kubelet[1736]: E0317 18:46:13.639236 1736 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.128.0.78:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.78:6443: connect: connection refused Mar 17 18:46:13.642887 systemd[1]: Started cri-containerd-2fc326da36e415f2d4dbb4d69a53c217cdb6ca0b14e9c1e569aa98f1101ccbd1.scope. Mar 17 18:46:13.678480 systemd[1]: Started cri-containerd-d0d7daa90c73dbfc13b0df25f38fefbe5de27857a06ac2c12edafb5ba45c1c1c.scope. Mar 17 18:46:13.699023 systemd[1]: Started cri-containerd-d237e6f3fa2e534e9a6d5615c55aece107e5f88c73c5d5b7af7d4f094f1a3c7f.scope. Mar 17 18:46:13.713736 kubelet[1736]: W0317 18:46:13.713554 1736 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.78:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.78:6443: connect: connection refused Mar 17 18:46:13.713736 kubelet[1736]: E0317 18:46:13.713638 1736 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.128.0.78:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.78:6443: connect: connection refused Mar 17 18:46:13.783926 kubelet[1736]: W0317 18:46:13.783364 1736 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.78:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.78:6443: connect: connection refused Mar 17 18:46:13.783926 kubelet[1736]: E0317 
18:46:13.783481 1736 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.128.0.78:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.78:6443: connect: connection refused Mar 17 18:46:13.817185 env[1221]: time="2025-03-17T18:46:13.817022615Z" level=info msg="StartContainer for \"2fc326da36e415f2d4dbb4d69a53c217cdb6ca0b14e9c1e569aa98f1101ccbd1\" returns successfully" Mar 17 18:46:13.833215 env[1221]: time="2025-03-17T18:46:13.833149781Z" level=info msg="StartContainer for \"d0d7daa90c73dbfc13b0df25f38fefbe5de27857a06ac2c12edafb5ba45c1c1c\" returns successfully" Mar 17 18:46:13.838709 kubelet[1736]: E0317 18:46:13.838580 1736 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.78:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.78:6443: connect: connection refused" interval="1.6s" Mar 17 18:46:13.845213 env[1221]: time="2025-03-17T18:46:13.845151163Z" level=info msg="StartContainer for \"d237e6f3fa2e534e9a6d5615c55aece107e5f88c73c5d5b7af7d4f094f1a3c7f\" returns successfully" Mar 17 18:46:13.966191 kubelet[1736]: I0317 18:46:13.965522 1736 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal" Mar 17 18:46:13.966191 kubelet[1736]: E0317 18:46:13.966125 1736 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.78:6443/api/v1/nodes\": dial tcp 10.128.0.78:6443: connect: connection refused" node="ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal" Mar 17 18:46:15.571792 kubelet[1736]: I0317 18:46:15.571746 1736 kubelet_node_status.go:73] "Attempting to register node" 
node="ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal" Mar 17 18:46:17.105942 kubelet[1736]: E0317 18:46:17.105864 1736 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal\" not found" node="ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal" Mar 17 18:46:17.163883 kubelet[1736]: E0317 18:46:17.163668 1736 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal.182dab7a3f6a8926 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal,UID:ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal,},FirstTimestamp:2025-03-17 18:46:12.400556326 +0000 UTC m=+0.589377342,LastTimestamp:2025-03-17 18:46:12.400556326 +0000 UTC m=+0.589377342,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal,}" Mar 17 18:46:17.178809 kubelet[1736]: I0317 18:46:17.178748 1736 kubelet_node_status.go:76] "Successfully registered node" node="ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal" Mar 17 18:46:17.389913 kubelet[1736]: I0317 18:46:17.389866 1736 apiserver.go:52] "Watching apiserver" Mar 17 18:46:17.436411 kubelet[1736]: I0317 18:46:17.436370 1736 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 17 18:46:19.271191 systemd[1]: Reloading. 
Mar 17 18:46:19.417121 /usr/lib/systemd/system-generators/torcx-generator[2028]: time="2025-03-17T18:46:19Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Mar 17 18:46:19.417173 /usr/lib/systemd/system-generators/torcx-generator[2028]: time="2025-03-17T18:46:19Z" level=info msg="torcx already run" Mar 17 18:46:19.523777 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Mar 17 18:46:19.523806 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Mar 17 18:46:19.549513 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 18:46:19.719581 systemd[1]: Stopping kubelet.service... Mar 17 18:46:19.741413 systemd[1]: kubelet.service: Deactivated successfully. Mar 17 18:46:19.741717 systemd[1]: Stopped kubelet.service. Mar 17 18:46:19.741803 systemd[1]: kubelet.service: Consumed 1.118s CPU time. Mar 17 18:46:19.745350 systemd[1]: Starting kubelet.service... Mar 17 18:46:19.978463 systemd[1]: Started kubelet.service. Mar 17 18:46:20.068297 kubelet[2076]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 18:46:20.068897 kubelet[2076]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Mar 17 18:46:20.068897 kubelet[2076]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 18:46:20.069032 kubelet[2076]: I0317 18:46:20.068964 2076 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 17 18:46:20.078758 kubelet[2076]: I0317 18:46:20.078661 2076 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Mar 17 18:46:20.078758 kubelet[2076]: I0317 18:46:20.078718 2076 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 17 18:46:20.079153 kubelet[2076]: I0317 18:46:20.079111 2076 server.go:927] "Client rotation is on, will bootstrap in background" Mar 17 18:46:20.081549 kubelet[2076]: I0317 18:46:20.081513 2076 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Mar 17 18:46:20.085389 kubelet[2076]: I0317 18:46:20.085329 2076 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 18:46:20.097520 kubelet[2076]: I0317 18:46:20.097462 2076 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 17 18:46:20.097944 kubelet[2076]: I0317 18:46:20.097886 2076 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 18:46:20.098220 kubelet[2076]: I0317 18:46:20.097932 2076 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Mar 17 18:46:20.098396 kubelet[2076]: I0317 18:46:20.098233 2076 topology_manager.go:138] "Creating 
topology manager with none policy" Mar 17 18:46:20.098396 kubelet[2076]: I0317 18:46:20.098254 2076 container_manager_linux.go:301] "Creating device plugin manager" Mar 17 18:46:20.098396 kubelet[2076]: I0317 18:46:20.098331 2076 state_mem.go:36] "Initialized new in-memory state store" Mar 17 18:46:20.098571 kubelet[2076]: I0317 18:46:20.098476 2076 kubelet.go:400] "Attempting to sync node with API server" Mar 17 18:46:20.098571 kubelet[2076]: I0317 18:46:20.098497 2076 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 18:46:20.098571 kubelet[2076]: I0317 18:46:20.098531 2076 kubelet.go:312] "Adding apiserver pod source" Mar 17 18:46:20.098571 kubelet[2076]: I0317 18:46:20.098561 2076 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 17 18:46:20.100261 kubelet[2076]: I0317 18:46:20.100234 2076 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Mar 17 18:46:20.115381 kubelet[2076]: I0317 18:46:20.115315 2076 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 18:46:20.117572 kubelet[2076]: I0317 18:46:20.116024 2076 server.go:1264] "Started kubelet" Mar 17 18:46:20.119706 kubelet[2076]: I0317 18:46:20.119164 2076 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 18:46:20.134270 kubelet[2076]: I0317 18:46:20.133427 2076 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 18:46:20.135589 kubelet[2076]: I0317 18:46:20.135564 2076 volume_manager.go:291] "Starting Kubelet Volume Manager" Mar 17 18:46:20.137129 kubelet[2076]: I0317 18:46:20.137104 2076 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 17 18:46:20.137316 kubelet[2076]: I0317 18:46:20.135579 2076 server.go:455] "Adding debug handlers to kubelet server" Mar 17 18:46:20.139658 kubelet[2076]: I0317 18:46:20.139475 2076 reconciler.go:26] "Reconciler: start to sync 
state" Mar 17 18:46:20.139658 kubelet[2076]: I0317 18:46:20.135624 2076 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 18:46:20.139658 kubelet[2076]: I0317 18:46:20.140330 2076 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 18:46:20.143622 kubelet[2076]: I0317 18:46:20.143594 2076 factory.go:221] Registration of the systemd container factory successfully Mar 17 18:46:20.143833 kubelet[2076]: I0317 18:46:20.143755 2076 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 18:46:20.149711 kubelet[2076]: I0317 18:46:20.149638 2076 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 17 18:46:20.153847 kubelet[2076]: I0317 18:46:20.153809 2076 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 17 18:46:20.154223 kubelet[2076]: I0317 18:46:20.154203 2076 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 17 18:46:20.154376 kubelet[2076]: I0317 18:46:20.154356 2076 kubelet.go:2337] "Starting kubelet main sync loop" Mar 17 18:46:20.154762 kubelet[2076]: E0317 18:46:20.154660 2076 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 17 18:46:20.157248 kubelet[2076]: I0317 18:46:20.157160 2076 factory.go:221] Registration of the containerd container factory successfully Mar 17 18:46:20.168600 kubelet[2076]: E0317 18:46:20.167915 2076 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 17 18:46:20.235009 kubelet[2076]: I0317 18:46:20.233480 2076 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 17 18:46:20.235009 kubelet[2076]: I0317 18:46:20.233508 2076 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 17 18:46:20.235009 kubelet[2076]: I0317 18:46:20.233545 2076 state_mem.go:36] "Initialized new in-memory state store" Mar 17 18:46:20.235009 kubelet[2076]: I0317 18:46:20.233827 2076 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 17 18:46:20.235009 kubelet[2076]: I0317 18:46:20.233845 2076 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 17 18:46:20.235009 kubelet[2076]: I0317 18:46:20.233877 2076 policy_none.go:49] "None policy: Start" Mar 17 18:46:20.235873 kubelet[2076]: I0317 18:46:20.235837 2076 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 17 18:46:20.235873 kubelet[2076]: I0317 18:46:20.235872 2076 state_mem.go:35] "Initializing new in-memory state store" Mar 17 18:46:20.236118 kubelet[2076]: I0317 18:46:20.236097 2076 state_mem.go:75] "Updated machine memory state" Mar 17 18:46:20.248359 kubelet[2076]: I0317 18:46:20.245585 2076 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal" Mar 17 18:46:20.257396 kubelet[2076]: I0317 18:46:20.257360 2076 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 18:46:20.257938 kubelet[2076]: I0317 18:46:20.257856 2076 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 18:46:20.258333 kubelet[2076]: I0317 18:46:20.258284 2076 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 18:46:20.258729 kubelet[2076]: I0317 18:46:20.258663 2076 topology_manager.go:215] "Topology Admit Handler" podUID="a954f8e8eff2206c6e54c0a6b495e88b" 
podNamespace="kube-system" podName="kube-controller-manager-ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal" Mar 17 18:46:20.258859 kubelet[2076]: I0317 18:46:20.258826 2076 topology_manager.go:215] "Topology Admit Handler" podUID="b3e1492b7f7adb8556a7aeb47d8a1cc5" podNamespace="kube-system" podName="kube-scheduler-ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal" Mar 17 18:46:20.258934 kubelet[2076]: I0317 18:46:20.258920 2076 topology_manager.go:215] "Topology Admit Handler" podUID="014415e314806c598284956477dd0de6" podNamespace="kube-system" podName="kube-apiserver-ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal" Mar 17 18:46:20.271844 kubelet[2076]: I0317 18:46:20.271805 2076 kubelet_node_status.go:112] "Node was previously registered" node="ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal" Mar 17 18:46:20.272045 kubelet[2076]: I0317 18:46:20.271916 2076 kubelet_node_status.go:76] "Successfully registered node" node="ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal" Mar 17 18:46:20.272181 kubelet[2076]: W0317 18:46:20.272143 2076 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Mar 17 18:46:20.275336 kubelet[2076]: E0317 18:46:20.275292 2076 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-apiserver-ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal" Mar 17 18:46:20.283509 sudo[2106]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Mar 17 18:46:20.284749 sudo[2106]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Mar 17 18:46:20.295476 kubelet[2076]: W0317 18:46:20.294509 2076 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in 
surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Mar 17 18:46:20.295476 kubelet[2076]: W0317 18:46:20.294876 2076 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Mar 17 18:46:20.340934 kubelet[2076]: I0317 18:46:20.339805 2076 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a954f8e8eff2206c6e54c0a6b495e88b-ca-certs\") pod \"kube-controller-manager-ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal\" (UID: \"a954f8e8eff2206c6e54c0a6b495e88b\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal" Mar 17 18:46:20.340934 kubelet[2076]: I0317 18:46:20.339867 2076 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a954f8e8eff2206c6e54c0a6b495e88b-k8s-certs\") pod \"kube-controller-manager-ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal\" (UID: \"a954f8e8eff2206c6e54c0a6b495e88b\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal" Mar 17 18:46:20.340934 kubelet[2076]: I0317 18:46:20.339901 2076 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a954f8e8eff2206c6e54c0a6b495e88b-kubeconfig\") pod \"kube-controller-manager-ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal\" (UID: \"a954f8e8eff2206c6e54c0a6b495e88b\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal" Mar 17 18:46:20.340934 kubelet[2076]: I0317 18:46:20.339936 2076 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a954f8e8eff2206c6e54c0a6b495e88b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal\" (UID: \"a954f8e8eff2206c6e54c0a6b495e88b\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal" Mar 17 18:46:20.341328 kubelet[2076]: I0317 18:46:20.339970 2076 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/014415e314806c598284956477dd0de6-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal\" (UID: \"014415e314806c598284956477dd0de6\") " pod="kube-system/kube-apiserver-ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal" Mar 17 18:46:20.341328 kubelet[2076]: I0317 18:46:20.340003 2076 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a954f8e8eff2206c6e54c0a6b495e88b-flexvolume-dir\") pod \"kube-controller-manager-ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal\" (UID: \"a954f8e8eff2206c6e54c0a6b495e88b\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal" Mar 17 18:46:20.341328 kubelet[2076]: I0317 18:46:20.340063 2076 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b3e1492b7f7adb8556a7aeb47d8a1cc5-kubeconfig\") pod \"kube-scheduler-ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal\" (UID: \"b3e1492b7f7adb8556a7aeb47d8a1cc5\") " pod="kube-system/kube-scheduler-ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal" Mar 17 18:46:20.341328 kubelet[2076]: I0317 18:46:20.340093 2076 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/014415e314806c598284956477dd0de6-ca-certs\") pod \"kube-apiserver-ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal\" (UID: \"014415e314806c598284956477dd0de6\") " pod="kube-system/kube-apiserver-ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal" Mar 17 18:46:20.341546 kubelet[2076]: I0317 18:46:20.340123 2076 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/014415e314806c598284956477dd0de6-k8s-certs\") pod \"kube-apiserver-ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal\" (UID: \"014415e314806c598284956477dd0de6\") " pod="kube-system/kube-apiserver-ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal" Mar 17 18:46:21.059891 sudo[2106]: pam_unix(sudo:session): session closed for user root Mar 17 18:46:21.116811 kubelet[2076]: I0317 18:46:21.116771 2076 apiserver.go:52] "Watching apiserver" Mar 17 18:46:21.137551 kubelet[2076]: I0317 18:46:21.137501 2076 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 17 18:46:21.217900 kubelet[2076]: W0317 18:46:21.216950 2076 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Mar 17 18:46:21.217900 kubelet[2076]: E0317 18:46:21.217040 2076 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-apiserver-ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal" Mar 17 18:46:21.252725 kubelet[2076]: I0317 18:46:21.252623 2076 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-controller-manager-ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal" podStartSLOduration=1.252595689 podStartE2EDuration="1.252595689s" podCreationTimestamp="2025-03-17 18:46:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:46:21.240521942 +0000 UTC m=+1.253972488" watchObservedRunningTime="2025-03-17 18:46:21.252595689 +0000 UTC m=+1.266046203" Mar 17 18:46:21.253023 kubelet[2076]: I0317 18:46:21.252809 2076 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal" podStartSLOduration=1.252799979 podStartE2EDuration="1.252799979s" podCreationTimestamp="2025-03-17 18:46:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:46:21.250229121 +0000 UTC m=+1.263679648" watchObservedRunningTime="2025-03-17 18:46:21.252799979 +0000 UTC m=+1.266250508" Mar 17 18:46:21.266665 kubelet[2076]: I0317 18:46:21.266573 2076 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal" podStartSLOduration=2.266543544 podStartE2EDuration="2.266543544s" podCreationTimestamp="2025-03-17 18:46:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:46:21.263822859 +0000 UTC m=+1.277273373" watchObservedRunningTime="2025-03-17 18:46:21.266543544 +0000 UTC m=+1.279994052" Mar 17 18:46:23.528338 sudo[1403]: pam_unix(sudo:session): session closed for user root Mar 17 18:46:23.571361 sshd[1400]: pam_unix(sshd:session): session closed for user core Mar 17 18:46:23.576481 systemd-logind[1207]: Session 5 logged out. Waiting for processes to exit. 
Mar 17 18:46:23.576851 systemd[1]: sshd@4-10.128.0.78:22-139.178.89.65:55366.service: Deactivated successfully. Mar 17 18:46:23.578080 systemd[1]: session-5.scope: Deactivated successfully. Mar 17 18:46:23.578318 systemd[1]: session-5.scope: Consumed 7.088s CPU time. Mar 17 18:46:23.579637 systemd-logind[1207]: Removed session 5. Mar 17 18:46:26.243006 update_engine[1209]: I0317 18:46:26.242930 1209 update_attempter.cc:509] Updating boot flags... Mar 17 18:46:34.646035 kubelet[2076]: I0317 18:46:34.645992 2076 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 17 18:46:34.647354 env[1221]: time="2025-03-17T18:46:34.647303416Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 17 18:46:34.648195 kubelet[2076]: I0317 18:46:34.648157 2076 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 17 18:46:34.911094 kubelet[2076]: I0317 18:46:34.910928 2076 topology_manager.go:215] "Topology Admit Handler" podUID="5f579173-ffe8-498c-8fd0-724601cc8a41" podNamespace="kube-system" podName="kube-proxy-6sfrh" Mar 17 18:46:34.919063 systemd[1]: Created slice kubepods-besteffort-pod5f579173_ffe8_498c_8fd0_724601cc8a41.slice. 
Mar 17 18:46:34.930799 kubelet[2076]: W0317 18:46:34.930755 2076 reflector.go:547] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal' and this object Mar 17 18:46:34.931042 kubelet[2076]: E0317 18:46:34.930813 2076 reflector.go:150] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal' and this object Mar 17 18:46:34.939879 kubelet[2076]: I0317 18:46:34.939821 2076 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkt6d\" (UniqueName: \"kubernetes.io/projected/5f579173-ffe8-498c-8fd0-724601cc8a41-kube-api-access-jkt6d\") pod \"kube-proxy-6sfrh\" (UID: \"5f579173-ffe8-498c-8fd0-724601cc8a41\") " pod="kube-system/kube-proxy-6sfrh" Mar 17 18:46:34.939879 kubelet[2076]: I0317 18:46:34.939885 2076 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5f579173-ffe8-498c-8fd0-724601cc8a41-xtables-lock\") pod \"kube-proxy-6sfrh\" (UID: \"5f579173-ffe8-498c-8fd0-724601cc8a41\") " pod="kube-system/kube-proxy-6sfrh" Mar 17 18:46:34.940176 kubelet[2076]: I0317 18:46:34.939911 2076 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/5f579173-ffe8-498c-8fd0-724601cc8a41-lib-modules\") pod \"kube-proxy-6sfrh\" (UID: \"5f579173-ffe8-498c-8fd0-724601cc8a41\") " pod="kube-system/kube-proxy-6sfrh" Mar 17 18:46:34.940176 kubelet[2076]: I0317 18:46:34.939937 2076 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5f579173-ffe8-498c-8fd0-724601cc8a41-kube-proxy\") pod \"kube-proxy-6sfrh\" (UID: \"5f579173-ffe8-498c-8fd0-724601cc8a41\") " pod="kube-system/kube-proxy-6sfrh" Mar 17 18:46:34.944348 kubelet[2076]: I0317 18:46:34.944298 2076 topology_manager.go:215] "Topology Admit Handler" podUID="6919bef5-1c5f-4605-bbce-bf53f5124720" podNamespace="kube-system" podName="cilium-7kfln" Mar 17 18:46:34.952999 systemd[1]: Created slice kubepods-burstable-pod6919bef5_1c5f_4605_bbce_bf53f5124720.slice. Mar 17 18:46:34.967695 kubelet[2076]: W0317 18:46:34.967626 2076 reflector.go:547] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal' and this object Mar 17 18:46:34.968149 kubelet[2076]: E0317 18:46:34.968105 2076 reflector.go:150] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal' and this object Mar 17 18:46:35.040467 kubelet[2076]: I0317 18:46:35.040391 2076 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6919bef5-1c5f-4605-bbce-bf53f5124720-cni-path\") pod \"cilium-7kfln\" (UID: \"6919bef5-1c5f-4605-bbce-bf53f5124720\") " pod="kube-system/cilium-7kfln" Mar 17 18:46:35.040467 kubelet[2076]: I0317 18:46:35.040463 2076 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6919bef5-1c5f-4605-bbce-bf53f5124720-cilium-config-path\") pod \"cilium-7kfln\" (UID: \"6919bef5-1c5f-4605-bbce-bf53f5124720\") " pod="kube-system/cilium-7kfln" Mar 17 18:46:35.040870 kubelet[2076]: I0317 18:46:35.040580 2076 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6919bef5-1c5f-4605-bbce-bf53f5124720-host-proc-sys-kernel\") pod \"cilium-7kfln\" (UID: \"6919bef5-1c5f-4605-bbce-bf53f5124720\") " pod="kube-system/cilium-7kfln" Mar 17 18:46:35.040870 kubelet[2076]: I0317 18:46:35.040638 2076 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6919bef5-1c5f-4605-bbce-bf53f5124720-etc-cni-netd\") pod \"cilium-7kfln\" (UID: \"6919bef5-1c5f-4605-bbce-bf53f5124720\") " pod="kube-system/cilium-7kfln" Mar 17 18:46:35.040870 kubelet[2076]: I0317 18:46:35.040667 2076 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6919bef5-1c5f-4605-bbce-bf53f5124720-host-proc-sys-net\") pod \"cilium-7kfln\" (UID: \"6919bef5-1c5f-4605-bbce-bf53f5124720\") " pod="kube-system/cilium-7kfln" Mar 17 18:46:35.040870 kubelet[2076]: I0317 18:46:35.040720 2076 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77spc\" (UniqueName: 
\"kubernetes.io/projected/6919bef5-1c5f-4605-bbce-bf53f5124720-kube-api-access-77spc\") pod \"cilium-7kfln\" (UID: \"6919bef5-1c5f-4605-bbce-bf53f5124720\") " pod="kube-system/cilium-7kfln" Mar 17 18:46:35.040870 kubelet[2076]: I0317 18:46:35.040749 2076 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6919bef5-1c5f-4605-bbce-bf53f5124720-xtables-lock\") pod \"cilium-7kfln\" (UID: \"6919bef5-1c5f-4605-bbce-bf53f5124720\") " pod="kube-system/cilium-7kfln" Mar 17 18:46:35.041163 kubelet[2076]: I0317 18:46:35.040775 2076 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6919bef5-1c5f-4605-bbce-bf53f5124720-hubble-tls\") pod \"cilium-7kfln\" (UID: \"6919bef5-1c5f-4605-bbce-bf53f5124720\") " pod="kube-system/cilium-7kfln" Mar 17 18:46:35.041163 kubelet[2076]: I0317 18:46:35.040859 2076 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6919bef5-1c5f-4605-bbce-bf53f5124720-cilium-run\") pod \"cilium-7kfln\" (UID: \"6919bef5-1c5f-4605-bbce-bf53f5124720\") " pod="kube-system/cilium-7kfln" Mar 17 18:46:35.041163 kubelet[2076]: I0317 18:46:35.040889 2076 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6919bef5-1c5f-4605-bbce-bf53f5124720-clustermesh-secrets\") pod \"cilium-7kfln\" (UID: \"6919bef5-1c5f-4605-bbce-bf53f5124720\") " pod="kube-system/cilium-7kfln" Mar 17 18:46:35.041163 kubelet[2076]: I0317 18:46:35.040931 2076 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6919bef5-1c5f-4605-bbce-bf53f5124720-hostproc\") pod \"cilium-7kfln\" (UID: 
\"6919bef5-1c5f-4605-bbce-bf53f5124720\") " pod="kube-system/cilium-7kfln" Mar 17 18:46:35.041163 kubelet[2076]: I0317 18:46:35.040958 2076 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6919bef5-1c5f-4605-bbce-bf53f5124720-cilium-cgroup\") pod \"cilium-7kfln\" (UID: \"6919bef5-1c5f-4605-bbce-bf53f5124720\") " pod="kube-system/cilium-7kfln" Mar 17 18:46:35.041163 kubelet[2076]: I0317 18:46:35.040988 2076 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6919bef5-1c5f-4605-bbce-bf53f5124720-bpf-maps\") pod \"cilium-7kfln\" (UID: \"6919bef5-1c5f-4605-bbce-bf53f5124720\") " pod="kube-system/cilium-7kfln" Mar 17 18:46:35.041535 kubelet[2076]: I0317 18:46:35.041033 2076 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6919bef5-1c5f-4605-bbce-bf53f5124720-lib-modules\") pod \"cilium-7kfln\" (UID: \"6919bef5-1c5f-4605-bbce-bf53f5124720\") " pod="kube-system/cilium-7kfln" Mar 17 18:46:35.137390 kubelet[2076]: I0317 18:46:35.137337 2076 topology_manager.go:215] "Topology Admit Handler" podUID="e0d16c84-2526-4ff9-8c33-9061d637f468" podNamespace="kube-system" podName="cilium-operator-599987898-qlcc4" Mar 17 18:46:35.155524 systemd[1]: Created slice kubepods-besteffort-pode0d16c84_2526_4ff9_8c33_9061d637f468.slice. 
Mar 17 18:46:35.243229 kubelet[2076]: I0317 18:46:35.243059 2076 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e0d16c84-2526-4ff9-8c33-9061d637f468-cilium-config-path\") pod \"cilium-operator-599987898-qlcc4\" (UID: \"e0d16c84-2526-4ff9-8c33-9061d637f468\") " pod="kube-system/cilium-operator-599987898-qlcc4" Mar 17 18:46:35.243229 kubelet[2076]: I0317 18:46:35.243171 2076 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rql75\" (UniqueName: \"kubernetes.io/projected/e0d16c84-2526-4ff9-8c33-9061d637f468-kube-api-access-rql75\") pod \"cilium-operator-599987898-qlcc4\" (UID: \"e0d16c84-2526-4ff9-8c33-9061d637f468\") " pod="kube-system/cilium-operator-599987898-qlcc4" Mar 17 18:46:36.051324 kubelet[2076]: E0317 18:46:36.051251 2076 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Mar 17 18:46:36.051324 kubelet[2076]: E0317 18:46:36.051316 2076 projected.go:200] Error preparing data for projected volume kube-api-access-jkt6d for pod kube-system/kube-proxy-6sfrh: failed to sync configmap cache: timed out waiting for the condition Mar 17 18:46:36.052053 kubelet[2076]: E0317 18:46:36.051430 2076 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5f579173-ffe8-498c-8fd0-724601cc8a41-kube-api-access-jkt6d podName:5f579173-ffe8-498c-8fd0-724601cc8a41 nodeName:}" failed. No retries permitted until 2025-03-17 18:46:36.55139741 +0000 UTC m=+16.564847907 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-jkt6d" (UniqueName: "kubernetes.io/projected/5f579173-ffe8-498c-8fd0-724601cc8a41-kube-api-access-jkt6d") pod "kube-proxy-6sfrh" (UID: "5f579173-ffe8-498c-8fd0-724601cc8a41") : failed to sync configmap cache: timed out waiting for the condition Mar 17 18:46:36.203969 kubelet[2076]: E0317 18:46:36.203898 2076 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Mar 17 18:46:36.203969 kubelet[2076]: E0317 18:46:36.203963 2076 projected.go:200] Error preparing data for projected volume kube-api-access-77spc for pod kube-system/cilium-7kfln: failed to sync configmap cache: timed out waiting for the condition Mar 17 18:46:36.204296 kubelet[2076]: E0317 18:46:36.204071 2076 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6919bef5-1c5f-4605-bbce-bf53f5124720-kube-api-access-77spc podName:6919bef5-1c5f-4605-bbce-bf53f5124720 nodeName:}" failed. No retries permitted until 2025-03-17 18:46:36.704042847 +0000 UTC m=+16.717493368 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-77spc" (UniqueName: "kubernetes.io/projected/6919bef5-1c5f-4605-bbce-bf53f5124720-kube-api-access-77spc") pod "cilium-7kfln" (UID: "6919bef5-1c5f-4605-bbce-bf53f5124720") : failed to sync configmap cache: timed out waiting for the condition Mar 17 18:46:36.374027 env[1221]: time="2025-03-17T18:46:36.373965811Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-qlcc4,Uid:e0d16c84-2526-4ff9-8c33-9061d637f468,Namespace:kube-system,Attempt:0,}" Mar 17 18:46:36.407784 env[1221]: time="2025-03-17T18:46:36.407598956Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:46:36.408109 env[1221]: time="2025-03-17T18:46:36.408046615Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:46:36.408287 env[1221]: time="2025-03-17T18:46:36.408230824Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:46:36.409413 env[1221]: time="2025-03-17T18:46:36.408760882Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/cfcd8a236b6389db8ae0f79355393b04b17e1144ac4a884700612f894fbfea04 pid=2178 runtime=io.containerd.runc.v2 Mar 17 18:46:36.440708 systemd[1]: run-containerd-runc-k8s.io-cfcd8a236b6389db8ae0f79355393b04b17e1144ac4a884700612f894fbfea04-runc.aJry55.mount: Deactivated successfully. Mar 17 18:46:36.446635 systemd[1]: Started cri-containerd-cfcd8a236b6389db8ae0f79355393b04b17e1144ac4a884700612f894fbfea04.scope. Mar 17 18:46:36.518808 env[1221]: time="2025-03-17T18:46:36.518736954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-qlcc4,Uid:e0d16c84-2526-4ff9-8c33-9061d637f468,Namespace:kube-system,Attempt:0,} returns sandbox id \"cfcd8a236b6389db8ae0f79355393b04b17e1144ac4a884700612f894fbfea04\"" Mar 17 18:46:36.525247 env[1221]: time="2025-03-17T18:46:36.525187724Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 17 18:46:36.728896 env[1221]: time="2025-03-17T18:46:36.728724191Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6sfrh,Uid:5f579173-ffe8-498c-8fd0-724601cc8a41,Namespace:kube-system,Attempt:0,}" Mar 17 18:46:36.756214 env[1221]: time="2025-03-17T18:46:36.756045394Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:46:36.756214 env[1221]: time="2025-03-17T18:46:36.756151181Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:46:36.756576 env[1221]: time="2025-03-17T18:46:36.756175004Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:46:36.756576 env[1221]: time="2025-03-17T18:46:36.756516756Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/564456138f796478eb1aed71f84a43788188909aaaaaf07db1e62f177709ddbf pid=2223 runtime=io.containerd.runc.v2 Mar 17 18:46:36.775636 systemd[1]: Started cri-containerd-564456138f796478eb1aed71f84a43788188909aaaaaf07db1e62f177709ddbf.scope. Mar 17 18:46:36.830425 env[1221]: time="2025-03-17T18:46:36.830361231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6sfrh,Uid:5f579173-ffe8-498c-8fd0-724601cc8a41,Namespace:kube-system,Attempt:0,} returns sandbox id \"564456138f796478eb1aed71f84a43788188909aaaaaf07db1e62f177709ddbf\"" Mar 17 18:46:36.834921 env[1221]: time="2025-03-17T18:46:36.834803990Z" level=info msg="CreateContainer within sandbox \"564456138f796478eb1aed71f84a43788188909aaaaaf07db1e62f177709ddbf\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 17 18:46:36.857375 env[1221]: time="2025-03-17T18:46:36.857291547Z" level=info msg="CreateContainer within sandbox \"564456138f796478eb1aed71f84a43788188909aaaaaf07db1e62f177709ddbf\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8b95cdda7eb9d24df903ffe9589d3f0c21411e9d0e1168940548ca75c44f1f9e\"" Mar 17 18:46:36.860392 env[1221]: time="2025-03-17T18:46:36.858984899Z" level=info msg="StartContainer for \"8b95cdda7eb9d24df903ffe9589d3f0c21411e9d0e1168940548ca75c44f1f9e\"" Mar 17 18:46:36.889598 systemd[1]: Started 
cri-containerd-8b95cdda7eb9d24df903ffe9589d3f0c21411e9d0e1168940548ca75c44f1f9e.scope. Mar 17 18:46:36.937112 env[1221]: time="2025-03-17T18:46:36.937041026Z" level=info msg="StartContainer for \"8b95cdda7eb9d24df903ffe9589d3f0c21411e9d0e1168940548ca75c44f1f9e\" returns successfully" Mar 17 18:46:37.058991 env[1221]: time="2025-03-17T18:46:37.058823266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7kfln,Uid:6919bef5-1c5f-4605-bbce-bf53f5124720,Namespace:kube-system,Attempt:0,}" Mar 17 18:46:37.083226 env[1221]: time="2025-03-17T18:46:37.083038140Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:46:37.083463 env[1221]: time="2025-03-17T18:46:37.083240637Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:46:37.083463 env[1221]: time="2025-03-17T18:46:37.083283032Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:46:37.083778 env[1221]: time="2025-03-17T18:46:37.083641123Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a7ca1a2de015f3f22913b13a0382b50523599a32401550fc3cbb83ae5731e51f pid=2331 runtime=io.containerd.runc.v2 Mar 17 18:46:37.104215 systemd[1]: Started cri-containerd-a7ca1a2de015f3f22913b13a0382b50523599a32401550fc3cbb83ae5731e51f.scope. 
Mar 17 18:46:37.148292 env[1221]: time="2025-03-17T18:46:37.148041146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7kfln,Uid:6919bef5-1c5f-4605-bbce-bf53f5124720,Namespace:kube-system,Attempt:0,} returns sandbox id \"a7ca1a2de015f3f22913b13a0382b50523599a32401550fc3cbb83ae5731e51f\"" Mar 17 18:46:37.252987 kubelet[2076]: I0317 18:46:37.252891 2076 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-6sfrh" podStartSLOduration=3.252865678 podStartE2EDuration="3.252865678s" podCreationTimestamp="2025-03-17 18:46:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:46:37.251788241 +0000 UTC m=+17.265238774" watchObservedRunningTime="2025-03-17 18:46:37.252865678 +0000 UTC m=+17.266316194" Mar 17 18:46:37.615939 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3009077217.mount: Deactivated successfully. Mar 17 18:46:40.916498 env[1221]: time="2025-03-17T18:46:40.916406649Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:46:40.920293 env[1221]: time="2025-03-17T18:46:40.920155761Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:46:40.923119 env[1221]: time="2025-03-17T18:46:40.923053792Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:46:40.924078 env[1221]: time="2025-03-17T18:46:40.924020984Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Mar 17 18:46:40.928166 env[1221]: time="2025-03-17T18:46:40.926949208Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 17 18:46:40.930112 env[1221]: time="2025-03-17T18:46:40.930052726Z" level=info msg="CreateContainer within sandbox \"cfcd8a236b6389db8ae0f79355393b04b17e1144ac4a884700612f894fbfea04\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Mar 17 18:46:40.959867 env[1221]: time="2025-03-17T18:46:40.959795367Z" level=info msg="CreateContainer within sandbox \"cfcd8a236b6389db8ae0f79355393b04b17e1144ac4a884700612f894fbfea04\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"3018931a4cfd1272c25e2a7f6106c64f6325c9c694db64342596ba469278dd92\"" Mar 17 18:46:40.962286 env[1221]: time="2025-03-17T18:46:40.960801271Z" level=info msg="StartContainer for \"3018931a4cfd1272c25e2a7f6106c64f6325c9c694db64342596ba469278dd92\"" Mar 17 18:46:41.010236 systemd[1]: Started cri-containerd-3018931a4cfd1272c25e2a7f6106c64f6325c9c694db64342596ba469278dd92.scope. Mar 17 18:46:41.053461 env[1221]: time="2025-03-17T18:46:41.053323892Z" level=info msg="StartContainer for \"3018931a4cfd1272c25e2a7f6106c64f6325c9c694db64342596ba469278dd92\" returns successfully" Mar 17 18:46:41.945642 systemd[1]: run-containerd-runc-k8s.io-3018931a4cfd1272c25e2a7f6106c64f6325c9c694db64342596ba469278dd92-runc.AgjHVi.mount: Deactivated successfully. Mar 17 18:46:47.415796 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2254672977.mount: Deactivated successfully. 
Mar 17 18:46:50.984774 env[1221]: time="2025-03-17T18:46:50.984673341Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:46:50.994186 env[1221]: time="2025-03-17T18:46:50.994112258Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:46:50.996862 env[1221]: time="2025-03-17T18:46:50.996811701Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:46:50.998141 env[1221]: time="2025-03-17T18:46:50.998094613Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Mar 17 18:46:51.003367 env[1221]: time="2025-03-17T18:46:51.003316800Z" level=info msg="CreateContainer within sandbox \"a7ca1a2de015f3f22913b13a0382b50523599a32401550fc3cbb83ae5731e51f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 17 18:46:51.025504 env[1221]: time="2025-03-17T18:46:51.025429762Z" level=info msg="CreateContainer within sandbox \"a7ca1a2de015f3f22913b13a0382b50523599a32401550fc3cbb83ae5731e51f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"535f7b5c149b5c3a3e6817d9384b1ac436285989272a0c71c1f7a173fbcbedc8\"" Mar 17 18:46:51.027939 env[1221]: time="2025-03-17T18:46:51.026378295Z" level=info msg="StartContainer for \"535f7b5c149b5c3a3e6817d9384b1ac436285989272a0c71c1f7a173fbcbedc8\"" Mar 17 18:46:51.068242 systemd[1]: Started 
cri-containerd-535f7b5c149b5c3a3e6817d9384b1ac436285989272a0c71c1f7a173fbcbedc8.scope. Mar 17 18:46:51.115859 env[1221]: time="2025-03-17T18:46:51.115792783Z" level=info msg="StartContainer for \"535f7b5c149b5c3a3e6817d9384b1ac436285989272a0c71c1f7a173fbcbedc8\" returns successfully" Mar 17 18:46:51.133697 systemd[1]: cri-containerd-535f7b5c149b5c3a3e6817d9384b1ac436285989272a0c71c1f7a173fbcbedc8.scope: Deactivated successfully. Mar 17 18:46:51.330246 kubelet[2076]: I0317 18:46:51.304869 2076 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-qlcc4" podStartSLOduration=11.900688052 podStartE2EDuration="16.304839854s" podCreationTimestamp="2025-03-17 18:46:35 +0000 UTC" firstStartedPulling="2025-03-17 18:46:36.52141714 +0000 UTC m=+16.534867630" lastFinishedPulling="2025-03-17 18:46:40.925568917 +0000 UTC m=+20.939019432" observedRunningTime="2025-03-17 18:46:41.330791338 +0000 UTC m=+21.344241855" watchObservedRunningTime="2025-03-17 18:46:51.304839854 +0000 UTC m=+31.318290373" Mar 17 18:46:52.016541 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-535f7b5c149b5c3a3e6817d9384b1ac436285989272a0c71c1f7a173fbcbedc8-rootfs.mount: Deactivated successfully. 
Mar 17 18:46:53.201041 env[1221]: time="2025-03-17T18:46:53.200946294Z" level=info msg="shim disconnected" id=535f7b5c149b5c3a3e6817d9384b1ac436285989272a0c71c1f7a173fbcbedc8 Mar 17 18:46:53.201041 env[1221]: time="2025-03-17T18:46:53.201037044Z" level=warning msg="cleaning up after shim disconnected" id=535f7b5c149b5c3a3e6817d9384b1ac436285989272a0c71c1f7a173fbcbedc8 namespace=k8s.io Mar 17 18:46:53.201761 env[1221]: time="2025-03-17T18:46:53.201052628Z" level=info msg="cleaning up dead shim" Mar 17 18:46:53.214530 env[1221]: time="2025-03-17T18:46:53.214457114Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:46:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2545 runtime=io.containerd.runc.v2\n" Mar 17 18:46:53.293661 env[1221]: time="2025-03-17T18:46:53.293585697Z" level=info msg="CreateContainer within sandbox \"a7ca1a2de015f3f22913b13a0382b50523599a32401550fc3cbb83ae5731e51f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 17 18:46:53.317400 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1971227158.mount: Deactivated successfully. Mar 17 18:46:53.333008 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2319548435.mount: Deactivated successfully. Mar 17 18:46:53.338600 env[1221]: time="2025-03-17T18:46:53.338537668Z" level=info msg="CreateContainer within sandbox \"a7ca1a2de015f3f22913b13a0382b50523599a32401550fc3cbb83ae5731e51f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"23e758fbda24046c64415af2ec4676715d98556a2930f2871a1c4ccad65fe802\"" Mar 17 18:46:53.339622 env[1221]: time="2025-03-17T18:46:53.339578792Z" level=info msg="StartContainer for \"23e758fbda24046c64415af2ec4676715d98556a2930f2871a1c4ccad65fe802\"" Mar 17 18:46:53.376359 systemd[1]: Started cri-containerd-23e758fbda24046c64415af2ec4676715d98556a2930f2871a1c4ccad65fe802.scope. 
Mar 17 18:46:53.426824 env[1221]: time="2025-03-17T18:46:53.426759505Z" level=info msg="StartContainer for \"23e758fbda24046c64415af2ec4676715d98556a2930f2871a1c4ccad65fe802\" returns successfully" Mar 17 18:46:53.441313 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 17 18:46:53.443288 systemd[1]: Stopped systemd-sysctl.service. Mar 17 18:46:53.443711 systemd[1]: Stopping systemd-sysctl.service... Mar 17 18:46:53.446236 systemd[1]: Starting systemd-sysctl.service... Mar 17 18:46:53.453308 systemd[1]: cri-containerd-23e758fbda24046c64415af2ec4676715d98556a2930f2871a1c4ccad65fe802.scope: Deactivated successfully. Mar 17 18:46:53.468247 systemd[1]: Finished systemd-sysctl.service. Mar 17 18:46:53.491936 env[1221]: time="2025-03-17T18:46:53.491859546Z" level=info msg="shim disconnected" id=23e758fbda24046c64415af2ec4676715d98556a2930f2871a1c4ccad65fe802 Mar 17 18:46:53.491936 env[1221]: time="2025-03-17T18:46:53.491931915Z" level=warning msg="cleaning up after shim disconnected" id=23e758fbda24046c64415af2ec4676715d98556a2930f2871a1c4ccad65fe802 namespace=k8s.io Mar 17 18:46:53.491936 env[1221]: time="2025-03-17T18:46:53.491947614Z" level=info msg="cleaning up dead shim" Mar 17 18:46:53.504307 env[1221]: time="2025-03-17T18:46:53.504241629Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:46:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2610 runtime=io.containerd.runc.v2\n" Mar 17 18:46:54.296881 env[1221]: time="2025-03-17T18:46:54.296618228Z" level=info msg="CreateContainer within sandbox \"a7ca1a2de015f3f22913b13a0382b50523599a32401550fc3cbb83ae5731e51f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 17 18:46:54.310554 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-23e758fbda24046c64415af2ec4676715d98556a2930f2871a1c4ccad65fe802-rootfs.mount: Deactivated successfully. 
Mar 17 18:46:54.333659 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3435401914.mount: Deactivated successfully. Mar 17 18:46:54.339793 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1747477754.mount: Deactivated successfully. Mar 17 18:46:54.340522 env[1221]: time="2025-03-17T18:46:54.340424478Z" level=info msg="CreateContainer within sandbox \"a7ca1a2de015f3f22913b13a0382b50523599a32401550fc3cbb83ae5731e51f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e67af14b33aea1b313d8096225f25d2403fd04955d90351a2499fb5205d1bd01\"" Mar 17 18:46:54.342717 env[1221]: time="2025-03-17T18:46:54.341802969Z" level=info msg="StartContainer for \"e67af14b33aea1b313d8096225f25d2403fd04955d90351a2499fb5205d1bd01\"" Mar 17 18:46:54.379114 systemd[1]: Started cri-containerd-e67af14b33aea1b313d8096225f25d2403fd04955d90351a2499fb5205d1bd01.scope. Mar 17 18:46:54.435650 env[1221]: time="2025-03-17T18:46:54.435584920Z" level=info msg="StartContainer for \"e67af14b33aea1b313d8096225f25d2403fd04955d90351a2499fb5205d1bd01\" returns successfully" Mar 17 18:46:54.437593 systemd[1]: cri-containerd-e67af14b33aea1b313d8096225f25d2403fd04955d90351a2499fb5205d1bd01.scope: Deactivated successfully. 
Mar 17 18:46:54.472261 env[1221]: time="2025-03-17T18:46:54.472192975Z" level=info msg="shim disconnected" id=e67af14b33aea1b313d8096225f25d2403fd04955d90351a2499fb5205d1bd01 Mar 17 18:46:54.472261 env[1221]: time="2025-03-17T18:46:54.472264674Z" level=warning msg="cleaning up after shim disconnected" id=e67af14b33aea1b313d8096225f25d2403fd04955d90351a2499fb5205d1bd01 namespace=k8s.io Mar 17 18:46:54.472777 env[1221]: time="2025-03-17T18:46:54.472278881Z" level=info msg="cleaning up dead shim" Mar 17 18:46:54.484801 env[1221]: time="2025-03-17T18:46:54.484695849Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:46:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2668 runtime=io.containerd.runc.v2\n" Mar 17 18:46:55.302216 env[1221]: time="2025-03-17T18:46:55.302151004Z" level=info msg="CreateContainer within sandbox \"a7ca1a2de015f3f22913b13a0382b50523599a32401550fc3cbb83ae5731e51f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 17 18:46:55.312014 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e67af14b33aea1b313d8096225f25d2403fd04955d90351a2499fb5205d1bd01-rootfs.mount: Deactivated successfully. Mar 17 18:46:55.334197 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1361488530.mount: Deactivated successfully. Mar 17 18:46:55.343860 env[1221]: time="2025-03-17T18:46:55.343803581Z" level=info msg="CreateContainer within sandbox \"a7ca1a2de015f3f22913b13a0382b50523599a32401550fc3cbb83ae5731e51f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b027ccbe7e4a1e511d56f7f2e5ce0fe30875058be94ddb3b000fee541306524c\"" Mar 17 18:46:55.345541 env[1221]: time="2025-03-17T18:46:55.345498711Z" level=info msg="StartContainer for \"b027ccbe7e4a1e511d56f7f2e5ce0fe30875058be94ddb3b000fee541306524c\"" Mar 17 18:46:55.401358 systemd[1]: Started cri-containerd-b027ccbe7e4a1e511d56f7f2e5ce0fe30875058be94ddb3b000fee541306524c.scope. 
Mar 17 18:46:55.460947 systemd[1]: cri-containerd-b027ccbe7e4a1e511d56f7f2e5ce0fe30875058be94ddb3b000fee541306524c.scope: Deactivated successfully. Mar 17 18:46:55.465805 env[1221]: time="2025-03-17T18:46:55.465737702Z" level=info msg="StartContainer for \"b027ccbe7e4a1e511d56f7f2e5ce0fe30875058be94ddb3b000fee541306524c\" returns successfully" Mar 17 18:46:55.498668 env[1221]: time="2025-03-17T18:46:55.498588064Z" level=info msg="shim disconnected" id=b027ccbe7e4a1e511d56f7f2e5ce0fe30875058be94ddb3b000fee541306524c Mar 17 18:46:55.498668 env[1221]: time="2025-03-17T18:46:55.498659569Z" level=warning msg="cleaning up after shim disconnected" id=b027ccbe7e4a1e511d56f7f2e5ce0fe30875058be94ddb3b000fee541306524c namespace=k8s.io Mar 17 18:46:55.499089 env[1221]: time="2025-03-17T18:46:55.498704983Z" level=info msg="cleaning up dead shim" Mar 17 18:46:55.515920 env[1221]: time="2025-03-17T18:46:55.515817214Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:46:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2724 runtime=io.containerd.runc.v2\n" Mar 17 18:46:56.307954 env[1221]: time="2025-03-17T18:46:56.307893137Z" level=info msg="CreateContainer within sandbox \"a7ca1a2de015f3f22913b13a0382b50523599a32401550fc3cbb83ae5731e51f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 17 18:46:56.313444 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b027ccbe7e4a1e511d56f7f2e5ce0fe30875058be94ddb3b000fee541306524c-rootfs.mount: Deactivated successfully. 
Mar 17 18:46:56.335984 env[1221]: time="2025-03-17T18:46:56.335911937Z" level=info msg="CreateContainer within sandbox \"a7ca1a2de015f3f22913b13a0382b50523599a32401550fc3cbb83ae5731e51f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e51934d42c051d1e6f6f581aa57f503c3d0376925848eecbf3a3a4ed5483090c\"" Mar 17 18:46:56.341321 env[1221]: time="2025-03-17T18:46:56.340842943Z" level=info msg="StartContainer for \"e51934d42c051d1e6f6f581aa57f503c3d0376925848eecbf3a3a4ed5483090c\"" Mar 17 18:46:56.345455 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount849384450.mount: Deactivated successfully. Mar 17 18:46:56.382635 systemd[1]: Started cri-containerd-e51934d42c051d1e6f6f581aa57f503c3d0376925848eecbf3a3a4ed5483090c.scope. Mar 17 18:46:56.443158 env[1221]: time="2025-03-17T18:46:56.443093315Z" level=info msg="StartContainer for \"e51934d42c051d1e6f6f581aa57f503c3d0376925848eecbf3a3a4ed5483090c\" returns successfully" Mar 17 18:46:56.585048 kubelet[2076]: I0317 18:46:56.584906 2076 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Mar 17 18:46:56.627050 kubelet[2076]: I0317 18:46:56.626977 2076 topology_manager.go:215] "Topology Admit Handler" podUID="e705f535-057d-4e38-abef-46fc9a619616" podNamespace="kube-system" podName="coredns-7db6d8ff4d-wk2jf" Mar 17 18:46:56.635848 systemd[1]: Created slice kubepods-burstable-pode705f535_057d_4e38_abef_46fc9a619616.slice. Mar 17 18:46:56.645285 kubelet[2076]: I0317 18:46:56.645168 2076 topology_manager.go:215] "Topology Admit Handler" podUID="931ac7a4-bb94-44dd-8f15-d099a3639d32" podNamespace="kube-system" podName="coredns-7db6d8ff4d-cdqj7" Mar 17 18:46:56.654454 systemd[1]: Created slice kubepods-burstable-pod931ac7a4_bb94_44dd_8f15_d099a3639d32.slice. 
Mar 17 18:46:56.662315 kubelet[2076]: W0317 18:46:56.662257 2076 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal' and this object
Mar 17 18:46:56.662315 kubelet[2076]: E0317 18:46:56.662325 2076 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal' and this object
Mar 17 18:46:56.724932 kubelet[2076]: I0317 18:46:56.724843 2076 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e705f535-057d-4e38-abef-46fc9a619616-config-volume\") pod \"coredns-7db6d8ff4d-wk2jf\" (UID: \"e705f535-057d-4e38-abef-46fc9a619616\") " pod="kube-system/coredns-7db6d8ff4d-wk2jf"
Mar 17 18:46:56.725242 kubelet[2076]: I0317 18:46:56.724950 2076 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59wz6\" (UniqueName: \"kubernetes.io/projected/e705f535-057d-4e38-abef-46fc9a619616-kube-api-access-59wz6\") pod \"coredns-7db6d8ff4d-wk2jf\" (UID: \"e705f535-057d-4e38-abef-46fc9a619616\") " pod="kube-system/coredns-7db6d8ff4d-wk2jf"
Mar 17 18:46:56.825756 kubelet[2076]: I0317 18:46:56.825616 2076 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/931ac7a4-bb94-44dd-8f15-d099a3639d32-config-volume\") pod \"coredns-7db6d8ff4d-cdqj7\" (UID: \"931ac7a4-bb94-44dd-8f15-d099a3639d32\") " pod="kube-system/coredns-7db6d8ff4d-cdqj7"
Mar 17 18:46:56.826397 kubelet[2076]: I0317 18:46:56.826360 2076 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t264z\" (UniqueName: \"kubernetes.io/projected/931ac7a4-bb94-44dd-8f15-d099a3639d32-kube-api-access-t264z\") pod \"coredns-7db6d8ff4d-cdqj7\" (UID: \"931ac7a4-bb94-44dd-8f15-d099a3639d32\") " pod="kube-system/coredns-7db6d8ff4d-cdqj7"
Mar 17 18:46:57.862495 env[1221]: time="2025-03-17T18:46:57.862429046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-wk2jf,Uid:e705f535-057d-4e38-abef-46fc9a619616,Namespace:kube-system,Attempt:0,}"
Mar 17 18:46:57.872740 env[1221]: time="2025-03-17T18:46:57.872640048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-cdqj7,Uid:931ac7a4-bb94-44dd-8f15-d099a3639d32,Namespace:kube-system,Attempt:0,}"
Mar 17 18:46:58.769338 systemd-networkd[1021]: cilium_host: Link UP
Mar 17 18:46:58.778064 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready
Mar 17 18:46:58.779345 systemd-networkd[1021]: cilium_net: Link UP
Mar 17 18:46:58.780993 systemd-networkd[1021]: cilium_net: Gained carrier
Mar 17 18:46:58.788824 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Mar 17 18:46:58.788304 systemd-networkd[1021]: cilium_host: Gained carrier
Mar 17 18:46:58.921053 systemd-networkd[1021]: cilium_net: Gained IPv6LL
Mar 17 18:46:58.955341 systemd-networkd[1021]: cilium_vxlan: Link UP
Mar 17 18:46:58.955352 systemd-networkd[1021]: cilium_vxlan: Gained carrier
Mar 17 18:46:59.176906 systemd-networkd[1021]: cilium_host: Gained IPv6LL
Mar 17 18:46:59.264715 kernel: NET: Registered PF_ALG protocol family
Mar 17 18:47:00.211767 systemd-networkd[1021]: lxc_health: Link UP
Mar 17 18:47:00.244748 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Mar 17 18:47:00.245903 systemd-networkd[1021]: lxc_health: Gained carrier
Mar 17 18:47:00.459948 systemd-networkd[1021]: lxcb69b37541bea: Link UP
Mar 17 18:47:00.477726 kernel: eth0: renamed from tmpa187d
Mar 17 18:47:00.500716 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcb69b37541bea: link becomes ready
Mar 17 18:47:00.502092 systemd-networkd[1021]: lxcb69b37541bea: Gained carrier
Mar 17 18:47:00.728992 systemd-networkd[1021]: cilium_vxlan: Gained IPv6LL
Mar 17 18:47:00.940239 systemd-networkd[1021]: lxca03032aa27b7: Link UP
Mar 17 18:47:00.955859 kernel: eth0: renamed from tmp63334
Mar 17 18:47:00.972807 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxca03032aa27b7: link becomes ready
Mar 17 18:47:00.975445 systemd-networkd[1021]: lxca03032aa27b7: Gained carrier
Mar 17 18:47:01.104889 kubelet[2076]: I0317 18:47:01.104809 2076 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-7kfln" podStartSLOduration=13.255359323 podStartE2EDuration="27.104782805s" podCreationTimestamp="2025-03-17 18:46:34 +0000 UTC" firstStartedPulling="2025-03-17 18:46:37.151210121 +0000 UTC m=+17.164660619" lastFinishedPulling="2025-03-17 18:46:51.000633563 +0000 UTC m=+31.014084101" observedRunningTime="2025-03-17 18:46:57.34203477 +0000 UTC m=+37.355485288" watchObservedRunningTime="2025-03-17 18:47:01.104782805 +0000 UTC m=+41.118233322"
Mar 17 18:47:02.008935 systemd-networkd[1021]: lxc_health: Gained IPv6LL
Mar 17 18:47:02.201376 systemd-networkd[1021]: lxcb69b37541bea: Gained IPv6LL
Mar 17 18:47:02.713746 systemd-networkd[1021]: lxca03032aa27b7: Gained IPv6LL
Mar 17 18:47:06.225460 env[1221]: time="2025-03-17T18:47:06.225358803Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 18:47:06.226168 env[1221]: time="2025-03-17T18:47:06.226121526Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 18:47:06.226351 env[1221]: time="2025-03-17T18:47:06.226316853Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 18:47:06.226744 env[1221]: time="2025-03-17T18:47:06.226692035Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/633347a5721f59b2f1b157a00fd2e0f4581aba1c5d25e5756e7ae4693e839bd2 pid=3276 runtime=io.containerd.runc.v2
Mar 17 18:47:06.250326 env[1221]: time="2025-03-17T18:47:06.250201784Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 18:47:06.250326 env[1221]: time="2025-03-17T18:47:06.250265465Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 18:47:06.250779 env[1221]: time="2025-03-17T18:47:06.250691820Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 18:47:06.251309 env[1221]: time="2025-03-17T18:47:06.251207821Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a187da8d628b7bd9969a739c1c4bd1bf683542c6875dae40d1b1ae8b34c24128 pid=3275 runtime=io.containerd.runc.v2
Mar 17 18:47:06.286745 systemd[1]: Started cri-containerd-633347a5721f59b2f1b157a00fd2e0f4581aba1c5d25e5756e7ae4693e839bd2.scope.
Mar 17 18:47:06.308558 systemd[1]: Started cri-containerd-a187da8d628b7bd9969a739c1c4bd1bf683542c6875dae40d1b1ae8b34c24128.scope.
Mar 17 18:47:06.333199 systemd[1]: run-containerd-runc-k8s.io-a187da8d628b7bd9969a739c1c4bd1bf683542c6875dae40d1b1ae8b34c24128-runc.hPQqvL.mount: Deactivated successfully.
Mar 17 18:47:06.424579 env[1221]: time="2025-03-17T18:47:06.424519141Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-cdqj7,Uid:931ac7a4-bb94-44dd-8f15-d099a3639d32,Namespace:kube-system,Attempt:0,} returns sandbox id \"a187da8d628b7bd9969a739c1c4bd1bf683542c6875dae40d1b1ae8b34c24128\""
Mar 17 18:47:06.431253 env[1221]: time="2025-03-17T18:47:06.431202765Z" level=info msg="CreateContainer within sandbox \"a187da8d628b7bd9969a739c1c4bd1bf683542c6875dae40d1b1ae8b34c24128\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 17 18:47:06.457236 env[1221]: time="2025-03-17T18:47:06.457166101Z" level=info msg="CreateContainer within sandbox \"a187da8d628b7bd9969a739c1c4bd1bf683542c6875dae40d1b1ae8b34c24128\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b21273204879222d8701eeb6677ab06a8d7ad32ee30aeb31dc1cd70f49f8cbf3\""
Mar 17 18:47:06.458969 env[1221]: time="2025-03-17T18:47:06.458859691Z" level=info msg="StartContainer for \"b21273204879222d8701eeb6677ab06a8d7ad32ee30aeb31dc1cd70f49f8cbf3\""
Mar 17 18:47:06.472382 env[1221]: time="2025-03-17T18:47:06.472323077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-wk2jf,Uid:e705f535-057d-4e38-abef-46fc9a619616,Namespace:kube-system,Attempt:0,} returns sandbox id \"633347a5721f59b2f1b157a00fd2e0f4581aba1c5d25e5756e7ae4693e839bd2\""
Mar 17 18:47:06.478617 env[1221]: time="2025-03-17T18:47:06.477573230Z" level=info msg="CreateContainer within sandbox \"633347a5721f59b2f1b157a00fd2e0f4581aba1c5d25e5756e7ae4693e839bd2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 17 18:47:06.501437 env[1221]: time="2025-03-17T18:47:06.501362103Z" level=info msg="CreateContainer within sandbox \"633347a5721f59b2f1b157a00fd2e0f4581aba1c5d25e5756e7ae4693e839bd2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8246fdaae4174ac48dd7fbafda8f9de8b736234779cf0ef0c6fe8e8d0fd23390\""
Mar 17 18:47:06.504655 env[1221]: time="2025-03-17T18:47:06.504484608Z" level=info msg="StartContainer for \"8246fdaae4174ac48dd7fbafda8f9de8b736234779cf0ef0c6fe8e8d0fd23390\""
Mar 17 18:47:06.512367 systemd[1]: Started cri-containerd-b21273204879222d8701eeb6677ab06a8d7ad32ee30aeb31dc1cd70f49f8cbf3.scope.
Mar 17 18:47:06.552521 systemd[1]: Started cri-containerd-8246fdaae4174ac48dd7fbafda8f9de8b736234779cf0ef0c6fe8e8d0fd23390.scope.
Mar 17 18:47:06.626994 env[1221]: time="2025-03-17T18:47:06.626925792Z" level=info msg="StartContainer for \"8246fdaae4174ac48dd7fbafda8f9de8b736234779cf0ef0c6fe8e8d0fd23390\" returns successfully"
Mar 17 18:47:06.635895 env[1221]: time="2025-03-17T18:47:06.635837438Z" level=info msg="StartContainer for \"b21273204879222d8701eeb6677ab06a8d7ad32ee30aeb31dc1cd70f49f8cbf3\" returns successfully"
Mar 17 18:47:07.377807 kubelet[2076]: I0317 18:47:07.374607 2076 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-cdqj7" podStartSLOduration=32.374581132 podStartE2EDuration="32.374581132s" podCreationTimestamp="2025-03-17 18:46:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:47:07.373513593 +0000 UTC m=+47.386964109" watchObservedRunningTime="2025-03-17 18:47:07.374581132 +0000 UTC m=+47.388031648"
Mar 17 18:47:07.395051 kubelet[2076]: I0317 18:47:07.394918 2076 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-wk2jf" podStartSLOduration=32.394890959 podStartE2EDuration="32.394890959s" podCreationTimestamp="2025-03-17 18:46:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:47:07.392999435 +0000 UTC m=+47.406449959" watchObservedRunningTime="2025-03-17 18:47:07.394890959 +0000 UTC m=+47.408341481"
Mar 17 18:47:18.700030 systemd[1]: Started sshd@5-10.128.0.78:22-139.178.89.65:44028.service.
Mar 17 18:47:18.994270 sshd[3433]: Accepted publickey for core from 139.178.89.65 port 44028 ssh2: RSA SHA256:MwXPWHAmIHbbkjBOl9game4w0Y2Rjfi7lGZx9rtQRJk
Mar 17 18:47:18.996964 sshd[3433]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:47:19.005640 systemd[1]: Started session-6.scope.
Mar 17 18:47:19.006319 systemd-logind[1207]: New session 6 of user core.
Mar 17 18:47:19.322594 sshd[3433]: pam_unix(sshd:session): session closed for user core
Mar 17 18:47:19.328569 systemd-logind[1207]: Session 6 logged out. Waiting for processes to exit.
Mar 17 18:47:19.329088 systemd[1]: sshd@5-10.128.0.78:22-139.178.89.65:44028.service: Deactivated successfully.
Mar 17 18:47:19.330362 systemd[1]: session-6.scope: Deactivated successfully.
Mar 17 18:47:19.332263 systemd-logind[1207]: Removed session 6.
Mar 17 18:47:24.372292 systemd[1]: Started sshd@6-10.128.0.78:22-139.178.89.65:53306.service.
Mar 17 18:47:24.662179 sshd[3448]: Accepted publickey for core from 139.178.89.65 port 53306 ssh2: RSA SHA256:MwXPWHAmIHbbkjBOl9game4w0Y2Rjfi7lGZx9rtQRJk
Mar 17 18:47:24.664086 sshd[3448]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:47:24.672961 systemd[1]: Started session-7.scope.
Mar 17 18:47:24.673624 systemd-logind[1207]: New session 7 of user core.
Mar 17 18:47:24.954118 sshd[3448]: pam_unix(sshd:session): session closed for user core
Mar 17 18:47:24.959151 systemd[1]: sshd@6-10.128.0.78:22-139.178.89.65:53306.service: Deactivated successfully.
Mar 17 18:47:24.960456 systemd[1]: session-7.scope: Deactivated successfully.
Mar 17 18:47:24.961530 systemd-logind[1207]: Session 7 logged out. Waiting for processes to exit.
Mar 17 18:47:24.962921 systemd-logind[1207]: Removed session 7.
Mar 17 18:47:30.001431 systemd[1]: Started sshd@7-10.128.0.78:22-139.178.89.65:53316.service.
Mar 17 18:47:30.290872 sshd[3461]: Accepted publickey for core from 139.178.89.65 port 53316 ssh2: RSA SHA256:MwXPWHAmIHbbkjBOl9game4w0Y2Rjfi7lGZx9rtQRJk
Mar 17 18:47:30.293406 sshd[3461]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:47:30.301372 systemd[1]: Started session-8.scope.
Mar 17 18:47:30.302068 systemd-logind[1207]: New session 8 of user core.
Mar 17 18:47:30.604235 sshd[3461]: pam_unix(sshd:session): session closed for user core
Mar 17 18:47:30.608884 systemd[1]: sshd@7-10.128.0.78:22-139.178.89.65:53316.service: Deactivated successfully.
Mar 17 18:47:30.610164 systemd[1]: session-8.scope: Deactivated successfully.
Mar 17 18:47:30.611145 systemd-logind[1207]: Session 8 logged out. Waiting for processes to exit.
Mar 17 18:47:30.612461 systemd-logind[1207]: Removed session 8.
Mar 17 18:47:35.652091 systemd[1]: Started sshd@8-10.128.0.78:22-139.178.89.65:51656.service.
Mar 17 18:47:35.944074 sshd[3473]: Accepted publickey for core from 139.178.89.65 port 51656 ssh2: RSA SHA256:MwXPWHAmIHbbkjBOl9game4w0Y2Rjfi7lGZx9rtQRJk
Mar 17 18:47:35.946476 sshd[3473]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:47:35.953744 systemd[1]: Started session-9.scope.
Mar 17 18:47:35.954884 systemd-logind[1207]: New session 9 of user core.
Mar 17 18:47:36.245407 sshd[3473]: pam_unix(sshd:session): session closed for user core
Mar 17 18:47:36.250322 systemd[1]: sshd@8-10.128.0.78:22-139.178.89.65:51656.service: Deactivated successfully.
Mar 17 18:47:36.251578 systemd[1]: session-9.scope: Deactivated successfully.
Mar 17 18:47:36.252588 systemd-logind[1207]: Session 9 logged out. Waiting for processes to exit.
Mar 17 18:47:36.254738 systemd-logind[1207]: Removed session 9.
Mar 17 18:47:36.291990 systemd[1]: Started sshd@9-10.128.0.78:22-139.178.89.65:51664.service.
Mar 17 18:47:36.584672 sshd[3487]: Accepted publickey for core from 139.178.89.65 port 51664 ssh2: RSA SHA256:MwXPWHAmIHbbkjBOl9game4w0Y2Rjfi7lGZx9rtQRJk
Mar 17 18:47:36.586441 sshd[3487]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:47:36.593952 systemd[1]: Started session-10.scope.
Mar 17 18:47:36.594611 systemd-logind[1207]: New session 10 of user core.
Mar 17 18:47:36.927828 sshd[3487]: pam_unix(sshd:session): session closed for user core
Mar 17 18:47:36.934263 systemd[1]: sshd@9-10.128.0.78:22-139.178.89.65:51664.service: Deactivated successfully.
Mar 17 18:47:36.935468 systemd[1]: session-10.scope: Deactivated successfully.
Mar 17 18:47:36.936925 systemd-logind[1207]: Session 10 logged out. Waiting for processes to exit.
Mar 17 18:47:36.938893 systemd-logind[1207]: Removed session 10.
Mar 17 18:47:36.975506 systemd[1]: Started sshd@10-10.128.0.78:22-139.178.89.65:51670.service.
Mar 17 18:47:37.266181 sshd[3497]: Accepted publickey for core from 139.178.89.65 port 51670 ssh2: RSA SHA256:MwXPWHAmIHbbkjBOl9game4w0Y2Rjfi7lGZx9rtQRJk
Mar 17 18:47:37.267958 sshd[3497]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:47:37.277239 systemd[1]: Started session-11.scope.
Mar 17 18:47:37.278012 systemd-logind[1207]: New session 11 of user core.
Mar 17 18:47:37.560070 sshd[3497]: pam_unix(sshd:session): session closed for user core
Mar 17 18:47:37.565346 systemd-logind[1207]: Session 11 logged out. Waiting for processes to exit.
Mar 17 18:47:37.565646 systemd[1]: sshd@10-10.128.0.78:22-139.178.89.65:51670.service: Deactivated successfully.
Mar 17 18:47:37.566968 systemd[1]: session-11.scope: Deactivated successfully.
Mar 17 18:47:37.568263 systemd-logind[1207]: Removed session 11.
Mar 17 18:47:42.608462 systemd[1]: Started sshd@11-10.128.0.78:22-139.178.89.65:45610.service.
Mar 17 18:47:42.900324 sshd[3511]: Accepted publickey for core from 139.178.89.65 port 45610 ssh2: RSA SHA256:MwXPWHAmIHbbkjBOl9game4w0Y2Rjfi7lGZx9rtQRJk
Mar 17 18:47:42.902560 sshd[3511]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:47:42.909815 systemd-logind[1207]: New session 12 of user core.
Mar 17 18:47:42.909945 systemd[1]: Started session-12.scope.
Mar 17 18:47:43.196078 sshd[3511]: pam_unix(sshd:session): session closed for user core
Mar 17 18:47:43.201840 systemd[1]: sshd@11-10.128.0.78:22-139.178.89.65:45610.service: Deactivated successfully.
Mar 17 18:47:43.202993 systemd[1]: session-12.scope: Deactivated successfully.
Mar 17 18:47:43.203811 systemd-logind[1207]: Session 12 logged out. Waiting for processes to exit.
Mar 17 18:47:43.205146 systemd-logind[1207]: Removed session 12.
Mar 17 18:47:48.243347 systemd[1]: Started sshd@12-10.128.0.78:22-139.178.89.65:45624.service.
Mar 17 18:47:48.534986 sshd[3523]: Accepted publickey for core from 139.178.89.65 port 45624 ssh2: RSA SHA256:MwXPWHAmIHbbkjBOl9game4w0Y2Rjfi7lGZx9rtQRJk
Mar 17 18:47:48.537342 sshd[3523]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:47:48.544880 systemd[1]: Started session-13.scope.
Mar 17 18:47:48.545542 systemd-logind[1207]: New session 13 of user core.
Mar 17 18:47:48.826463 sshd[3523]: pam_unix(sshd:session): session closed for user core
Mar 17 18:47:48.831271 systemd[1]: sshd@12-10.128.0.78:22-139.178.89.65:45624.service: Deactivated successfully.
Mar 17 18:47:48.832523 systemd[1]: session-13.scope: Deactivated successfully.
Mar 17 18:47:48.833742 systemd-logind[1207]: Session 13 logged out. Waiting for processes to exit.
Mar 17 18:47:48.835257 systemd-logind[1207]: Removed session 13.
Mar 17 18:47:48.876129 systemd[1]: Started sshd@13-10.128.0.78:22-139.178.89.65:45630.service.
Mar 17 18:47:49.172146 sshd[3535]: Accepted publickey for core from 139.178.89.65 port 45630 ssh2: RSA SHA256:MwXPWHAmIHbbkjBOl9game4w0Y2Rjfi7lGZx9rtQRJk
Mar 17 18:47:49.174249 sshd[3535]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:47:49.181582 systemd[1]: Started session-14.scope.
Mar 17 18:47:49.182737 systemd-logind[1207]: New session 14 of user core.
Mar 17 18:47:49.547526 sshd[3535]: pam_unix(sshd:session): session closed for user core
Mar 17 18:47:49.552346 systemd[1]: sshd@13-10.128.0.78:22-139.178.89.65:45630.service: Deactivated successfully.
Mar 17 18:47:49.553622 systemd[1]: session-14.scope: Deactivated successfully.
Mar 17 18:47:49.554541 systemd-logind[1207]: Session 14 logged out. Waiting for processes to exit.
Mar 17 18:47:49.556444 systemd-logind[1207]: Removed session 14.
Mar 17 18:47:49.595024 systemd[1]: Started sshd@14-10.128.0.78:22-139.178.89.65:45632.service.
Mar 17 18:47:49.889380 sshd[3545]: Accepted publickey for core from 139.178.89.65 port 45632 ssh2: RSA SHA256:MwXPWHAmIHbbkjBOl9game4w0Y2Rjfi7lGZx9rtQRJk
Mar 17 18:47:49.891771 sshd[3545]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:47:49.898854 systemd[1]: Started session-15.scope.
Mar 17 18:47:49.899724 systemd-logind[1207]: New session 15 of user core.
Mar 17 18:47:51.793863 sshd[3545]: pam_unix(sshd:session): session closed for user core
Mar 17 18:47:51.800630 systemd[1]: sshd@14-10.128.0.78:22-139.178.89.65:45632.service: Deactivated successfully.
Mar 17 18:47:51.802528 systemd[1]: session-15.scope: Deactivated successfully.
Mar 17 18:47:51.803976 systemd-logind[1207]: Session 15 logged out. Waiting for processes to exit.
Mar 17 18:47:51.806394 systemd-logind[1207]: Removed session 15.
Mar 17 18:47:51.844861 systemd[1]: Started sshd@15-10.128.0.78:22-139.178.89.65:59104.service.
Mar 17 18:47:52.134810 sshd[3562]: Accepted publickey for core from 139.178.89.65 port 59104 ssh2: RSA SHA256:MwXPWHAmIHbbkjBOl9game4w0Y2Rjfi7lGZx9rtQRJk
Mar 17 18:47:52.137309 sshd[3562]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:47:52.144617 systemd[1]: Started session-16.scope.
Mar 17 18:47:52.146186 systemd-logind[1207]: New session 16 of user core.
Mar 17 18:47:52.581016 sshd[3562]: pam_unix(sshd:session): session closed for user core
Mar 17 18:47:52.586171 systemd[1]: sshd@15-10.128.0.78:22-139.178.89.65:59104.service: Deactivated successfully.
Mar 17 18:47:52.587424 systemd[1]: session-16.scope: Deactivated successfully.
Mar 17 18:47:52.588336 systemd-logind[1207]: Session 16 logged out. Waiting for processes to exit.
Mar 17 18:47:52.589646 systemd-logind[1207]: Removed session 16.
Mar 17 18:47:52.628888 systemd[1]: Started sshd@16-10.128.0.78:22-139.178.89.65:59114.service.
Mar 17 18:47:52.918117 sshd[3572]: Accepted publickey for core from 139.178.89.65 port 59114 ssh2: RSA SHA256:MwXPWHAmIHbbkjBOl9game4w0Y2Rjfi7lGZx9rtQRJk
Mar 17 18:47:52.920199 sshd[3572]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:47:52.927454 systemd[1]: Started session-17.scope.
Mar 17 18:47:52.928537 systemd-logind[1207]: New session 17 of user core.
Mar 17 18:47:53.204218 sshd[3572]: pam_unix(sshd:session): session closed for user core
Mar 17 18:47:53.208995 systemd[1]: sshd@16-10.128.0.78:22-139.178.89.65:59114.service: Deactivated successfully.
Mar 17 18:47:53.210225 systemd[1]: session-17.scope: Deactivated successfully.
Mar 17 18:47:53.211368 systemd-logind[1207]: Session 17 logged out. Waiting for processes to exit.
Mar 17 18:47:53.212652 systemd-logind[1207]: Removed session 17.
Mar 17 18:47:58.252068 systemd[1]: Started sshd@17-10.128.0.78:22-139.178.89.65:59120.service.
Mar 17 18:47:58.552293 sshd[3587]: Accepted publickey for core from 139.178.89.65 port 59120 ssh2: RSA SHA256:MwXPWHAmIHbbkjBOl9game4w0Y2Rjfi7lGZx9rtQRJk
Mar 17 18:47:58.554378 sshd[3587]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:47:58.561477 systemd[1]: Started session-18.scope.
Mar 17 18:47:58.562740 systemd-logind[1207]: New session 18 of user core.
Mar 17 18:47:58.842242 sshd[3587]: pam_unix(sshd:session): session closed for user core
Mar 17 18:47:58.848138 systemd[1]: sshd@17-10.128.0.78:22-139.178.89.65:59120.service: Deactivated successfully.
Mar 17 18:47:58.849327 systemd[1]: session-18.scope: Deactivated successfully.
Mar 17 18:47:58.850398 systemd-logind[1207]: Session 18 logged out. Waiting for processes to exit.
Mar 17 18:47:58.852319 systemd-logind[1207]: Removed session 18.
Mar 17 18:48:03.890254 systemd[1]: Started sshd@18-10.128.0.78:22-139.178.89.65:37386.service.
Mar 17 18:48:04.180976 sshd[3599]: Accepted publickey for core from 139.178.89.65 port 37386 ssh2: RSA SHA256:MwXPWHAmIHbbkjBOl9game4w0Y2Rjfi7lGZx9rtQRJk
Mar 17 18:48:04.183414 sshd[3599]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:48:04.190351 systemd[1]: Started session-19.scope.
Mar 17 18:48:04.191056 systemd-logind[1207]: New session 19 of user core.
Mar 17 18:48:04.470083 sshd[3599]: pam_unix(sshd:session): session closed for user core
Mar 17 18:48:04.475114 systemd[1]: sshd@18-10.128.0.78:22-139.178.89.65:37386.service: Deactivated successfully.
Mar 17 18:48:04.476269 systemd[1]: session-19.scope: Deactivated successfully.
Mar 17 18:48:04.477352 systemd-logind[1207]: Session 19 logged out. Waiting for processes to exit.
Mar 17 18:48:04.478911 systemd-logind[1207]: Removed session 19.
Mar 17 18:48:09.518958 systemd[1]: Started sshd@19-10.128.0.78:22-139.178.89.65:37396.service.
Mar 17 18:48:09.805900 sshd[3613]: Accepted publickey for core from 139.178.89.65 port 37396 ssh2: RSA SHA256:MwXPWHAmIHbbkjBOl9game4w0Y2Rjfi7lGZx9rtQRJk
Mar 17 18:48:09.808148 sshd[3613]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:48:09.815902 systemd[1]: Started session-20.scope.
Mar 17 18:48:09.816621 systemd-logind[1207]: New session 20 of user core.
Mar 17 18:48:10.087035 sshd[3613]: pam_unix(sshd:session): session closed for user core
Mar 17 18:48:10.092155 systemd[1]: sshd@19-10.128.0.78:22-139.178.89.65:37396.service: Deactivated successfully.
Mar 17 18:48:10.093365 systemd[1]: session-20.scope: Deactivated successfully.
Mar 17 18:48:10.094425 systemd-logind[1207]: Session 20 logged out. Waiting for processes to exit.
Mar 17 18:48:10.096038 systemd-logind[1207]: Removed session 20.
Mar 17 18:48:10.136445 systemd[1]: Started sshd@20-10.128.0.78:22-139.178.89.65:37402.service.
Mar 17 18:48:10.442745 sshd[3625]: Accepted publickey for core from 139.178.89.65 port 37402 ssh2: RSA SHA256:MwXPWHAmIHbbkjBOl9game4w0Y2Rjfi7lGZx9rtQRJk
Mar 17 18:48:10.445005 sshd[3625]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:48:10.452577 systemd[1]: Started session-21.scope.
Mar 17 18:48:10.453728 systemd-logind[1207]: New session 21 of user core.
Mar 17 18:48:12.853424 env[1221]: time="2025-03-17T18:48:12.853363961Z" level=info msg="StopContainer for \"3018931a4cfd1272c25e2a7f6106c64f6325c9c694db64342596ba469278dd92\" with timeout 30 (s)"
Mar 17 18:48:12.854657 env[1221]: time="2025-03-17T18:48:12.854616162Z" level=info msg="Stop container \"3018931a4cfd1272c25e2a7f6106c64f6325c9c694db64342596ba469278dd92\" with signal terminated"
Mar 17 18:48:12.883806 systemd[1]: cri-containerd-3018931a4cfd1272c25e2a7f6106c64f6325c9c694db64342596ba469278dd92.scope: Deactivated successfully.
Mar 17 18:48:12.907302 env[1221]: time="2025-03-17T18:48:12.907201247Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 17 18:48:12.917716 env[1221]: time="2025-03-17T18:48:12.917552785Z" level=info msg="StopContainer for \"e51934d42c051d1e6f6f581aa57f503c3d0376925848eecbf3a3a4ed5483090c\" with timeout 2 (s)"
Mar 17 18:48:12.918085 env[1221]: time="2025-03-17T18:48:12.918038817Z" level=info msg="Stop container \"e51934d42c051d1e6f6f581aa57f503c3d0376925848eecbf3a3a4ed5483090c\" with signal terminated"
Mar 17 18:48:12.933440 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3018931a4cfd1272c25e2a7f6106c64f6325c9c694db64342596ba469278dd92-rootfs.mount: Deactivated successfully.
Mar 17 18:48:12.943484 systemd-networkd[1021]: lxc_health: Link DOWN
Mar 17 18:48:12.943499 systemd-networkd[1021]: lxc_health: Lost carrier
Mar 17 18:48:12.965329 env[1221]: time="2025-03-17T18:48:12.965236622Z" level=info msg="shim disconnected" id=3018931a4cfd1272c25e2a7f6106c64f6325c9c694db64342596ba469278dd92
Mar 17 18:48:12.965329 env[1221]: time="2025-03-17T18:48:12.965312646Z" level=warning msg="cleaning up after shim disconnected" id=3018931a4cfd1272c25e2a7f6106c64f6325c9c694db64342596ba469278dd92 namespace=k8s.io
Mar 17 18:48:12.965329 env[1221]: time="2025-03-17T18:48:12.965329463Z" level=info msg="cleaning up dead shim"
Mar 17 18:48:12.971137 systemd[1]: cri-containerd-e51934d42c051d1e6f6f581aa57f503c3d0376925848eecbf3a3a4ed5483090c.scope: Deactivated successfully.
Mar 17 18:48:12.971573 systemd[1]: cri-containerd-e51934d42c051d1e6f6f581aa57f503c3d0376925848eecbf3a3a4ed5483090c.scope: Consumed 10.283s CPU time.
Mar 17 18:48:12.987527 env[1221]: time="2025-03-17T18:48:12.987460940Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:48:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3684 runtime=io.containerd.runc.v2\n"
Mar 17 18:48:12.994368 env[1221]: time="2025-03-17T18:48:12.994281957Z" level=info msg="StopContainer for \"3018931a4cfd1272c25e2a7f6106c64f6325c9c694db64342596ba469278dd92\" returns successfully"
Mar 17 18:48:12.995981 env[1221]: time="2025-03-17T18:48:12.995873911Z" level=info msg="StopPodSandbox for \"cfcd8a236b6389db8ae0f79355393b04b17e1144ac4a884700612f894fbfea04\""
Mar 17 18:48:12.996308 env[1221]: time="2025-03-17T18:48:12.996040857Z" level=info msg="Container to stop \"3018931a4cfd1272c25e2a7f6106c64f6325c9c694db64342596ba469278dd92\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 18:48:13.001941 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cfcd8a236b6389db8ae0f79355393b04b17e1144ac4a884700612f894fbfea04-shm.mount: Deactivated successfully.
Mar 17 18:48:13.026540 systemd[1]: cri-containerd-cfcd8a236b6389db8ae0f79355393b04b17e1144ac4a884700612f894fbfea04.scope: Deactivated successfully.
Mar 17 18:48:13.052874 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e51934d42c051d1e6f6f581aa57f503c3d0376925848eecbf3a3a4ed5483090c-rootfs.mount: Deactivated successfully.
Mar 17 18:48:13.055097 env[1221]: time="2025-03-17T18:48:13.054953730Z" level=info msg="shim disconnected" id=e51934d42c051d1e6f6f581aa57f503c3d0376925848eecbf3a3a4ed5483090c
Mar 17 18:48:13.055097 env[1221]: time="2025-03-17T18:48:13.055018514Z" level=warning msg="cleaning up after shim disconnected" id=e51934d42c051d1e6f6f581aa57f503c3d0376925848eecbf3a3a4ed5483090c namespace=k8s.io
Mar 17 18:48:13.055097 env[1221]: time="2025-03-17T18:48:13.055033759Z" level=info msg="cleaning up dead shim"
Mar 17 18:48:13.078286 env[1221]: time="2025-03-17T18:48:13.078228016Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:48:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3725 runtime=io.containerd.runc.v2\n"
Mar 17 18:48:13.084005 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cfcd8a236b6389db8ae0f79355393b04b17e1144ac4a884700612f894fbfea04-rootfs.mount: Deactivated successfully.
Mar 17 18:48:13.086573 env[1221]: time="2025-03-17T18:48:13.086508682Z" level=info msg="StopContainer for \"e51934d42c051d1e6f6f581aa57f503c3d0376925848eecbf3a3a4ed5483090c\" returns successfully"
Mar 17 18:48:13.087928 env[1221]: time="2025-03-17T18:48:13.087212746Z" level=info msg="StopPodSandbox for \"a7ca1a2de015f3f22913b13a0382b50523599a32401550fc3cbb83ae5731e51f\""
Mar 17 18:48:13.087928 env[1221]: time="2025-03-17T18:48:13.087299736Z" level=info msg="Container to stop \"b027ccbe7e4a1e511d56f7f2e5ce0fe30875058be94ddb3b000fee541306524c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 18:48:13.087928 env[1221]: time="2025-03-17T18:48:13.087324977Z" level=info msg="Container to stop \"535f7b5c149b5c3a3e6817d9384b1ac436285989272a0c71c1f7a173fbcbedc8\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 18:48:13.087928 env[1221]: time="2025-03-17T18:48:13.087344442Z" level=info msg="Container to stop \"23e758fbda24046c64415af2ec4676715d98556a2930f2871a1c4ccad65fe802\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 18:48:13.087928 env[1221]: time="2025-03-17T18:48:13.087364430Z" level=info msg="Container to stop \"e67af14b33aea1b313d8096225f25d2403fd04955d90351a2499fb5205d1bd01\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 18:48:13.087928 env[1221]: time="2025-03-17T18:48:13.087391374Z" level=info msg="Container to stop \"e51934d42c051d1e6f6f581aa57f503c3d0376925848eecbf3a3a4ed5483090c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 18:48:13.092782 env[1221]: time="2025-03-17T18:48:13.092722536Z" level=info msg="shim disconnected" id=cfcd8a236b6389db8ae0f79355393b04b17e1144ac4a884700612f894fbfea04
Mar 17 18:48:13.093143 env[1221]: time="2025-03-17T18:48:13.092785405Z" level=warning msg="cleaning up after shim disconnected" id=cfcd8a236b6389db8ae0f79355393b04b17e1144ac4a884700612f894fbfea04 namespace=k8s.io
Mar 17 18:48:13.093143 env[1221]: time="2025-03-17T18:48:13.092800101Z" level=info msg="cleaning up dead shim"
Mar 17 18:48:13.101428 systemd[1]: cri-containerd-a7ca1a2de015f3f22913b13a0382b50523599a32401550fc3cbb83ae5731e51f.scope: Deactivated successfully.
Mar 17 18:48:13.114224 env[1221]: time="2025-03-17T18:48:13.114155576Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:48:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3746 runtime=io.containerd.runc.v2\n"
Mar 17 18:48:13.114649 env[1221]: time="2025-03-17T18:48:13.114591217Z" level=info msg="TearDown network for sandbox \"cfcd8a236b6389db8ae0f79355393b04b17e1144ac4a884700612f894fbfea04\" successfully"
Mar 17 18:48:13.114649 env[1221]: time="2025-03-17T18:48:13.114630869Z" level=info msg="StopPodSandbox for \"cfcd8a236b6389db8ae0f79355393b04b17e1144ac4a884700612f894fbfea04\" returns successfully"
Mar 17 18:48:13.156075 env[1221]: time="2025-03-17T18:48:13.156010144Z" level=info msg="shim disconnected" id=a7ca1a2de015f3f22913b13a0382b50523599a32401550fc3cbb83ae5731e51f
Mar 17 18:48:13.156459 env[1221]: time="2025-03-17T18:48:13.156428365Z" level=warning msg="cleaning up after shim disconnected" id=a7ca1a2de015f3f22913b13a0382b50523599a32401550fc3cbb83ae5731e51f namespace=k8s.io
Mar 17 18:48:13.156625 env[1221]: time="2025-03-17T18:48:13.156599334Z" level=info msg="cleaning up dead shim"
Mar 17 18:48:13.168730 env[1221]: time="2025-03-17T18:48:13.168643470Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:48:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3776 runtime=io.containerd.runc.v2\n"
Mar 17 18:48:13.169154 env[1221]: time="2025-03-17T18:48:13.169112849Z" level=info msg="TearDown network for sandbox \"a7ca1a2de015f3f22913b13a0382b50523599a32401550fc3cbb83ae5731e51f\" successfully"
Mar 17 18:48:13.169261 env[1221]: time="2025-03-17T18:48:13.169154115Z" level=info msg="StopPodSandbox for \"a7ca1a2de015f3f22913b13a0382b50523599a32401550fc3cbb83ae5731e51f\" returns successfully"
Mar 17 18:48:13.282712 kubelet[2076]: I0317 18:48:13.282634 2076 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6919bef5-1c5f-4605-bbce-bf53f5124720-clustermesh-secrets\") pod \"6919bef5-1c5f-4605-bbce-bf53f5124720\" (UID: \"6919bef5-1c5f-4605-bbce-bf53f5124720\") "
Mar 17 18:48:13.283357 kubelet[2076]: I0317 18:48:13.282725 2076 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6919bef5-1c5f-4605-bbce-bf53f5124720-cilium-config-path\") pod \"6919bef5-1c5f-4605-bbce-bf53f5124720\" (UID: \"6919bef5-1c5f-4605-bbce-bf53f5124720\") "
Mar 17 18:48:13.283357 kubelet[2076]: I0317 18:48:13.282754 2076 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6919bef5-1c5f-4605-bbce-bf53f5124720-host-proc-sys-net\") pod \"6919bef5-1c5f-4605-bbce-bf53f5124720\" (UID: \"6919bef5-1c5f-4605-bbce-bf53f5124720\") "
Mar 17 18:48:13.283357 kubelet[2076]: I0317 18:48:13.282777 2076 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6919bef5-1c5f-4605-bbce-bf53f5124720-lib-modules\") pod \"6919bef5-1c5f-4605-bbce-bf53f5124720\" (UID: \"6919bef5-1c5f-4605-bbce-bf53f5124720\") "
Mar 17 18:48:13.283357 kubelet[2076]: I0317 18:48:13.282807 2076 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6919bef5-1c5f-4605-bbce-bf53f5124720-cni-path\") pod \"6919bef5-1c5f-4605-bbce-bf53f5124720\" (UID: \"6919bef5-1c5f-4605-bbce-bf53f5124720\") "
Mar 17 18:48:13.283357 kubelet[2076]: I0317 18:48:13.282961 2076 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6919bef5-1c5f-4605-bbce-bf53f5124720-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "6919bef5-1c5f-4605-bbce-bf53f5124720" (UID: "6919bef5-1c5f-4605-bbce-bf53f5124720"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:48:13.283657 kubelet[2076]: I0317 18:48:13.282835 2076 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-77spc\" (UniqueName: \"kubernetes.io/projected/6919bef5-1c5f-4605-bbce-bf53f5124720-kube-api-access-77spc\") pod \"6919bef5-1c5f-4605-bbce-bf53f5124720\" (UID: \"6919bef5-1c5f-4605-bbce-bf53f5124720\") "
Mar 17 18:48:13.283657 kubelet[2076]: I0317 18:48:13.283553 2076 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rql75\" (UniqueName: \"kubernetes.io/projected/e0d16c84-2526-4ff9-8c33-9061d637f468-kube-api-access-rql75\") pod \"e0d16c84-2526-4ff9-8c33-9061d637f468\" (UID: \"e0d16c84-2526-4ff9-8c33-9061d637f468\") "
Mar 17 18:48:13.283657 kubelet[2076]: I0317 18:48:13.283586 2076 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6919bef5-1c5f-4605-bbce-bf53f5124720-host-proc-sys-kernel\") pod \"6919bef5-1c5f-4605-bbce-bf53f5124720\" (UID: \"6919bef5-1c5f-4605-bbce-bf53f5124720\") "
Mar 17 18:48:13.283657 kubelet[2076]: I0317 18:48:13.283611 2076 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6919bef5-1c5f-4605-bbce-bf53f5124720-xtables-lock\") pod \"6919bef5-1c5f-4605-bbce-bf53f5124720\" (UID: \"6919bef5-1c5f-4605-bbce-bf53f5124720\") "
Mar 17 18:48:13.283657 kubelet[2076]: I0317 18:48:13.283641 2076 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6919bef5-1c5f-4605-bbce-bf53f5124720-cilium-run\") pod \"6919bef5-1c5f-4605-bbce-bf53f5124720\" (UID: \"6919bef5-1c5f-4605-bbce-bf53f5124720\") "
Mar 17 18:48:13.283969 kubelet[2076]: I0317 18:48:13.283666 2076 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\"
(UniqueName: \"kubernetes.io/host-path/6919bef5-1c5f-4605-bbce-bf53f5124720-bpf-maps\") pod \"6919bef5-1c5f-4605-bbce-bf53f5124720\" (UID: \"6919bef5-1c5f-4605-bbce-bf53f5124720\") " Mar 17 18:48:13.283969 kubelet[2076]: I0317 18:48:13.283724 2076 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6919bef5-1c5f-4605-bbce-bf53f5124720-cilium-cgroup\") pod \"6919bef5-1c5f-4605-bbce-bf53f5124720\" (UID: \"6919bef5-1c5f-4605-bbce-bf53f5124720\") " Mar 17 18:48:13.283969 kubelet[2076]: I0317 18:48:13.283759 2076 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6919bef5-1c5f-4605-bbce-bf53f5124720-hubble-tls\") pod \"6919bef5-1c5f-4605-bbce-bf53f5124720\" (UID: \"6919bef5-1c5f-4605-bbce-bf53f5124720\") " Mar 17 18:48:13.283969 kubelet[2076]: I0317 18:48:13.283790 2076 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e0d16c84-2526-4ff9-8c33-9061d637f468-cilium-config-path\") pod \"e0d16c84-2526-4ff9-8c33-9061d637f468\" (UID: \"e0d16c84-2526-4ff9-8c33-9061d637f468\") " Mar 17 18:48:13.283969 kubelet[2076]: I0317 18:48:13.283819 2076 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6919bef5-1c5f-4605-bbce-bf53f5124720-hostproc\") pod \"6919bef5-1c5f-4605-bbce-bf53f5124720\" (UID: \"6919bef5-1c5f-4605-bbce-bf53f5124720\") " Mar 17 18:48:13.283969 kubelet[2076]: I0317 18:48:13.283846 2076 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6919bef5-1c5f-4605-bbce-bf53f5124720-etc-cni-netd\") pod \"6919bef5-1c5f-4605-bbce-bf53f5124720\" (UID: \"6919bef5-1c5f-4605-bbce-bf53f5124720\") " Mar 17 18:48:13.284299 kubelet[2076]: I0317 18:48:13.283906 2076 
reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6919bef5-1c5f-4605-bbce-bf53f5124720-host-proc-sys-net\") on node \"ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal\" DevicePath \"\"" Mar 17 18:48:13.284299 kubelet[2076]: I0317 18:48:13.283963 2076 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6919bef5-1c5f-4605-bbce-bf53f5124720-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "6919bef5-1c5f-4605-bbce-bf53f5124720" (UID: "6919bef5-1c5f-4605-bbce-bf53f5124720"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:48:13.284299 kubelet[2076]: I0317 18:48:13.283999 2076 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6919bef5-1c5f-4605-bbce-bf53f5124720-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "6919bef5-1c5f-4605-bbce-bf53f5124720" (UID: "6919bef5-1c5f-4605-bbce-bf53f5124720"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:48:13.284299 kubelet[2076]: I0317 18:48:13.284024 2076 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6919bef5-1c5f-4605-bbce-bf53f5124720-cni-path" (OuterVolumeSpecName: "cni-path") pod "6919bef5-1c5f-4605-bbce-bf53f5124720" (UID: "6919bef5-1c5f-4605-bbce-bf53f5124720"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:48:13.288474 kubelet[2076]: I0317 18:48:13.288100 2076 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6919bef5-1c5f-4605-bbce-bf53f5124720-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6919bef5-1c5f-4605-bbce-bf53f5124720" (UID: "6919bef5-1c5f-4605-bbce-bf53f5124720"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 17 18:48:13.288474 kubelet[2076]: I0317 18:48:13.288282 2076 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6919bef5-1c5f-4605-bbce-bf53f5124720-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "6919bef5-1c5f-4605-bbce-bf53f5124720" (UID: "6919bef5-1c5f-4605-bbce-bf53f5124720"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:48:13.288474 kubelet[2076]: I0317 18:48:13.288359 2076 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6919bef5-1c5f-4605-bbce-bf53f5124720-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "6919bef5-1c5f-4605-bbce-bf53f5124720" (UID: "6919bef5-1c5f-4605-bbce-bf53f5124720"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:48:13.288474 kubelet[2076]: I0317 18:48:13.288389 2076 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6919bef5-1c5f-4605-bbce-bf53f5124720-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "6919bef5-1c5f-4605-bbce-bf53f5124720" (UID: "6919bef5-1c5f-4605-bbce-bf53f5124720"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:48:13.288474 kubelet[2076]: I0317 18:48:13.288413 2076 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6919bef5-1c5f-4605-bbce-bf53f5124720-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "6919bef5-1c5f-4605-bbce-bf53f5124720" (UID: "6919bef5-1c5f-4605-bbce-bf53f5124720"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:48:13.291888 kubelet[2076]: I0317 18:48:13.291831 2076 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6919bef5-1c5f-4605-bbce-bf53f5124720-hostproc" (OuterVolumeSpecName: "hostproc") pod "6919bef5-1c5f-4605-bbce-bf53f5124720" (UID: "6919bef5-1c5f-4605-bbce-bf53f5124720"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:48:13.294460 kubelet[2076]: I0317 18:48:13.294412 2076 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6919bef5-1c5f-4605-bbce-bf53f5124720-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "6919bef5-1c5f-4605-bbce-bf53f5124720" (UID: "6919bef5-1c5f-4605-bbce-bf53f5124720"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:48:13.294631 kubelet[2076]: I0317 18:48:13.294550 2076 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6919bef5-1c5f-4605-bbce-bf53f5124720-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "6919bef5-1c5f-4605-bbce-bf53f5124720" (UID: "6919bef5-1c5f-4605-bbce-bf53f5124720"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 17 18:48:13.296045 kubelet[2076]: I0317 18:48:13.295991 2076 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0d16c84-2526-4ff9-8c33-9061d637f468-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e0d16c84-2526-4ff9-8c33-9061d637f468" (UID: "e0d16c84-2526-4ff9-8c33-9061d637f468"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 17 18:48:13.299221 kubelet[2076]: I0317 18:48:13.299179 2076 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6919bef5-1c5f-4605-bbce-bf53f5124720-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "6919bef5-1c5f-4605-bbce-bf53f5124720" (UID: "6919bef5-1c5f-4605-bbce-bf53f5124720"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 18:48:13.300651 kubelet[2076]: I0317 18:48:13.300608 2076 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6919bef5-1c5f-4605-bbce-bf53f5124720-kube-api-access-77spc" (OuterVolumeSpecName: "kube-api-access-77spc") pod "6919bef5-1c5f-4605-bbce-bf53f5124720" (UID: "6919bef5-1c5f-4605-bbce-bf53f5124720"). InnerVolumeSpecName "kube-api-access-77spc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 18:48:13.301854 kubelet[2076]: I0317 18:48:13.301806 2076 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0d16c84-2526-4ff9-8c33-9061d637f468-kube-api-access-rql75" (OuterVolumeSpecName: "kube-api-access-rql75") pod "e0d16c84-2526-4ff9-8c33-9061d637f468" (UID: "e0d16c84-2526-4ff9-8c33-9061d637f468"). InnerVolumeSpecName "kube-api-access-rql75". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 18:48:13.385063 kubelet[2076]: I0317 18:48:13.384889 2076 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6919bef5-1c5f-4605-bbce-bf53f5124720-cilium-cgroup\") on node \"ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal\" DevicePath \"\"" Mar 17 18:48:13.385063 kubelet[2076]: I0317 18:48:13.384935 2076 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6919bef5-1c5f-4605-bbce-bf53f5124720-hubble-tls\") on node \"ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal\" DevicePath \"\"" Mar 17 18:48:13.385063 kubelet[2076]: I0317 18:48:13.384955 2076 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e0d16c84-2526-4ff9-8c33-9061d637f468-cilium-config-path\") on node \"ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal\" DevicePath \"\"" Mar 17 18:48:13.385063 kubelet[2076]: I0317 18:48:13.384970 2076 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6919bef5-1c5f-4605-bbce-bf53f5124720-etc-cni-netd\") on node \"ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal\" DevicePath \"\"" Mar 17 18:48:13.385063 kubelet[2076]: I0317 18:48:13.384986 2076 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6919bef5-1c5f-4605-bbce-bf53f5124720-hostproc\") on node \"ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal\" DevicePath \"\"" Mar 17 18:48:13.385063 kubelet[2076]: I0317 18:48:13.385002 2076 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6919bef5-1c5f-4605-bbce-bf53f5124720-cilium-config-path\") on node \"ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal\" DevicePath \"\"" Mar 17 18:48:13.385063 
kubelet[2076]: I0317 18:48:13.385017 2076 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6919bef5-1c5f-4605-bbce-bf53f5124720-clustermesh-secrets\") on node \"ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal\" DevicePath \"\"" Mar 17 18:48:13.386754 kubelet[2076]: I0317 18:48:13.386718 2076 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6919bef5-1c5f-4605-bbce-bf53f5124720-lib-modules\") on node \"ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal\" DevicePath \"\"" Mar 17 18:48:13.386964 kubelet[2076]: I0317 18:48:13.386942 2076 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6919bef5-1c5f-4605-bbce-bf53f5124720-cni-path\") on node \"ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal\" DevicePath \"\"" Mar 17 18:48:13.387112 kubelet[2076]: I0317 18:48:13.387093 2076 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-77spc\" (UniqueName: \"kubernetes.io/projected/6919bef5-1c5f-4605-bbce-bf53f5124720-kube-api-access-77spc\") on node \"ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal\" DevicePath \"\"" Mar 17 18:48:13.387269 kubelet[2076]: I0317 18:48:13.387246 2076 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6919bef5-1c5f-4605-bbce-bf53f5124720-host-proc-sys-kernel\") on node \"ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal\" DevicePath \"\"" Mar 17 18:48:13.387443 kubelet[2076]: I0317 18:48:13.387422 2076 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-rql75\" (UniqueName: \"kubernetes.io/projected/e0d16c84-2526-4ff9-8c33-9061d637f468-kube-api-access-rql75\") on node \"ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal\" DevicePath \"\"" Mar 17 18:48:13.387600 kubelet[2076]: I0317 18:48:13.387581 2076 
reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6919bef5-1c5f-4605-bbce-bf53f5124720-xtables-lock\") on node \"ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal\" DevicePath \"\"" Mar 17 18:48:13.387763 kubelet[2076]: I0317 18:48:13.387743 2076 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6919bef5-1c5f-4605-bbce-bf53f5124720-cilium-run\") on node \"ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal\" DevicePath \"\"" Mar 17 18:48:13.387918 kubelet[2076]: I0317 18:48:13.387899 2076 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6919bef5-1c5f-4605-bbce-bf53f5124720-bpf-maps\") on node \"ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal\" DevicePath \"\"" Mar 17 18:48:13.530827 kubelet[2076]: I0317 18:48:13.530789 2076 scope.go:117] "RemoveContainer" containerID="e51934d42c051d1e6f6f581aa57f503c3d0376925848eecbf3a3a4ed5483090c" Mar 17 18:48:13.535417 env[1221]: time="2025-03-17T18:48:13.535357786Z" level=info msg="RemoveContainer for \"e51934d42c051d1e6f6f581aa57f503c3d0376925848eecbf3a3a4ed5483090c\"" Mar 17 18:48:13.538858 systemd[1]: Removed slice kubepods-burstable-pod6919bef5_1c5f_4605_bbce_bf53f5124720.slice. Mar 17 18:48:13.539041 systemd[1]: kubepods-burstable-pod6919bef5_1c5f_4605_bbce_bf53f5124720.slice: Consumed 10.441s CPU time. Mar 17 18:48:13.546334 env[1221]: time="2025-03-17T18:48:13.546228367Z" level=info msg="RemoveContainer for \"e51934d42c051d1e6f6f581aa57f503c3d0376925848eecbf3a3a4ed5483090c\" returns successfully" Mar 17 18:48:13.555449 kubelet[2076]: I0317 18:48:13.555414 2076 scope.go:117] "RemoveContainer" containerID="b027ccbe7e4a1e511d56f7f2e5ce0fe30875058be94ddb3b000fee541306524c" Mar 17 18:48:13.555545 systemd[1]: Removed slice kubepods-besteffort-pode0d16c84_2526_4ff9_8c33_9061d637f468.slice. 
Mar 17 18:48:13.561020 env[1221]: time="2025-03-17T18:48:13.560959453Z" level=info msg="RemoveContainer for \"b027ccbe7e4a1e511d56f7f2e5ce0fe30875058be94ddb3b000fee541306524c\"" Mar 17 18:48:13.567068 env[1221]: time="2025-03-17T18:48:13.567000685Z" level=info msg="RemoveContainer for \"b027ccbe7e4a1e511d56f7f2e5ce0fe30875058be94ddb3b000fee541306524c\" returns successfully" Mar 17 18:48:13.569350 kubelet[2076]: I0317 18:48:13.569298 2076 scope.go:117] "RemoveContainer" containerID="e67af14b33aea1b313d8096225f25d2403fd04955d90351a2499fb5205d1bd01" Mar 17 18:48:13.572397 env[1221]: time="2025-03-17T18:48:13.572241923Z" level=info msg="RemoveContainer for \"e67af14b33aea1b313d8096225f25d2403fd04955d90351a2499fb5205d1bd01\"" Mar 17 18:48:13.576615 env[1221]: time="2025-03-17T18:48:13.576543644Z" level=info msg="RemoveContainer for \"e67af14b33aea1b313d8096225f25d2403fd04955d90351a2499fb5205d1bd01\" returns successfully" Mar 17 18:48:13.576903 kubelet[2076]: I0317 18:48:13.576856 2076 scope.go:117] "RemoveContainer" containerID="23e758fbda24046c64415af2ec4676715d98556a2930f2871a1c4ccad65fe802" Mar 17 18:48:13.579761 env[1221]: time="2025-03-17T18:48:13.578813797Z" level=info msg="RemoveContainer for \"23e758fbda24046c64415af2ec4676715d98556a2930f2871a1c4ccad65fe802\"" Mar 17 18:48:13.584977 env[1221]: time="2025-03-17T18:48:13.584911187Z" level=info msg="RemoveContainer for \"23e758fbda24046c64415af2ec4676715d98556a2930f2871a1c4ccad65fe802\" returns successfully" Mar 17 18:48:13.585233 kubelet[2076]: I0317 18:48:13.585187 2076 scope.go:117] "RemoveContainer" containerID="535f7b5c149b5c3a3e6817d9384b1ac436285989272a0c71c1f7a173fbcbedc8" Mar 17 18:48:13.586901 env[1221]: time="2025-03-17T18:48:13.586834128Z" level=info msg="RemoveContainer for \"535f7b5c149b5c3a3e6817d9384b1ac436285989272a0c71c1f7a173fbcbedc8\"" Mar 17 18:48:13.590941 env[1221]: time="2025-03-17T18:48:13.590870929Z" level=info msg="RemoveContainer for 
\"535f7b5c149b5c3a3e6817d9384b1ac436285989272a0c71c1f7a173fbcbedc8\" returns successfully" Mar 17 18:48:13.591320 kubelet[2076]: I0317 18:48:13.591285 2076 scope.go:117] "RemoveContainer" containerID="e51934d42c051d1e6f6f581aa57f503c3d0376925848eecbf3a3a4ed5483090c" Mar 17 18:48:13.591948 env[1221]: time="2025-03-17T18:48:13.591834698Z" level=error msg="ContainerStatus for \"e51934d42c051d1e6f6f581aa57f503c3d0376925848eecbf3a3a4ed5483090c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e51934d42c051d1e6f6f581aa57f503c3d0376925848eecbf3a3a4ed5483090c\": not found" Mar 17 18:48:13.592246 kubelet[2076]: E0317 18:48:13.592211 2076 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e51934d42c051d1e6f6f581aa57f503c3d0376925848eecbf3a3a4ed5483090c\": not found" containerID="e51934d42c051d1e6f6f581aa57f503c3d0376925848eecbf3a3a4ed5483090c" Mar 17 18:48:13.592401 kubelet[2076]: I0317 18:48:13.592276 2076 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e51934d42c051d1e6f6f581aa57f503c3d0376925848eecbf3a3a4ed5483090c"} err="failed to get container status \"e51934d42c051d1e6f6f581aa57f503c3d0376925848eecbf3a3a4ed5483090c\": rpc error: code = NotFound desc = an error occurred when try to find container \"e51934d42c051d1e6f6f581aa57f503c3d0376925848eecbf3a3a4ed5483090c\": not found" Mar 17 18:48:13.592476 kubelet[2076]: I0317 18:48:13.592409 2076 scope.go:117] "RemoveContainer" containerID="b027ccbe7e4a1e511d56f7f2e5ce0fe30875058be94ddb3b000fee541306524c" Mar 17 18:48:13.592827 env[1221]: time="2025-03-17T18:48:13.592748700Z" level=error msg="ContainerStatus for \"b027ccbe7e4a1e511d56f7f2e5ce0fe30875058be94ddb3b000fee541306524c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"b027ccbe7e4a1e511d56f7f2e5ce0fe30875058be94ddb3b000fee541306524c\": not found" Mar 17 18:48:13.592991 kubelet[2076]: E0317 18:48:13.592958 2076 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b027ccbe7e4a1e511d56f7f2e5ce0fe30875058be94ddb3b000fee541306524c\": not found" containerID="b027ccbe7e4a1e511d56f7f2e5ce0fe30875058be94ddb3b000fee541306524c" Mar 17 18:48:13.593095 kubelet[2076]: I0317 18:48:13.592995 2076 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b027ccbe7e4a1e511d56f7f2e5ce0fe30875058be94ddb3b000fee541306524c"} err="failed to get container status \"b027ccbe7e4a1e511d56f7f2e5ce0fe30875058be94ddb3b000fee541306524c\": rpc error: code = NotFound desc = an error occurred when try to find container \"b027ccbe7e4a1e511d56f7f2e5ce0fe30875058be94ddb3b000fee541306524c\": not found" Mar 17 18:48:13.593095 kubelet[2076]: I0317 18:48:13.593025 2076 scope.go:117] "RemoveContainer" containerID="e67af14b33aea1b313d8096225f25d2403fd04955d90351a2499fb5205d1bd01" Mar 17 18:48:13.593366 env[1221]: time="2025-03-17T18:48:13.593281207Z" level=error msg="ContainerStatus for \"e67af14b33aea1b313d8096225f25d2403fd04955d90351a2499fb5205d1bd01\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e67af14b33aea1b313d8096225f25d2403fd04955d90351a2499fb5205d1bd01\": not found" Mar 17 18:48:13.593512 kubelet[2076]: E0317 18:48:13.593480 2076 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e67af14b33aea1b313d8096225f25d2403fd04955d90351a2499fb5205d1bd01\": not found" containerID="e67af14b33aea1b313d8096225f25d2403fd04955d90351a2499fb5205d1bd01" Mar 17 18:48:13.593608 kubelet[2076]: I0317 18:48:13.593518 2076 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"e67af14b33aea1b313d8096225f25d2403fd04955d90351a2499fb5205d1bd01"} err="failed to get container status \"e67af14b33aea1b313d8096225f25d2403fd04955d90351a2499fb5205d1bd01\": rpc error: code = NotFound desc = an error occurred when try to find container \"e67af14b33aea1b313d8096225f25d2403fd04955d90351a2499fb5205d1bd01\": not found" Mar 17 18:48:13.593608 kubelet[2076]: I0317 18:48:13.593545 2076 scope.go:117] "RemoveContainer" containerID="23e758fbda24046c64415af2ec4676715d98556a2930f2871a1c4ccad65fe802" Mar 17 18:48:13.593994 env[1221]: time="2025-03-17T18:48:13.593898380Z" level=error msg="ContainerStatus for \"23e758fbda24046c64415af2ec4676715d98556a2930f2871a1c4ccad65fe802\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"23e758fbda24046c64415af2ec4676715d98556a2930f2871a1c4ccad65fe802\": not found" Mar 17 18:48:13.594151 kubelet[2076]: E0317 18:48:13.594119 2076 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"23e758fbda24046c64415af2ec4676715d98556a2930f2871a1c4ccad65fe802\": not found" containerID="23e758fbda24046c64415af2ec4676715d98556a2930f2871a1c4ccad65fe802" Mar 17 18:48:13.594295 kubelet[2076]: I0317 18:48:13.594156 2076 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"23e758fbda24046c64415af2ec4676715d98556a2930f2871a1c4ccad65fe802"} err="failed to get container status \"23e758fbda24046c64415af2ec4676715d98556a2930f2871a1c4ccad65fe802\": rpc error: code = NotFound desc = an error occurred when try to find container \"23e758fbda24046c64415af2ec4676715d98556a2930f2871a1c4ccad65fe802\": not found" Mar 17 18:48:13.594295 kubelet[2076]: I0317 18:48:13.594183 2076 scope.go:117] "RemoveContainer" containerID="535f7b5c149b5c3a3e6817d9384b1ac436285989272a0c71c1f7a173fbcbedc8" Mar 17 18:48:13.594515 env[1221]: 
time="2025-03-17T18:48:13.594443442Z" level=error msg="ContainerStatus for \"535f7b5c149b5c3a3e6817d9384b1ac436285989272a0c71c1f7a173fbcbedc8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"535f7b5c149b5c3a3e6817d9384b1ac436285989272a0c71c1f7a173fbcbedc8\": not found" Mar 17 18:48:13.594717 kubelet[2076]: E0317 18:48:13.594654 2076 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"535f7b5c149b5c3a3e6817d9384b1ac436285989272a0c71c1f7a173fbcbedc8\": not found" containerID="535f7b5c149b5c3a3e6817d9384b1ac436285989272a0c71c1f7a173fbcbedc8" Mar 17 18:48:13.594830 kubelet[2076]: I0317 18:48:13.594722 2076 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"535f7b5c149b5c3a3e6817d9384b1ac436285989272a0c71c1f7a173fbcbedc8"} err="failed to get container status \"535f7b5c149b5c3a3e6817d9384b1ac436285989272a0c71c1f7a173fbcbedc8\": rpc error: code = NotFound desc = an error occurred when try to find container \"535f7b5c149b5c3a3e6817d9384b1ac436285989272a0c71c1f7a173fbcbedc8\": not found" Mar 17 18:48:13.594830 kubelet[2076]: I0317 18:48:13.594747 2076 scope.go:117] "RemoveContainer" containerID="3018931a4cfd1272c25e2a7f6106c64f6325c9c694db64342596ba469278dd92" Mar 17 18:48:13.596169 env[1221]: time="2025-03-17T18:48:13.596129124Z" level=info msg="RemoveContainer for \"3018931a4cfd1272c25e2a7f6106c64f6325c9c694db64342596ba469278dd92\"" Mar 17 18:48:13.600574 env[1221]: time="2025-03-17T18:48:13.600516077Z" level=info msg="RemoveContainer for \"3018931a4cfd1272c25e2a7f6106c64f6325c9c694db64342596ba469278dd92\" returns successfully" Mar 17 18:48:13.601022 kubelet[2076]: I0317 18:48:13.600951 2076 scope.go:117] "RemoveContainer" containerID="3018931a4cfd1272c25e2a7f6106c64f6325c9c694db64342596ba469278dd92" Mar 17 18:48:13.601439 env[1221]: time="2025-03-17T18:48:13.601358562Z" 
level=error msg="ContainerStatus for \"3018931a4cfd1272c25e2a7f6106c64f6325c9c694db64342596ba469278dd92\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3018931a4cfd1272c25e2a7f6106c64f6325c9c694db64342596ba469278dd92\": not found" Mar 17 18:48:13.601797 kubelet[2076]: E0317 18:48:13.601753 2076 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3018931a4cfd1272c25e2a7f6106c64f6325c9c694db64342596ba469278dd92\": not found" containerID="3018931a4cfd1272c25e2a7f6106c64f6325c9c694db64342596ba469278dd92" Mar 17 18:48:13.601917 kubelet[2076]: I0317 18:48:13.601795 2076 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3018931a4cfd1272c25e2a7f6106c64f6325c9c694db64342596ba469278dd92"} err="failed to get container status \"3018931a4cfd1272c25e2a7f6106c64f6325c9c694db64342596ba469278dd92\": rpc error: code = NotFound desc = an error occurred when try to find container \"3018931a4cfd1272c25e2a7f6106c64f6325c9c694db64342596ba469278dd92\": not found" Mar 17 18:48:13.872128 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a7ca1a2de015f3f22913b13a0382b50523599a32401550fc3cbb83ae5731e51f-rootfs.mount: Deactivated successfully. Mar 17 18:48:13.872317 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a7ca1a2de015f3f22913b13a0382b50523599a32401550fc3cbb83ae5731e51f-shm.mount: Deactivated successfully. Mar 17 18:48:13.872436 systemd[1]: var-lib-kubelet-pods-6919bef5\x2d1c5f\x2d4605\x2dbbce\x2dbf53f5124720-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d77spc.mount: Deactivated successfully. Mar 17 18:48:13.872543 systemd[1]: var-lib-kubelet-pods-e0d16c84\x2d2526\x2d4ff9\x2d8c33\x2d9061d637f468-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drql75.mount: Deactivated successfully. 
Mar 17 18:48:13.872651 systemd[1]: var-lib-kubelet-pods-6919bef5\x2d1c5f\x2d4605\x2dbbce\x2dbf53f5124720-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 17 18:48:13.872796 systemd[1]: var-lib-kubelet-pods-6919bef5\x2d1c5f\x2d4605\x2dbbce\x2dbf53f5124720-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 17 18:48:14.159352 kubelet[2076]: I0317 18:48:14.158440 2076 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6919bef5-1c5f-4605-bbce-bf53f5124720" path="/var/lib/kubelet/pods/6919bef5-1c5f-4605-bbce-bf53f5124720/volumes" Mar 17 18:48:14.159734 kubelet[2076]: I0317 18:48:14.159669 2076 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0d16c84-2526-4ff9-8c33-9061d637f468" path="/var/lib/kubelet/pods/e0d16c84-2526-4ff9-8c33-9061d637f468/volumes" Mar 17 18:48:14.828104 sshd[3625]: pam_unix(sshd:session): session closed for user core Mar 17 18:48:14.832981 systemd[1]: sshd@20-10.128.0.78:22-139.178.89.65:37402.service: Deactivated successfully. Mar 17 18:48:14.834197 systemd[1]: session-21.scope: Deactivated successfully. Mar 17 18:48:14.834434 systemd[1]: session-21.scope: Consumed 1.607s CPU time. Mar 17 18:48:14.835297 systemd-logind[1207]: Session 21 logged out. Waiting for processes to exit. Mar 17 18:48:14.836670 systemd-logind[1207]: Removed session 21. Mar 17 18:48:14.873758 systemd[1]: Started sshd@21-10.128.0.78:22-139.178.89.65:58534.service. Mar 17 18:48:15.164214 sshd[3795]: Accepted publickey for core from 139.178.89.65 port 58534 ssh2: RSA SHA256:MwXPWHAmIHbbkjBOl9game4w0Y2Rjfi7lGZx9rtQRJk Mar 17 18:48:15.166342 sshd[3795]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:48:15.173749 systemd[1]: Started session-22.scope. Mar 17 18:48:15.174667 systemd-logind[1207]: New session 22 of user core. 
Mar 17 18:48:15.289402 kubelet[2076]: E0317 18:48:15.289316 2076 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 18:48:15.942357 kubelet[2076]: I0317 18:48:15.942302 2076 topology_manager.go:215] "Topology Admit Handler" podUID="0240e0af-e26f-435f-a345-8bb72882f042" podNamespace="kube-system" podName="cilium-p4vbv" Mar 17 18:48:15.942793 kubelet[2076]: E0317 18:48:15.942766 2076 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6919bef5-1c5f-4605-bbce-bf53f5124720" containerName="mount-bpf-fs" Mar 17 18:48:15.942989 kubelet[2076]: E0317 18:48:15.942966 2076 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6919bef5-1c5f-4605-bbce-bf53f5124720" containerName="clean-cilium-state" Mar 17 18:48:15.943121 kubelet[2076]: E0317 18:48:15.943103 2076 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e0d16c84-2526-4ff9-8c33-9061d637f468" containerName="cilium-operator" Mar 17 18:48:15.943241 kubelet[2076]: E0317 18:48:15.943223 2076 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6919bef5-1c5f-4605-bbce-bf53f5124720" containerName="mount-cgroup" Mar 17 18:48:15.943340 kubelet[2076]: E0317 18:48:15.943324 2076 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6919bef5-1c5f-4605-bbce-bf53f5124720" containerName="apply-sysctl-overwrites" Mar 17 18:48:15.943451 kubelet[2076]: E0317 18:48:15.943434 2076 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6919bef5-1c5f-4605-bbce-bf53f5124720" containerName="cilium-agent" Mar 17 18:48:15.943622 kubelet[2076]: I0317 18:48:15.943602 2076 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0d16c84-2526-4ff9-8c33-9061d637f468" containerName="cilium-operator" Mar 17 18:48:15.943773 kubelet[2076]: I0317 18:48:15.943754 2076 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="6919bef5-1c5f-4605-bbce-bf53f5124720" containerName="cilium-agent" Mar 17 18:48:15.953981 systemd[1]: Created slice kubepods-burstable-pod0240e0af_e26f_435f_a345_8bb72882f042.slice. Mar 17 18:48:15.959383 sshd[3795]: pam_unix(sshd:session): session closed for user core Mar 17 18:48:15.966453 systemd-logind[1207]: Session 22 logged out. Waiting for processes to exit. Mar 17 18:48:15.966814 systemd[1]: sshd@21-10.128.0.78:22-139.178.89.65:58534.service: Deactivated successfully. Mar 17 18:48:15.968063 systemd[1]: session-22.scope: Deactivated successfully. Mar 17 18:48:15.970561 systemd-logind[1207]: Removed session 22. Mar 17 18:48:15.971413 kubelet[2076]: W0317 18:48:15.971378 2076 reflector.go:547] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal' and this object Mar 17 18:48:15.971548 kubelet[2076]: E0317 18:48:15.971432 2076 reflector.go:150] object-"kube-system"/"cilium-ipsec-keys": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal' and this object Mar 17 18:48:15.972162 kubelet[2076]: W0317 18:48:15.971742 2076 reflector.go:547] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 
'ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal' and this object Mar 17 18:48:15.972162 kubelet[2076]: E0317 18:48:15.971781 2076 reflector.go:150] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal' and this object Mar 17 18:48:15.972357 kubelet[2076]: W0317 18:48:15.972169 2076 reflector.go:547] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal' and this object Mar 17 18:48:15.972357 kubelet[2076]: E0317 18:48:15.972205 2076 reflector.go:150] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal' and this object Mar 17 18:48:15.972503 kubelet[2076]: W0317 18:48:15.972474 2076 reflector.go:547] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal' and this object Mar 
17 18:48:15.972598 kubelet[2076]: E0317 18:48:15.972509 2076 reflector.go:150] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal' and this object Mar 17 18:48:16.012189 systemd[1]: Started sshd@22-10.128.0.78:22-139.178.89.65:58538.service. Mar 17 18:48:16.105699 kubelet[2076]: I0317 18:48:16.105537 2076 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0240e0af-e26f-435f-a345-8bb72882f042-hostproc\") pod \"cilium-p4vbv\" (UID: \"0240e0af-e26f-435f-a345-8bb72882f042\") " pod="kube-system/cilium-p4vbv" Mar 17 18:48:16.105699 kubelet[2076]: I0317 18:48:16.105641 2076 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0240e0af-e26f-435f-a345-8bb72882f042-lib-modules\") pod \"cilium-p4vbv\" (UID: \"0240e0af-e26f-435f-a345-8bb72882f042\") " pod="kube-system/cilium-p4vbv" Mar 17 18:48:16.105699 kubelet[2076]: I0317 18:48:16.105695 2076 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fx92s\" (UniqueName: \"kubernetes.io/projected/0240e0af-e26f-435f-a345-8bb72882f042-kube-api-access-fx92s\") pod \"cilium-p4vbv\" (UID: \"0240e0af-e26f-435f-a345-8bb72882f042\") " pod="kube-system/cilium-p4vbv" Mar 17 18:48:16.106179 kubelet[2076]: I0317 18:48:16.105735 2076 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0240e0af-e26f-435f-a345-8bb72882f042-hubble-tls\") pod \"cilium-p4vbv\" (UID: 
\"0240e0af-e26f-435f-a345-8bb72882f042\") " pod="kube-system/cilium-p4vbv" Mar 17 18:48:16.106179 kubelet[2076]: I0317 18:48:16.105765 2076 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0240e0af-e26f-435f-a345-8bb72882f042-cilium-run\") pod \"cilium-p4vbv\" (UID: \"0240e0af-e26f-435f-a345-8bb72882f042\") " pod="kube-system/cilium-p4vbv" Mar 17 18:48:16.106179 kubelet[2076]: I0317 18:48:16.105795 2076 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0240e0af-e26f-435f-a345-8bb72882f042-cni-path\") pod \"cilium-p4vbv\" (UID: \"0240e0af-e26f-435f-a345-8bb72882f042\") " pod="kube-system/cilium-p4vbv" Mar 17 18:48:16.106179 kubelet[2076]: I0317 18:48:16.105832 2076 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0240e0af-e26f-435f-a345-8bb72882f042-xtables-lock\") pod \"cilium-p4vbv\" (UID: \"0240e0af-e26f-435f-a345-8bb72882f042\") " pod="kube-system/cilium-p4vbv" Mar 17 18:48:16.106179 kubelet[2076]: I0317 18:48:16.105861 2076 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0240e0af-e26f-435f-a345-8bb72882f042-clustermesh-secrets\") pod \"cilium-p4vbv\" (UID: \"0240e0af-e26f-435f-a345-8bb72882f042\") " pod="kube-system/cilium-p4vbv" Mar 17 18:48:16.106179 kubelet[2076]: I0317 18:48:16.105900 2076 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0240e0af-e26f-435f-a345-8bb72882f042-host-proc-sys-net\") pod \"cilium-p4vbv\" (UID: \"0240e0af-e26f-435f-a345-8bb72882f042\") " pod="kube-system/cilium-p4vbv" Mar 17 18:48:16.106460 kubelet[2076]: I0317 18:48:16.105925 
2076 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0240e0af-e26f-435f-a345-8bb72882f042-bpf-maps\") pod \"cilium-p4vbv\" (UID: \"0240e0af-e26f-435f-a345-8bb72882f042\") " pod="kube-system/cilium-p4vbv" Mar 17 18:48:16.106460 kubelet[2076]: I0317 18:48:16.105953 2076 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0240e0af-e26f-435f-a345-8bb72882f042-cilium-ipsec-secrets\") pod \"cilium-p4vbv\" (UID: \"0240e0af-e26f-435f-a345-8bb72882f042\") " pod="kube-system/cilium-p4vbv" Mar 17 18:48:16.106460 kubelet[2076]: I0317 18:48:16.105978 2076 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0240e0af-e26f-435f-a345-8bb72882f042-host-proc-sys-kernel\") pod \"cilium-p4vbv\" (UID: \"0240e0af-e26f-435f-a345-8bb72882f042\") " pod="kube-system/cilium-p4vbv" Mar 17 18:48:16.106460 kubelet[2076]: I0317 18:48:16.106006 2076 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0240e0af-e26f-435f-a345-8bb72882f042-cilium-cgroup\") pod \"cilium-p4vbv\" (UID: \"0240e0af-e26f-435f-a345-8bb72882f042\") " pod="kube-system/cilium-p4vbv" Mar 17 18:48:16.106460 kubelet[2076]: I0317 18:48:16.106031 2076 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0240e0af-e26f-435f-a345-8bb72882f042-etc-cni-netd\") pod \"cilium-p4vbv\" (UID: \"0240e0af-e26f-435f-a345-8bb72882f042\") " pod="kube-system/cilium-p4vbv" Mar 17 18:48:16.106460 kubelet[2076]: I0317 18:48:16.106061 2076 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0240e0af-e26f-435f-a345-8bb72882f042-cilium-config-path\") pod \"cilium-p4vbv\" (UID: \"0240e0af-e26f-435f-a345-8bb72882f042\") " pod="kube-system/cilium-p4vbv" Mar 17 18:48:16.311340 sshd[3806]: Accepted publickey for core from 139.178.89.65 port 58538 ssh2: RSA SHA256:MwXPWHAmIHbbkjBOl9game4w0Y2Rjfi7lGZx9rtQRJk Mar 17 18:48:16.313752 sshd[3806]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:48:16.322153 systemd[1]: Started session-23.scope. Mar 17 18:48:16.323059 systemd-logind[1207]: New session 23 of user core. Mar 17 18:48:16.608051 kubelet[2076]: E0317 18:48:16.607864 2076 pod_workers.go:1298] "Error syncing pod, skipping" err="unmounted volumes=[cilium-config-path cilium-ipsec-secrets clustermesh-secrets hubble-tls], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-p4vbv" podUID="0240e0af-e26f-435f-a345-8bb72882f042" Mar 17 18:48:16.614624 sshd[3806]: pam_unix(sshd:session): session closed for user core Mar 17 18:48:16.620364 systemd-logind[1207]: Session 23 logged out. Waiting for processes to exit. Mar 17 18:48:16.622199 systemd[1]: sshd@22-10.128.0.78:22-139.178.89.65:58538.service: Deactivated successfully. Mar 17 18:48:16.623878 systemd[1]: session-23.scope: Deactivated successfully. Mar 17 18:48:16.627529 systemd-logind[1207]: Removed session 23. Mar 17 18:48:16.662888 systemd[1]: Started sshd@23-10.128.0.78:22-139.178.89.65:58554.service. Mar 17 18:48:16.954237 sshd[3818]: Accepted publickey for core from 139.178.89.65 port 58554 ssh2: RSA SHA256:MwXPWHAmIHbbkjBOl9game4w0Y2Rjfi7lGZx9rtQRJk Mar 17 18:48:16.956358 sshd[3818]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:48:16.963790 systemd-logind[1207]: New session 24 of user core. Mar 17 18:48:16.963816 systemd[1]: Started session-24.scope. 
Mar 17 18:48:17.207992 kubelet[2076]: E0317 18:48:17.207778 2076 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Mar 17 18:48:17.208285 kubelet[2076]: E0317 18:48:17.208082 2076 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0240e0af-e26f-435f-a345-8bb72882f042-cilium-config-path podName:0240e0af-e26f-435f-a345-8bb72882f042 nodeName:}" failed. No retries permitted until 2025-03-17 18:48:17.707969671 +0000 UTC m=+117.721420187 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/0240e0af-e26f-435f-a345-8bb72882f042-cilium-config-path") pod "cilium-p4vbv" (UID: "0240e0af-e26f-435f-a345-8bb72882f042") : failed to sync configmap cache: timed out waiting for the condition Mar 17 18:48:17.208514 kubelet[2076]: E0317 18:48:17.208477 2076 secret.go:194] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition Mar 17 18:48:17.208627 kubelet[2076]: E0317 18:48:17.208594 2076 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0240e0af-e26f-435f-a345-8bb72882f042-clustermesh-secrets podName:0240e0af-e26f-435f-a345-8bb72882f042 nodeName:}" failed. No retries permitted until 2025-03-17 18:48:17.708562711 +0000 UTC m=+117.722013217 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/0240e0af-e26f-435f-a345-8bb72882f042-clustermesh-secrets") pod "cilium-p4vbv" (UID: "0240e0af-e26f-435f-a345-8bb72882f042") : failed to sync secret cache: timed out waiting for the condition Mar 17 18:48:17.208783 kubelet[2076]: E0317 18:48:17.208725 2076 projected.go:269] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition Mar 17 18:48:17.208783 kubelet[2076]: E0317 18:48:17.208745 2076 projected.go:200] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-p4vbv: failed to sync secret cache: timed out waiting for the condition Mar 17 18:48:17.208919 kubelet[2076]: E0317 18:48:17.208814 2076 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0240e0af-e26f-435f-a345-8bb72882f042-hubble-tls podName:0240e0af-e26f-435f-a345-8bb72882f042 nodeName:}" failed. No retries permitted until 2025-03-17 18:48:17.70879344 +0000 UTC m=+117.722243931 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/0240e0af-e26f-435f-a345-8bb72882f042-hubble-tls") pod "cilium-p4vbv" (UID: "0240e0af-e26f-435f-a345-8bb72882f042") : failed to sync secret cache: timed out waiting for the condition Mar 17 18:48:17.718895 kubelet[2076]: I0317 18:48:17.718830 2076 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0240e0af-e26f-435f-a345-8bb72882f042-hostproc\") pod \"0240e0af-e26f-435f-a345-8bb72882f042\" (UID: \"0240e0af-e26f-435f-a345-8bb72882f042\") " Mar 17 18:48:17.718895 kubelet[2076]: I0317 18:48:17.718886 2076 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0240e0af-e26f-435f-a345-8bb72882f042-cni-path\") pod \"0240e0af-e26f-435f-a345-8bb72882f042\" (UID: \"0240e0af-e26f-435f-a345-8bb72882f042\") " Mar 17 18:48:17.718895 kubelet[2076]: I0317 18:48:17.718917 2076 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0240e0af-e26f-435f-a345-8bb72882f042-cilium-cgroup\") pod \"0240e0af-e26f-435f-a345-8bb72882f042\" (UID: \"0240e0af-e26f-435f-a345-8bb72882f042\") " Mar 17 18:48:17.719625 kubelet[2076]: I0317 18:48:17.718941 2076 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0240e0af-e26f-435f-a345-8bb72882f042-cilium-run\") pod \"0240e0af-e26f-435f-a345-8bb72882f042\" (UID: \"0240e0af-e26f-435f-a345-8bb72882f042\") " Mar 17 18:48:17.719625 kubelet[2076]: I0317 18:48:17.718964 2076 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0240e0af-e26f-435f-a345-8bb72882f042-xtables-lock\") pod \"0240e0af-e26f-435f-a345-8bb72882f042\" (UID: \"0240e0af-e26f-435f-a345-8bb72882f042\") " Mar 17 
18:48:17.719625 kubelet[2076]: I0317 18:48:17.718987 2076 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0240e0af-e26f-435f-a345-8bb72882f042-host-proc-sys-net\") pod \"0240e0af-e26f-435f-a345-8bb72882f042\" (UID: \"0240e0af-e26f-435f-a345-8bb72882f042\") " Mar 17 18:48:17.719625 kubelet[2076]: I0317 18:48:17.719029 2076 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0240e0af-e26f-435f-a345-8bb72882f042-cilium-ipsec-secrets\") pod \"0240e0af-e26f-435f-a345-8bb72882f042\" (UID: \"0240e0af-e26f-435f-a345-8bb72882f042\") " Mar 17 18:48:17.719625 kubelet[2076]: I0317 18:48:17.719065 2076 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0240e0af-e26f-435f-a345-8bb72882f042-etc-cni-netd\") pod \"0240e0af-e26f-435f-a345-8bb72882f042\" (UID: \"0240e0af-e26f-435f-a345-8bb72882f042\") " Mar 17 18:48:17.719625 kubelet[2076]: I0317 18:48:17.719099 2076 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0240e0af-e26f-435f-a345-8bb72882f042-lib-modules\") pod \"0240e0af-e26f-435f-a345-8bb72882f042\" (UID: \"0240e0af-e26f-435f-a345-8bb72882f042\") " Mar 17 18:48:17.720003 kubelet[2076]: I0317 18:48:17.719125 2076 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0240e0af-e26f-435f-a345-8bb72882f042-bpf-maps\") pod \"0240e0af-e26f-435f-a345-8bb72882f042\" (UID: \"0240e0af-e26f-435f-a345-8bb72882f042\") " Mar 17 18:48:17.720003 kubelet[2076]: I0317 18:48:17.719154 2076 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/0240e0af-e26f-435f-a345-8bb72882f042-host-proc-sys-kernel\") pod \"0240e0af-e26f-435f-a345-8bb72882f042\" (UID: \"0240e0af-e26f-435f-a345-8bb72882f042\") " Mar 17 18:48:17.720003 kubelet[2076]: I0317 18:48:17.719206 2076 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fx92s\" (UniqueName: \"kubernetes.io/projected/0240e0af-e26f-435f-a345-8bb72882f042-kube-api-access-fx92s\") pod \"0240e0af-e26f-435f-a345-8bb72882f042\" (UID: \"0240e0af-e26f-435f-a345-8bb72882f042\") " Mar 17 18:48:17.737216 kubelet[2076]: I0317 18:48:17.737153 2076 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0240e0af-e26f-435f-a345-8bb72882f042-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "0240e0af-e26f-435f-a345-8bb72882f042" (UID: "0240e0af-e26f-435f-a345-8bb72882f042"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 17 18:48:17.737393 kubelet[2076]: I0317 18:48:17.737248 2076 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0240e0af-e26f-435f-a345-8bb72882f042-hostproc" (OuterVolumeSpecName: "hostproc") pod "0240e0af-e26f-435f-a345-8bb72882f042" (UID: "0240e0af-e26f-435f-a345-8bb72882f042"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:48:17.737393 kubelet[2076]: I0317 18:48:17.737275 2076 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0240e0af-e26f-435f-a345-8bb72882f042-cni-path" (OuterVolumeSpecName: "cni-path") pod "0240e0af-e26f-435f-a345-8bb72882f042" (UID: "0240e0af-e26f-435f-a345-8bb72882f042"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:48:17.737393 kubelet[2076]: I0317 18:48:17.737296 2076 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0240e0af-e26f-435f-a345-8bb72882f042-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "0240e0af-e26f-435f-a345-8bb72882f042" (UID: "0240e0af-e26f-435f-a345-8bb72882f042"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:48:17.737393 kubelet[2076]: I0317 18:48:17.737322 2076 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0240e0af-e26f-435f-a345-8bb72882f042-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "0240e0af-e26f-435f-a345-8bb72882f042" (UID: "0240e0af-e26f-435f-a345-8bb72882f042"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:48:17.737393 kubelet[2076]: I0317 18:48:17.737343 2076 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0240e0af-e26f-435f-a345-8bb72882f042-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "0240e0af-e26f-435f-a345-8bb72882f042" (UID: "0240e0af-e26f-435f-a345-8bb72882f042"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:48:17.737704 kubelet[2076]: I0317 18:48:17.737364 2076 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0240e0af-e26f-435f-a345-8bb72882f042-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "0240e0af-e26f-435f-a345-8bb72882f042" (UID: "0240e0af-e26f-435f-a345-8bb72882f042"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:48:17.737704 kubelet[2076]: I0317 18:48:17.737393 2076 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0240e0af-e26f-435f-a345-8bb72882f042-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "0240e0af-e26f-435f-a345-8bb72882f042" (UID: "0240e0af-e26f-435f-a345-8bb72882f042"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:48:17.737704 kubelet[2076]: I0317 18:48:17.737426 2076 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0240e0af-e26f-435f-a345-8bb72882f042-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "0240e0af-e26f-435f-a345-8bb72882f042" (UID: "0240e0af-e26f-435f-a345-8bb72882f042"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:48:17.737704 kubelet[2076]: I0317 18:48:17.737452 2076 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0240e0af-e26f-435f-a345-8bb72882f042-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "0240e0af-e26f-435f-a345-8bb72882f042" (UID: "0240e0af-e26f-435f-a345-8bb72882f042"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:48:17.737704 kubelet[2076]: I0317 18:48:17.737475 2076 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0240e0af-e26f-435f-a345-8bb72882f042-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "0240e0af-e26f-435f-a345-8bb72882f042" (UID: "0240e0af-e26f-435f-a345-8bb72882f042"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:48:17.742067 systemd[1]: var-lib-kubelet-pods-0240e0af\x2de26f\x2d435f\x2da345\x2d8bb72882f042-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. 
Mar 17 18:48:17.759717 kubelet[2076]: I0317 18:48:17.756039 2076 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0240e0af-e26f-435f-a345-8bb72882f042-kube-api-access-fx92s" (OuterVolumeSpecName: "kube-api-access-fx92s") pod "0240e0af-e26f-435f-a345-8bb72882f042" (UID: "0240e0af-e26f-435f-a345-8bb72882f042"). InnerVolumeSpecName "kube-api-access-fx92s". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 18:48:17.763522 systemd[1]: var-lib-kubelet-pods-0240e0af\x2de26f\x2d435f\x2da345\x2d8bb72882f042-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfx92s.mount: Deactivated successfully. Mar 17 18:48:17.819981 kubelet[2076]: I0317 18:48:17.819908 2076 reconciler_common.go:289] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0240e0af-e26f-435f-a345-8bb72882f042-cilium-ipsec-secrets\") on node \"ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal\" DevicePath \"\"" Mar 17 18:48:17.819981 kubelet[2076]: I0317 18:48:17.819964 2076 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0240e0af-e26f-435f-a345-8bb72882f042-etc-cni-netd\") on node \"ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal\" DevicePath \"\"" Mar 17 18:48:17.819981 kubelet[2076]: I0317 18:48:17.819983 2076 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0240e0af-e26f-435f-a345-8bb72882f042-lib-modules\") on node \"ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal\" DevicePath \"\"" Mar 17 18:48:17.820294 kubelet[2076]: I0317 18:48:17.820001 2076 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0240e0af-e26f-435f-a345-8bb72882f042-bpf-maps\") on node \"ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal\" DevicePath \"\"" Mar 17 18:48:17.820294 kubelet[2076]: I0317 18:48:17.820017 2076 
reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0240e0af-e26f-435f-a345-8bb72882f042-host-proc-sys-kernel\") on node \"ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal\" DevicePath \"\"" Mar 17 18:48:17.820294 kubelet[2076]: I0317 18:48:17.820032 2076 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-fx92s\" (UniqueName: \"kubernetes.io/projected/0240e0af-e26f-435f-a345-8bb72882f042-kube-api-access-fx92s\") on node \"ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal\" DevicePath \"\"" Mar 17 18:48:17.820294 kubelet[2076]: I0317 18:48:17.820060 2076 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0240e0af-e26f-435f-a345-8bb72882f042-hostproc\") on node \"ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal\" DevicePath \"\"" Mar 17 18:48:17.820294 kubelet[2076]: I0317 18:48:17.820074 2076 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0240e0af-e26f-435f-a345-8bb72882f042-cni-path\") on node \"ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal\" DevicePath \"\"" Mar 17 18:48:17.820294 kubelet[2076]: I0317 18:48:17.820089 2076 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0240e0af-e26f-435f-a345-8bb72882f042-cilium-run\") on node \"ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal\" DevicePath \"\"" Mar 17 18:48:17.820294 kubelet[2076]: I0317 18:48:17.820102 2076 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0240e0af-e26f-435f-a345-8bb72882f042-xtables-lock\") on node \"ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal\" DevicePath \"\"" Mar 17 18:48:17.820516 kubelet[2076]: I0317 18:48:17.820116 2076 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/0240e0af-e26f-435f-a345-8bb72882f042-host-proc-sys-net\") on node \"ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal\" DevicePath \"\""
Mar 17 18:48:17.820516 kubelet[2076]: I0317 18:48:17.820129 2076 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0240e0af-e26f-435f-a345-8bb72882f042-cilium-cgroup\") on node \"ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal\" DevicePath \"\""
Mar 17 18:48:17.921488 kubelet[2076]: I0317 18:48:17.921403 2076 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0240e0af-e26f-435f-a345-8bb72882f042-hubble-tls\") pod \"0240e0af-e26f-435f-a345-8bb72882f042\" (UID: \"0240e0af-e26f-435f-a345-8bb72882f042\") "
Mar 17 18:48:17.921762 kubelet[2076]: I0317 18:48:17.921542 2076 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0240e0af-e26f-435f-a345-8bb72882f042-cilium-config-path\") pod \"0240e0af-e26f-435f-a345-8bb72882f042\" (UID: \"0240e0af-e26f-435f-a345-8bb72882f042\") "
Mar 17 18:48:17.921762 kubelet[2076]: I0317 18:48:17.921581 2076 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0240e0af-e26f-435f-a345-8bb72882f042-clustermesh-secrets\") pod \"0240e0af-e26f-435f-a345-8bb72882f042\" (UID: \"0240e0af-e26f-435f-a345-8bb72882f042\") "
Mar 17 18:48:17.930068 systemd[1]: var-lib-kubelet-pods-0240e0af\x2de26f\x2d435f\x2da345\x2d8bb72882f042-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Mar 17 18:48:17.930228 systemd[1]: var-lib-kubelet-pods-0240e0af\x2de26f\x2d435f\x2da345\x2d8bb72882f042-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Mar 17 18:48:17.931427 kubelet[2076]: I0317 18:48:17.931376 2076 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0240e0af-e26f-435f-a345-8bb72882f042-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "0240e0af-e26f-435f-a345-8bb72882f042" (UID: "0240e0af-e26f-435f-a345-8bb72882f042"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 17 18:48:17.933756 kubelet[2076]: I0317 18:48:17.933711 2076 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0240e0af-e26f-435f-a345-8bb72882f042-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "0240e0af-e26f-435f-a345-8bb72882f042" (UID: "0240e0af-e26f-435f-a345-8bb72882f042"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 17 18:48:17.934569 kubelet[2076]: I0317 18:48:17.934521 2076 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0240e0af-e26f-435f-a345-8bb72882f042-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0240e0af-e26f-435f-a345-8bb72882f042" (UID: "0240e0af-e26f-435f-a345-8bb72882f042"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 17 18:48:18.022502 kubelet[2076]: I0317 18:48:18.022334 2076 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0240e0af-e26f-435f-a345-8bb72882f042-hubble-tls\") on node \"ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal\" DevicePath \"\""
Mar 17 18:48:18.022502 kubelet[2076]: I0317 18:48:18.022391 2076 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0240e0af-e26f-435f-a345-8bb72882f042-cilium-config-path\") on node \"ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal\" DevicePath \"\""
Mar 17 18:48:18.022502 kubelet[2076]: I0317 18:48:18.022414 2076 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0240e0af-e26f-435f-a345-8bb72882f042-clustermesh-secrets\") on node \"ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal\" DevicePath \"\""
Mar 17 18:48:18.164656 systemd[1]: Removed slice kubepods-burstable-pod0240e0af_e26f_435f_a345_8bb72882f042.slice.
Mar 17 18:48:18.599586 kubelet[2076]: I0317 18:48:18.599524 2076 topology_manager.go:215] "Topology Admit Handler" podUID="80439fb6-b603-48fa-a535-1cfd02129aa2" podNamespace="kube-system" podName="cilium-cnvnp"
Mar 17 18:48:18.608094 systemd[1]: Created slice kubepods-burstable-pod80439fb6_b603_48fa_a535_1cfd02129aa2.slice.
Mar 17 18:48:18.726638 kubelet[2076]: I0317 18:48:18.726594 2076 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/80439fb6-b603-48fa-a535-1cfd02129aa2-xtables-lock\") pod \"cilium-cnvnp\" (UID: \"80439fb6-b603-48fa-a535-1cfd02129aa2\") " pod="kube-system/cilium-cnvnp"
Mar 17 18:48:18.727349 kubelet[2076]: I0317 18:48:18.727321 2076 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/80439fb6-b603-48fa-a535-1cfd02129aa2-host-proc-sys-kernel\") pod \"cilium-cnvnp\" (UID: \"80439fb6-b603-48fa-a535-1cfd02129aa2\") " pod="kube-system/cilium-cnvnp"
Mar 17 18:48:18.727558 kubelet[2076]: I0317 18:48:18.727533 2076 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/80439fb6-b603-48fa-a535-1cfd02129aa2-bpf-maps\") pod \"cilium-cnvnp\" (UID: \"80439fb6-b603-48fa-a535-1cfd02129aa2\") " pod="kube-system/cilium-cnvnp"
Mar 17 18:48:18.727745 kubelet[2076]: I0317 18:48:18.727723 2076 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/80439fb6-b603-48fa-a535-1cfd02129aa2-hostproc\") pod \"cilium-cnvnp\" (UID: \"80439fb6-b603-48fa-a535-1cfd02129aa2\") " pod="kube-system/cilium-cnvnp"
Mar 17 18:48:18.727922 kubelet[2076]: I0317 18:48:18.727900 2076 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/80439fb6-b603-48fa-a535-1cfd02129aa2-cilium-config-path\") pod \"cilium-cnvnp\" (UID: \"80439fb6-b603-48fa-a535-1cfd02129aa2\") " pod="kube-system/cilium-cnvnp"
Mar 17 18:48:18.728089 kubelet[2076]: I0317 18:48:18.728066 2076 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/80439fb6-b603-48fa-a535-1cfd02129aa2-cilium-cgroup\") pod \"cilium-cnvnp\" (UID: \"80439fb6-b603-48fa-a535-1cfd02129aa2\") " pod="kube-system/cilium-cnvnp"
Mar 17 18:48:18.728264 kubelet[2076]: I0317 18:48:18.728243 2076 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/80439fb6-b603-48fa-a535-1cfd02129aa2-etc-cni-netd\") pod \"cilium-cnvnp\" (UID: \"80439fb6-b603-48fa-a535-1cfd02129aa2\") " pod="kube-system/cilium-cnvnp"
Mar 17 18:48:18.728431 kubelet[2076]: I0317 18:48:18.728410 2076 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/80439fb6-b603-48fa-a535-1cfd02129aa2-hubble-tls\") pod \"cilium-cnvnp\" (UID: \"80439fb6-b603-48fa-a535-1cfd02129aa2\") " pod="kube-system/cilium-cnvnp"
Mar 17 18:48:18.728596 kubelet[2076]: I0317 18:48:18.728575 2076 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lv84q\" (UniqueName: \"kubernetes.io/projected/80439fb6-b603-48fa-a535-1cfd02129aa2-kube-api-access-lv84q\") pod \"cilium-cnvnp\" (UID: \"80439fb6-b603-48fa-a535-1cfd02129aa2\") " pod="kube-system/cilium-cnvnp"
Mar 17 18:48:18.728822 kubelet[2076]: I0317 18:48:18.728798 2076 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/80439fb6-b603-48fa-a535-1cfd02129aa2-lib-modules\") pod \"cilium-cnvnp\" (UID: \"80439fb6-b603-48fa-a535-1cfd02129aa2\") " pod="kube-system/cilium-cnvnp"
Mar 17 18:48:18.729034 kubelet[2076]: I0317 18:48:18.729009 2076 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/80439fb6-b603-48fa-a535-1cfd02129aa2-cilium-run\") pod \"cilium-cnvnp\" (UID: \"80439fb6-b603-48fa-a535-1cfd02129aa2\") " pod="kube-system/cilium-cnvnp"
Mar 17 18:48:18.729213 kubelet[2076]: I0317 18:48:18.729192 2076 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/80439fb6-b603-48fa-a535-1cfd02129aa2-host-proc-sys-net\") pod \"cilium-cnvnp\" (UID: \"80439fb6-b603-48fa-a535-1cfd02129aa2\") " pod="kube-system/cilium-cnvnp"
Mar 17 18:48:18.729366 kubelet[2076]: I0317 18:48:18.729343 2076 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/80439fb6-b603-48fa-a535-1cfd02129aa2-cilium-ipsec-secrets\") pod \"cilium-cnvnp\" (UID: \"80439fb6-b603-48fa-a535-1cfd02129aa2\") " pod="kube-system/cilium-cnvnp"
Mar 17 18:48:18.729532 kubelet[2076]: I0317 18:48:18.729510 2076 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/80439fb6-b603-48fa-a535-1cfd02129aa2-clustermesh-secrets\") pod \"cilium-cnvnp\" (UID: \"80439fb6-b603-48fa-a535-1cfd02129aa2\") " pod="kube-system/cilium-cnvnp"
Mar 17 18:48:18.729711 kubelet[2076]: I0317 18:48:18.729671 2076 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/80439fb6-b603-48fa-a535-1cfd02129aa2-cni-path\") pod \"cilium-cnvnp\" (UID: \"80439fb6-b603-48fa-a535-1cfd02129aa2\") " pod="kube-system/cilium-cnvnp"
Mar 17 18:48:18.913875 env[1221]: time="2025-03-17T18:48:18.913367785Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cnvnp,Uid:80439fb6-b603-48fa-a535-1cfd02129aa2,Namespace:kube-system,Attempt:0,}"
Mar 17 18:48:18.931341 env[1221]: time="2025-03-17T18:48:18.931247710Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 18:48:18.931649 env[1221]: time="2025-03-17T18:48:18.931608467Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 18:48:18.931950 env[1221]: time="2025-03-17T18:48:18.931884603Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 18:48:18.932391 env[1221]: time="2025-03-17T18:48:18.932313891Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3f2ae1168c290eebe01870bdfa68eeedc229d17544d7346f03e7dc89796e0547 pid=3843 runtime=io.containerd.runc.v2
Mar 17 18:48:18.950394 systemd[1]: Started cri-containerd-3f2ae1168c290eebe01870bdfa68eeedc229d17544d7346f03e7dc89796e0547.scope.
Mar 17 18:48:18.993535 env[1221]: time="2025-03-17T18:48:18.993464393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cnvnp,Uid:80439fb6-b603-48fa-a535-1cfd02129aa2,Namespace:kube-system,Attempt:0,} returns sandbox id \"3f2ae1168c290eebe01870bdfa68eeedc229d17544d7346f03e7dc89796e0547\""
Mar 17 18:48:18.998732 env[1221]: time="2025-03-17T18:48:18.998498182Z" level=info msg="CreateContainer within sandbox \"3f2ae1168c290eebe01870bdfa68eeedc229d17544d7346f03e7dc89796e0547\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 17 18:48:19.011746 env[1221]: time="2025-03-17T18:48:19.011663433Z" level=info msg="CreateContainer within sandbox \"3f2ae1168c290eebe01870bdfa68eeedc229d17544d7346f03e7dc89796e0547\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b31f0f20f604abe1483fbae23b3a834cbc8fd0f97558cf84f2f1ca085a3f3489\""
Mar 17 18:48:19.012637 env[1221]: time="2025-03-17T18:48:19.012598336Z" level=info msg="StartContainer for \"b31f0f20f604abe1483fbae23b3a834cbc8fd0f97558cf84f2f1ca085a3f3489\""
Mar 17 18:48:19.037152 systemd[1]: Started cri-containerd-b31f0f20f604abe1483fbae23b3a834cbc8fd0f97558cf84f2f1ca085a3f3489.scope.
Mar 17 18:48:19.082988 env[1221]: time="2025-03-17T18:48:19.082906059Z" level=info msg="StartContainer for \"b31f0f20f604abe1483fbae23b3a834cbc8fd0f97558cf84f2f1ca085a3f3489\" returns successfully"
Mar 17 18:48:19.096063 systemd[1]: cri-containerd-b31f0f20f604abe1483fbae23b3a834cbc8fd0f97558cf84f2f1ca085a3f3489.scope: Deactivated successfully.
Mar 17 18:48:19.134586 env[1221]: time="2025-03-17T18:48:19.134515832Z" level=info msg="shim disconnected" id=b31f0f20f604abe1483fbae23b3a834cbc8fd0f97558cf84f2f1ca085a3f3489
Mar 17 18:48:19.134586 env[1221]: time="2025-03-17T18:48:19.134584286Z" level=warning msg="cleaning up after shim disconnected" id=b31f0f20f604abe1483fbae23b3a834cbc8fd0f97558cf84f2f1ca085a3f3489 namespace=k8s.io
Mar 17 18:48:19.134586 env[1221]: time="2025-03-17T18:48:19.134598842Z" level=info msg="cleaning up dead shim"
Mar 17 18:48:19.147377 env[1221]: time="2025-03-17T18:48:19.147297432Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:48:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3929 runtime=io.containerd.runc.v2\n"
Mar 17 18:48:19.564718 env[1221]: time="2025-03-17T18:48:19.564624128Z" level=info msg="CreateContainer within sandbox \"3f2ae1168c290eebe01870bdfa68eeedc229d17544d7346f03e7dc89796e0547\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 17 18:48:19.581549 env[1221]: time="2025-03-17T18:48:19.581485273Z" level=info msg="CreateContainer within sandbox \"3f2ae1168c290eebe01870bdfa68eeedc229d17544d7346f03e7dc89796e0547\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"240a11b46a250c75b06d65f76e42c9637c95e84102f4ad0f0df9b407a678cafa\""
Mar 17 18:48:19.582836 env[1221]: time="2025-03-17T18:48:19.582758603Z" level=info msg="StartContainer for \"240a11b46a250c75b06d65f76e42c9637c95e84102f4ad0f0df9b407a678cafa\""
Mar 17 18:48:19.620259 systemd[1]: Started cri-containerd-240a11b46a250c75b06d65f76e42c9637c95e84102f4ad0f0df9b407a678cafa.scope.
Mar 17 18:48:19.684296 env[1221]: time="2025-03-17T18:48:19.684225644Z" level=info msg="StartContainer for \"240a11b46a250c75b06d65f76e42c9637c95e84102f4ad0f0df9b407a678cafa\" returns successfully"
Mar 17 18:48:19.700133 systemd[1]: cri-containerd-240a11b46a250c75b06d65f76e42c9637c95e84102f4ad0f0df9b407a678cafa.scope: Deactivated successfully.
Mar 17 18:48:19.745205 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-240a11b46a250c75b06d65f76e42c9637c95e84102f4ad0f0df9b407a678cafa-rootfs.mount: Deactivated successfully.
Mar 17 18:48:19.752063 env[1221]: time="2025-03-17T18:48:19.752001991Z" level=info msg="shim disconnected" id=240a11b46a250c75b06d65f76e42c9637c95e84102f4ad0f0df9b407a678cafa
Mar 17 18:48:19.752415 env[1221]: time="2025-03-17T18:48:19.752387084Z" level=warning msg="cleaning up after shim disconnected" id=240a11b46a250c75b06d65f76e42c9637c95e84102f4ad0f0df9b407a678cafa namespace=k8s.io
Mar 17 18:48:19.752542 env[1221]: time="2025-03-17T18:48:19.752520384Z" level=info msg="cleaning up dead shim"
Mar 17 18:48:19.769919 env[1221]: time="2025-03-17T18:48:19.769845345Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:48:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3990 runtime=io.containerd.runc.v2\n"
Mar 17 18:48:20.159400 env[1221]: time="2025-03-17T18:48:20.157051131Z" level=info msg="StopPodSandbox for \"a7ca1a2de015f3f22913b13a0382b50523599a32401550fc3cbb83ae5731e51f\""
Mar 17 18:48:20.159400 env[1221]: time="2025-03-17T18:48:20.157206965Z" level=info msg="TearDown network for sandbox \"a7ca1a2de015f3f22913b13a0382b50523599a32401550fc3cbb83ae5731e51f\" successfully"
Mar 17 18:48:20.159400 env[1221]: time="2025-03-17T18:48:20.157269394Z" level=info msg="StopPodSandbox for \"a7ca1a2de015f3f22913b13a0382b50523599a32401550fc3cbb83ae5731e51f\" returns successfully"
Mar 17 18:48:20.159400 env[1221]: time="2025-03-17T18:48:20.158056076Z" level=info msg="RemovePodSandbox for \"a7ca1a2de015f3f22913b13a0382b50523599a32401550fc3cbb83ae5731e51f\""
Mar 17 18:48:20.159400 env[1221]: time="2025-03-17T18:48:20.158101148Z" level=info msg="Forcibly stopping sandbox \"a7ca1a2de015f3f22913b13a0382b50523599a32401550fc3cbb83ae5731e51f\""
Mar 17 18:48:20.159400 env[1221]: time="2025-03-17T18:48:20.158211747Z" level=info msg="TearDown network for sandbox \"a7ca1a2de015f3f22913b13a0382b50523599a32401550fc3cbb83ae5731e51f\" successfully"
Mar 17 18:48:20.161586 kubelet[2076]: I0317 18:48:20.161534 2076 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0240e0af-e26f-435f-a345-8bb72882f042" path="/var/lib/kubelet/pods/0240e0af-e26f-435f-a345-8bb72882f042/volumes"
Mar 17 18:48:20.163895 env[1221]: time="2025-03-17T18:48:20.163835236Z" level=info msg="RemovePodSandbox \"a7ca1a2de015f3f22913b13a0382b50523599a32401550fc3cbb83ae5731e51f\" returns successfully"
Mar 17 18:48:20.164442 env[1221]: time="2025-03-17T18:48:20.164404532Z" level=info msg="StopPodSandbox for \"cfcd8a236b6389db8ae0f79355393b04b17e1144ac4a884700612f894fbfea04\""
Mar 17 18:48:20.164578 env[1221]: time="2025-03-17T18:48:20.164528531Z" level=info msg="TearDown network for sandbox \"cfcd8a236b6389db8ae0f79355393b04b17e1144ac4a884700612f894fbfea04\" successfully"
Mar 17 18:48:20.164649 env[1221]: time="2025-03-17T18:48:20.164580971Z" level=info msg="StopPodSandbox for \"cfcd8a236b6389db8ae0f79355393b04b17e1144ac4a884700612f894fbfea04\" returns successfully"
Mar 17 18:48:20.165094 env[1221]: time="2025-03-17T18:48:20.165052938Z" level=info msg="RemovePodSandbox for \"cfcd8a236b6389db8ae0f79355393b04b17e1144ac4a884700612f894fbfea04\""
Mar 17 18:48:20.165252 env[1221]: time="2025-03-17T18:48:20.165100329Z" level=info msg="Forcibly stopping sandbox \"cfcd8a236b6389db8ae0f79355393b04b17e1144ac4a884700612f894fbfea04\""
Mar 17 18:48:20.165252 env[1221]: time="2025-03-17T18:48:20.165207243Z" level=info msg="TearDown network for sandbox \"cfcd8a236b6389db8ae0f79355393b04b17e1144ac4a884700612f894fbfea04\" successfully"
Mar 17 18:48:20.172749 env[1221]: time="2025-03-17T18:48:20.172636927Z" level=info msg="RemovePodSandbox \"cfcd8a236b6389db8ae0f79355393b04b17e1144ac4a884700612f894fbfea04\" returns successfully"
Mar 17 18:48:20.291306 kubelet[2076]: E0317 18:48:20.291256 2076 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 17 18:48:20.569327 env[1221]: time="2025-03-17T18:48:20.569181047Z" level=info msg="CreateContainer within sandbox \"3f2ae1168c290eebe01870bdfa68eeedc229d17544d7346f03e7dc89796e0547\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 17 18:48:20.594931 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1024607186.mount: Deactivated successfully.
Mar 17 18:48:20.610868 env[1221]: time="2025-03-17T18:48:20.610801887Z" level=info msg="CreateContainer within sandbox \"3f2ae1168c290eebe01870bdfa68eeedc229d17544d7346f03e7dc89796e0547\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"22ab44043aa78d0533bc68c6a2d65983b85643920e7cd3518fd76c61e1bd3105\""
Mar 17 18:48:20.612322 env[1221]: time="2025-03-17T18:48:20.612272180Z" level=info msg="StartContainer for \"22ab44043aa78d0533bc68c6a2d65983b85643920e7cd3518fd76c61e1bd3105\""
Mar 17 18:48:20.643334 systemd[1]: Started cri-containerd-22ab44043aa78d0533bc68c6a2d65983b85643920e7cd3518fd76c61e1bd3105.scope.
Mar 17 18:48:20.760721 env[1221]: time="2025-03-17T18:48:20.760625441Z" level=info msg="StartContainer for \"22ab44043aa78d0533bc68c6a2d65983b85643920e7cd3518fd76c61e1bd3105\" returns successfully"
Mar 17 18:48:20.763796 systemd[1]: cri-containerd-22ab44043aa78d0533bc68c6a2d65983b85643920e7cd3518fd76c61e1bd3105.scope: Deactivated successfully.
Mar 17 18:48:20.794867 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-22ab44043aa78d0533bc68c6a2d65983b85643920e7cd3518fd76c61e1bd3105-rootfs.mount: Deactivated successfully.
Mar 17 18:48:20.799111 env[1221]: time="2025-03-17T18:48:20.799054614Z" level=info msg="shim disconnected" id=22ab44043aa78d0533bc68c6a2d65983b85643920e7cd3518fd76c61e1bd3105
Mar 17 18:48:20.799608 env[1221]: time="2025-03-17T18:48:20.799552603Z" level=warning msg="cleaning up after shim disconnected" id=22ab44043aa78d0533bc68c6a2d65983b85643920e7cd3518fd76c61e1bd3105 namespace=k8s.io
Mar 17 18:48:20.799608 env[1221]: time="2025-03-17T18:48:20.799585651Z" level=info msg="cleaning up dead shim"
Mar 17 18:48:20.811478 env[1221]: time="2025-03-17T18:48:20.811421079Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:48:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4049 runtime=io.containerd.runc.v2\n"
Mar 17 18:48:21.574136 env[1221]: time="2025-03-17T18:48:21.574073058Z" level=info msg="CreateContainer within sandbox \"3f2ae1168c290eebe01870bdfa68eeedc229d17544d7346f03e7dc89796e0547\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 17 18:48:21.597187 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3535547042.mount: Deactivated successfully.
Mar 17 18:48:21.610137 env[1221]: time="2025-03-17T18:48:21.610070891Z" level=info msg="CreateContainer within sandbox \"3f2ae1168c290eebe01870bdfa68eeedc229d17544d7346f03e7dc89796e0547\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2e3589441e90222cd1ecbb28865daf18ae9a077ca4f209a6bf034cd620e76709\""
Mar 17 18:48:21.616361 env[1221]: time="2025-03-17T18:48:21.616273225Z" level=info msg="StartContainer for \"2e3589441e90222cd1ecbb28865daf18ae9a077ca4f209a6bf034cd620e76709\""
Mar 17 18:48:21.645058 systemd[1]: Started cri-containerd-2e3589441e90222cd1ecbb28865daf18ae9a077ca4f209a6bf034cd620e76709.scope.
Mar 17 18:48:21.693047 env[1221]: time="2025-03-17T18:48:21.692987982Z" level=info msg="StartContainer for \"2e3589441e90222cd1ecbb28865daf18ae9a077ca4f209a6bf034cd620e76709\" returns successfully"
Mar 17 18:48:21.696227 systemd[1]: cri-containerd-2e3589441e90222cd1ecbb28865daf18ae9a077ca4f209a6bf034cd620e76709.scope: Deactivated successfully.
Mar 17 18:48:21.735482 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2e3589441e90222cd1ecbb28865daf18ae9a077ca4f209a6bf034cd620e76709-rootfs.mount: Deactivated successfully.
Mar 17 18:48:21.739388 env[1221]: time="2025-03-17T18:48:21.739314512Z" level=info msg="shim disconnected" id=2e3589441e90222cd1ecbb28865daf18ae9a077ca4f209a6bf034cd620e76709
Mar 17 18:48:21.739388 env[1221]: time="2025-03-17T18:48:21.739382279Z" level=warning msg="cleaning up after shim disconnected" id=2e3589441e90222cd1ecbb28865daf18ae9a077ca4f209a6bf034cd620e76709 namespace=k8s.io
Mar 17 18:48:21.739669 env[1221]: time="2025-03-17T18:48:21.739398421Z" level=info msg="cleaning up dead shim"
Mar 17 18:48:21.750971 env[1221]: time="2025-03-17T18:48:21.750916718Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:48:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4104 runtime=io.containerd.runc.v2\n"
Mar 17 18:48:22.579590 env[1221]: time="2025-03-17T18:48:22.579528877Z" level=info msg="CreateContainer within sandbox \"3f2ae1168c290eebe01870bdfa68eeedc229d17544d7346f03e7dc89796e0547\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 17 18:48:22.619131 env[1221]: time="2025-03-17T18:48:22.619055325Z" level=info msg="CreateContainer within sandbox \"3f2ae1168c290eebe01870bdfa68eeedc229d17544d7346f03e7dc89796e0547\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c1dffadc43757d0fee5e57803eeac9119aee539ed5077a4c826566f0364a7c48\""
Mar 17 18:48:22.620439 env[1221]: time="2025-03-17T18:48:22.620400911Z" level=info msg="StartContainer for \"c1dffadc43757d0fee5e57803eeac9119aee539ed5077a4c826566f0364a7c48\""
Mar 17 18:48:22.658638 systemd[1]: Started cri-containerd-c1dffadc43757d0fee5e57803eeac9119aee539ed5077a4c826566f0364a7c48.scope.
Mar 17 18:48:22.714818 env[1221]: time="2025-03-17T18:48:22.714740948Z" level=info msg="StartContainer for \"c1dffadc43757d0fee5e57803eeac9119aee539ed5077a4c826566f0364a7c48\" returns successfully"
Mar 17 18:48:22.756791 systemd[1]: run-containerd-runc-k8s.io-c1dffadc43757d0fee5e57803eeac9119aee539ed5077a4c826566f0364a7c48-runc.aZJVRb.mount: Deactivated successfully.
Mar 17 18:48:23.097692 kubelet[2076]: I0317 18:48:23.097595 2076 setters.go:580] "Node became not ready" node="ci-3510-3-7-0a30c20bb7dd1d16f611.c.flatcar-212911.internal" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-03-17T18:48:23Z","lastTransitionTime":"2025-03-17T18:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Mar 17 18:48:23.221734 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Mar 17 18:48:25.457277 systemd[1]: run-containerd-runc-k8s.io-c1dffadc43757d0fee5e57803eeac9119aee539ed5077a4c826566f0364a7c48-runc.6FEAby.mount: Deactivated successfully.
Mar 17 18:48:26.581656 systemd-networkd[1021]: lxc_health: Link UP
Mar 17 18:48:26.594246 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Mar 17 18:48:26.600142 systemd-networkd[1021]: lxc_health: Gained carrier
Mar 17 18:48:26.968577 kubelet[2076]: I0317 18:48:26.968489 2076 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-cnvnp" podStartSLOduration=8.968432398000001 podStartE2EDuration="8.968432398s" podCreationTimestamp="2025-03-17 18:48:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:48:23.604916288 +0000 UTC m=+123.618366802" watchObservedRunningTime="2025-03-17 18:48:26.968432398 +0000 UTC m=+126.981882916"
Mar 17 18:48:27.716006 systemd[1]: run-containerd-runc-k8s.io-c1dffadc43757d0fee5e57803eeac9119aee539ed5077a4c826566f0364a7c48-runc.sG7Jlg.mount: Deactivated successfully.
Mar 17 18:48:28.345391 systemd-networkd[1021]: lxc_health: Gained IPv6LL
Mar 17 18:48:32.314482 systemd[1]: run-containerd-runc-k8s.io-c1dffadc43757d0fee5e57803eeac9119aee539ed5077a4c826566f0364a7c48-runc.oZxWVp.mount: Deactivated successfully.
Mar 17 18:48:32.464033 sshd[3818]: pam_unix(sshd:session): session closed for user core
Mar 17 18:48:32.469996 systemd-logind[1207]: Session 24 logged out. Waiting for processes to exit.
Mar 17 18:48:32.472340 systemd[1]: sshd@23-10.128.0.78:22-139.178.89.65:58554.service: Deactivated successfully.
Mar 17 18:48:32.473538 systemd[1]: session-24.scope: Deactivated successfully.
Mar 17 18:48:32.475662 systemd-logind[1207]: Removed session 24.