Mar 17 18:41:27.121125 kernel: Linux version 5.15.179-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Mar 17 17:12:34 -00 2025
Mar 17 18:41:27.121166 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=249ccd113f901380672c0d31e18f792e8e0344094c0e39eedc449f039418b31a
Mar 17 18:41:27.121195 kernel: BIOS-provided physical RAM map:
Mar 17 18:41:27.121210 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved
Mar 17 18:41:27.121222 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable
Mar 17 18:41:27.121235 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved
Mar 17 18:41:27.121255 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable
Mar 17 18:41:27.121269 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved
Mar 17 18:41:27.121283 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bd277fff] usable
Mar 17 18:41:27.121297 kernel: BIOS-e820: [mem 0x00000000bd278000-0x00000000bd281fff] ACPI data
Mar 17 18:41:27.121311 kernel: BIOS-e820: [mem 0x00000000bd282000-0x00000000bf8ecfff] usable
Mar 17 18:41:27.121325 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bfb6cfff] reserved
Mar 17 18:41:27.121339 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data
Mar 17 18:41:27.121354 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS
Mar 17 18:41:27.121376 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable
Mar 17 18:41:27.121391 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved
Mar 17 18:41:27.121407 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable
Mar 17 18:41:27.121422 kernel: NX (Execute Disable) protection: active
Mar 17 18:41:27.121464 kernel: efi: EFI v2.70 by EDK II
Mar 17 18:41:27.121480 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000 RNG=0xbfb73018 TPMEventLog=0xbd278018
Mar 17 18:41:27.121495 kernel: random: crng init done
Mar 17 18:41:27.121509 kernel: SMBIOS 2.4 present.
Mar 17 18:41:27.121529 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/12/2025
Mar 17 18:41:27.121543 kernel: Hypervisor detected: KVM
Mar 17 18:41:27.121558 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 17 18:41:27.121573 kernel: kvm-clock: cpu 0, msr 21419a001, primary cpu clock
Mar 17 18:41:27.121589 kernel: kvm-clock: using sched offset of 13991991793 cycles
Mar 17 18:41:27.121605 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 17 18:41:27.121620 kernel: tsc: Detected 2299.998 MHz processor
Mar 17 18:41:27.121637 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 17 18:41:27.121654 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 17 18:41:27.121669 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000
Mar 17 18:41:27.121689 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 17 18:41:27.121704 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000
Mar 17 18:41:27.121720 kernel: Using GB pages for direct mapping
Mar 17 18:41:27.121735 kernel: Secure boot disabled
Mar 17 18:41:27.121751 kernel: ACPI: Early table checksum verification disabled
Mar 17 18:41:27.121766 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google)
Mar 17 18:41:27.121782 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013)
Mar 17 18:41:27.121798 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001)
Mar 17 18:41:27.121824 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001)
Mar 17 18:41:27.121841 kernel: ACPI: FACS 0x00000000BFBF2000 000040
Mar 17 18:41:27.121857 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20240322)
Mar 17 18:41:27.121874 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001)
Mar 17 18:41:27.121892 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001)
Mar 17 18:41:27.121908 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001)
Mar 17 18:41:27.121927 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001)
Mar 17 18:41:27.121943 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001)
Mar 17 18:41:27.121959 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3]
Mar 17 18:41:27.121976 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63]
Mar 17 18:41:27.121992 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f]
Mar 17 18:41:27.122008 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315]
Mar 17 18:41:27.122025 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033]
Mar 17 18:41:27.122042 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7]
Mar 17 18:41:27.122058 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075]
Mar 17 18:41:27.122079 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f]
Mar 17 18:41:27.122096 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027]
Mar 17 18:41:27.122112 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Mar 17 18:41:27.122129 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Mar 17 18:41:27.122146 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Mar 17 18:41:27.122162 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff]
Mar 17 18:41:27.122179 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff]
Mar 17 18:41:27.122228 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff]
Mar 17 18:41:27.122245 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff]
Mar 17 18:41:27.122265 kernel: NODE_DATA(0) allocated [mem 0x21fffa000-0x21fffffff]
Mar 17 18:41:27.122283 kernel: Zone ranges:
Mar 17 18:41:27.122300 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 17 18:41:27.122316 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Mar 17 18:41:27.122331 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff]
Mar 17 18:41:27.122347 kernel: Movable zone start for each node
Mar 17 18:41:27.122364 kernel: Early memory node ranges
Mar 17 18:41:27.122382 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff]
Mar 17 18:41:27.122398 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff]
Mar 17 18:41:27.122419 kernel: node 0: [mem 0x0000000000100000-0x00000000bd277fff]
Mar 17 18:41:27.122448 kernel: node 0: [mem 0x00000000bd282000-0x00000000bf8ecfff]
Mar 17 18:41:27.122465 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff]
Mar 17 18:41:27.122482 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff]
Mar 17 18:41:27.122499 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff]
Mar 17 18:41:27.122515 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 17 18:41:27.122532 kernel: On node 0, zone DMA: 11 pages in unavailable ranges
Mar 17 18:41:27.122549 kernel: On node 0, zone DMA: 104 pages in unavailable ranges
Mar 17 18:41:27.122566 kernel: On node 0, zone DMA32: 10 pages in unavailable ranges
Mar 17 18:41:27.122586 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Mar 17 18:41:27.122603 kernel: On node 0, zone Normal: 32 pages in unavailable ranges
Mar 17 18:41:27.122620 kernel: ACPI: PM-Timer IO Port: 0xb008
Mar 17 18:41:27.122637 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 17 18:41:27.122653 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 17 18:41:27.122669 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 17 18:41:27.122685 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 17 18:41:27.122702 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 17 18:41:27.122719 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 17 18:41:27.122740 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 17 18:41:27.122757 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Mar 17 18:41:27.122774 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Mar 17 18:41:27.122791 kernel: Booting paravirtualized kernel on KVM
Mar 17 18:41:27.122808 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 17 18:41:27.122825 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Mar 17 18:41:27.122842 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576
Mar 17 18:41:27.122858 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152
Mar 17 18:41:27.122874 kernel: pcpu-alloc: [0] 0 1
Mar 17 18:41:27.122894 kernel: kvm-guest: PV spinlocks enabled
Mar 17 18:41:27.122912 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 17 18:41:27.122928 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932270
Mar 17 18:41:27.122945 kernel: Policy zone: Normal
Mar 17 18:41:27.122964 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=249ccd113f901380672c0d31e18f792e8e0344094c0e39eedc449f039418b31a
Mar 17 18:41:27.122981 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Mar 17 18:41:27.122997 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Mar 17 18:41:27.123014 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 17 18:41:27.123031 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 17 18:41:27.123052 kernel: Memory: 7515412K/7860544K available (12294K kernel code, 2278K rwdata, 13724K rodata, 47472K init, 4108K bss, 344872K reserved, 0K cma-reserved)
Mar 17 18:41:27.123069 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 17 18:41:27.123085 kernel: Kernel/User page tables isolation: enabled
Mar 17 18:41:27.123102 kernel: ftrace: allocating 34580 entries in 136 pages
Mar 17 18:41:27.123118 kernel: ftrace: allocated 136 pages with 2 groups
Mar 17 18:41:27.123135 kernel: rcu: Hierarchical RCU implementation.
Mar 17 18:41:27.123153 kernel: rcu: RCU event tracing is enabled.
Mar 17 18:41:27.123170 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 17 18:41:27.123198 kernel: Rude variant of Tasks RCU enabled.
Mar 17 18:41:27.123227 kernel: Tracing variant of Tasks RCU enabled.
Mar 17 18:41:27.123245 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 17 18:41:27.123265 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 17 18:41:27.123283 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Mar 17 18:41:27.123299 kernel: Console: colour dummy device 80x25
Mar 17 18:41:27.123316 kernel: printk: console [ttyS0] enabled
Mar 17 18:41:27.123334 kernel: ACPI: Core revision 20210730
Mar 17 18:41:27.123351 kernel: APIC: Switch to symmetric I/O mode setup
Mar 17 18:41:27.123369 kernel: x2apic enabled
Mar 17 18:41:27.123391 kernel: Switched APIC routing to physical x2apic.
Mar 17 18:41:27.123408 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1
Mar 17 18:41:27.123427 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Mar 17 18:41:27.123752 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998)
Mar 17 18:41:27.123772 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
Mar 17 18:41:27.123790 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
Mar 17 18:41:27.123808 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 17 18:41:27.123830 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Mar 17 18:41:27.123849 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Mar 17 18:41:27.123992 kernel: Spectre V2 : Mitigation: IBRS
Mar 17 18:41:27.124010 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Mar 17 18:41:27.124028 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Mar 17 18:41:27.124045 kernel: RETBleed: Mitigation: IBRS
Mar 17 18:41:27.124063 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Mar 17 18:41:27.124081 kernel: Spectre V2 : User space: Mitigation: STIBP via seccomp and prctl
Mar 17 18:41:27.124099 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Mar 17 18:41:27.124120 kernel: MDS: Mitigation: Clear CPU buffers
Mar 17 18:41:27.124274 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 17 18:41:27.124292 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 17 18:41:27.124309 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 17 18:41:27.124327 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 17 18:41:27.124342 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 17 18:41:27.124359 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Mar 17 18:41:27.124376 kernel: Freeing SMP alternatives memory: 32K
Mar 17 18:41:27.124393 kernel: pid_max: default: 32768 minimum: 301
Mar 17 18:41:27.124617 kernel: LSM: Security Framework initializing
Mar 17 18:41:27.124694 kernel: SELinux: Initializing.
Mar 17 18:41:27.124712 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Mar 17 18:41:27.124730 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Mar 17 18:41:27.124749 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0)
Mar 17 18:41:27.124767 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only.
Mar 17 18:41:27.124785 kernel: signal: max sigframe size: 1776
Mar 17 18:41:27.124802 kernel: rcu: Hierarchical SRCU implementation.
Mar 17 18:41:27.124820 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 17 18:41:27.124841 kernel: smp: Bringing up secondary CPUs ...
Mar 17 18:41:27.124858 kernel: x86: Booting SMP configuration:
Mar 17 18:41:27.124876 kernel: .... node #0, CPUs: #1
Mar 17 18:41:27.124894 kernel: kvm-clock: cpu 1, msr 21419a041, secondary cpu clock
Mar 17 18:41:27.124912 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Mar 17 18:41:27.124931 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Mar 17 18:41:27.124948 kernel: smp: Brought up 1 node, 2 CPUs
Mar 17 18:41:27.124966 kernel: smpboot: Max logical packages: 1
Mar 17 18:41:27.124987 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Mar 17 18:41:27.125005 kernel: devtmpfs: initialized
Mar 17 18:41:27.125022 kernel: x86/mm: Memory block size: 128MB
Mar 17 18:41:27.125040 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes)
Mar 17 18:41:27.125058 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 17 18:41:27.125076 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 17 18:41:27.125094 kernel: pinctrl core: initialized pinctrl subsystem
Mar 17 18:41:27.125111 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 17 18:41:27.125129 kernel: audit: initializing netlink subsys (disabled)
Mar 17 18:41:27.125150 kernel: audit: type=2000 audit(1742236886.977:1): state=initialized audit_enabled=0 res=1
Mar 17 18:41:27.125168 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 17 18:41:27.125192 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 17 18:41:27.125209 kernel: cpuidle: using governor menu
Mar 17 18:41:27.125227 kernel: ACPI: bus type PCI registered
Mar 17 18:41:27.125245 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 17 18:41:27.125262 kernel: dca service started, version 1.12.1
Mar 17 18:41:27.125280 kernel: PCI: Using configuration type 1 for base access
Mar 17 18:41:27.125298 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 17 18:41:27.125319 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Mar 17 18:41:27.125336 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Mar 17 18:41:27.125354 kernel: ACPI: Added _OSI(Module Device)
Mar 17 18:41:27.125372 kernel: ACPI: Added _OSI(Processor Device)
Mar 17 18:41:27.125389 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Mar 17 18:41:27.125407 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 17 18:41:27.125424 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Mar 17 18:41:27.127470 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Mar 17 18:41:27.127502 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Mar 17 18:41:27.127527 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Mar 17 18:41:27.127545 kernel: ACPI: Interpreter enabled
Mar 17 18:41:27.127563 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 17 18:41:27.127581 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 17 18:41:27.127598 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 17 18:41:27.127616 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Mar 17 18:41:27.127634 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 17 18:41:27.127888 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Mar 17 18:41:27.128074 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
Mar 17 18:41:27.128097 kernel: PCI host bridge to bus 0000:00
Mar 17 18:41:27.128280 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 17 18:41:27.128672 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 17 18:41:27.128854 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 17 18:41:27.129298 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window]
Mar 17 18:41:27.129604 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 17 18:41:27.129894 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Mar 17 18:41:27.130088 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100
Mar 17 18:41:27.130279 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Mar 17 18:41:27.130467 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Mar 17 18:41:27.130688 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000
Mar 17 18:41:27.130896 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Mar 17 18:41:27.131064 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f]
Mar 17 18:41:27.131242 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Mar 17 18:41:27.131400 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f]
Mar 17 18:41:27.137633 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f]
Mar 17 18:41:27.137862 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00
Mar 17 18:41:27.138059 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f]
Mar 17 18:41:27.138234 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f]
Mar 17 18:41:27.138267 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 17 18:41:27.138287 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 17 18:41:27.138305 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 17 18:41:27.138323 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 17 18:41:27.138341 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Mar 17 18:41:27.138369 kernel: iommu: Default domain type: Translated
Mar 17 18:41:27.138386 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 17 18:41:27.138403 kernel: vgaarb: loaded
Mar 17 18:41:27.138450 kernel: pps_core: LinuxPPS API ver. 1 registered
Mar 17 18:41:27.138492 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Mar 17 18:41:27.138510 kernel: PTP clock support registered
Mar 17 18:41:27.138528 kernel: Registered efivars operations
Mar 17 18:41:27.138546 kernel: PCI: Using ACPI for IRQ routing
Mar 17 18:41:27.138563 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 17 18:41:27.138581 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff]
Mar 17 18:41:27.138599 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff]
Mar 17 18:41:27.138616 kernel: e820: reserve RAM buffer [mem 0xbd278000-0xbfffffff]
Mar 17 18:41:27.138639 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff]
Mar 17 18:41:27.138661 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff]
Mar 17 18:41:27.138678 kernel: clocksource: Switched to clocksource kvm-clock
Mar 17 18:41:27.138696 kernel: VFS: Disk quotas dquot_6.6.0
Mar 17 18:41:27.138714 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 17 18:41:27.138732 kernel: pnp: PnP ACPI init
Mar 17 18:41:27.138749 kernel: pnp: PnP ACPI: found 7 devices
Mar 17 18:41:27.138768 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 17 18:41:27.138787 kernel: NET: Registered PF_INET protocol family
Mar 17 18:41:27.138804 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Mar 17 18:41:27.138826 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Mar 17 18:41:27.138844 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 17 18:41:27.138862 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 17 18:41:27.138880 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Mar 17 18:41:27.138898 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Mar 17 18:41:27.138916 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Mar 17 18:41:27.138934 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Mar 17 18:41:27.138953 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 17 18:41:27.138974 kernel: NET: Registered PF_XDP protocol family
Mar 17 18:41:27.139139 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 17 18:41:27.139301 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 17 18:41:27.139520 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 17 18:41:27.139674 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window]
Mar 17 18:41:27.139848 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Mar 17 18:41:27.139872 kernel: PCI: CLS 0 bytes, default 64
Mar 17 18:41:27.139895 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Mar 17 18:41:27.139913 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB)
Mar 17 18:41:27.139931 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Mar 17 18:41:27.139948 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Mar 17 18:41:27.139965 kernel: clocksource: Switched to clocksource tsc
Mar 17 18:41:27.139982 kernel: Initialise system trusted keyrings
Mar 17 18:41:27.139999 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Mar 17 18:41:27.140016 kernel: Key type asymmetric registered
Mar 17 18:41:27.140032 kernel: Asymmetric key parser 'x509' registered
Mar 17 18:41:27.140053 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Mar 17 18:41:27.140070 kernel: io scheduler mq-deadline registered
Mar 17 18:41:27.140088 kernel: io scheduler kyber registered
Mar 17 18:41:27.140105 kernel: io scheduler bfq registered
Mar 17 18:41:27.140122 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 17 18:41:27.140140 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Mar 17 18:41:27.140317 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
Mar 17 18:41:27.140340 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Mar 17 18:41:27.140562 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
Mar 17 18:41:27.140590 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Mar 17 18:41:27.140766 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
Mar 17 18:41:27.140789 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 17 18:41:27.140806 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 17 18:41:27.140823 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Mar 17 18:41:27.140840 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A
Mar 17 18:41:27.140857 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A
Mar 17 18:41:27.141046 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0)
Mar 17 18:41:27.141075 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 17 18:41:27.141098 kernel: i8042: Warning: Keylock active
Mar 17 18:41:27.141112 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 17 18:41:27.141129 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 17 18:41:27.141322 kernel: rtc_cmos 00:00: RTC can wake from S4
Mar 17 18:41:27.141844 kernel: rtc_cmos 00:00: registered as rtc0
Mar 17 18:41:27.142291 kernel: rtc_cmos 00:00: setting system clock to 2025-03-17T18:41:26 UTC (1742236886)
Mar 17 18:41:27.142632 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Mar 17 18:41:27.142667 kernel: intel_pstate: CPU model not supported
Mar 17 18:41:27.142686 kernel: pstore: Registered efi as persistent store backend
Mar 17 18:41:27.142705 kernel: NET: Registered PF_INET6 protocol family
Mar 17 18:41:27.142804 kernel: Segment Routing with IPv6
Mar 17 18:41:27.142821 kernel: In-situ OAM (IOAM) with IPv6
Mar 17 18:41:27.142839 kernel: NET: Registered PF_PACKET protocol family
Mar 17 18:41:27.142856 kernel: Key type dns_resolver registered
Mar 17 18:41:27.142874 kernel: IPI shorthand broadcast: enabled
Mar 17 18:41:27.142901 kernel: sched_clock: Marking stable (754030970, 152915981)->(967603576, -60656625)
Mar 17 18:41:27.142924 kernel: registered taskstats version 1
Mar 17 18:41:27.142942 kernel: Loading compiled-in X.509 certificates
Mar 17 18:41:27.142960 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 17 18:41:27.142979 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.179-flatcar: d5b956bbabb2d386c0246a969032c0de9eaa8220'
Mar 17 18:41:27.142997 kernel: Key type .fscrypt registered
Mar 17 18:41:27.143016 kernel: Key type fscrypt-provisioning registered
Mar 17 18:41:27.143034 kernel: pstore: Using crash dump compression: deflate
Mar 17 18:41:27.143057 kernel: ima: Allocated hash algorithm: sha1
Mar 17 18:41:27.143075 kernel: ima: No architecture policies found
Mar 17 18:41:27.143097 kernel: clk: Disabling unused clocks
Mar 17 18:41:27.143114 kernel: Freeing unused kernel image (initmem) memory: 47472K
Mar 17 18:41:27.143133 kernel: Write protecting the kernel read-only data: 28672k
Mar 17 18:41:27.143151 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Mar 17 18:41:27.143168 kernel: Freeing unused kernel image (rodata/data gap) memory: 612K
Mar 17 18:41:27.143184 kernel: Run /init as init process
Mar 17 18:41:27.143210 kernel: with arguments:
Mar 17 18:41:27.143228 kernel: /init
Mar 17 18:41:27.143254 kernel: with environment:
Mar 17 18:41:27.143276 kernel: HOME=/
Mar 17 18:41:27.143293 kernel: TERM=linux
Mar 17 18:41:27.143311 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Mar 17 18:41:27.143333 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Mar 17 18:41:27.143355 systemd[1]: Detected virtualization kvm.
Mar 17 18:41:27.143374 systemd[1]: Detected architecture x86-64.
Mar 17 18:41:27.143391 systemd[1]: Running in initrd.
Mar 17 18:41:27.143424 systemd[1]: No hostname configured, using default hostname.
Mar 17 18:41:27.149003 systemd[1]: Hostname set to .
Mar 17 18:41:27.149030 systemd[1]: Initializing machine ID from VM UUID.
Mar 17 18:41:27.149050 systemd[1]: Queued start job for default target initrd.target.
Mar 17 18:41:27.149069 systemd[1]: Started systemd-ask-password-console.path.
Mar 17 18:41:27.149087 systemd[1]: Reached target cryptsetup.target.
Mar 17 18:41:27.149105 systemd[1]: Reached target paths.target.
Mar 17 18:41:27.149124 systemd[1]: Reached target slices.target.
Mar 17 18:41:27.149149 systemd[1]: Reached target swap.target.
Mar 17 18:41:27.149167 systemd[1]: Reached target timers.target.
Mar 17 18:41:27.149185 systemd[1]: Listening on iscsid.socket.
Mar 17 18:41:27.149201 systemd[1]: Listening on iscsiuio.socket.
Mar 17 18:41:27.149218 systemd[1]: Listening on systemd-journald-audit.socket.
Mar 17 18:41:27.149237 systemd[1]: Listening on systemd-journald-dev-log.socket.
Mar 17 18:41:27.149255 systemd[1]: Listening on systemd-journald.socket.
Mar 17 18:41:27.149274 systemd[1]: Listening on systemd-networkd.socket.
Mar 17 18:41:27.149297 systemd[1]: Listening on systemd-udevd-control.socket.
Mar 17 18:41:27.149316 systemd[1]: Listening on systemd-udevd-kernel.socket.
Mar 17 18:41:27.149362 systemd[1]: Reached target sockets.target.
Mar 17 18:41:27.149384 systemd[1]: Starting kmod-static-nodes.service...
Mar 17 18:41:27.149402 systemd[1]: Finished network-cleanup.service.
Mar 17 18:41:27.149428 systemd[1]: Starting systemd-fsck-usr.service...
Mar 17 18:41:27.149483 systemd[1]: Starting systemd-journald.service...
Mar 17 18:41:27.149506 systemd[1]: Starting systemd-modules-load.service...
Mar 17 18:41:27.149525 systemd[1]: Starting systemd-resolved.service...
Mar 17 18:41:27.149545 systemd[1]: Starting systemd-vconsole-setup.service...
Mar 17 18:41:27.149564 systemd[1]: Finished kmod-static-nodes.service.
Mar 17 18:41:27.149584 kernel: audit: type=1130 audit(1742236887.130:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:27.149605 systemd[1]: Finished systemd-fsck-usr.service.
Mar 17 18:41:27.149624 kernel: audit: type=1130 audit(1742236887.142:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:27.149644 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Mar 17 18:41:27.149672 systemd-journald[190]: Journal started
Mar 17 18:41:27.149773 systemd-journald[190]: Runtime Journal (/run/log/journal/7d86a730ae4dca304595cb83d9842c59) is 8.0M, max 148.8M, 140.8M free.
Mar 17 18:41:27.130000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:27.142000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:27.156366 systemd-modules-load[191]: Inserted module 'overlay'
Mar 17 18:41:27.162651 systemd[1]: Started systemd-journald.service.
Mar 17 18:41:27.171000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:27.173249 systemd[1]: Finished systemd-vconsole-setup.service.
Mar 17 18:41:27.186596 kernel: audit: type=1130 audit(1742236887.171:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:27.186636 kernel: audit: type=1130 audit(1742236887.178:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:27.178000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:27.179983 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Mar 17 18:41:27.197582 kernel: audit: type=1130 audit(1742236887.189:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:27.189000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:27.194973 systemd[1]: Starting dracut-cmdline-ask.service...
Mar 17 18:41:27.211976 systemd-resolved[192]: Positive Trust Anchors:
Mar 17 18:41:27.212715 systemd-resolved[192]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 17 18:41:27.213145 systemd-resolved[192]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Mar 17 18:41:27.223309 systemd-resolved[192]: Defaulting to hostname 'linux'.
Mar 17 18:41:27.225329 systemd[1]: Started systemd-resolved.service.
Mar 17 18:41:27.225542 systemd[1]: Reached target nss-lookup.target.
Mar 17 18:41:27.224000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:27.251497 kernel: audit: type=1130 audit(1742236887.224:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:27.251587 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 17 18:41:27.265416 systemd[1]: Finished dracut-cmdline-ask.service.
Mar 17 18:41:27.314262 kernel: Bridge firewalling registered
Mar 17 18:41:27.314303 kernel: audit: type=1130 audit(1742236887.277:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:27.277000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:27.276074 systemd-modules-load[191]: Inserted module 'br_netfilter'
Mar 17 18:41:27.332771 dracut-cmdline[207]: dracut-dracut-053
Mar 17 18:41:27.332771 dracut-cmdline[207]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LA
Mar 17 18:41:27.332771 dracut-cmdline[207]: BEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=249ccd113f901380672c0d31e18f792e8e0344094c0e39eedc449f039418b31a
Mar 17 18:41:27.456593 kernel: SCSI subsystem initialized
Mar 17 18:41:27.456640 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 17 18:41:27.456666 kernel: device-mapper: uevent: version 1.0.3
Mar 17 18:41:27.456686 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Mar 17 18:41:27.456716 kernel: audit: type=1130 audit(1742236887.401:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:27.456737 kernel: Loading iSCSI transport class v2.0-870.
Mar 17 18:41:27.401000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:27.301192 systemd[1]: Starting dracut-cmdline.service...
Mar 17 18:41:27.468671 kernel: iscsi: registered transport (tcp)
Mar 17 18:41:27.384615 systemd-modules-load[191]: Inserted module 'dm_multipath'
Mar 17 18:41:27.385821 systemd[1]: Finished systemd-modules-load.service.
Mar 17 18:41:27.403929 systemd[1]: Starting systemd-sysctl.service...
Mar 17 18:41:27.501341 kernel: iscsi: registered transport (qla4xxx)
Mar 17 18:41:27.501383 kernel: QLogic iSCSI HBA Driver
Mar 17 18:41:27.438238 systemd[1]: Finished systemd-sysctl.service.
Mar 17 18:41:27.538616 kernel: audit: type=1130 audit(1742236887.509:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:27.509000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:27.568726 systemd[1]: Finished dracut-cmdline.service.
Mar 17 18:41:27.567000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:27.570102 systemd[1]: Starting dracut-pre-udev.service...
Mar 17 18:41:27.632511 kernel: raid6: avx2x4 gen() 18263 MB/s
Mar 17 18:41:27.653519 kernel: raid6: avx2x4 xor() 7718 MB/s
Mar 17 18:41:27.674484 kernel: raid6: avx2x2 gen() 18055 MB/s
Mar 17 18:41:27.695499 kernel: raid6: avx2x2 xor() 18347 MB/s
Mar 17 18:41:27.716490 kernel: raid6: avx2x1 gen() 14145 MB/s
Mar 17 18:41:27.737495 kernel: raid6: avx2x1 xor() 16002 MB/s
Mar 17 18:41:27.758539 kernel: raid6: sse2x4 gen() 10877 MB/s
Mar 17 18:41:27.779513 kernel: raid6: sse2x4 xor() 6517 MB/s
Mar 17 18:41:27.800533 kernel: raid6: sse2x2 gen() 11893 MB/s
Mar 17 18:41:27.821516 kernel: raid6: sse2x2 xor() 7289 MB/s
Mar 17 18:41:27.842479 kernel: raid6: sse2x1 gen() 10527 MB/s
Mar 17 18:41:27.868824 kernel: raid6: sse2x1 xor() 5212 MB/s
Mar 17 18:41:27.868913 kernel: raid6: using algorithm avx2x4 gen() 18263 MB/s
Mar 17 18:41:27.868952 kernel: raid6: .... xor() 7718 MB/s, rmw enabled
Mar 17 18:41:27.873884 kernel: raid6: using avx2x2 recovery algorithm
Mar 17 18:41:27.899484 kernel: xor: automatically using best checksumming function avx
Mar 17 18:41:28.011488 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
Mar 17 18:41:28.023286 systemd[1]: Finished dracut-pre-udev.service.
Mar 17 18:41:28.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:28.030000 audit: BPF prog-id=7 op=LOAD
Mar 17 18:41:28.030000 audit: BPF prog-id=8 op=LOAD
Mar 17 18:41:28.032984 systemd[1]: Starting systemd-udevd.service...
Mar 17 18:41:28.049785 systemd-udevd[389]: Using default interface naming scheme 'v252'.
Mar 17 18:41:28.057037 systemd[1]: Started systemd-udevd.service.
Mar 17 18:41:28.071000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:28.073770 systemd[1]: Starting dracut-pre-trigger.service...
Mar 17 18:41:28.089708 dracut-pre-trigger[399]: rd.md=0: removing MD RAID activation
Mar 17 18:41:28.128863 systemd[1]: Finished dracut-pre-trigger.service.
Mar 17 18:41:28.127000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:28.130131 systemd[1]: Starting systemd-udev-trigger.service...
Mar 17 18:41:28.197015 systemd[1]: Finished systemd-udev-trigger.service.
Mar 17 18:41:28.204000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:28.287510 kernel: cryptd: max_cpu_qlen set to 1000
Mar 17 18:41:28.323466 kernel: scsi host0: Virtio SCSI HBA
Mar 17 18:41:28.384470 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6
Mar 17 18:41:28.395790 kernel: AVX2 version of gcm_enc/dec engaged.
Mar 17 18:41:28.395881 kernel: AES CTR mode by8 optimization enabled
Mar 17 18:41:28.457943 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB)
Mar 17 18:41:28.523391 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks
Mar 17 18:41:28.523702 kernel: sd 0:0:1:0: [sda] Write Protect is off
Mar 17 18:41:28.523908 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08
Mar 17 18:41:28.524049 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Mar 17 18:41:28.524184 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 17 18:41:28.524200 kernel: GPT:17805311 != 25165823
Mar 17 18:41:28.524213 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 17 18:41:28.524226 kernel: GPT:17805311 != 25165823
Mar 17 18:41:28.524239 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 17 18:41:28.524252 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 17 18:41:28.524270 kernel: sd 0:0:1:0: [sda] Attached SCSI disk
Mar 17 18:41:28.582017 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Mar 17 18:41:28.607616 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (432)
Mar 17 18:41:28.597427 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Mar 17 18:41:28.631214 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Mar 17 18:41:28.653618 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Mar 17 18:41:28.672622 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Mar 17 18:41:28.682740 systemd[1]: Starting disk-uuid.service...
Mar 17 18:41:28.697831 disk-uuid[512]: Primary Header is updated.
Mar 17 18:41:28.697831 disk-uuid[512]: Secondary Entries is updated.
Mar 17 18:41:28.697831 disk-uuid[512]: Secondary Header is updated.
Mar 17 18:41:28.745609 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 17 18:41:28.745644 kernel: GPT:disk_guids don't match.
Mar 17 18:41:28.745665 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 17 18:41:28.745679 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 17 18:41:28.771548 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 17 18:41:29.783045 disk-uuid[513]: The operation has completed successfully.
Mar 17 18:41:29.793613 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 17 18:41:29.856598 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 17 18:41:29.863000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:29.863000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:29.856760 systemd[1]: Finished disk-uuid.service.
Mar 17 18:41:29.880835 systemd[1]: Starting verity-setup.service...
Mar 17 18:41:29.909488 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Mar 17 18:41:30.002684 systemd[1]: Found device dev-mapper-usr.device.
Mar 17 18:41:30.005234 systemd[1]: Mounting sysusr-usr.mount...
Mar 17 18:41:30.017139 systemd[1]: Finished verity-setup.service.
Mar 17 18:41:30.034000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:30.116306 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Mar 17 18:41:30.116132 systemd[1]: Mounted sysusr-usr.mount.
Mar 17 18:41:30.124057 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Mar 17 18:41:30.168162 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Mar 17 18:41:30.168205 kernel: BTRFS info (device sda6): using free space tree
Mar 17 18:41:30.168229 kernel: BTRFS info (device sda6): has skinny extents
Mar 17 18:41:30.125258 systemd[1]: Starting ignition-setup.service...
Mar 17 18:41:30.187591 kernel: BTRFS info (device sda6): enabling ssd optimizations
Mar 17 18:41:30.139894 systemd[1]: Starting parse-ip-for-networkd.service...
Mar 17 18:41:30.201498 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 17 18:41:30.217600 systemd[1]: Finished ignition-setup.service.
Mar 17 18:41:30.216000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:30.219695 systemd[1]: Starting ignition-fetch-offline.service...
Mar 17 18:41:30.257090 systemd[1]: Finished parse-ip-for-networkd.service.
Mar 17 18:41:30.255000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:30.256000 audit: BPF prog-id=9 op=LOAD
Mar 17 18:41:30.259360 systemd[1]: Starting systemd-networkd.service...
Mar 17 18:41:30.293403 systemd-networkd[687]: lo: Link UP
Mar 17 18:41:30.293418 systemd-networkd[687]: lo: Gained carrier
Mar 17 18:41:30.299000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:30.294417 systemd-networkd[687]: Enumeration completed
Mar 17 18:41:30.294574 systemd[1]: Started systemd-networkd.service.
Mar 17 18:41:30.294960 systemd-networkd[687]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 17 18:41:30.297472 systemd-networkd[687]: eth0: Link UP
Mar 17 18:41:30.356000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:30.297480 systemd-networkd[687]: eth0: Gained carrier
Mar 17 18:41:30.372753 iscsid[693]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Mar 17 18:41:30.372753 iscsid[693]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log
Mar 17 18:41:30.372753 iscsid[693]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier].
Mar 17 18:41:30.372753 iscsid[693]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Mar 17 18:41:30.372753 iscsid[693]: If using hardware iscsi like qla4xxx this message can be ignored.
Mar 17 18:41:30.372753 iscsid[693]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Mar 17 18:41:30.372753 iscsid[693]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Mar 17 18:41:30.424000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:30.483000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:30.301022 systemd[1]: Reached target network.target.
Mar 17 18:41:30.309570 systemd-networkd[687]: eth0: DHCPv4 address 10.128.0.50/32, gateway 10.128.0.1 acquired from 169.254.169.254
Mar 17 18:41:30.549000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:30.330066 systemd[1]: Starting iscsiuio.service...
Mar 17 18:41:30.340319 systemd[1]: Started iscsiuio.service.
Mar 17 18:41:30.564819 ignition[653]: Ignition 2.14.0
Mar 17 18:41:30.358958 systemd[1]: Starting iscsid.service...
Mar 17 18:41:30.564834 ignition[653]: Stage: fetch-offline
Mar 17 18:41:30.391862 systemd[1]: Started iscsid.service.
Mar 17 18:41:30.606000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:30.564907 ignition[653]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Mar 17 18:41:30.426973 systemd[1]: Starting dracut-initqueue.service...
Mar 17 18:41:30.564966 ignition[653]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6
Mar 17 18:41:30.461913 systemd[1]: Finished dracut-initqueue.service.
Mar 17 18:41:30.592140 ignition[653]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Mar 17 18:41:30.655000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:30.484984 systemd[1]: Reached target remote-fs-pre.target.
Mar 17 18:41:30.592348 ignition[653]: parsed url from cmdline: ""
Mar 17 18:41:30.512643 systemd[1]: Reached target remote-cryptsetup.target.
Mar 17 18:41:30.689000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:30.592353 ignition[653]: no config URL provided
Mar 17 18:41:30.512776 systemd[1]: Reached target remote-fs.target.
Mar 17 18:41:30.592361 ignition[653]: reading system config file "/usr/lib/ignition/user.ign"
Mar 17 18:41:30.514191 systemd[1]: Starting dracut-pre-mount.service...
Mar 17 18:41:30.592372 ignition[653]: no config at "/usr/lib/ignition/user.ign"
Mar 17 18:41:30.736000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:30.542144 systemd[1]: Finished dracut-pre-mount.service.
Mar 17 18:41:30.592381 ignition[653]: failed to fetch config: resource requires networking
Mar 17 18:41:30.593921 systemd[1]: Finished ignition-fetch-offline.service.
Mar 17 18:41:30.592564 ignition[653]: Ignition finished successfully
Mar 17 18:41:30.608996 systemd[1]: Starting ignition-fetch.service...
Mar 17 18:41:30.621187 ignition[712]: Ignition 2.14.0
Mar 17 18:41:30.641671 unknown[712]: fetched base config from "system"
Mar 17 18:41:30.621197 ignition[712]: Stage: fetch
Mar 17 18:41:30.641686 unknown[712]: fetched base config from "system"
Mar 17 18:41:30.621333 ignition[712]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Mar 17 18:41:30.641704 unknown[712]: fetched user config from "gcp"
Mar 17 18:41:30.621368 ignition[712]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6
Mar 17 18:41:30.645370 systemd[1]: Finished ignition-fetch.service.
Mar 17 18:41:30.630338 ignition[712]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Mar 17 18:41:30.657971 systemd[1]: Starting ignition-kargs.service...
Mar 17 18:41:30.630589 ignition[712]: parsed url from cmdline: ""
Mar 17 18:41:30.681729 systemd[1]: Finished ignition-kargs.service.
Mar 17 18:41:30.630596 ignition[712]: no config URL provided
Mar 17 18:41:30.691939 systemd[1]: Starting ignition-disks.service...
Mar 17 18:41:30.630604 ignition[712]: reading system config file "/usr/lib/ignition/user.ign"
Mar 17 18:41:30.730134 systemd[1]: Finished ignition-disks.service.
Mar 17 18:41:30.630617 ignition[712]: no config at "/usr/lib/ignition/user.ign"
Mar 17 18:41:30.738068 systemd[1]: Reached target initrd-root-device.target.
Mar 17 18:41:30.630658 ignition[712]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1
Mar 17 18:41:30.760770 systemd[1]: Reached target local-fs-pre.target.
Mar 17 18:41:30.636075 ignition[712]: GET result: OK
Mar 17 18:41:30.775651 systemd[1]: Reached target local-fs.target.
Mar 17 18:41:30.636185 ignition[712]: parsing config with SHA512: d1a32d0d1f406713459d8bf9032763892d5567dac7d25b6eea8d64689e06f18c7cdcbb3c4d3c33770dcc0f71a1104377a5381a48961044f5eb4662a001d47298
Mar 17 18:41:30.791655 systemd[1]: Reached target sysinit.target.
Mar 17 18:41:30.643600 ignition[712]: fetch: fetch complete
Mar 17 18:41:30.805651 systemd[1]: Reached target basic.target.
Mar 17 18:41:30.643607 ignition[712]: fetch: fetch passed
Mar 17 18:41:30.819954 systemd[1]: Starting systemd-fsck-root.service...
Mar 17 18:41:30.643667 ignition[712]: Ignition finished successfully
Mar 17 18:41:30.671157 ignition[718]: Ignition 2.14.0
Mar 17 18:41:30.671167 ignition[718]: Stage: kargs
Mar 17 18:41:30.671301 ignition[718]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Mar 17 18:41:30.671333 ignition[718]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6
Mar 17 18:41:30.679090 ignition[718]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Mar 17 18:41:30.680637 ignition[718]: kargs: kargs passed
Mar 17 18:41:30.680690 ignition[718]: Ignition finished successfully
Mar 17 18:41:30.704254 ignition[724]: Ignition 2.14.0
Mar 17 18:41:30.704265 ignition[724]: Stage: disks
Mar 17 18:41:30.704416 ignition[724]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Mar 17 18:41:30.704469 ignition[724]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6
Mar 17 18:41:30.712239 ignition[724]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Mar 17 18:41:30.714548 ignition[724]: disks: disks passed
Mar 17 18:41:30.714607 ignition[724]: Ignition finished successfully
Mar 17 18:41:30.872823 systemd-fsck[732]: ROOT: clean, 623/1628000 files, 124059/1617920 blocks
Mar 17 18:41:31.060497 systemd[1]: Finished systemd-fsck-root.service.
Mar 17 18:41:31.101637 kernel: kauditd_printk_skb: 22 callbacks suppressed
Mar 17 18:41:31.101686 kernel: audit: type=1130 audit(1742236891.067:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:31.067000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:31.096905 systemd[1]: Mounting sysroot.mount...
Mar 17 18:41:31.125491 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Mar 17 18:41:31.126490 systemd[1]: Mounted sysroot.mount.
Mar 17 18:41:31.126848 systemd[1]: Reached target initrd-root-fs.target.
Mar 17 18:41:31.141417 systemd[1]: Mounting sysroot-usr.mount...
Mar 17 18:41:31.159254 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Mar 17 18:41:31.159319 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 17 18:41:31.159359 systemd[1]: Reached target ignition-diskful.target.
Mar 17 18:41:31.174955 systemd[1]: Mounted sysroot-usr.mount.
Mar 17 18:41:31.198514 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Mar 17 18:41:31.274315 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (738)
Mar 17 18:41:31.274354 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Mar 17 18:41:31.274518 kernel: BTRFS info (device sda6): using free space tree
Mar 17 18:41:31.274556 kernel: BTRFS info (device sda6): has skinny extents
Mar 17 18:41:31.274580 kernel: BTRFS info (device sda6): enabling ssd optimizations
Mar 17 18:41:31.261823 systemd[1]: Starting initrd-setup-root.service...
Mar 17 18:41:31.285649 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Mar 17 18:41:31.305675 initrd-setup-root[761]: cut: /sysroot/etc/passwd: No such file or directory
Mar 17 18:41:31.315601 initrd-setup-root[769]: cut: /sysroot/etc/group: No such file or directory
Mar 17 18:41:31.325602 initrd-setup-root[777]: cut: /sysroot/etc/shadow: No such file or directory
Mar 17 18:41:31.335576 initrd-setup-root[785]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 17 18:41:31.379592 systemd[1]: Finished initrd-setup-root.service.
Mar 17 18:41:31.414669 kernel: audit: type=1130 audit(1742236891.378:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:31.378000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:31.381046 systemd[1]: Starting ignition-mount.service...
Mar 17 18:41:31.422827 systemd[1]: Starting sysroot-boot.service...
Mar 17 18:41:31.436841 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
Mar 17 18:41:31.437021 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
Mar 17 18:41:31.462595 ignition[803]: INFO : Ignition 2.14.0
Mar 17 18:41:31.462595 ignition[803]: INFO : Stage: mount
Mar 17 18:41:31.462595 ignition[803]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Mar 17 18:41:31.462595 ignition[803]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6
Mar 17 18:41:31.560640 kernel: audit: type=1130 audit(1742236891.482:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:31.560699 kernel: audit: type=1130 audit(1742236891.513:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:31.482000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:31.513000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:31.469689 systemd[1]: Finished sysroot-boot.service.
Mar 17 18:41:31.574718 ignition[803]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Mar 17 18:41:31.574718 ignition[803]: INFO : mount: mount passed
Mar 17 18:41:31.574718 ignition[803]: INFO : Ignition finished successfully
Mar 17 18:41:31.640639 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (813)
Mar 17 18:41:31.640699 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Mar 17 18:41:31.640727 kernel: BTRFS info (device sda6): using free space tree
Mar 17 18:41:31.640748 kernel: BTRFS info (device sda6): has skinny extents
Mar 17 18:41:31.640769 kernel: BTRFS info (device sda6): enabling ssd optimizations
Mar 17 18:41:31.484083 systemd[1]: Finished ignition-mount.service.
Mar 17 18:41:31.516359 systemd[1]: Starting ignition-files.service...
Mar 17 18:41:31.572012 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Mar 17 18:41:31.674630 ignition[832]: INFO : Ignition 2.14.0
Mar 17 18:41:31.674630 ignition[832]: INFO : Stage: files
Mar 17 18:41:31.674630 ignition[832]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Mar 17 18:41:31.674630 ignition[832]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6
Mar 17 18:41:31.674630 ignition[832]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Mar 17 18:41:31.674630 ignition[832]: DEBUG : files: compiled without relabeling support, skipping
Mar 17 18:41:31.598360 systemd-networkd[687]: eth0: Gained IPv6LL
Mar 17 18:41:31.747609 ignition[832]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 17 18:41:31.747609 ignition[832]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 17 18:41:31.747609 ignition[832]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 17 18:41:31.747609 ignition[832]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 17 18:41:31.747609 ignition[832]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 17 18:41:31.747609 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Mar 17 18:41:31.747609 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Mar 17 18:41:31.747609 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Mar 17 18:41:31.747609 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Mar 17 18:41:31.638868 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Mar 17 18:41:31.883608 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 17 18:41:31.688895 unknown[832]: wrote ssh authorized keys file for user: core
Mar 17 18:41:32.317945 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Mar 17 18:41:32.334613 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/hosts"
Mar 17 18:41:32.334613 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(5): oem config not found in "/usr/share/oem", looking on oem partition
Mar 17 18:41:32.334613 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(5): op(6): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem603210599"
Mar 17 18:41:32.334613 ignition[832]: CRITICAL : files: createFilesystemsFiles: createFiles: op(5): op(6): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem603210599": device or resource busy
Mar 17 18:41:32.334613 ignition[832]: ERROR : files: createFilesystemsFiles: createFiles: op(5): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem603210599", trying btrfs: device or resource busy
Mar 17 18:41:32.334613 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(5): op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem603210599"
Mar 17 18:41:32.334613 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(5): op(7): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem603210599"
Mar 17 18:41:32.334613 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(5): op(8): [started] unmounting "/mnt/oem603210599"
Mar 17 18:41:32.334613 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(5): op(8): [finished] unmounting "/mnt/oem603210599"
Mar 17 18:41:32.334613 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/hosts"
Mar 17 18:41:32.334613 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 17 18:41:32.334613 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 17 18:41:32.334613 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/profile.d/google-cloud-sdk.sh"
Mar 17 18:41:32.334613 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(a): oem config not found in "/usr/share/oem", looking on oem partition
Mar 17 18:41:32.332034 systemd[1]: mnt-oem603210599.mount: Deactivated successfully.
Mar 17 18:41:32.573653 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1927667189"
Mar 17 18:41:32.573653 ignition[832]: CRITICAL : files: createFilesystemsFiles: createFiles: op(a): op(b): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1927667189": device or resource busy
Mar 17 18:41:32.573653 ignition[832]: ERROR : files: createFilesystemsFiles: createFiles: op(a): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1927667189", trying btrfs: device or resource busy
Mar 17 18:41:32.573653 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1927667189"
Mar 17 18:41:32.573653 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(c): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1927667189"
Mar 17 18:41:32.573653 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(d): [started] unmounting "/mnt/oem1927667189"
Mar 17 18:41:32.573653 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(d): [finished] unmounting "/mnt/oem1927667189"
Mar 17 18:41:32.573653 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/profile.d/google-cloud-sdk.sh"
Mar 17 18:41:32.573653 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 17 18:41:32.573653 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(e): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Mar 17 18:41:32.745636 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(e): GET result: OK
Mar 17 18:41:32.807738 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 17 18:41:32.823618 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/home/core/install.sh"
Mar 17 18:41:32.823618 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/home/core/install.sh"
Mar 17 18:41:32.823618 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 17 18:41:32.823618 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 17 18:41:32.823618 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(11): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 17 18:41:32.823618 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(11): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 17 18:41:32.823618 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(12): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 17 18:41:32.823618 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(12): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 17 18:41:32.823618 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(13): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Mar 17 18:41:32.823618 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(13): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Mar 17 18:41:32.823618 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(14): [started] writing file "/sysroot/etc/systemd/system/oem-gce-enable-oslogin.service"
Mar 17 18:41:32.823618 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(14): oem config not found in "/usr/share/oem", looking on oem partition
Mar 17 18:41:32.823618 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(15): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem998174667"
Mar 17 18:41:32.823618 ignition[832]: CRITICAL : files: createFilesystemsFiles: createFiles: op(14): op(15): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem998174667": device or resource busy
Mar 17 18:41:33.061686 ignition[832]: ERROR : files: createFilesystemsFiles: createFiles: op(14): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem998174667", trying btrfs: device or resource busy
Mar 17 18:41:33.061686 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(16): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem998174667"
Mar 17 18:41:33.061686 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(16): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem998174667"
Mar 17 18:41:33.061686 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(17): [started] unmounting "/mnt/oem998174667"
Mar 17 18:41:33.061686 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(17): [finished] unmounting "/mnt/oem998174667"
Mar 17 18:41:33.061686 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(14): [finished] writing file "/sysroot/etc/systemd/system/oem-gce-enable-oslogin.service"
Mar 17 18:41:33.061686 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(18): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Mar 17 18:41:33.061686 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(18): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Mar 17 18:41:33.061686 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(18): GET result: OK
Mar 17 18:41:33.396898 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(18): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Mar 17 18:41:33.396898 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(19): [started] writing file "/sysroot/etc/systemd/system/oem-gce.service"
Mar 17 18:41:33.429604 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(19): oem config not found in "/usr/share/oem", looking on oem partition
Mar 17 18:41:33.429604 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(19): op(1a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem257030877"
Mar 17 18:41:33.429604 ignition[832]: CRITICAL : files: createFilesystemsFiles: createFiles: op(19): op(1a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem257030877": device or resource busy
Mar 17 18:41:33.429604 ignition[832]: ERROR : files: createFilesystemsFiles: createFiles: op(19): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem257030877", trying btrfs: device or resource busy
Mar 17 18:41:33.429604 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(19): op(1b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem257030877"
Mar 17 18:41:33.429604 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(19): op(1b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem257030877"
Mar 17 18:41:33.429604 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(19): op(1c): [started] unmounting "/mnt/oem257030877"
Mar 17 18:41:33.429604 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(19): op(1c): [finished] unmounting "/mnt/oem257030877"
Mar 17 18:41:33.429604 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(19): [finished] writing file "/sysroot/etc/systemd/system/oem-gce.service"
Mar 17 18:41:33.429604 ignition[832]: INFO : files: op(1d): [started] processing unit "coreos-metadata-sshkeys@.service"
Mar 17 18:41:33.429604 ignition[832]: INFO : files: op(1d): [finished] processing unit "coreos-metadata-sshkeys@.service"
Mar 17 18:41:33.429604 ignition[832]: INFO : files: op(1e): [started] processing unit "oem-gce.service"
Mar 17 18:41:33.429604 ignition[832]: INFO : files: op(1e): [finished] processing unit "oem-gce.service"
Mar 17 18:41:33.429604 ignition[832]: INFO : files: op(1f): [started] processing unit "oem-gce-enable-oslogin.service"
Mar 17 18:41:33.429604 ignition[832]: INFO : files: op(1f): [finished] processing unit "oem-gce-enable-oslogin.service"
Mar 17 18:41:33.429604 ignition[832]: INFO : files: op(20): [started] processing unit "containerd.service"
Mar 17 18:41:33.429604 ignition[832]: INFO : files: op(20): op(21): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Mar 17 18:41:33.866811 kernel: audit: type=1130 audit(1742236893.445:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:33.866852 kernel: audit: type=1130 audit(1742236893.551:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:33.866870 kernel: audit: type=1130 audit(1742236893.590:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:33.866907 kernel: audit: type=1131 audit(1742236893.590:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:33.866922 kernel: audit: type=1130 audit(1742236893.731:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:33.866937 kernel: audit: type=1131 audit(1742236893.731:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:33.445000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:33.551000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:33.590000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:33.590000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:33.731000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:33.731000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:33.421000 systemd[1]: mnt-oem257030877.mount: Deactivated successfully.
Mar 17 18:41:33.894000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:33.906039 ignition[832]: INFO : files: op(20): op(21): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Mar 17 18:41:33.906039 ignition[832]: INFO : files: op(20): [finished] processing unit "containerd.service"
Mar 17 18:41:33.906039 ignition[832]: INFO : files: op(22): [started] processing unit "prepare-helm.service"
Mar 17 18:41:33.906039 ignition[832]: INFO : files: op(22): op(23): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 17 18:41:33.906039 ignition[832]: INFO : files: op(22): op(23): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 17 18:41:33.906039 ignition[832]: INFO : files: op(22): [finished] processing unit "prepare-helm.service"
Mar 17 18:41:33.906039 ignition[832]: INFO : files: op(24): [started] setting preset to enabled for "oem-gce.service"
Mar 17 18:41:33.906039 ignition[832]: INFO : files: op(24): [finished] setting preset to enabled for "oem-gce.service"
Mar 17 18:41:33.906039 ignition[832]: INFO : files: op(25): [started] setting preset to enabled for "oem-gce-enable-oslogin.service"
Mar 17 18:41:33.906039 ignition[832]: INFO : files: op(25): [finished] setting preset to enabled for "oem-gce-enable-oslogin.service"
Mar 17 18:41:33.906039 ignition[832]: INFO : files: op(26): [started] setting preset to enabled for "prepare-helm.service"
Mar 17 18:41:33.906039 ignition[832]: INFO : files: op(26): [finished] setting preset to enabled for "prepare-helm.service"
Mar 17 18:41:33.906039 ignition[832]: INFO : files: op(27): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Mar 17 18:41:33.906039 ignition[832]: INFO : files: op(27): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Mar 17 18:41:33.906039 ignition[832]: INFO : files: createResultFile: createFiles: op(28): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 17 18:41:33.906039 ignition[832]: INFO : files: createResultFile: createFiles: op(28): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 17 18:41:33.906039 ignition[832]: INFO : files: files passed
Mar 17 18:41:33.906039 ignition[832]: INFO : Ignition finished successfully
Mar 17 18:41:34.007000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:34.261000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:33.434390 systemd[1]: Finished ignition-files.service.
Mar 17 18:41:34.298000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:33.457335 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Mar 17 18:41:34.315000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:34.326722 initrd-setup-root-after-ignition[855]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 18:41:34.333000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:33.489842 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Mar 17 18:41:33.491056 systemd[1]: Starting ignition-quench.service...
Mar 17 18:41:33.521995 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Mar 17 18:41:34.388000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:33.553126 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 17 18:41:33.553273 systemd[1]: Finished ignition-quench.service.
Mar 17 18:41:34.420000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:33.592134 systemd[1]: Reached target ignition-complete.target.
Mar 17 18:41:34.436000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:33.656929 systemd[1]: Starting initrd-parse-etc.service...
Mar 17 18:41:34.454000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:34.464778 ignition[870]: INFO : Ignition 2.14.0
Mar 17 18:41:34.464778 ignition[870]: INFO : Stage: umount
Mar 17 18:41:34.464778 ignition[870]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Mar 17 18:41:34.464778 ignition[870]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6
Mar 17 18:41:34.464778 ignition[870]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Mar 17 18:41:34.464778 ignition[870]: INFO : umount: umount passed
Mar 17 18:41:34.464778 ignition[870]: INFO : Ignition finished successfully
Mar 17 18:41:34.470000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:34.478000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:34.491000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:34.508000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:34.557000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:33.697876 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 17 18:41:33.698003 systemd[1]: Finished initrd-parse-etc.service.
Mar 17 18:41:33.733043 systemd[1]: Reached target initrd-fs.target.
Mar 17 18:41:33.786884 systemd[1]: Reached target initrd.target.
Mar 17 18:41:33.820881 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Mar 17 18:41:33.822316 systemd[1]: Starting dracut-pre-pivot.service...
Mar 17 18:41:33.854075 systemd[1]: Finished dracut-pre-pivot.service.
Mar 17 18:41:34.668000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:33.897337 systemd[1]: Starting initrd-cleanup.service...
Mar 17 18:41:34.684000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:33.922236 systemd[1]: Stopped target nss-lookup.target.
Mar 17 18:41:33.945065 systemd[1]: Stopped target remote-cryptsetup.target.
Mar 17 18:41:33.965926 systemd[1]: Stopped target timers.target.
Mar 17 18:41:33.983906 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 17 18:41:34.733000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:33.984107 systemd[1]: Stopped dracut-pre-pivot.service.
Mar 17 18:41:34.749000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:34.009108 systemd[1]: Stopped target initrd.target.
Mar 17 18:41:34.765000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:34.765000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:34.765000 audit: BPF prog-id=6 op=UNLOAD
Mar 17 18:41:34.032890 systemd[1]: Stopped target basic.target.
Mar 17 18:41:34.050898 systemd[1]: Stopped target ignition-complete.target.
Mar 17 18:41:34.071879 systemd[1]: Stopped target ignition-diskful.target.
Mar 17 18:41:34.810000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:34.092926 systemd[1]: Stopped target initrd-root-device.target.
Mar 17 18:41:34.827000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:34.115890 systemd[1]: Stopped target remote-fs.target.
Mar 17 18:41:34.843000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:34.138908 systemd[1]: Stopped target remote-fs-pre.target.
Mar 17 18:41:34.159888 systemd[1]: Stopped target sysinit.target.
Mar 17 18:41:34.873000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:34.180897 systemd[1]: Stopped target local-fs.target.
Mar 17 18:41:34.202902 systemd[1]: Stopped target local-fs-pre.target.
Mar 17 18:41:34.225924 systemd[1]: Stopped target swap.target.
Mar 17 18:41:34.910000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:34.247870 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 17 18:41:34.932000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:34.248093 systemd[1]: Stopped dracut-pre-mount.service.
Mar 17 18:41:34.948000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:34.263270 systemd[1]: Stopped target cryptsetup.target.
Mar 17 18:41:34.276945 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 17 18:41:34.277136 systemd[1]: Stopped dracut-initqueue.service.
Mar 17 18:41:34.300070 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 17 18:41:34.994000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:34.300259 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Mar 17 18:41:35.012000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:34.316998 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 17 18:41:35.028000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:35.028000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:34.317174 systemd[1]: Stopped ignition-files.service.
Mar 17 18:41:34.336522 systemd[1]: Stopping ignition-mount.service...
Mar 17 18:41:34.372794 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 17 18:41:35.071000 audit: BPF prog-id=5 op=UNLOAD
Mar 17 18:41:35.071000 audit: BPF prog-id=4 op=UNLOAD
Mar 17 18:41:35.071000 audit: BPF prog-id=3 op=UNLOAD
Mar 17 18:41:35.073000 audit: BPF prog-id=8 op=UNLOAD
Mar 17 18:41:35.073000 audit: BPF prog-id=7 op=UNLOAD
Mar 17 18:41:34.373052 systemd[1]: Stopped kmod-static-nodes.service.
Mar 17 18:41:34.391629 systemd[1]: Stopping sysroot-boot.service...
Mar 17 18:41:35.102458 systemd-journald[190]: Received SIGTERM from PID 1 (n/a).
Mar 17 18:41:34.405608 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 17 18:41:35.112643 iscsid[693]: iscsid shutting down.
Mar 17 18:41:34.405908 systemd[1]: Stopped systemd-udev-trigger.service.
Mar 17 18:41:34.421900 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 17 18:41:34.422088 systemd[1]: Stopped dracut-pre-trigger.service.
Mar 17 18:41:34.441856 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 17 18:41:34.443084 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 17 18:41:34.443200 systemd[1]: Stopped ignition-mount.service.
Mar 17 18:41:34.456411 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 17 18:41:34.456562 systemd[1]: Stopped sysroot-boot.service.
Mar 17 18:41:34.472422 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 17 18:41:34.472591 systemd[1]: Stopped ignition-disks.service.
Mar 17 18:41:34.479907 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 17 18:41:34.479983 systemd[1]: Stopped ignition-kargs.service.
Mar 17 18:41:34.492881 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 17 18:41:34.492961 systemd[1]: Stopped ignition-fetch.service.
Mar 17 18:41:34.509933 systemd[1]: Stopped target network.target.
Mar 17 18:41:34.551654 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 17 18:41:34.551889 systemd[1]: Stopped ignition-fetch-offline.service.
Mar 17 18:41:34.558910 systemd[1]: Stopped target paths.target.
Mar 17 18:41:34.580625 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 17 18:41:34.584552 systemd[1]: Stopped systemd-ask-password-console.path.
Mar 17 18:41:34.596624 systemd[1]: Stopped target slices.target.
Mar 17 18:41:34.609633 systemd[1]: Stopped target sockets.target.
Mar 17 18:41:34.622697 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 17 18:41:34.622758 systemd[1]: Closed iscsid.socket.
Mar 17 18:41:34.635722 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 17 18:41:34.635811 systemd[1]: Closed iscsiuio.socket.
Mar 17 18:41:34.653693 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 17 18:41:34.653814 systemd[1]: Stopped ignition-setup.service.
Mar 17 18:41:34.669801 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 17 18:41:34.669881 systemd[1]: Stopped initrd-setup-root.service.
Mar 17 18:41:34.685985 systemd[1]: Stopping systemd-networkd.service...
Mar 17 18:41:34.690540 systemd-networkd[687]: eth0: DHCPv6 lease lost
Mar 17 18:41:34.699887 systemd[1]: Stopping systemd-resolved.service...
Mar 17 18:41:34.714193 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 17 18:41:34.714316 systemd[1]: Stopped systemd-resolved.service.
Mar 17 18:41:34.735501 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 17 18:41:34.735635 systemd[1]: Stopped systemd-networkd.service.
Mar 17 18:41:34.751488 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 17 18:41:34.751605 systemd[1]: Finished initrd-cleanup.service.
Mar 17 18:41:34.767859 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 17 18:41:34.767903 systemd[1]: Closed systemd-networkd.socket.
Mar 17 18:41:34.782786 systemd[1]: Stopping network-cleanup.service...
Mar 17 18:41:34.795622 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 17 18:41:34.795882 systemd[1]: Stopped parse-ip-for-networkd.service.
Mar 17 18:41:34.811859 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 17 18:41:34.811934 systemd[1]: Stopped systemd-sysctl.service.
Mar 17 18:41:34.828928 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 17 18:41:34.828996 systemd[1]: Stopped systemd-modules-load.service.
Mar 17 18:41:34.844971 systemd[1]: Stopping systemd-udevd.service...
Mar 17 18:41:34.860374 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 17 18:41:34.861144 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 17 18:41:34.861301 systemd[1]: Stopped systemd-udevd.service.
Mar 17 18:41:34.876150 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 17 18:41:34.876247 systemd[1]: Closed systemd-udevd-control.socket.
Mar 17 18:41:34.889782 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 17 18:41:34.889845 systemd[1]: Closed systemd-udevd-kernel.socket.
Mar 17 18:41:34.904749 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 17 18:41:34.904828 systemd[1]: Stopped dracut-pre-udev.service.
Mar 17 18:41:34.911951 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 17 18:41:34.912020 systemd[1]: Stopped dracut-cmdline.service.
Mar 17 18:41:34.933832 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 17 18:41:34.933913 systemd[1]: Stopped dracut-cmdline-ask.service.
Mar 17 18:41:34.950889 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Mar 17 18:41:34.972616 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 17 18:41:34.972746 systemd[1]: Stopped systemd-vconsole-setup.service.
Mar 17 18:41:34.996357 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 17 18:41:34.996529 systemd[1]: Stopped network-cleanup.service.
Mar 17 18:41:35.014118 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 17 18:41:35.014242 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Mar 17 18:41:35.029940 systemd[1]: Reached target initrd-switch-root.target.
Mar 17 18:41:35.047813 systemd[1]: Starting initrd-switch-root.service...
Mar 17 18:41:35.071172 systemd[1]: Switching root.
Mar 17 18:41:35.116046 systemd-journald[190]: Journal stopped
Mar 17 18:41:39.824470 kernel: SELinux: Class mctp_socket not defined in policy.
Mar 17 18:41:39.824597 kernel: SELinux: Class anon_inode not defined in policy.
Mar 17 18:41:39.824628 kernel: SELinux: the above unknown classes and permissions will be allowed
Mar 17 18:41:39.824655 kernel: SELinux: policy capability network_peer_controls=1
Mar 17 18:41:39.824677 kernel: SELinux: policy capability open_perms=1
Mar 17 18:41:39.824705 kernel: SELinux: policy capability extended_socket_class=1
Mar 17 18:41:39.824733 kernel: SELinux: policy capability always_check_network=0
Mar 17 18:41:39.824756 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 17 18:41:39.824778 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 17 18:41:39.824800 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 17 18:41:39.824828 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 17 18:41:39.824851 systemd[1]: Successfully loaded SELinux policy in 114.881ms.
Mar 17 18:41:39.824893 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.110ms.
Mar 17 18:41:39.824918 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Mar 17 18:41:39.824943 systemd[1]: Detected virtualization kvm.
Mar 17 18:41:39.824966 systemd[1]: Detected architecture x86-64.
Mar 17 18:41:39.824989 systemd[1]: Detected first boot.
Mar 17 18:41:39.825014 systemd[1]: Initializing machine ID from VM UUID.
Mar 17 18:41:39.825038 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Mar 17 18:41:39.825065 systemd[1]: Populated /etc with preset unit settings.
Mar 17 18:41:39.825089 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Mar 17 18:41:39.825120 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Mar 17 18:41:39.825149 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 18:41:39.825184 systemd[1]: Queued start job for default target multi-user.target.
Mar 17 18:41:39.825208 systemd[1]: Unnecessary job was removed for dev-sda6.device.
Mar 17 18:41:39.825232 systemd[1]: Created slice system-addon\x2dconfig.slice.
Mar 17 18:41:39.825256 systemd[1]: Created slice system-addon\x2drun.slice.
Mar 17 18:41:39.825283 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
Mar 17 18:41:39.825306 systemd[1]: Created slice system-getty.slice.
Mar 17 18:41:39.825329 systemd[1]: Created slice system-modprobe.slice.
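The "systemd 252 running in system mode (+PAM +AUDIT ... -TPM2 ...)" line above encodes systemd's compile-time features: a leading `+` means built in, `-` means compiled out, and `key=value` tokens carry settings. A minimal sketch of turning that string into a dict (the token format is an assumption read off the log line, not a systemd API):

```python
# Hypothetical helper: parse the "+PAM +AUDIT -APPARMOR ..." feature string
# from systemd's startup banner into {name: enabled-or-value}.
def parse_features(feature_string: str) -> dict:
    flags = {}
    for token in feature_string.split():
        if token.startswith("+"):
            flags[token[1:]] = True       # feature compiled in
        elif token.startswith("-"):
            flags[token[1:]] = False      # feature compiled out
        elif "=" in token:
            key, _, value = token.partition("=")
            flags[key] = value            # e.g. default-hierarchy=unified
    return flags
```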
Mar 17 18:41:39.825352 systemd[1]: Created slice system-serial\x2dgetty.slice.
Mar 17 18:41:39.825375 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Mar 17 18:41:39.825398 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Mar 17 18:41:39.825421 systemd[1]: Created slice user.slice.
Mar 17 18:41:39.825458 systemd[1]: Started systemd-ask-password-console.path.
Mar 17 18:41:39.825488 systemd[1]: Started systemd-ask-password-wall.path.
Mar 17 18:41:39.825515 systemd[1]: Set up automount boot.automount.
Mar 17 18:41:39.825539 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Mar 17 18:41:39.825563 systemd[1]: Reached target integritysetup.target.
Mar 17 18:41:39.825588 systemd[1]: Reached target remote-cryptsetup.target.
Mar 17 18:41:39.825612 systemd[1]: Reached target remote-fs.target.
Mar 17 18:41:39.825635 systemd[1]: Reached target slices.target.
Mar 17 18:41:39.825658 systemd[1]: Reached target swap.target.
Mar 17 18:41:39.825684 systemd[1]: Reached target torcx.target.
Mar 17 18:41:39.825711 systemd[1]: Reached target veritysetup.target.
Mar 17 18:41:39.825735 systemd[1]: Listening on systemd-coredump.socket.
Mar 17 18:41:39.825758 systemd[1]: Listening on systemd-initctl.socket.
Mar 17 18:41:39.825781 kernel: kauditd_printk_skb: 50 callbacks suppressed
Mar 17 18:41:39.825804 kernel: audit: type=1400 audit(1742236899.341:86): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Mar 17 18:41:39.825827 systemd[1]: Listening on systemd-journald-audit.socket.
Mar 17 18:41:39.825850 kernel: audit: type=1335 audit(1742236899.341:87): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Mar 17 18:41:39.825873 systemd[1]: Listening on systemd-journald-dev-log.socket.
Mar 17 18:41:39.825900 systemd[1]: Listening on systemd-journald.socket.
Mar 17 18:41:39.825923 systemd[1]: Listening on systemd-networkd.socket.
Mar 17 18:41:39.825946 systemd[1]: Listening on systemd-udevd-control.socket.
Mar 17 18:41:39.825970 systemd[1]: Listening on systemd-udevd-kernel.socket.
Mar 17 18:41:39.825994 systemd[1]: Listening on systemd-userdbd.socket.
Mar 17 18:41:39.826016 systemd[1]: Mounting dev-hugepages.mount...
Mar 17 18:41:39.826041 systemd[1]: Mounting dev-mqueue.mount...
Mar 17 18:41:39.826065 systemd[1]: Mounting media.mount...
Mar 17 18:41:39.826088 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 18:41:39.826111 systemd[1]: Mounting sys-kernel-debug.mount...
Mar 17 18:41:39.826138 systemd[1]: Mounting sys-kernel-tracing.mount...
Mar 17 18:41:39.826161 systemd[1]: Mounting tmp.mount...
Mar 17 18:41:39.826186 systemd[1]: Starting flatcar-tmpfiles.service...
Mar 17 18:41:39.826209 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Mar 17 18:41:39.826232 systemd[1]: Starting kmod-static-nodes.service...
Mar 17 18:41:39.826259 systemd[1]: Starting modprobe@configfs.service...
Mar 17 18:41:39.826282 systemd[1]: Starting modprobe@dm_mod.service...
Mar 17 18:41:39.826305 systemd[1]: Starting modprobe@drm.service...
Mar 17 18:41:39.826328 systemd[1]: Starting modprobe@efi_pstore.service...
Mar 17 18:41:39.826354 systemd[1]: Starting modprobe@fuse.service...
Mar 17 18:41:39.826377 systemd[1]: Starting modprobe@loop.service...
Mar 17 18:41:39.826399 kernel: fuse: init (API version 7.34)
Mar 17 18:41:39.826421 kernel: loop: module loaded
Mar 17 18:41:39.826456 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
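The modprobe@configfs, modprobe@dm_mod, modprobe@drm, modprobe@efi_pstore, modprobe@fuse, and modprobe@loop units above are all instances of systemd's modprobe@.service template, which loads one kernel module per instance name (`%i`). A typical definition looks approximately like this (paths and options vary by distribution; this is a sketch, not Flatcar's exact unit file):

```ini
[Unit]
Description=Load Kernel Module %i
DefaultDependencies=no
Before=sysinit.target

[Service]
Type=oneshot
ExecStart=/sbin/modprobe -abq %i
```

The `fuse: init` and `loop: module loaded` kernel lines that follow are those modules announcing themselves once loaded.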
Mar 17 18:41:39.826480 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Mar 17 18:41:39.826513 systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
Mar 17 18:41:39.826536 systemd[1]: Starting systemd-journald.service...
Mar 17 18:41:39.826559 systemd[1]: Starting systemd-modules-load.service...
Mar 17 18:41:39.826586 systemd[1]: Starting systemd-network-generator.service...
Mar 17 18:41:39.826609 kernel: audit: type=1305 audit(1742236899.820:88): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Mar 17 18:41:39.826638 systemd-journald[1034]: Journal started
Mar 17 18:41:39.826727 systemd-journald[1034]: Runtime Journal (/run/log/journal/7d86a730ae4dca304595cb83d9842c59) is 8.0M, max 148.8M, 140.8M free.
Mar 17 18:41:39.341000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Mar 17 18:41:39.341000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Mar 17 18:41:39.820000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Mar 17 18:41:39.872748 kernel: audit: type=1300 audit(1742236899.820:88): arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffca7538ca0 a2=4000 a3=7ffca7538d3c items=0 ppid=1 pid=1034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:41:39.872862 kernel: audit: type=1327 audit(1742236899.820:88): proctitle="/usr/lib/systemd/systemd-journald"
Mar 17 18:41:39.820000 audit[1034]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffca7538ca0 a2=4000 a3=7ffca7538d3c items=0 ppid=1 pid=1034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:41:39.820000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Mar 17 18:41:39.897490 systemd[1]: Starting systemd-remount-fs.service...
Mar 17 18:41:39.912479 systemd[1]: Starting systemd-udev-trigger.service...
Mar 17 18:41:39.932465 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 18:41:39.942499 systemd[1]: Started systemd-journald.service.
Mar 17 18:41:39.948000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:39.972465 kernel: audit: type=1130 audit(1742236899.948:89): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:39.974506 systemd[1]: Mounted dev-hugepages.mount.
Mar 17 18:41:39.983833 systemd[1]: Mounted dev-mqueue.mount.
Mar 17 18:41:39.990825 systemd[1]: Mounted media.mount.
Mar 17 18:41:39.997850 systemd[1]: Mounted sys-kernel-debug.mount.
Mar 17 18:41:40.006873 systemd[1]: Mounted sys-kernel-tracing.mount.
Mar 17 18:41:40.015829 systemd[1]: Mounted tmp.mount.
Mar 17 18:41:40.023178 systemd[1]: Finished flatcar-tmpfiles.service.
Mar 17 18:41:40.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:40.032292 systemd[1]: Finished kmod-static-nodes.service.
Mar 17 18:41:40.054503 kernel: audit: type=1130 audit(1742236900.030:90): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:40.061000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:40.063220 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 17 18:41:40.063551 systemd[1]: Finished modprobe@configfs.service.
Mar 17 18:41:40.085548 kernel: audit: type=1130 audit(1742236900.061:91): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:40.092000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:40.094197 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 18:41:40.094509 systemd[1]: Finished modprobe@dm_mod.service.
Mar 17 18:41:40.138328 kernel: audit: type=1130 audit(1742236900.092:92): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:40.138491 kernel: audit: type=1131 audit(1742236900.092:93): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:40.092000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:40.145000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:40.145000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:40.147136 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 17 18:41:40.147385 systemd[1]: Finished modprobe@drm.service.
Mar 17 18:41:40.154000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:40.154000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:40.156132 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 18:41:40.156395 systemd[1]: Finished modprobe@efi_pstore.service.
Mar 17 18:41:40.163000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:40.163000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:40.165115 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 17 18:41:40.165374 systemd[1]: Finished modprobe@fuse.service.
Mar 17 18:41:40.173000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:40.173000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:40.175085 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 18:41:40.175343 systemd[1]: Finished modprobe@loop.service.
Mar 17 18:41:40.183000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:40.183000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:40.185208 systemd[1]: Finished systemd-modules-load.service.
Mar 17 18:41:40.192000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:40.194127 systemd[1]: Finished systemd-network-generator.service.
Mar 17 18:41:40.201000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:40.203138 systemd[1]: Finished systemd-remount-fs.service.
Mar 17 18:41:40.210000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:40.212195 systemd[1]: Finished systemd-udev-trigger.service.
Mar 17 18:41:40.220000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:40.222278 systemd[1]: Reached target network-pre.target.
Mar 17 18:41:40.232398 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Mar 17 18:41:40.243335 systemd[1]: Mounting sys-kernel-config.mount...
Mar 17 18:41:40.250620 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 17 18:41:40.254656 systemd[1]: Starting systemd-hwdb-update.service...
Mar 17 18:41:40.263618 systemd[1]: Starting systemd-journal-flush.service...
Mar 17 18:41:40.271661 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 17 18:41:40.273576 systemd-journald[1034]: Time spent on flushing to /var/log/journal/7d86a730ae4dca304595cb83d9842c59 is 57.296ms for 1093 entries.
Mar 17 18:41:40.273576 systemd-journald[1034]: System Journal (/var/log/journal/7d86a730ae4dca304595cb83d9842c59) is 8.0M, max 584.8M, 576.8M free.
Mar 17 18:41:40.374467 systemd-journald[1034]: Received client request to flush runtime journal.
Mar 17 18:41:40.348000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:40.283115 systemd[1]: Starting systemd-random-seed.service...
Mar 17 18:41:40.290648 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Mar 17 18:41:40.292704 systemd[1]: Starting systemd-sysctl.service...
Mar 17 18:41:40.375000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:40.301758 systemd[1]: Starting systemd-sysusers.service...
Mar 17 18:41:40.378002 udevadm[1056]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Mar 17 18:41:40.310852 systemd[1]: Starting systemd-udev-settle.service...
Mar 17 18:41:40.322424 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Mar 17 18:41:40.330813 systemd[1]: Mounted sys-kernel-config.mount.
Mar 17 18:41:40.340422 systemd[1]: Finished systemd-random-seed.service.
Mar 17 18:41:40.353252 systemd[1]: Reached target first-boot-complete.target.
Mar 17 18:41:40.368204 systemd[1]: Finished systemd-sysctl.service.
Mar 17 18:41:40.377421 systemd[1]: Finished systemd-journal-flush.service.
Mar 17 18:41:40.384000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:40.391224 systemd[1]: Finished systemd-sysusers.service.
Mar 17 18:41:40.398000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:40.401895 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Mar 17 18:41:40.466559 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Mar 17 18:41:40.473000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:41.019032 systemd[1]: Finished systemd-hwdb-update.service.
Mar 17 18:41:41.026000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:41.029540 systemd[1]: Starting systemd-udevd.service...
Mar 17 18:41:41.053523 systemd-udevd[1067]: Using default interface naming scheme 'v252'.
Mar 17 18:41:41.112185 systemd[1]: Started systemd-udevd.service.
Mar 17 18:41:41.119000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:41.124959 systemd[1]: Starting systemd-networkd.service...
Mar 17 18:41:41.146138 systemd[1]: Starting systemd-userdbd.service...
Mar 17 18:41:41.181596 systemd[1]: Found device dev-ttyS0.device.
Mar 17 18:41:41.219506 systemd[1]: Started systemd-userdbd.service.
Mar 17 18:41:41.227000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:41.308472 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Mar 17 18:41:41.352500 kernel: ACPI: button: Power Button [PWRF]
Mar 17 18:41:41.368896 systemd-networkd[1078]: lo: Link UP
Mar 17 18:41:41.368911 systemd-networkd[1078]: lo: Gained carrier
Mar 17 18:41:41.369706 systemd-networkd[1078]: Enumeration completed
Mar 17 18:41:41.369911 systemd[1]: Started systemd-networkd.service.
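After enumeration, systemd-networkd configures eth0 from /usr/lib/systemd/network/zz-default.network and acquires a DHCPv4 lease from the GCE metadata server. A catch-all DHCP policy of that kind looks approximately like this in systemd.network syntax (a sketch; Flatcar's actual zz-default.network file differs in detail):

```ini
[Match]
# Match every interface not already claimed by a more specific .network file
Name=*

[Network]
DHCP=yes
```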
Mar 17 18:41:41.370986 systemd-networkd[1078]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 17 18:41:41.373004 systemd-networkd[1078]: eth0: Link UP
Mar 17 18:41:41.373173 systemd-networkd[1078]: eth0: Gained carrier
Mar 17 18:41:41.377000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:41.383693 systemd-networkd[1078]: eth0: DHCPv4 address 10.128.0.50/32, gateway 10.128.0.1 acquired from 169.254.169.254
Mar 17 18:41:41.416462 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3
Mar 17 18:41:41.427000 audit[1080]: AVC avc: denied { confidentiality } for pid=1080 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Mar 17 18:41:41.427000 audit[1080]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=560960101bd0 a1=338ac a2=7f32cddd5bc5 a3=5 items=110 ppid=1067 pid=1080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:41:41.427000 audit: CWD cwd="/"
Mar 17 18:41:41.427000 audit: PATH item=0 name=(null) inode=40 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:41:41.427000 audit: PATH item=1 name=(null) inode=14490 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:41:41.427000 audit: PATH item=2 name=(null) inode=14490 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:41:41.427000 audit: PATH item=3 name=(null) inode=14491 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:41:41.427000 audit: PATH item=4 name=(null) inode=14490 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:41:41.427000 audit: PATH item=5 name=(null) inode=14492 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:41:41.427000 audit: PATH item=6 name=(null) inode=14490 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:41:41.427000 audit: PATH item=7 name=(null) inode=14493 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:41:41.427000 audit: PATH item=8 name=(null) inode=14493 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:41:41.427000 audit: PATH item=9 name=(null) inode=14494 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:41:41.427000 audit: PATH item=10 name=(null) inode=14493 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:41:41.427000 audit: PATH item=11 name=(null) inode=14495 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:41:41.427000 audit: PATH item=12 name=(null) inode=14493 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:41:41.427000 audit: PATH item=13 name=(null) inode=14496 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:41:41.427000 audit: PATH item=14 name=(null) inode=14493 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:41:41.427000 audit: PATH item=15 name=(null) inode=14497 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:41:41.427000 audit: PATH item=16 name=(null) inode=14493 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:41:41.427000 audit: PATH item=17 name=(null) inode=14498 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:41:41.427000 audit: PATH item=18 name=(null) inode=14490 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:41:41.427000 audit: PATH item=19 name=(null) inode=14499 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:41:41.427000 audit: PATH item=20 name=(null) inode=14499 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:41:41.427000 audit: PATH item=21 name=(null) inode=14500 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:41:41.427000 audit: PATH item=22 name=(null) inode=14499 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:41:41.427000 audit: PATH item=23 name=(null) inode=14501 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:41:41.427000 audit: PATH item=24 name=(null) inode=14499 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:41:41.427000 audit: PATH item=25 name=(null) inode=14502 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:41:41.427000 audit: PATH item=26 name=(null) inode=14499 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:41:41.427000 audit: PATH item=27 name=(null) inode=14503 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:41:41.427000 audit: PATH item=28 name=(null) inode=14499 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:41:41.427000 audit: PATH item=29 name=(null) inode=14504 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:41:41.427000 audit: PATH item=30 name=(null) inode=14490 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:41:41.427000 audit: PATH item=31 name=(null) inode=14505 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:41:41.427000 audit: PATH item=32 name=(null) inode=14505 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:41:41.427000 audit: PATH item=33 name=(null) inode=14506 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:41:41.427000 audit: PATH item=34 name=(null) inode=14505 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:41:41.427000 audit: PATH item=35 name=(null) inode=14507 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:41:41.427000 audit: PATH item=36 name=(null) inode=14505 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:41:41.427000 audit: PATH item=37 name=(null) inode=14508 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:41:41.427000 audit: PATH item=38 name=(null) inode=14505 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:41:41.427000 audit: PATH item=39 name=(null) inode=14509 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:41:41.427000 audit: PATH item=40 name=(null) inode=14505 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:41:41.427000 audit: PATH item=41 name=(null) inode=14510 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:41:41.427000 audit: PATH item=42 name=(null) inode=14490 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:41:41.427000 audit: PATH item=43 name=(null) inode=14511 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:41:41.427000 audit: PATH item=44 name=(null) inode=14511 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:41:41.427000 audit: PATH item=45 name=(null) inode=14512 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:41:41.427000 audit: PATH item=46 name=(null) inode=14511 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:41:41.427000 audit: PATH item=47 name=(null) inode=14513 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:41:41.427000 audit: PATH item=48 name=(null) inode=14511 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:41:41.427000 audit: PATH item=49 name=(null) inode=14514 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:41:41.427000 audit: PATH item=50 name=(null) inode=14511 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:41:41.427000 audit: PATH item=51 name=(null) inode=14515 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:41:41.427000 audit: PATH item=52 name=(null) inode=14511 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:41:41.427000 audit: PATH item=53 name=(null) inode=14516 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:41:41.427000 audit: PATH item=54 name=(null) inode=40 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:41:41.427000 audit: PATH item=55 name=(null) inode=14517 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:41:41.427000 audit: PATH item=56 name=(null) inode=14517 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:41:41.427000 audit: PATH item=57 name=(null) inode=14518 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:41:41.427000 audit: PATH item=58 name=(null) inode=14517 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:41:41.427000 audit: PATH item=59 name=(null) inode=14519 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:41:41.427000 audit: PATH item=60 name=(null) inode=14517 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:41:41.427000 audit: PATH item=61 name=(null) inode=14520 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:41:41.427000 audit: PATH item=62 name=(null) inode=14520 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:41:41.427000 audit: PATH item=63 name=(null) inode=14521 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:41:41.427000 audit: PATH item=64 name=(null) inode=14520 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:41:41.427000 audit: PATH item=65 name=(null) inode=14522 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:41:41.427000 audit: PATH item=66 name=(null) inode=14520 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:41:41.427000 audit: PATH item=67
name=(null) inode=14523 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:41.427000 audit: PATH item=68 name=(null) inode=14520 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:41.427000 audit: PATH item=69 name=(null) inode=14524 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:41.427000 audit: PATH item=70 name=(null) inode=14520 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:41.427000 audit: PATH item=71 name=(null) inode=14525 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:41.427000 audit: PATH item=72 name=(null) inode=14517 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:41.427000 audit: PATH item=73 name=(null) inode=14526 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:41.427000 audit: PATH item=74 name=(null) inode=14526 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:41.427000 audit: PATH item=75 name=(null) inode=14527 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:41.427000 audit: PATH item=76 name=(null) inode=14526 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:41.427000 audit: PATH item=77 name=(null) inode=14528 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:41.427000 audit: PATH item=78 name=(null) inode=14526 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:41.427000 audit: PATH item=79 name=(null) inode=14529 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:41.427000 audit: PATH item=80 name=(null) inode=14526 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:41.427000 audit: PATH item=81 name=(null) inode=14530 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:41.427000 audit: PATH item=82 name=(null) inode=14526 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:41.427000 audit: PATH item=83 name=(null) inode=14531 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:41.427000 audit: PATH item=84 name=(null) inode=14517 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:41.427000 audit: PATH item=85 name=(null) inode=14532 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:41.427000 audit: PATH item=86 name=(null) inode=14532 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:41.427000 audit: PATH item=87 name=(null) inode=14533 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:41.427000 audit: PATH item=88 name=(null) inode=14532 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:41.427000 audit: PATH item=89 name=(null) inode=14534 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:41.427000 audit: PATH item=90 name=(null) inode=14532 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:41.427000 audit: PATH item=91 name=(null) inode=14535 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:41.427000 audit: PATH item=92 name=(null) inode=14532 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:41.427000 audit: PATH item=93 name=(null) inode=14536 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:41.427000 audit: PATH item=94 name=(null) inode=14532 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:41.427000 audit: PATH item=95 name=(null) inode=14537 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:41.427000 audit: PATH item=96 name=(null) inode=14517 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:41.427000 audit: PATH item=97 name=(null) inode=14538 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:41.427000 audit: PATH item=98 name=(null) inode=14538 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:41.427000 audit: PATH item=99 name=(null) inode=14539 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:41.427000 audit: PATH item=100 name=(null) inode=14538 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:41.427000 audit: PATH item=101 name=(null) inode=14540 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:41.427000 audit: PATH item=102 name=(null) inode=14538 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:41.427000 audit: PATH item=103 name=(null) inode=14541 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:41.427000 audit: PATH item=104 name=(null) inode=14538 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:41.427000 audit: PATH item=105 name=(null) inode=14542 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:41.427000 audit: PATH item=106 name=(null) inode=14538 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:41.427000 audit: PATH item=107 name=(null) inode=14543 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:41.427000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:41.427000 audit: PATH item=109 name=(null) inode=14544 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:41.427000 audit: PROCTITLE proctitle="(udev-worker)" Mar 17 18:41:41.492568 kernel: ACPI: button: Sleep Button [SLPF] Mar 17 18:41:41.504479 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Mar 17 18:41:41.522533 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. 
Mar 17 18:41:41.524486 kernel: EDAC MC: Ver: 3.0.0 Mar 17 18:41:41.532514 kernel: mousedev: PS/2 mouse device common for all mice Mar 17 18:41:41.548463 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Mar 17 18:41:41.565281 systemd[1]: Finished systemd-udev-settle.service. Mar 17 18:41:41.573000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:41.576580 systemd[1]: Starting lvm2-activation-early.service... Mar 17 18:41:41.617458 lvm[1105]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 17 18:41:41.647040 systemd[1]: Finished lvm2-activation-early.service. Mar 17 18:41:41.654000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:41.655982 systemd[1]: Reached target cryptsetup.target. Mar 17 18:41:41.666292 systemd[1]: Starting lvm2-activation.service... Mar 17 18:41:41.672821 lvm[1107]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 17 18:41:41.702149 systemd[1]: Finished lvm2-activation.service. Mar 17 18:41:41.709000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:41.710999 systemd[1]: Reached target local-fs-pre.target. Mar 17 18:41:41.719690 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 17 18:41:41.719741 systemd[1]: Reached target local-fs.target. Mar 17 18:41:41.728652 systemd[1]: Reached target machines.target. 
Mar 17 18:41:41.738515 systemd[1]: Starting ldconfig.service... Mar 17 18:41:41.746882 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Mar 17 18:41:41.746967 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 18:41:41.748847 systemd[1]: Starting systemd-boot-update.service... Mar 17 18:41:41.757573 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Mar 17 18:41:41.769728 systemd[1]: Starting systemd-machine-id-commit.service... Mar 17 18:41:41.772047 systemd[1]: Starting systemd-sysext.service... Mar 17 18:41:41.772981 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1110 (bootctl) Mar 17 18:41:41.776267 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Mar 17 18:41:41.795618 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Mar 17 18:41:41.794000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:41.806797 systemd[1]: Unmounting usr-share-oem.mount... Mar 17 18:41:41.815171 systemd[1]: usr-share-oem.mount: Deactivated successfully. Mar 17 18:41:41.817033 systemd[1]: Unmounted usr-share-oem.mount. Mar 17 18:41:41.846494 kernel: loop0: detected capacity change from 0 to 210664 Mar 17 18:41:41.953032 systemd-fsck[1121]: fsck.fat 4.2 (2021-01-31) Mar 17 18:41:41.953032 systemd-fsck[1121]: /dev/sda1: 789 files, 119299/258078 clusters Mar 17 18:41:41.956793 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. 
Mar 17 18:41:41.966000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:41.973806 systemd[1]: Mounting boot.mount... Mar 17 18:41:42.013751 systemd[1]: Mounted boot.mount. Mar 17 18:41:42.040199 systemd[1]: Finished systemd-boot-update.service. Mar 17 18:41:42.047000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:42.169681 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 17 18:41:42.203502 kernel: loop1: detected capacity change from 0 to 210664 Mar 17 18:41:42.237657 (sd-sysext)[1132]: Using extensions 'kubernetes'. Mar 17 18:41:42.238819 (sd-sysext)[1132]: Merged extensions into '/usr'. Mar 17 18:41:42.268247 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 18:41:42.271173 systemd[1]: Mounting usr-share-oem.mount... Mar 17 18:41:42.280885 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 17 18:41:42.286593 systemd[1]: Starting modprobe@dm_mod.service... Mar 17 18:41:42.296550 systemd[1]: Starting modprobe@efi_pstore.service... Mar 17 18:41:42.307771 systemd[1]: Starting modprobe@loop.service... Mar 17 18:41:42.315706 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Mar 17 18:41:42.316545 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Mar 17 18:41:42.316771 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 18:41:42.323358 systemd[1]: Mounted usr-share-oem.mount. Mar 17 18:41:42.331204 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 18:41:42.331553 systemd[1]: Finished modprobe@dm_mod.service. Mar 17 18:41:42.338000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:42.338000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:42.340427 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 18:41:42.340734 systemd[1]: Finished modprobe@efi_pstore.service. Mar 17 18:41:42.348000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:42.348000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:42.356806 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 17 18:41:42.359411 systemd[1]: Finished systemd-machine-id-commit.service. Mar 17 18:41:42.366000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:42.368412 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Mar 17 18:41:42.368641 systemd[1]: Finished modprobe@loop.service. Mar 17 18:41:42.376000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:42.376000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:42.378530 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 18:41:42.378754 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Mar 17 18:41:42.380264 systemd[1]: Finished systemd-sysext.service. Mar 17 18:41:42.387000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:42.390790 systemd[1]: Starting ensure-sysext.service... Mar 17 18:41:42.399523 systemd[1]: Starting systemd-tmpfiles-setup.service... Mar 17 18:41:42.411290 systemd[1]: Reloading. Mar 17 18:41:42.428717 systemd-tmpfiles[1147]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Mar 17 18:41:42.432255 systemd-tmpfiles[1147]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 17 18:41:42.441149 systemd-tmpfiles[1147]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 17 18:41:42.505559 ldconfig[1109]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
Mar 17 18:41:42.555429 /usr/lib/systemd/system-generators/torcx-generator[1167]: time="2025-03-17T18:41:42Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Mar 17 18:41:42.561400 /usr/lib/systemd/system-generators/torcx-generator[1167]: time="2025-03-17T18:41:42Z" level=info msg="torcx already run" Mar 17 18:41:42.731640 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Mar 17 18:41:42.731667 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Mar 17 18:41:42.756554 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 18:41:42.846628 systemd[1]: Finished ldconfig.service. Mar 17 18:41:42.852000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:42.855503 systemd[1]: Finished systemd-tmpfiles-setup.service. Mar 17 18:41:42.863000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:42.870345 systemd[1]: Starting audit-rules.service... Mar 17 18:41:42.879710 systemd[1]: Starting clean-ca-certificates.service... Mar 17 18:41:42.891130 systemd[1]: Starting oem-gce-enable-oslogin.service... 
Mar 17 18:41:42.902137 systemd[1]: Starting systemd-journal-catalog-update.service... Mar 17 18:41:42.913865 systemd[1]: Starting systemd-resolved.service... Mar 17 18:41:42.923731 systemd[1]: Starting systemd-timesyncd.service... Mar 17 18:41:42.933781 systemd[1]: Starting systemd-update-utmp.service... Mar 17 18:41:42.942975 systemd[1]: Finished clean-ca-certificates.service. Mar 17 18:41:42.945000 audit[1244]: SYSTEM_BOOT pid=1244 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Mar 17 18:41:42.950000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:42.952508 systemd[1]: oem-gce-enable-oslogin.service: Deactivated successfully. Mar 17 18:41:42.952852 systemd[1]: Finished oem-gce-enable-oslogin.service. Mar 17 18:41:42.958000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Mar 17 18:41:42.958000 audit[1251]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc65785840 a2=420 a3=0 items=0 ppid=1219 pid=1251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:41:42.958000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Mar 17 18:41:42.960272 augenrules[1251]: No rules Mar 17 18:41:42.962605 systemd[1]: Finished audit-rules.service. Mar 17 18:41:42.973362 systemd[1]: Finished systemd-journal-catalog-update.service. Mar 17 18:41:42.991363 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Mar 17 18:41:42.992036 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 17 18:41:42.995739 systemd[1]: Starting modprobe@dm_mod.service... Mar 17 18:41:43.005807 systemd[1]: Starting modprobe@efi_pstore.service... Mar 17 18:41:43.014779 systemd[1]: Starting modprobe@loop.service... Mar 17 18:41:43.023939 systemd[1]: Starting oem-gce-enable-oslogin.service... Mar 17 18:41:43.036177 enable-oslogin[1265]: /etc/pam.d/sshd already exists. Not enabling OS Login Mar 17 18:41:43.032676 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Mar 17 18:41:43.032972 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 18:41:43.035898 systemd[1]: Starting systemd-update-done.service... Mar 17 18:41:43.042583 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 17 18:41:43.042830 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 18:41:43.045720 systemd[1]: Finished systemd-update-utmp.service. Mar 17 18:41:43.055447 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 18:41:43.055745 systemd[1]: Finished modprobe@dm_mod.service. Mar 17 18:41:43.063384 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 18:41:43.063662 systemd[1]: Finished modprobe@efi_pstore.service. Mar 17 18:41:43.073391 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 18:41:43.073682 systemd[1]: Finished modprobe@loop.service. Mar 17 18:41:43.083354 systemd[1]: oem-gce-enable-oslogin.service: Deactivated successfully. Mar 17 18:41:43.083785 systemd[1]: Finished oem-gce-enable-oslogin.service. 
Mar 17 18:41:43.093626 systemd[1]: Finished systemd-update-done.service. Mar 17 18:41:43.107294 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 18:41:43.107859 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 17 18:41:43.113353 systemd[1]: Starting modprobe@dm_mod.service... Mar 17 18:41:43.122864 systemd[1]: Starting modprobe@efi_pstore.service... Mar 17 18:41:43.131662 systemd[1]: Starting modprobe@loop.service... Mar 17 18:41:43.141043 systemd[1]: Starting oem-gce-enable-oslogin.service... Mar 17 18:41:43.150664 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Mar 17 18:41:43.154728 enable-oslogin[1276]: /etc/pam.d/sshd already exists. Not enabling OS Login Mar 17 18:41:43.150949 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 18:41:43.151164 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 17 18:41:43.151331 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 18:41:43.153857 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 18:41:43.154244 systemd[1]: Finished modprobe@dm_mod.service. Mar 17 18:41:43.163321 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 18:41:43.163625 systemd[1]: Finished modprobe@efi_pstore.service. Mar 17 18:41:43.172482 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 18:41:43.172800 systemd[1]: Finished modprobe@loop.service. Mar 17 18:41:43.182467 systemd[1]: oem-gce-enable-oslogin.service: Deactivated successfully. 
Mar 17 18:41:43.182885 systemd[1]: Finished oem-gce-enable-oslogin.service. Mar 17 18:41:44.332383 systemd-timesyncd[1241]: Contacted time server 169.254.169.254:123 (169.254.169.254). Mar 17 18:41:44.332502 systemd-timesyncd[1241]: Initial clock synchronization to Mon 2025-03-17 18:41:44.332234 UTC. Mar 17 18:41:44.334388 systemd[1]: Started systemd-timesyncd.service. Mar 17 18:41:44.344070 systemd-resolved[1237]: Positive Trust Anchors: Mar 17 18:41:44.344132 systemd[1]: Reached target time-set.target. Mar 17 18:41:44.344329 systemd-resolved[1237]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 17 18:41:44.344398 systemd-resolved[1237]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Mar 17 18:41:44.352716 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 18:41:44.352896 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Mar 17 18:41:44.358087 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 18:41:44.358598 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 17 18:41:44.361536 systemd[1]: Starting modprobe@dm_mod.service... Mar 17 18:41:44.371844 systemd[1]: Starting modprobe@drm.service... Mar 17 18:41:44.382202 systemd[1]: Starting modprobe@efi_pstore.service... 
Mar 17 18:41:44.386627 systemd-networkd[1078]: eth0: Gained IPv6LL Mar 17 18:41:44.392574 systemd[1]: Starting modprobe@loop.service... Mar 17 18:41:44.396829 systemd-resolved[1237]: Defaulting to hostname 'linux'. Mar 17 18:41:44.402039 systemd[1]: Starting oem-gce-enable-oslogin.service... Mar 17 18:41:44.406912 enable-oslogin[1288]: /etc/pam.d/sshd already exists. Not enabling OS Login Mar 17 18:41:44.410774 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Mar 17 18:41:44.411058 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 18:41:44.413678 systemd[1]: Starting systemd-networkd-wait-online.service... Mar 17 18:41:44.422713 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 17 18:41:44.422977 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 18:41:44.425916 systemd[1]: Started systemd-resolved.service. Mar 17 18:41:44.435690 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 18:41:44.435998 systemd[1]: Finished modprobe@dm_mod.service. Mar 17 18:41:44.445303 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 17 18:41:44.445605 systemd[1]: Finished modprobe@drm.service. Mar 17 18:41:44.454335 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 18:41:44.454671 systemd[1]: Finished modprobe@efi_pstore.service. Mar 17 18:41:44.464293 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 18:41:44.464602 systemd[1]: Finished modprobe@loop.service. Mar 17 18:41:44.473319 systemd[1]: oem-gce-enable-oslogin.service: Deactivated successfully. Mar 17 18:41:44.473683 systemd[1]: Finished oem-gce-enable-oslogin.service. 
Mar 17 18:41:44.483486 systemd[1]: Finished systemd-networkd-wait-online.service. Mar 17 18:41:44.494588 systemd[1]: Reached target network.target. Mar 17 18:41:44.502706 systemd[1]: Reached target network-online.target. Mar 17 18:41:44.511643 systemd[1]: Reached target nss-lookup.target. Mar 17 18:41:44.519685 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 18:41:44.519748 systemd[1]: Reached target sysinit.target. Mar 17 18:41:44.528772 systemd[1]: Started motdgen.path. Mar 17 18:41:44.535713 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Mar 17 18:41:44.545873 systemd[1]: Started logrotate.timer. Mar 17 18:41:44.552838 systemd[1]: Started mdadm.timer. Mar 17 18:41:44.559658 systemd[1]: Started systemd-tmpfiles-clean.timer. Mar 17 18:41:44.568719 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 17 18:41:44.568788 systemd[1]: Reached target paths.target. Mar 17 18:41:44.575643 systemd[1]: Reached target timers.target. Mar 17 18:41:44.583488 systemd[1]: Listening on dbus.socket. Mar 17 18:41:44.592191 systemd[1]: Starting docker.socket... Mar 17 18:41:44.601936 systemd[1]: Listening on sshd.socket. Mar 17 18:41:44.608774 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 18:41:44.608876 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Mar 17 18:41:44.609875 systemd[1]: Finished ensure-sysext.service. Mar 17 18:41:44.618896 systemd[1]: Listening on docker.socket. Mar 17 18:41:44.627963 systemd[1]: Reached target sockets.target. Mar 17 18:41:44.636620 systemd[1]: Reached target basic.target. 
Mar 17 18:41:44.643961 systemd[1]: System is tainted: cgroupsv1 Mar 17 18:41:44.644054 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Mar 17 18:41:44.644091 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Mar 17 18:41:44.645837 systemd[1]: Starting containerd.service... Mar 17 18:41:44.655238 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Mar 17 18:41:44.665732 systemd[1]: Starting dbus.service... Mar 17 18:41:44.675827 systemd[1]: Starting enable-oem-cloudinit.service... Mar 17 18:41:44.686669 systemd[1]: Starting extend-filesystems.service... Mar 17 18:41:44.693628 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Mar 17 18:41:44.726251 jq[1300]: false Mar 17 18:41:44.696420 systemd[1]: Starting kubelet.service... Mar 17 18:41:44.705571 systemd[1]: Starting motdgen.service... Mar 17 18:41:44.714062 systemd[1]: Starting oem-gce.service... Mar 17 18:41:44.724629 systemd[1]: Starting prepare-helm.service... Mar 17 18:41:44.733774 systemd[1]: Starting ssh-key-proc-cmdline.service... Mar 17 18:41:44.743677 systemd[1]: Starting sshd-keygen.service... Mar 17 18:41:44.754274 systemd[1]: Starting systemd-logind.service... Mar 17 18:41:44.761654 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 18:41:44.761788 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionSecurity=!tpm2). Mar 17 18:41:44.764004 systemd[1]: Starting update-engine.service... Mar 17 18:41:44.773794 systemd[1]: Starting update-ssh-keys-after-ignition.service... Mar 17 18:41:44.787534 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Mar 17 18:41:44.787969 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Mar 17 18:41:44.796775 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 17 18:41:44.797256 systemd[1]: Finished ssh-key-proc-cmdline.service. Mar 17 18:41:44.817956 jq[1323]: true Mar 17 18:41:44.830607 mkfs.ext4[1336]: mke2fs 1.46.5 (30-Dec-2021) Mar 17 18:41:44.838840 mkfs.ext4[1336]: Discarding device blocks: done Mar 17 18:41:44.839543 mkfs.ext4[1336]: Creating filesystem with 262144 4k blocks and 65536 inodes Mar 17 18:41:44.839543 mkfs.ext4[1336]: Filesystem UUID: f716d728-e327-4676-bc8f-9e8e14d11e14 Mar 17 18:41:44.839543 mkfs.ext4[1336]: Superblock backups stored on blocks: Mar 17 18:41:44.839732 mkfs.ext4[1336]: 32768, 98304, 163840, 229376 Mar 17 18:41:44.839732 mkfs.ext4[1336]: Allocating group tables: done Mar 17 18:41:44.839732 mkfs.ext4[1336]: Writing inode tables: done Mar 17 18:41:44.842376 mkfs.ext4[1336]: Creating journal (8192 blocks): done Mar 17 18:41:44.854523 mkfs.ext4[1336]: Writing superblocks and filesystem accounting information: done Mar 17 18:41:44.872234 jq[1338]: true Mar 17 18:41:44.908879 extend-filesystems[1301]: Found loop1 Mar 17 18:41:44.919586 umount[1351]: umount: /var/lib/flatcar-oem-gce.img: not mounted. 
Mar 17 18:41:44.935963 extend-filesystems[1301]: Found sda Mar 17 18:41:44.943669 extend-filesystems[1301]: Found sda1 Mar 17 18:41:44.943669 extend-filesystems[1301]: Found sda2 Mar 17 18:41:44.943669 extend-filesystems[1301]: Found sda3 Mar 17 18:41:44.943669 extend-filesystems[1301]: Found usr Mar 17 18:41:44.943669 extend-filesystems[1301]: Found sda4 Mar 17 18:41:44.943669 extend-filesystems[1301]: Found sda6 Mar 17 18:41:44.943669 extend-filesystems[1301]: Found sda7 Mar 17 18:41:44.943669 extend-filesystems[1301]: Found sda9 Mar 17 18:41:44.943669 extend-filesystems[1301]: Checking size of /dev/sda9 Mar 17 18:41:45.042349 kernel: loop2: detected capacity change from 0 to 2097152 Mar 17 18:41:45.042433 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks Mar 17 18:41:45.042517 extend-filesystems[1301]: Resized partition /dev/sda9 Mar 17 18:41:45.064998 tar[1332]: linux-amd64/helm Mar 17 18:41:44.958089 dbus-daemon[1299]: [system] SELinux support is enabled Mar 17 18:41:44.948657 systemd[1]: motdgen.service: Deactivated successfully. Mar 17 18:41:45.070505 extend-filesystems[1371]: resize2fs 1.46.5 (30-Dec-2021) Mar 17 18:41:44.965616 dbus-daemon[1299]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1078 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Mar 17 18:41:44.949067 systemd[1]: Finished motdgen.service. Mar 17 18:41:45.103814 update_engine[1321]: I0317 18:41:45.077493 1321 main.cc:92] Flatcar Update Engine starting Mar 17 18:41:45.103814 update_engine[1321]: I0317 18:41:45.084156 1321 update_check_scheduler.cc:74] Next update check in 7m36s Mar 17 18:41:45.024492 dbus-daemon[1299]: [system] Successfully activated service 'org.freedesktop.systemd1' Mar 17 18:41:44.964215 systemd[1]: Started dbus.service. 
Mar 17 18:41:44.974385 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 17 18:41:44.974507 systemd[1]: Reached target system-config.target. Mar 17 18:41:44.987883 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 17 18:41:44.987917 systemd[1]: Reached target user-config.target. Mar 17 18:41:45.041723 systemd[1]: Starting systemd-hostnamed.service... Mar 17 18:41:45.084131 systemd[1]: Started update-engine.service. Mar 17 18:41:45.096013 systemd[1]: Started locksmithd.service. Mar 17 18:41:45.119472 kernel: EXT4-fs (sda9): resized filesystem to 2538491 Mar 17 18:41:45.155277 kernel: EXT4-fs (loop2): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Mar 17 18:41:45.162051 bash[1376]: Updated "/home/core/.ssh/authorized_keys" Mar 17 18:41:45.161874 systemd[1]: Finished update-ssh-keys-after-ignition.service. Mar 17 18:41:45.167097 extend-filesystems[1371]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Mar 17 18:41:45.167097 extend-filesystems[1371]: old_desc_blocks = 1, new_desc_blocks = 2 Mar 17 18:41:45.167097 extend-filesystems[1371]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long. Mar 17 18:41:45.215642 extend-filesystems[1301]: Resized filesystem in /dev/sda9 Mar 17 18:41:45.172101 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 17 18:41:45.172519 systemd[1]: Finished extend-filesystems.service. 
Mar 17 18:41:45.229401 env[1335]: time="2025-03-17T18:41:45.229335539Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Mar 17 18:41:45.487736 systemd-logind[1317]: Watching system buttons on /dev/input/event1 (Power Button) Mar 17 18:41:45.494162 systemd-logind[1317]: Watching system buttons on /dev/input/event2 (Sleep Button) Mar 17 18:41:45.496561 systemd-logind[1317]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 17 18:41:45.497666 systemd-logind[1317]: New seat seat0. Mar 17 18:41:45.505191 systemd[1]: Started systemd-logind.service. Mar 17 18:41:45.507168 env[1335]: time="2025-03-17T18:41:45.507106978Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 17 18:41:45.507399 env[1335]: time="2025-03-17T18:41:45.507365579Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 17 18:41:45.519952 env[1335]: time="2025-03-17T18:41:45.519843732Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.179-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 17 18:41:45.520110 env[1335]: time="2025-03-17T18:41:45.519968251Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 17 18:41:45.524674 env[1335]: time="2025-03-17T18:41:45.524594609Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 18:41:45.524674 env[1335]: time="2025-03-17T18:41:45.524664927Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 17 18:41:45.524874 env[1335]: time="2025-03-17T18:41:45.524694683Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Mar 17 18:41:45.524874 env[1335]: time="2025-03-17T18:41:45.524729963Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 17 18:41:45.525001 env[1335]: time="2025-03-17T18:41:45.524927221Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 17 18:41:45.525463 env[1335]: time="2025-03-17T18:41:45.525399867Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 17 18:41:45.525885 env[1335]: time="2025-03-17T18:41:45.525841656Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 18:41:45.525979 env[1335]: time="2025-03-17T18:41:45.525885334Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Mar 17 18:41:45.526037 env[1335]: time="2025-03-17T18:41:45.526012776Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Mar 17 18:41:45.526100 env[1335]: time="2025-03-17T18:41:45.526038235Z" level=info msg="metadata content store policy set" policy=shared Mar 17 18:41:45.541603 env[1335]: time="2025-03-17T18:41:45.541546863Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 17 18:41:45.541732 env[1335]: time="2025-03-17T18:41:45.541618117Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 17 18:41:45.541732 env[1335]: time="2025-03-17T18:41:45.541641689Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 17 18:41:45.541732 env[1335]: time="2025-03-17T18:41:45.541716098Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 17 18:41:45.541902 env[1335]: time="2025-03-17T18:41:45.541739790Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 17 18:41:45.541902 env[1335]: time="2025-03-17T18:41:45.541841442Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 17 18:41:45.541902 env[1335]: time="2025-03-17T18:41:45.541866766Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 17 18:41:45.542041 env[1335]: time="2025-03-17T18:41:45.541912827Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 17 18:41:45.542041 env[1335]: time="2025-03-17T18:41:45.541937489Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." 
type=io.containerd.service.v1 Mar 17 18:41:45.542041 env[1335]: time="2025-03-17T18:41:45.541960251Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 17 18:41:45.542041 env[1335]: time="2025-03-17T18:41:45.541984978Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 17 18:41:45.542041 env[1335]: time="2025-03-17T18:41:45.542008185Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 17 18:41:45.542270 env[1335]: time="2025-03-17T18:41:45.542211674Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 17 18:41:45.542382 coreos-metadata[1298]: Mar 17 18:41:45.541 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Mar 17 18:41:45.542864 env[1335]: time="2025-03-17T18:41:45.542351821Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 17 18:41:45.543064 env[1335]: time="2025-03-17T18:41:45.543028913Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 17 18:41:45.543149 env[1335]: time="2025-03-17T18:41:45.543084307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 17 18:41:45.543149 env[1335]: time="2025-03-17T18:41:45.543110503Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 17 18:41:45.543259 env[1335]: time="2025-03-17T18:41:45.543193099Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 17 18:41:45.543259 env[1335]: time="2025-03-17T18:41:45.543241425Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." 
type=io.containerd.grpc.v1 Mar 17 18:41:45.543374 env[1335]: time="2025-03-17T18:41:45.543265945Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 17 18:41:45.543374 env[1335]: time="2025-03-17T18:41:45.543286545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 17 18:41:45.543374 env[1335]: time="2025-03-17T18:41:45.543309313Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 17 18:41:45.543374 env[1335]: time="2025-03-17T18:41:45.543331825Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 17 18:41:45.543374 env[1335]: time="2025-03-17T18:41:45.543353633Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 17 18:41:45.543626 env[1335]: time="2025-03-17T18:41:45.543373889Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 17 18:41:45.543626 env[1335]: time="2025-03-17T18:41:45.543397612Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 17 18:41:45.543626 env[1335]: time="2025-03-17T18:41:45.543613425Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 17 18:41:45.543790 env[1335]: time="2025-03-17T18:41:45.543641623Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 17 18:41:45.543790 env[1335]: time="2025-03-17T18:41:45.543665818Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 17 18:41:45.543790 env[1335]: time="2025-03-17T18:41:45.543687129Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Mar 17 18:41:45.543790 env[1335]: time="2025-03-17T18:41:45.543712687Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Mar 17 18:41:45.543790 env[1335]: time="2025-03-17T18:41:45.543733969Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 17 18:41:45.543790 env[1335]: time="2025-03-17T18:41:45.543762528Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Mar 17 18:41:45.544060 env[1335]: time="2025-03-17T18:41:45.543816425Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Mar 17 18:41:45.544265 env[1335]: time="2025-03-17T18:41:45.544169691Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} 
ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 17 18:41:45.548685 env[1335]: time="2025-03-17T18:41:45.544277130Z" level=info msg="Connect containerd service" Mar 17 18:41:45.548685 env[1335]: time="2025-03-17T18:41:45.544347853Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 17 18:41:45.548947 coreos-metadata[1298]: Mar 17 18:41:45.548 INFO Fetch failed with 404: resource not found Mar 17 18:41:45.548947 coreos-metadata[1298]: Mar 17 18:41:45.548 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Mar 17 18:41:45.551629 env[1335]: time="2025-03-17T18:41:45.550819955Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 17 18:41:45.551629 env[1335]: time="2025-03-17T18:41:45.551192102Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Mar 17 18:41:45.551629 env[1335]: time="2025-03-17T18:41:45.551263075Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 17 18:41:45.551511 systemd[1]: Started containerd.service. Mar 17 18:41:45.556561 coreos-metadata[1298]: Mar 17 18:41:45.556 INFO Fetch successful Mar 17 18:41:45.556561 coreos-metadata[1298]: Mar 17 18:41:45.556 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Mar 17 18:41:45.556561 coreos-metadata[1298]: Mar 17 18:41:45.556 INFO Fetch failed with 404: resource not found Mar 17 18:41:45.556561 coreos-metadata[1298]: Mar 17 18:41:45.556 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Mar 17 18:41:45.556561 coreos-metadata[1298]: Mar 17 18:41:45.556 INFO Fetch failed with 404: resource not found Mar 17 18:41:45.556561 coreos-metadata[1298]: Mar 17 18:41:45.556 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Mar 17 18:41:45.556561 coreos-metadata[1298]: Mar 17 18:41:45.556 INFO Fetch successful Mar 17 18:41:45.558789 unknown[1298]: wrote ssh authorized keys file for user: core Mar 17 18:41:45.588733 env[1335]: time="2025-03-17T18:41:45.588664910Z" level=info msg="containerd successfully booted in 0.360538s" Mar 17 18:41:45.588733 env[1335]: time="2025-03-17T18:41:45.551591859Z" level=info msg="Start subscribing containerd event" Mar 17 18:41:45.588937 env[1335]: time="2025-03-17T18:41:45.588801528Z" level=info msg="Start recovering state" Mar 17 18:41:45.589608 env[1335]: time="2025-03-17T18:41:45.589550792Z" level=info msg="Start event monitor" Mar 17 18:41:45.589722 env[1335]: time="2025-03-17T18:41:45.589616468Z" level=info msg="Start snapshots syncer" Mar 17 18:41:45.589722 env[1335]: time="2025-03-17T18:41:45.589646477Z" level=info msg="Start cni network conf syncer for default" Mar 17 18:41:45.589722 env[1335]: 
time="2025-03-17T18:41:45.589679680Z" level=info msg="Start streaming server" Mar 17 18:41:45.593662 update-ssh-keys[1394]: Updated "/home/core/.ssh/authorized_keys" Mar 17 18:41:45.594512 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Mar 17 18:41:45.717303 dbus-daemon[1299]: [system] Successfully activated service 'org.freedesktop.hostname1' Mar 17 18:41:45.717921 systemd[1]: Started systemd-hostnamed.service. Mar 17 18:41:45.718765 dbus-daemon[1299]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1377 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Mar 17 18:41:45.732807 systemd[1]: Starting polkit.service... Mar 17 18:41:45.832666 polkitd[1407]: Started polkitd version 121 Mar 17 18:41:45.866149 polkitd[1407]: Loading rules from directory /etc/polkit-1/rules.d Mar 17 18:41:45.866270 polkitd[1407]: Loading rules from directory /usr/share/polkit-1/rules.d Mar 17 18:41:45.879511 polkitd[1407]: Finished loading, compiling and executing 2 rules Mar 17 18:41:45.880208 dbus-daemon[1299]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Mar 17 18:41:45.880469 systemd[1]: Started polkit.service. Mar 17 18:41:45.881318 polkitd[1407]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Mar 17 18:41:45.920992 systemd-hostnamed[1377]: Hostname set to (transient) Mar 17 18:41:45.924834 systemd-resolved[1237]: System hostname changed to 'ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal'. Mar 17 18:41:46.950413 tar[1332]: linux-amd64/LICENSE Mar 17 18:41:46.950413 tar[1332]: linux-amd64/README.md Mar 17 18:41:46.973969 systemd[1]: Finished prepare-helm.service. Mar 17 18:41:47.019599 systemd[1]: Started kubelet.service. 
Mar 17 18:41:48.414294 locksmithd[1381]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 17 18:41:48.477361 kubelet[1423]: E0317 18:41:48.477302 1423 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 18:41:48.480456 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 18:41:48.480752 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 18:41:49.053880 sshd_keygen[1347]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 17 18:41:49.109259 systemd[1]: Finished sshd-keygen.service. Mar 17 18:41:49.120567 systemd[1]: Starting issuegen.service... Mar 17 18:41:49.133458 systemd[1]: issuegen.service: Deactivated successfully. Mar 17 18:41:49.133866 systemd[1]: Finished issuegen.service. Mar 17 18:41:49.144543 systemd[1]: Starting systemd-user-sessions.service... Mar 17 18:41:49.155689 systemd[1]: Finished systemd-user-sessions.service. Mar 17 18:41:49.167517 systemd[1]: Started getty@tty1.service. Mar 17 18:41:49.177819 systemd[1]: Started serial-getty@ttyS0.service. Mar 17 18:41:49.187052 systemd[1]: Reached target getty.target. Mar 17 18:41:51.100634 systemd[1]: var-lib-flatcar\x2doem\x2dgce.mount: Deactivated successfully. Mar 17 18:41:51.857908 systemd[1]: Created slice system-sshd.slice. Mar 17 18:41:51.869432 systemd[1]: Started sshd@0-10.128.0.50:22-103.39.233.104:36606.service. Mar 17 18:41:53.162511 kernel: loop2: detected capacity change from 0 to 2097152 Mar 17 18:41:53.182145 systemd-nspawn[1455]: Spawning container oem-gce on /var/lib/flatcar-oem-gce.img. Mar 17 18:41:53.182145 systemd-nspawn[1455]: Press ^] three times within 1s to kill container. 
Mar 17 18:41:53.197481 kernel: EXT4-fs (loop2): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Mar 17 18:41:53.276240 systemd[1]: Started oem-gce.service. Mar 17 18:41:53.284267 systemd[1]: Reached target multi-user.target. Mar 17 18:41:53.294955 systemd[1]: Starting systemd-update-utmp-runlevel.service... Mar 17 18:41:53.308141 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Mar 17 18:41:53.308405 systemd[1]: Finished systemd-update-utmp-runlevel.service. Mar 17 18:41:53.319654 systemd[1]: Startup finished in 9.646s (kernel) + 16.769s (userspace) = 26.416s. Mar 17 18:41:53.354011 systemd-nspawn[1455]: + '[' -e /etc/default/instance_configs.cfg.template ']' Mar 17 18:41:53.354235 systemd-nspawn[1455]: + echo -e '[InstanceSetup]\nset_host_keys = false' Mar 17 18:41:53.354317 systemd-nspawn[1455]: + /usr/bin/google_instance_setup Mar 17 18:41:53.624478 systemd[1]: Started sshd@1-10.128.0.50:22-139.178.89.65:59024.service. Mar 17 18:41:53.748919 sshd[1453]: Invalid user from 103.39.233.104 port 36606 Mar 17 18:41:53.934203 sshd[1464]: Accepted publickey for core from 139.178.89.65 port 59024 ssh2: RSA SHA256:MwXPWHAmIHbbkjBOl9game4w0Y2Rjfi7lGZx9rtQRJk Mar 17 18:41:53.937902 sshd[1464]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:41:53.956391 systemd[1]: Created slice user-500.slice. Mar 17 18:41:53.958263 systemd[1]: Starting user-runtime-dir@500.service... Mar 17 18:41:53.964410 systemd-logind[1317]: New session 1 of user core. Mar 17 18:41:53.978696 systemd[1]: Finished user-runtime-dir@500.service. Mar 17 18:41:53.981379 systemd[1]: Starting user@500.service... Mar 17 18:41:54.001998 (systemd)[1471]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:41:54.093154 instance-setup[1463]: INFO Running google_set_multiqueue. Mar 17 18:41:54.115519 instance-setup[1463]: INFO Set channels for eth0 to 2. 
Mar 17 18:41:54.121003 instance-setup[1463]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1. Mar 17 18:41:54.123771 instance-setup[1463]: INFO /proc/irq/31/smp_affinity_list: real affinity 0 Mar 17 18:41:54.124128 instance-setup[1463]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1. Mar 17 18:41:54.126726 instance-setup[1463]: INFO /proc/irq/32/smp_affinity_list: real affinity 0 Mar 17 18:41:54.127028 instance-setup[1463]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1. Mar 17 18:41:54.129373 instance-setup[1463]: INFO /proc/irq/33/smp_affinity_list: real affinity 1 Mar 17 18:41:54.129692 instance-setup[1463]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1. Mar 17 18:41:54.131658 instance-setup[1463]: INFO /proc/irq/34/smp_affinity_list: real affinity 1 Mar 17 18:41:54.147577 systemd[1471]: Queued start job for default target default.target. Mar 17 18:41:54.147983 systemd[1471]: Reached target paths.target. Mar 17 18:41:54.148015 systemd[1471]: Reached target sockets.target. Mar 17 18:41:54.148038 systemd[1471]: Reached target timers.target. Mar 17 18:41:54.148058 systemd[1471]: Reached target basic.target. Mar 17 18:41:54.148139 systemd[1471]: Reached target default.target. Mar 17 18:41:54.148196 systemd[1471]: Startup finished in 134ms. Mar 17 18:41:54.148264 systemd[1]: Started user@500.service. Mar 17 18:41:54.149979 systemd[1]: Started session-1.scope. Mar 17 18:41:54.152038 instance-setup[1463]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Mar 17 18:41:54.152769 instance-setup[1463]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Mar 17 18:41:54.202723 systemd-nspawn[1455]: + /usr/bin/google_metadata_script_runner --script-type startup Mar 17 18:41:54.376038 systemd[1]: Started sshd@2-10.128.0.50:22-139.178.89.65:59032.service. Mar 17 18:41:54.605993 startup-script[1507]: INFO Starting startup scripts. 
Mar 17 18:41:54.619419 startup-script[1507]: INFO No startup scripts found in metadata. Mar 17 18:41:54.619660 startup-script[1507]: INFO Finished running startup scripts. Mar 17 18:41:54.657960 systemd-nspawn[1455]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Mar 17 18:41:54.657960 systemd-nspawn[1455]: + daemon_pids=() Mar 17 18:41:54.657960 systemd-nspawn[1455]: + for d in accounts clock_skew network Mar 17 18:41:54.657960 systemd-nspawn[1455]: + daemon_pids+=($!) Mar 17 18:41:54.657960 systemd-nspawn[1455]: + for d in accounts clock_skew network Mar 17 18:41:54.657960 systemd-nspawn[1455]: + /usr/bin/google_accounts_daemon Mar 17 18:41:54.657960 systemd-nspawn[1455]: + daemon_pids+=($!) Mar 17 18:41:54.657960 systemd-nspawn[1455]: + for d in accounts clock_skew network Mar 17 18:41:54.657960 systemd-nspawn[1455]: + daemon_pids+=($!) Mar 17 18:41:54.657960 systemd-nspawn[1455]: + NOTIFY_SOCKET=/run/systemd/notify Mar 17 18:41:54.657960 systemd-nspawn[1455]: + /usr/bin/systemd-notify --ready Mar 17 18:41:54.657960 systemd-nspawn[1455]: + /usr/bin/google_network_daemon Mar 17 18:41:54.659214 systemd-nspawn[1455]: + /usr/bin/google_clock_skew_daemon Mar 17 18:41:54.693327 sshd[1509]: Accepted publickey for core from 139.178.89.65 port 59032 ssh2: RSA SHA256:MwXPWHAmIHbbkjBOl9game4w0Y2Rjfi7lGZx9rtQRJk Mar 17 18:41:54.694971 sshd[1509]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:41:54.704838 systemd[1]: Started session-2.scope. Mar 17 18:41:54.705516 systemd-logind[1317]: New session 2 of user core. Mar 17 18:41:54.733423 systemd-nspawn[1455]: + wait -n 36 37 38 Mar 17 18:41:54.913756 sshd[1509]: pam_unix(sshd:session): session closed for user core Mar 17 18:41:54.919523 systemd[1]: sshd@2-10.128.0.50:22-139.178.89.65:59032.service: Deactivated successfully. Mar 17 18:41:54.921190 systemd[1]: session-2.scope: Deactivated successfully. Mar 17 18:41:54.921212 systemd-logind[1317]: Session 2 logged out. 
Waiting for processes to exit. Mar 17 18:41:54.923091 systemd-logind[1317]: Removed session 2. Mar 17 18:41:54.957247 systemd[1]: Started sshd@3-10.128.0.50:22-139.178.89.65:59042.service. Mar 17 18:41:55.269740 sshd[1522]: Accepted publickey for core from 139.178.89.65 port 59042 ssh2: RSA SHA256:MwXPWHAmIHbbkjBOl9game4w0Y2Rjfi7lGZx9rtQRJk Mar 17 18:41:55.271326 sshd[1522]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:41:55.280084 systemd[1]: Started session-3.scope. Mar 17 18:41:55.282206 systemd-logind[1317]: New session 3 of user core. Mar 17 18:41:55.424124 google-networking[1515]: INFO Starting Google Networking daemon. Mar 17 18:41:55.450431 groupadd[1532]: group added to /etc/group: name=google-sudoers, GID=1000 Mar 17 18:41:55.455221 groupadd[1532]: group added to /etc/gshadow: name=google-sudoers Mar 17 18:41:55.467582 groupadd[1532]: new group: name=google-sudoers, GID=1000 Mar 17 18:41:55.479877 sshd[1522]: pam_unix(sshd:session): session closed for user core Mar 17 18:41:55.484326 systemd[1]: sshd@3-10.128.0.50:22-139.178.89.65:59042.service: Deactivated successfully. Mar 17 18:41:55.485635 systemd[1]: session-3.scope: Deactivated successfully. Mar 17 18:41:55.488189 systemd-logind[1317]: Session 3 logged out. Waiting for processes to exit. Mar 17 18:41:55.494186 systemd-logind[1317]: Removed session 3. Mar 17 18:41:55.501589 google-accounts[1513]: INFO Starting Google Accounts daemon. Mar 17 18:41:55.524353 systemd[1]: Started sshd@4-10.128.0.50:22-139.178.89.65:59058.service. Mar 17 18:41:55.523075 google-clock-skew[1514]: INFO Starting Google Clock Skew daemon. Mar 17 18:41:55.546963 google-clock-skew[1514]: INFO Clock drift token has changed: 0. Mar 17 18:41:55.552293 google-accounts[1513]: WARNING OS Login not installed. Mar 17 18:41:55.553265 systemd-nspawn[1455]: hwclock: Cannot access the Hardware Clock via any known method. 
Mar 17 18:41:55.553265 systemd-nspawn[1455]: hwclock: Use the --verbose option to see the details of our search for an access method. Mar 17 18:41:55.553709 google-accounts[1513]: INFO Creating a new user account for 0. Mar 17 18:41:55.555807 google-clock-skew[1514]: WARNING Failed to sync system time with hardware clock. Mar 17 18:41:55.559818 systemd-nspawn[1455]: useradd: invalid user name '0': use --badname to ignore Mar 17 18:41:55.560611 google-accounts[1513]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. Mar 17 18:41:55.820596 sshd[1543]: Accepted publickey for core from 139.178.89.65 port 59058 ssh2: RSA SHA256:MwXPWHAmIHbbkjBOl9game4w0Y2Rjfi7lGZx9rtQRJk Mar 17 18:41:55.822929 sshd[1543]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:41:55.829586 systemd-logind[1317]: New session 4 of user core. Mar 17 18:41:55.830166 systemd[1]: Started session-4.scope. Mar 17 18:41:56.035632 sshd[1543]: pam_unix(sshd:session): session closed for user core Mar 17 18:41:56.039929 systemd[1]: sshd@4-10.128.0.50:22-139.178.89.65:59058.service: Deactivated successfully. Mar 17 18:41:56.041229 systemd[1]: session-4.scope: Deactivated successfully. Mar 17 18:41:56.043752 systemd-logind[1317]: Session 4 logged out. Waiting for processes to exit. Mar 17 18:41:56.045777 systemd-logind[1317]: Removed session 4. Mar 17 18:41:56.079236 systemd[1]: Started sshd@5-10.128.0.50:22-139.178.89.65:59062.service. Mar 17 18:41:56.367012 sshd[1554]: Accepted publickey for core from 139.178.89.65 port 59062 ssh2: RSA SHA256:MwXPWHAmIHbbkjBOl9game4w0Y2Rjfi7lGZx9rtQRJk Mar 17 18:41:56.369055 sshd[1554]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:41:56.375619 systemd-logind[1317]: New session 5 of user core. Mar 17 18:41:56.375901 systemd[1]: Started session-5.scope. 
Mar 17 18:41:56.565226 sudo[1558]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 17 18:41:56.565691 sudo[1558]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Mar 17 18:41:56.599643 systemd[1]: Starting docker.service... Mar 17 18:41:56.654052 env[1568]: time="2025-03-17T18:41:56.653431372Z" level=info msg="Starting up" Mar 17 18:41:56.656132 env[1568]: time="2025-03-17T18:41:56.656093753Z" level=info msg="parsed scheme: \"unix\"" module=grpc Mar 17 18:41:56.656132 env[1568]: time="2025-03-17T18:41:56.656125969Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Mar 17 18:41:56.656309 env[1568]: time="2025-03-17T18:41:56.656153995Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Mar 17 18:41:56.656309 env[1568]: time="2025-03-17T18:41:56.656169599Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Mar 17 18:41:56.658863 env[1568]: time="2025-03-17T18:41:56.658814909Z" level=info msg="parsed scheme: \"unix\"" module=grpc Mar 17 18:41:56.658863 env[1568]: time="2025-03-17T18:41:56.658838051Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Mar 17 18:41:56.658863 env[1568]: time="2025-03-17T18:41:56.658860336Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Mar 17 18:41:56.659072 env[1568]: time="2025-03-17T18:41:56.658874836Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Mar 17 18:41:56.670711 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport553873954-merged.mount: Deactivated successfully. 
Mar 17 18:41:57.181145 env[1568]: time="2025-03-17T18:41:57.181086004Z" level=warning msg="Your kernel does not support cgroup blkio weight" Mar 17 18:41:57.181145 env[1568]: time="2025-03-17T18:41:57.181121991Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Mar 17 18:41:57.181564 env[1568]: time="2025-03-17T18:41:57.181483314Z" level=info msg="Loading containers: start." Mar 17 18:41:57.357481 kernel: Initializing XFRM netlink socket Mar 17 18:41:57.405051 env[1568]: time="2025-03-17T18:41:57.404987129Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Mar 17 18:41:57.490933 systemd-networkd[1078]: docker0: Link UP Mar 17 18:41:57.512688 env[1568]: time="2025-03-17T18:41:57.512629124Z" level=info msg="Loading containers: done." Mar 17 18:41:57.532516 env[1568]: time="2025-03-17T18:41:57.528398383Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 17 18:41:57.532516 env[1568]: time="2025-03-17T18:41:57.528790419Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Mar 17 18:41:57.532516 env[1568]: time="2025-03-17T18:41:57.529001436Z" level=info msg="Daemon has completed initialization" Mar 17 18:41:57.555679 systemd[1]: Started docker.service. Mar 17 18:41:57.564100 env[1568]: time="2025-03-17T18:41:57.564017078Z" level=info msg="API listen on /run/docker.sock" Mar 17 18:41:58.732262 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 17 18:41:58.732639 systemd[1]: Stopped kubelet.service. Mar 17 18:41:58.735192 systemd[1]: Starting kubelet.service... 
Mar 17 18:41:58.826033 env[1335]: time="2025-03-17T18:41:58.825973558Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\"" Mar 17 18:41:58.971068 systemd[1]: Started kubelet.service. Mar 17 18:41:59.049723 kubelet[1707]: E0317 18:41:59.049271 1707 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 18:41:59.054069 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 18:41:59.054378 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 18:41:59.373482 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1993072269.mount: Deactivated successfully. Mar 17 18:41:59.854948 sshd[1453]: Connection closed by invalid user 103.39.233.104 port 36606 [preauth] Mar 17 18:41:59.856985 systemd[1]: sshd@0-10.128.0.50:22-103.39.233.104:36606.service: Deactivated successfully. 
Mar 17 18:42:01.294869 env[1335]: time="2025-03-17T18:42:01.294788689Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:42:01.302598 env[1335]: time="2025-03-17T18:42:01.302537012Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4db5a05c271eac8f5da2f95895ea1ccb9a38f48db3135ba3bdfe35941a396ea8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:42:01.305974 env[1335]: time="2025-03-17T18:42:01.305900279Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:42:01.310016 env[1335]: time="2025-03-17T18:42:01.309945412Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:77c54346965036acc7ac95c3200597ede36db9246179248dde21c1a3ecc1caf0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:42:01.311104 env[1335]: time="2025-03-17T18:42:01.311046272Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\" returns image reference \"sha256:4db5a05c271eac8f5da2f95895ea1ccb9a38f48db3135ba3bdfe35941a396ea8\"" Mar 17 18:42:01.327003 env[1335]: time="2025-03-17T18:42:01.326952866Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\"" Mar 17 18:42:03.163024 env[1335]: time="2025-03-17T18:42:03.162948354Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:42:03.166085 env[1335]: time="2025-03-17T18:42:03.166036063Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:de1025c2d496829d3250130380737609ffcdd10a4dce6f2dcd03f23a85a15e6a,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Mar 17 18:42:03.169010 env[1335]: time="2025-03-17T18:42:03.168961529Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:42:03.171992 env[1335]: time="2025-03-17T18:42:03.171935731Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:d8874f3fb45591ecdac67a3035c730808f18b3ab13147495c7d77eb1960d4f6f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:42:03.173230 env[1335]: time="2025-03-17T18:42:03.173163953Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\" returns image reference \"sha256:de1025c2d496829d3250130380737609ffcdd10a4dce6f2dcd03f23a85a15e6a\"" Mar 17 18:42:03.188793 env[1335]: time="2025-03-17T18:42:03.188726591Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\"" Mar 17 18:42:04.446709 env[1335]: time="2025-03-17T18:42:04.446636637Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:42:04.450365 env[1335]: time="2025-03-17T18:42:04.450300482Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:11492f0faf138e933cadd6f533f03e401da9a35e53711e833f18afa6b185b2b7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:42:04.453465 env[1335]: time="2025-03-17T18:42:04.453393744Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:42:04.457735 env[1335]: time="2025-03-17T18:42:04.457673950Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c699f8c97ae7ec819c8bd878d3db104ba72fc440d810d9030e09286b696017b5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:42:04.458261 env[1335]: time="2025-03-17T18:42:04.458218555Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\" returns image reference \"sha256:11492f0faf138e933cadd6f533f03e401da9a35e53711e833f18afa6b185b2b7\"" Mar 17 18:42:04.474700 env[1335]: time="2025-03-17T18:42:04.474636373Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\"" Mar 17 18:42:05.620309 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1827246993.mount: Deactivated successfully. Mar 17 18:42:06.337217 env[1335]: time="2025-03-17T18:42:06.337141181Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:42:06.340461 env[1335]: time="2025-03-17T18:42:06.340388348Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:01045f200a8856c3f5ccfa7be03d72274f1f16fc7a047659e709d603d5c019dc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:42:06.343002 env[1335]: time="2025-03-17T18:42:06.342940825Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:42:06.345945 env[1335]: time="2025-03-17T18:42:06.345891113Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:ea4da798040a18ed3f302e8d5f67307c7275a2a53bcf3d51bcec223acda84a55,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:42:06.346707 env[1335]: time="2025-03-17T18:42:06.346662362Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\" returns image reference 
\"sha256:01045f200a8856c3f5ccfa7be03d72274f1f16fc7a047659e709d603d5c019dc\"" Mar 17 18:42:06.361923 env[1335]: time="2025-03-17T18:42:06.361857362Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Mar 17 18:42:06.811709 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2247822042.mount: Deactivated successfully. Mar 17 18:42:08.092339 env[1335]: time="2025-03-17T18:42:08.092262074Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:42:08.095251 env[1335]: time="2025-03-17T18:42:08.095193851Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:42:08.097836 env[1335]: time="2025-03-17T18:42:08.097782929Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:42:08.100541 env[1335]: time="2025-03-17T18:42:08.100475130Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:42:08.101668 env[1335]: time="2025-03-17T18:42:08.101619634Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Mar 17 18:42:08.116677 env[1335]: time="2025-03-17T18:42:08.116613712Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Mar 17 18:42:08.664388 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1461582005.mount: Deactivated successfully. 
Mar 17 18:42:08.673253 env[1335]: time="2025-03-17T18:42:08.673185068Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:42:08.677004 env[1335]: time="2025-03-17T18:42:08.676941259Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:42:08.679839 env[1335]: time="2025-03-17T18:42:08.679778913Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:42:08.682480 env[1335]: time="2025-03-17T18:42:08.682404193Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:42:08.683272 env[1335]: time="2025-03-17T18:42:08.683219820Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Mar 17 18:42:08.697197 env[1335]: time="2025-03-17T18:42:08.697143348Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Mar 17 18:42:09.130943 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 17 18:42:09.131219 systemd[1]: Stopped kubelet.service. Mar 17 18:42:09.136120 systemd[1]: Starting kubelet.service... Mar 17 18:42:09.162060 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3618562093.mount: Deactivated successfully. Mar 17 18:42:09.417392 systemd[1]: Started kubelet.service. 
Mar 17 18:42:09.515452 kubelet[1757]: E0317 18:42:09.515364 1757 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 18:42:09.518781 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 18:42:09.519091 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 18:42:12.030167 env[1335]: time="2025-03-17T18:42:12.030094342Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:42:12.033888 env[1335]: time="2025-03-17T18:42:12.033826475Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:42:12.036758 env[1335]: time="2025-03-17T18:42:12.036705024Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:42:12.039341 env[1335]: time="2025-03-17T18:42:12.039292262Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:42:12.040384 env[1335]: time="2025-03-17T18:42:12.040331558Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Mar 17 18:42:15.864498 systemd[1]: Stopped kubelet.service. Mar 17 18:42:15.868166 systemd[1]: Starting kubelet.service... 
Mar 17 18:42:15.906539 systemd[1]: Reloading. Mar 17 18:42:16.031386 /usr/lib/systemd/system-generators/torcx-generator[1851]: time="2025-03-17T18:42:16Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Mar 17 18:42:16.032188 /usr/lib/systemd/system-generators/torcx-generator[1851]: time="2025-03-17T18:42:16Z" level=info msg="torcx already run" Mar 17 18:42:16.188130 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Mar 17 18:42:16.188159 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Mar 17 18:42:16.212908 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 18:42:16.322800 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Mar 17 18:42:16.347088 systemd[1]: Started kubelet.service. Mar 17 18:42:16.352579 systemd[1]: Stopping kubelet.service... Mar 17 18:42:16.353993 systemd[1]: kubelet.service: Deactivated successfully. Mar 17 18:42:16.354640 systemd[1]: Stopped kubelet.service. Mar 17 18:42:16.358740 systemd[1]: Starting kubelet.service... Mar 17 18:42:16.563787 systemd[1]: Started kubelet.service. Mar 17 18:42:16.641873 kubelet[1918]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 17 18:42:16.642336 kubelet[1918]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 17 18:42:16.642418 kubelet[1918]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 18:42:16.645242 kubelet[1918]: I0317 18:42:16.645170 1918 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 17 18:42:16.999488 kubelet[1918]: I0317 18:42:16.999305 1918 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Mar 17 18:42:16.999488 kubelet[1918]: I0317 18:42:16.999349 1918 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 17 18:42:17.000382 kubelet[1918]: I0317 18:42:17.000331 1918 server.go:927] "Client rotation is on, will bootstrap in background" Mar 17 18:42:17.032855 kubelet[1918]: E0317 18:42:17.032523 1918 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.128.0.50:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.128.0.50:6443: connect: connection refused Mar 17 18:42:17.032855 kubelet[1918]: I0317 18:42:17.032686 1918 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 18:42:17.051422 kubelet[1918]: I0317 18:42:17.051346 1918 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 17 18:42:17.052287 kubelet[1918]: I0317 18:42:17.052233 1918 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 18:42:17.052650 kubelet[1918]: I0317 18:42:17.052277 1918 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Mar 17 18:42:17.053933 kubelet[1918]: I0317 18:42:17.053887 1918 topology_manager.go:138] "Creating 
topology manager with none policy" Mar 17 18:42:17.053933 kubelet[1918]: I0317 18:42:17.053930 1918 container_manager_linux.go:301] "Creating device plugin manager" Mar 17 18:42:17.054122 kubelet[1918]: I0317 18:42:17.054107 1918 state_mem.go:36] "Initialized new in-memory state store" Mar 17 18:42:17.055737 kubelet[1918]: I0317 18:42:17.055698 1918 kubelet.go:400] "Attempting to sync node with API server" Mar 17 18:42:17.055737 kubelet[1918]: I0317 18:42:17.055735 1918 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 18:42:17.055935 kubelet[1918]: I0317 18:42:17.055770 1918 kubelet.go:312] "Adding apiserver pod source" Mar 17 18:42:17.055935 kubelet[1918]: I0317 18:42:17.055797 1918 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 17 18:42:17.070295 kubelet[1918]: W0317 18:42:17.069935 1918 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.50:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.50:6443: connect: connection refused Mar 17 18:42:17.070295 kubelet[1918]: E0317 18:42:17.070025 1918 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.128.0.50:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.50:6443: connect: connection refused Mar 17 18:42:17.070295 kubelet[1918]: W0317 18:42:17.070146 1918 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.50:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.50:6443: connect: connection refused Mar 17 18:42:17.070295 kubelet[1918]: E0317 18:42:17.070201 1918 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
"https://10.128.0.50:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.50:6443: connect: connection refused Mar 17 18:42:17.070720 kubelet[1918]: I0317 18:42:17.070470 1918 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Mar 17 18:42:17.073501 kubelet[1918]: I0317 18:42:17.073462 1918 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 18:42:17.073683 kubelet[1918]: W0317 18:42:17.073566 1918 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 17 18:42:17.075013 kubelet[1918]: I0317 18:42:17.074709 1918 server.go:1264] "Started kubelet" Mar 17 18:42:17.083594 kubelet[1918]: I0317 18:42:17.083056 1918 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 18:42:17.084485 kubelet[1918]: I0317 18:42:17.084407 1918 server.go:455] "Adding debug handlers to kubelet server" Mar 17 18:42:17.091001 kubelet[1918]: I0317 18:42:17.090898 1918 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 18:42:17.091484 kubelet[1918]: I0317 18:42:17.091461 1918 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 18:42:17.093459 kubelet[1918]: E0317 18:42:17.093268 1918 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.50:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.50:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal.182dab4374e69105 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal,UID:ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal,},FirstTimestamp:2025-03-17 18:42:17.074675973 +0000 UTC m=+0.492479523,LastTimestamp:2025-03-17 18:42:17.074675973 +0000 UTC m=+0.492479523,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal,}" Mar 17 18:42:17.099684 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Mar 17 18:42:17.100809 kubelet[1918]: I0317 18:42:17.099904 1918 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 18:42:17.101824 kubelet[1918]: I0317 18:42:17.101787 1918 volume_manager.go:291] "Starting Kubelet Volume Manager" Mar 17 18:42:17.103429 kubelet[1918]: I0317 18:42:17.102400 1918 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 17 18:42:17.103429 kubelet[1918]: I0317 18:42:17.102498 1918 reconciler.go:26] "Reconciler: start to sync state" Mar 17 18:42:17.103765 kubelet[1918]: W0317 18:42:17.103407 1918 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.50:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.50:6443: connect: connection refused Mar 17 18:42:17.103765 kubelet[1918]: E0317 18:42:17.103505 1918 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.128.0.50:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.50:6443: connect: connection refused Mar 17 18:42:17.103765 
kubelet[1918]: E0317 18:42:17.103693 1918 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.50:6443: connect: connection refused" interval="200ms"
Mar 17 18:42:17.108061 kubelet[1918]: I0317 18:42:17.108025 1918 factory.go:221] Registration of the systemd container factory successfully
Mar 17 18:42:17.108239 kubelet[1918]: I0317 18:42:17.108162 1918 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 17 18:42:17.110774 kubelet[1918]: E0317 18:42:17.110739 1918 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 17 18:42:17.112264 kubelet[1918]: I0317 18:42:17.112239 1918 factory.go:221] Registration of the containerd container factory successfully
Mar 17 18:42:17.155235 kubelet[1918]: I0317 18:42:17.155186 1918 cpu_manager.go:214] "Starting CPU manager" policy="none"
Mar 17 18:42:17.155235 kubelet[1918]: I0317 18:42:17.155211 1918 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Mar 17 18:42:17.155235 kubelet[1918]: I0317 18:42:17.155238 1918 state_mem.go:36] "Initialized new in-memory state store"
Mar 17 18:42:17.159468 kubelet[1918]: I0317 18:42:17.157825 1918 policy_none.go:49] "None policy: Start"
Mar 17 18:42:17.159468 kubelet[1918]: I0317 18:42:17.158899 1918 memory_manager.go:170] "Starting memorymanager" policy="None"
Mar 17 18:42:17.159468 kubelet[1918]: I0317 18:42:17.158929 1918 state_mem.go:35] "Initializing new in-memory state store"
Mar 17 18:42:17.163468 kubelet[1918]: I0317 18:42:17.163393 1918 kubelet_network_linux.go:50] "Initialized iptables rules."
protocol="IPv4"
Mar 17 18:42:17.166931 kubelet[1918]: I0317 18:42:17.166896 1918 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Mar 17 18:42:17.167138 kubelet[1918]: I0317 18:42:17.167120 1918 status_manager.go:217] "Starting to sync pod status with apiserver"
Mar 17 18:42:17.167315 kubelet[1918]: I0317 18:42:17.167298 1918 kubelet.go:2337] "Starting kubelet main sync loop"
Mar 17 18:42:17.167565 kubelet[1918]: E0317 18:42:17.167519 1918 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 17 18:42:17.170047 kubelet[1918]: I0317 18:42:17.170015 1918 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Mar 17 18:42:17.170265 kubelet[1918]: I0317 18:42:17.170223 1918 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 17 18:42:17.170392 kubelet[1918]: I0317 18:42:17.170374 1918 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 17 18:42:17.176727 kubelet[1918]: W0317 18:42:17.176637 1918 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.50:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.50:6443: connect: connection refused
Mar 17 18:42:17.178089 kubelet[1918]: E0317 18:42:17.178053 1918 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.128.0.50:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.50:6443: connect: connection refused
Mar 17 18:42:17.181747 kubelet[1918]: E0317 18:42:17.181708 1918 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node
\"ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal\" not found"
Mar 17 18:42:17.211601 kubelet[1918]: I0317 18:42:17.211563 1918 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal"
Mar 17 18:42:17.212488 kubelet[1918]: E0317 18:42:17.212416 1918 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.50:6443/api/v1/nodes\": dial tcp 10.128.0.50:6443: connect: connection refused" node="ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal"
Mar 17 18:42:17.269038 kubelet[1918]: I0317 18:42:17.268870 1918 topology_manager.go:215] "Topology Admit Handler" podUID="7491036d5618b6b0fd58177e920a9596" podNamespace="kube-system" podName="kube-apiserver-ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal"
Mar 17 18:42:17.286082 kubelet[1918]: I0317 18:42:17.286033 1918 topology_manager.go:215] "Topology Admit Handler" podUID="f2c3fd111ba94b53ee298573314d435a" podNamespace="kube-system" podName="kube-controller-manager-ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal"
Mar 17 18:42:17.304209 kubelet[1918]: I0317 18:42:17.304161 1918 topology_manager.go:215] "Topology Admit Handler" podUID="9a7457d33d73e2f3d1017626147a7420" podNamespace="kube-system" podName="kube-scheduler-ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal"
Mar 17 18:42:17.304518 kubelet[1918]: E0317 18:42:17.304475 1918 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.50:6443: connect: connection refused" interval="400ms"
Mar 17 18:42:17.378961 kubelet[1918]: E0317 18:42:17.378695 1918 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.50:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.50:6443: connect:
connection refused" event="&Event{ObjectMeta:{ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal.182dab4374e69105 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal,UID:ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal,},FirstTimestamp:2025-03-17 18:42:17.074675973 +0000 UTC m=+0.492479523,LastTimestamp:2025-03-17 18:42:17.074675973 +0000 UTC m=+0.492479523,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal,}"
Mar 17 18:42:17.403194 kubelet[1918]: I0317 18:42:17.403119 1918 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f2c3fd111ba94b53ee298573314d435a-ca-certs\") pod \"kube-controller-manager-ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal\" (UID: \"f2c3fd111ba94b53ee298573314d435a\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal"
Mar 17 18:42:17.403421 kubelet[1918]: I0317 18:42:17.403325 1918 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f2c3fd111ba94b53ee298573314d435a-flexvolume-dir\") pod \"kube-controller-manager-ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal\" (UID: \"f2c3fd111ba94b53ee298573314d435a\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal"
Mar 17 18:42:17.403421 kubelet[1918]: I0317 18:42:17.403395 1918 reconciler_common.go:247]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f2c3fd111ba94b53ee298573314d435a-k8s-certs\") pod \"kube-controller-manager-ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal\" (UID: \"f2c3fd111ba94b53ee298573314d435a\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal"
Mar 17 18:42:17.403421 kubelet[1918]: I0317 18:42:17.403424 1918 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f2c3fd111ba94b53ee298573314d435a-kubeconfig\") pod \"kube-controller-manager-ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal\" (UID: \"f2c3fd111ba94b53ee298573314d435a\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal"
Mar 17 18:42:17.403704 kubelet[1918]: I0317 18:42:17.403529 1918 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f2c3fd111ba94b53ee298573314d435a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal\" (UID: \"f2c3fd111ba94b53ee298573314d435a\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal"
Mar 17 18:42:17.403704 kubelet[1918]: I0317 18:42:17.403594 1918 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7491036d5618b6b0fd58177e920a9596-ca-certs\") pod \"kube-apiserver-ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal\" (UID: \"7491036d5618b6b0fd58177e920a9596\") " pod="kube-system/kube-apiserver-ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal"
Mar 17 18:42:17.403704 kubelet[1918]: I0317 18:42:17.403660 1918
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7491036d5618b6b0fd58177e920a9596-k8s-certs\") pod \"kube-apiserver-ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal\" (UID: \"7491036d5618b6b0fd58177e920a9596\") " pod="kube-system/kube-apiserver-ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal"
Mar 17 18:42:17.403895 kubelet[1918]: I0317 18:42:17.403747 1918 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7491036d5618b6b0fd58177e920a9596-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal\" (UID: \"7491036d5618b6b0fd58177e920a9596\") " pod="kube-system/kube-apiserver-ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal"
Mar 17 18:42:17.431222 kubelet[1918]: I0317 18:42:17.431180 1918 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal"
Mar 17 18:42:17.431715 kubelet[1918]: E0317 18:42:17.431640 1918 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.50:6443/api/v1/nodes\": dial tcp 10.128.0.50:6443: connect: connection refused" node="ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal"
Mar 17 18:42:17.504273 kubelet[1918]: I0317 18:42:17.504218 1918 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9a7457d33d73e2f3d1017626147a7420-kubeconfig\") pod \"kube-scheduler-ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal\" (UID: \"9a7457d33d73e2f3d1017626147a7420\") " pod="kube-system/kube-scheduler-ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal"
Mar 17 18:42:17.612165 env[1335]: time="2025-03-17T18:42:17.612017871Z" level=info
msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal,Uid:7491036d5618b6b0fd58177e920a9596,Namespace:kube-system,Attempt:0,}"
Mar 17 18:42:17.616876 env[1335]: time="2025-03-17T18:42:17.616510018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal,Uid:f2c3fd111ba94b53ee298573314d435a,Namespace:kube-system,Attempt:0,}"
Mar 17 18:42:17.620959 env[1335]: time="2025-03-17T18:42:17.620864225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal,Uid:9a7457d33d73e2f3d1017626147a7420,Namespace:kube-system,Attempt:0,}"
Mar 17 18:42:17.705847 kubelet[1918]: E0317 18:42:17.705766 1918 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.50:6443: connect: connection refused" interval="800ms"
Mar 17 18:42:17.837376 kubelet[1918]: I0317 18:42:17.837335 1918 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal"
Mar 17 18:42:17.837873 kubelet[1918]: E0317 18:42:17.837808 1918 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.50:6443/api/v1/nodes\": dial tcp 10.128.0.50:6443: connect: connection refused" node="ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal"
Mar 17 18:42:17.939622 kubelet[1918]: W0317 18:42:17.939397 1918 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.50:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.50:6443: connect: connection refused
Mar
17 18:42:17.939622 kubelet[1918]: E0317 18:42:17.939526 1918 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.128.0.50:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.50:6443: connect: connection refused
Mar 17 18:42:18.015112 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1596130985.mount: Deactivated successfully.
Mar 17 18:42:18.026416 env[1335]: time="2025-03-17T18:42:18.026334189Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:42:18.027952 env[1335]: time="2025-03-17T18:42:18.027896408Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:42:18.031529 env[1335]: time="2025-03-17T18:42:18.031483159Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:42:18.033164 env[1335]: time="2025-03-17T18:42:18.033105534Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:42:18.034400 env[1335]: time="2025-03-17T18:42:18.034338531Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:42:18.037186 env[1335]: time="2025-03-17T18:42:18.037129258Z" level=info msg="ImageUpdate event
&ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:42:18.038356 env[1335]: time="2025-03-17T18:42:18.038304785Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:42:18.039296 env[1335]: time="2025-03-17T18:42:18.039253296Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:42:18.041679 env[1335]: time="2025-03-17T18:42:18.041631746Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:42:18.042590 env[1335]: time="2025-03-17T18:42:18.042541442Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:42:18.043522 env[1335]: time="2025-03-17T18:42:18.043485382Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:42:18.078905 env[1335]: time="2025-03-17T18:42:18.078808316Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 18:42:18.079336 env[1335]: time="2025-03-17T18:42:18.079166017Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..."
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 18:42:18.079336 env[1335]: time="2025-03-17T18:42:18.079205123Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 18:42:18.080214 env[1335]: time="2025-03-17T18:42:18.079834019Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6feb8c8e1576f974acf568577b0ccc220cb6c69acdfc232e51d96087c6cd7396 pid=1957 runtime=io.containerd.runc.v2
Mar 17 18:42:18.092399 env[1335]: time="2025-03-17T18:42:18.092350403Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:42:18.133978 env[1335]: time="2025-03-17T18:42:18.130092614Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 18:42:18.133978 env[1335]: time="2025-03-17T18:42:18.130197032Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 18:42:18.133978 env[1335]: time="2025-03-17T18:42:18.130237718Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 18:42:18.133978 env[1335]: time="2025-03-17T18:42:18.130494708Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/39c97862039aa421708ba3fb8e09e5465356379f9ee208c77f0dd3139f1990df pid=1988 runtime=io.containerd.runc.v2
Mar 17 18:42:18.166469 env[1335]: time="2025-03-17T18:42:18.165087706Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..."
runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 18:42:18.166469 env[1335]: time="2025-03-17T18:42:18.165223472Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 18:42:18.166469 env[1335]: time="2025-03-17T18:42:18.165283901Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 18:42:18.166469 env[1335]: time="2025-03-17T18:42:18.165552266Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0925cee2d884c3cc67c4596fbfd387e4e46a60b834e7443805a284b4e647c80d pid=2020 runtime=io.containerd.runc.v2
Mar 17 18:42:18.239621 env[1335]: time="2025-03-17T18:42:18.235096322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal,Uid:7491036d5618b6b0fd58177e920a9596,Namespace:kube-system,Attempt:0,} returns sandbox id \"6feb8c8e1576f974acf568577b0ccc220cb6c69acdfc232e51d96087c6cd7396\""
Mar 17 18:42:18.240809 kubelet[1918]: E0317 18:42:18.240766 1918 kubelet_pods.go:513] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-21291"
Mar 17 18:42:18.248058 env[1335]: time="2025-03-17T18:42:18.247995858Z" level=info msg="CreateContainer within sandbox \"6feb8c8e1576f974acf568577b0ccc220cb6c69acdfc232e51d96087c6cd7396\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Mar 17 18:42:18.281376 env[1335]: time="2025-03-17T18:42:18.281311289Z" level=info msg="CreateContainer within sandbox \"6feb8c8e1576f974acf568577b0ccc220cb6c69acdfc232e51d96087c6cd7396\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id
\"ea821df7c840737c105eb13878e9ae0aaa6f62acf665c09c4ee704fe248e7c34\""
Mar 17 18:42:18.282927 env[1335]: time="2025-03-17T18:42:18.282878196Z" level=info msg="StartContainer for \"ea821df7c840737c105eb13878e9ae0aaa6f62acf665c09c4ee704fe248e7c34\""
Mar 17 18:42:18.293978 env[1335]: time="2025-03-17T18:42:18.293923263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal,Uid:f2c3fd111ba94b53ee298573314d435a,Namespace:kube-system,Attempt:0,} returns sandbox id \"39c97862039aa421708ba3fb8e09e5465356379f9ee208c77f0dd3139f1990df\""
Mar 17 18:42:18.302283 kubelet[1918]: E0317 18:42:18.301768 1918 kubelet_pods.go:513] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-3510-3-7-347b294db6c45ef4d774.c.flat"
Mar 17 18:42:18.304284 env[1335]: time="2025-03-17T18:42:18.304236854Z" level=info msg="CreateContainer within sandbox \"39c97862039aa421708ba3fb8e09e5465356379f9ee208c77f0dd3139f1990df\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Mar 17 18:42:18.325812 env[1335]: time="2025-03-17T18:42:18.325748783Z" level=info msg="CreateContainer within sandbox \"39c97862039aa421708ba3fb8e09e5465356379f9ee208c77f0dd3139f1990df\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9b1c5c2b5feea880aed91b376503a3e0a2b939d1954a4d8b1f49152fe5a0633e\""
Mar 17 18:42:18.326943 env[1335]: time="2025-03-17T18:42:18.326901436Z" level=info msg="StartContainer for \"9b1c5c2b5feea880aed91b376503a3e0a2b939d1954a4d8b1f49152fe5a0633e\""
Mar 17 18:42:18.343832 env[1335]: time="2025-03-17T18:42:18.343648938Z" level=info msg="RunPodSandbox for
&PodSandboxMetadata{Name:kube-scheduler-ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal,Uid:9a7457d33d73e2f3d1017626147a7420,Namespace:kube-system,Attempt:0,} returns sandbox id \"0925cee2d884c3cc67c4596fbfd387e4e46a60b834e7443805a284b4e647c80d\""
Mar 17 18:42:18.384265 kubelet[1918]: E0317 18:42:18.380656 1918 kubelet_pods.go:513] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-21291"
Mar 17 18:42:18.384470 env[1335]: time="2025-03-17T18:42:18.382818285Z" level=info msg="CreateContainer within sandbox \"0925cee2d884c3cc67c4596fbfd387e4e46a60b834e7443805a284b4e647c80d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Mar 17 18:42:18.393518 kubelet[1918]: W0317 18:42:18.388376 1918 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.50:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.50:6443: connect: connection refused
Mar 17 18:42:18.393518 kubelet[1918]: E0317 18:42:18.388500 1918 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.128.0.50:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.50:6443: connect: connection refused
Mar 17 18:42:18.419092 env[1335]: time="2025-03-17T18:42:18.419029956Z" level=info msg="CreateContainer within sandbox \"0925cee2d884c3cc67c4596fbfd387e4e46a60b834e7443805a284b4e647c80d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6ca331c4803dc623beb3694e63bf4239d55b9178a8b39aa339afe3672f6aac54\""
Mar 17 18:42:18.420136 env[1335]: time="2025-03-17T18:42:18.420086513Z" level=info msg="StartContainer for \"6ca331c4803dc623beb3694e63bf4239d55b9178a8b39aa339afe3672f6aac54\""
Mar 17
18:42:18.476627 env[1335]: time="2025-03-17T18:42:18.476413118Z" level=info msg="StartContainer for \"ea821df7c840737c105eb13878e9ae0aaa6f62acf665c09c4ee704fe248e7c34\" returns successfully"
Mar 17 18:42:18.510279 kubelet[1918]: E0317 18:42:18.510070 1918 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.50:6443: connect: connection refused" interval="1.6s"
Mar 17 18:42:18.537673 env[1335]: time="2025-03-17T18:42:18.537614264Z" level=info msg="StartContainer for \"9b1c5c2b5feea880aed91b376503a3e0a2b939d1954a4d8b1f49152fe5a0633e\" returns successfully"
Mar 17 18:42:18.642732 kubelet[1918]: I0317 18:42:18.642696 1918 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal"
Mar 17 18:42:18.706507 env[1335]: time="2025-03-17T18:42:18.706399146Z" level=info msg="StartContainer for \"6ca331c4803dc623beb3694e63bf4239d55b9178a8b39aa339afe3672f6aac54\" returns successfully"
Mar 17 18:42:20.951850 kubelet[1918]: E0317 18:42:20.951772 1918 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal\" not found" node="ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal"
Mar 17 18:42:21.002108 kubelet[1918]: I0317 18:42:21.002047 1918 kubelet_node_status.go:76] "Successfully registered node" node="ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal"
Mar 17 18:42:21.068546 kubelet[1918]: I0317 18:42:21.068488 1918 apiserver.go:52] "Watching apiserver"
Mar 17 18:42:21.102893 kubelet[1918]: I0317 18:42:21.102857 1918 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Mar 17 18:42:21.433981 kubelet[1918]: E0317 18:42:21.433930 1918 kubelet.go:1928] "Failed creating a
mirror pod for" err="pods \"kube-apiserver-ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal"
Mar 17 18:42:22.059815 kubelet[1918]: W0317 18:42:22.059773 1918 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots]
Mar 17 18:42:23.041116 systemd[1]: Reloading.
Mar 17 18:42:23.158515 /usr/lib/systemd/system-generators/torcx-generator[2200]: time="2025-03-17T18:42:23Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
Mar 17 18:42:23.159154 /usr/lib/systemd/system-generators/torcx-generator[2200]: time="2025-03-17T18:42:23Z" level=info msg="torcx already run"
Mar 17 18:42:23.286419 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Mar 17 18:42:23.286462 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Mar 17 18:42:23.312148 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 18:42:23.455515 kubelet[1918]: E0317 18:42:23.455320 1918 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal.182dab4374e69105 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal,UID:ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal,},FirstTimestamp:2025-03-17 18:42:17.074675973 +0000 UTC m=+0.492479523,LastTimestamp:2025-03-17 18:42:17.074675973 +0000 UTC m=+0.492479523,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal,}"
Mar 17 18:42:23.457602 systemd[1]: Stopping kubelet.service...
Mar 17 18:42:23.472293 systemd[1]: kubelet.service: Deactivated successfully.
Mar 17 18:42:23.472796 systemd[1]: Stopped kubelet.service.
Mar 17 18:42:23.476625 systemd[1]: Starting kubelet.service...
Mar 17 18:42:23.746786 systemd[1]: Started kubelet.service.
Mar 17 18:42:23.860157 kubelet[2258]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 17 18:42:23.860157 kubelet[2258]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Mar 17 18:42:23.860157 kubelet[2258]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag.
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 17 18:42:23.860872 kubelet[2258]: I0317 18:42:23.860248 2258 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 17 18:42:23.867929 kubelet[2258]: I0317 18:42:23.867880 2258 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Mar 17 18:42:23.867929 kubelet[2258]: I0317 18:42:23.867914 2258 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 17 18:42:23.868718 kubelet[2258]: I0317 18:42:23.868687 2258 server.go:927] "Client rotation is on, will bootstrap in background"
Mar 17 18:42:23.873974 kubelet[2258]: I0317 18:42:23.873940 2258 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Mar 17 18:42:23.875872 kubelet[2258]: I0317 18:42:23.875835 2258 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 17 18:42:23.887211 kubelet[2258]: I0317 18:42:23.887167 2258 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified.
defaulting to /" Mar 17 18:42:23.887949 kubelet[2258]: I0317 18:42:23.887886 2258 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 18:42:23.888202 kubelet[2258]: I0317 18:42:23.887938 2258 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Mar 17 18:42:23.888382 kubelet[2258]: I0317 18:42:23.888214 2258 topology_manager.go:138] "Creating 
topology manager with none policy" Mar 17 18:42:23.888382 kubelet[2258]: I0317 18:42:23.888234 2258 container_manager_linux.go:301] "Creating device plugin manager" Mar 17 18:42:23.888382 kubelet[2258]: I0317 18:42:23.888299 2258 state_mem.go:36] "Initialized new in-memory state store" Mar 17 18:42:23.893392 kubelet[2258]: I0317 18:42:23.889194 2258 kubelet.go:400] "Attempting to sync node with API server" Mar 17 18:42:23.893392 kubelet[2258]: I0317 18:42:23.889226 2258 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 18:42:23.893392 kubelet[2258]: I0317 18:42:23.889280 2258 kubelet.go:312] "Adding apiserver pod source" Mar 17 18:42:23.893392 kubelet[2258]: I0317 18:42:23.889302 2258 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 17 18:42:23.906181 kubelet[2258]: I0317 18:42:23.900505 2258 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Mar 17 18:42:23.906181 kubelet[2258]: I0317 18:42:23.900771 2258 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 18:42:23.906181 kubelet[2258]: I0317 18:42:23.901302 2258 server.go:1264] "Started kubelet" Mar 17 18:42:23.906181 kubelet[2258]: I0317 18:42:23.903840 2258 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 18:42:23.914681 kubelet[2258]: I0317 18:42:23.914623 2258 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 18:42:23.916036 kubelet[2258]: I0317 18:42:23.916007 2258 server.go:455] "Adding debug handlers to kubelet server" Mar 17 18:42:23.918309 kubelet[2258]: I0317 18:42:23.918235 2258 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 18:42:23.918583 kubelet[2258]: I0317 18:42:23.918564 2258 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 18:42:23.926339 
kubelet[2258]: I0317 18:42:23.926199 2258 volume_manager.go:291] "Starting Kubelet Volume Manager" Mar 17 18:42:23.931114 kubelet[2258]: I0317 18:42:23.931066 2258 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 17 18:42:23.932661 kubelet[2258]: I0317 18:42:23.932602 2258 reconciler.go:26] "Reconciler: start to sync state" Mar 17 18:42:23.937626 kubelet[2258]: I0317 18:42:23.937589 2258 factory.go:221] Registration of the systemd container factory successfully Mar 17 18:42:23.937831 kubelet[2258]: I0317 18:42:23.937755 2258 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 18:42:23.942791 kubelet[2258]: E0317 18:42:23.942764 2258 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 17 18:42:23.946089 kubelet[2258]: I0317 18:42:23.946066 2258 factory.go:221] Registration of the containerd container factory successfully Mar 17 18:42:23.963788 kubelet[2258]: I0317 18:42:23.963744 2258 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 17 18:42:23.966879 kubelet[2258]: I0317 18:42:23.966833 2258 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Mar 17 18:42:23.967114 kubelet[2258]: I0317 18:42:23.967094 2258 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 17 18:42:23.967362 kubelet[2258]: I0317 18:42:23.967344 2258 kubelet.go:2337] "Starting kubelet main sync loop" Mar 17 18:42:23.967610 kubelet[2258]: E0317 18:42:23.967573 2258 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 17 18:42:24.033534 kubelet[2258]: I0317 18:42:24.033490 2258 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal" Mar 17 18:42:24.050981 kubelet[2258]: I0317 18:42:24.050941 2258 kubelet_node_status.go:112] "Node was previously registered" node="ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal" Mar 17 18:42:24.051176 kubelet[2258]: I0317 18:42:24.051076 2258 kubelet_node_status.go:76] "Successfully registered node" node="ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal" Mar 17 18:42:24.056502 kubelet[2258]: I0317 18:42:24.056470 2258 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 17 18:42:24.056752 kubelet[2258]: I0317 18:42:24.056730 2258 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 17 18:42:24.056925 kubelet[2258]: I0317 18:42:24.056886 2258 state_mem.go:36] "Initialized new in-memory state store" Mar 17 18:42:24.057312 kubelet[2258]: I0317 18:42:24.057293 2258 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 17 18:42:24.057514 kubelet[2258]: I0317 18:42:24.057423 2258 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 17 18:42:24.057666 kubelet[2258]: I0317 18:42:24.057626 2258 policy_none.go:49] "None policy: Start" Mar 17 18:42:24.060558 kubelet[2258]: I0317 18:42:24.060529 2258 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 17 18:42:24.060679 kubelet[2258]: I0317 18:42:24.060568 2258 state_mem.go:35] 
"Initializing new in-memory state store" Mar 17 18:42:24.060973 kubelet[2258]: I0317 18:42:24.060951 2258 state_mem.go:75] "Updated machine memory state" Mar 17 18:42:24.064347 sudo[2289]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Mar 17 18:42:24.064936 kubelet[2258]: I0317 18:42:24.064860 2258 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 18:42:24.065427 kubelet[2258]: I0317 18:42:24.065356 2258 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 18:42:24.066069 sudo[2289]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Mar 17 18:42:24.067911 kubelet[2258]: I0317 18:42:24.067802 2258 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 18:42:24.068929 kubelet[2258]: I0317 18:42:24.068746 2258 topology_manager.go:215] "Topology Admit Handler" podUID="7491036d5618b6b0fd58177e920a9596" podNamespace="kube-system" podName="kube-apiserver-ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal" Mar 17 18:42:24.069172 kubelet[2258]: I0317 18:42:24.069028 2258 topology_manager.go:215] "Topology Admit Handler" podUID="f2c3fd111ba94b53ee298573314d435a" podNamespace="kube-system" podName="kube-controller-manager-ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal" Mar 17 18:42:24.069388 kubelet[2258]: I0317 18:42:24.069249 2258 topology_manager.go:215] "Topology Admit Handler" podUID="9a7457d33d73e2f3d1017626147a7420" podNamespace="kube-system" podName="kube-scheduler-ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal" Mar 17 18:42:24.097965 kubelet[2258]: W0317 18:42:24.097927 2258 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Mar 17 18:42:24.098781 kubelet[2258]: W0317 
18:42:24.098752 2258 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Mar 17 18:42:24.099730 kubelet[2258]: W0317 18:42:24.099701 2258 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Mar 17 18:42:24.099863 kubelet[2258]: E0317 18:42:24.099778 2258 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-controller-manager-ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal" Mar 17 18:42:24.242202 kubelet[2258]: I0317 18:42:24.242150 2258 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7491036d5618b6b0fd58177e920a9596-ca-certs\") pod \"kube-apiserver-ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal\" (UID: \"7491036d5618b6b0fd58177e920a9596\") " pod="kube-system/kube-apiserver-ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal" Mar 17 18:42:24.242492 kubelet[2258]: I0317 18:42:24.242466 2258 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f2c3fd111ba94b53ee298573314d435a-ca-certs\") pod \"kube-controller-manager-ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal\" (UID: \"f2c3fd111ba94b53ee298573314d435a\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal" Mar 17 18:42:24.242630 kubelet[2258]: I0317 18:42:24.242609 2258 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/f2c3fd111ba94b53ee298573314d435a-k8s-certs\") pod \"kube-controller-manager-ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal\" (UID: \"f2c3fd111ba94b53ee298573314d435a\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal" Mar 17 18:42:24.242753 kubelet[2258]: I0317 18:42:24.242732 2258 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f2c3fd111ba94b53ee298573314d435a-kubeconfig\") pod \"kube-controller-manager-ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal\" (UID: \"f2c3fd111ba94b53ee298573314d435a\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal" Mar 17 18:42:24.242961 kubelet[2258]: I0317 18:42:24.242912 2258 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f2c3fd111ba94b53ee298573314d435a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal\" (UID: \"f2c3fd111ba94b53ee298573314d435a\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal" Mar 17 18:42:24.243108 kubelet[2258]: I0317 18:42:24.243085 2258 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9a7457d33d73e2f3d1017626147a7420-kubeconfig\") pod \"kube-scheduler-ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal\" (UID: \"9a7457d33d73e2f3d1017626147a7420\") " pod="kube-system/kube-scheduler-ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal" Mar 17 18:42:24.243254 kubelet[2258]: I0317 18:42:24.243231 2258 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" 
(UniqueName: \"kubernetes.io/host-path/7491036d5618b6b0fd58177e920a9596-k8s-certs\") pod \"kube-apiserver-ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal\" (UID: \"7491036d5618b6b0fd58177e920a9596\") " pod="kube-system/kube-apiserver-ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal" Mar 17 18:42:24.243433 kubelet[2258]: I0317 18:42:24.243408 2258 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7491036d5618b6b0fd58177e920a9596-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal\" (UID: \"7491036d5618b6b0fd58177e920a9596\") " pod="kube-system/kube-apiserver-ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal" Mar 17 18:42:24.243627 kubelet[2258]: I0317 18:42:24.243603 2258 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f2c3fd111ba94b53ee298573314d435a-flexvolume-dir\") pod \"kube-controller-manager-ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal\" (UID: \"f2c3fd111ba94b53ee298573314d435a\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal" Mar 17 18:42:24.874121 sudo[2289]: pam_unix(sudo:session): session closed for user root Mar 17 18:42:24.896922 kubelet[2258]: I0317 18:42:24.896866 2258 apiserver.go:52] "Watching apiserver" Mar 17 18:42:24.932367 kubelet[2258]: I0317 18:42:24.932319 2258 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 17 18:42:25.001601 kubelet[2258]: W0317 18:42:25.001557 2258 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Mar 17 18:42:25.001931 kubelet[2258]: E0317 18:42:25.001893 2258 
kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-apiserver-ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal" Mar 17 18:42:25.037860 kubelet[2258]: I0317 18:42:25.037768 2258 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal" podStartSLOduration=1.0377423669999999 podStartE2EDuration="1.037742367s" podCreationTimestamp="2025-03-17 18:42:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:42:25.02387091 +0000 UTC m=+1.257662296" watchObservedRunningTime="2025-03-17 18:42:25.037742367 +0000 UTC m=+1.271533738" Mar 17 18:42:25.051815 kubelet[2258]: I0317 18:42:25.051725 2258 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal" podStartSLOduration=1.051696601 podStartE2EDuration="1.051696601s" podCreationTimestamp="2025-03-17 18:42:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:42:25.038413864 +0000 UTC m=+1.272205258" watchObservedRunningTime="2025-03-17 18:42:25.051696601 +0000 UTC m=+1.285487987" Mar 17 18:42:25.067546 kubelet[2258]: I0317 18:42:25.067472 2258 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal" podStartSLOduration=3.067408191 podStartE2EDuration="3.067408191s" podCreationTimestamp="2025-03-17 18:42:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:42:25.05383286 +0000 UTC 
m=+1.287624252" watchObservedRunningTime="2025-03-17 18:42:25.067408191 +0000 UTC m=+1.301199589" Mar 17 18:42:26.871937 sudo[1558]: pam_unix(sudo:session): session closed for user root Mar 17 18:42:26.914559 sshd[1554]: pam_unix(sshd:session): session closed for user core Mar 17 18:42:26.919992 systemd[1]: sshd@5-10.128.0.50:22-139.178.89.65:59062.service: Deactivated successfully. Mar 17 18:42:26.921540 systemd-logind[1317]: Session 5 logged out. Waiting for processes to exit. Mar 17 18:42:26.922362 systemd[1]: session-5.scope: Deactivated successfully. Mar 17 18:42:26.923922 systemd-logind[1317]: Removed session 5. Mar 17 18:42:29.907111 update_engine[1321]: I0317 18:42:29.907022 1321 update_attempter.cc:509] Updating boot flags... Mar 17 18:42:38.813960 kubelet[2258]: I0317 18:42:38.813919 2258 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 17 18:42:38.815405 env[1335]: time="2025-03-17T18:42:38.815261775Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Mar 17 18:42:38.816179 kubelet[2258]: I0317 18:42:38.816154 2258 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 17 18:42:38.932821 kubelet[2258]: I0317 18:42:38.932773 2258 topology_manager.go:215] "Topology Admit Handler" podUID="266a1e42-d052-408e-a36e-7da75f55f69f" podNamespace="kube-system" podName="cilium-zztvw" Mar 17 18:42:38.943620 kubelet[2258]: I0317 18:42:38.943563 2258 topology_manager.go:215] "Topology Admit Handler" podUID="9c6c0b70-8b17-4bd2-989b-3c9a278429e1" podNamespace="kube-system" podName="kube-proxy-xgsxq" Mar 17 18:42:38.956555 kubelet[2258]: W0317 18:42:38.956491 2258 reflector.go:547] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal' and this object Mar 17 18:42:38.956769 kubelet[2258]: E0317 18:42:38.956568 2258 reflector.go:150] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal' and this object Mar 17 18:42:38.956912 kubelet[2258]: W0317 18:42:38.956888 2258 reflector.go:547] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal' and 
this object Mar 17 18:42:38.956996 kubelet[2258]: E0317 18:42:38.956925 2258 reflector.go:150] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal' and this object Mar 17 18:42:38.957265 kubelet[2258]: W0317 18:42:38.957219 2258 reflector.go:547] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal' and this object Mar 17 18:42:38.957371 kubelet[2258]: E0317 18:42:38.957276 2258 reflector.go:150] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal' and this object Mar 17 18:42:38.957570 kubelet[2258]: W0317 18:42:38.957517 2258 reflector.go:547] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal' and this object Mar 17 18:42:38.957672 kubelet[2258]: E0317 18:42:38.957579 
2258 reflector.go:150] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal' and this object Mar 17 18:42:38.957859 kubelet[2258]: W0317 18:42:38.957834 2258 reflector.go:547] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal' and this object Mar 17 18:42:38.957944 kubelet[2258]: E0317 18:42:38.957890 2258 reflector.go:150] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal' and this object Mar 17 18:42:39.044323 kubelet[2258]: I0317 18:42:39.044267 2258 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/266a1e42-d052-408e-a36e-7da75f55f69f-cni-path\") pod \"cilium-zztvw\" (UID: \"266a1e42-d052-408e-a36e-7da75f55f69f\") " pod="kube-system/cilium-zztvw" Mar 17 18:42:39.044572 kubelet[2258]: I0317 18:42:39.044344 2258 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qphnd\" (UniqueName: 
\"kubernetes.io/projected/266a1e42-d052-408e-a36e-7da75f55f69f-kube-api-access-qphnd\") pod \"cilium-zztvw\" (UID: \"266a1e42-d052-408e-a36e-7da75f55f69f\") " pod="kube-system/cilium-zztvw" Mar 17 18:42:39.044572 kubelet[2258]: I0317 18:42:39.044375 2258 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/266a1e42-d052-408e-a36e-7da75f55f69f-lib-modules\") pod \"cilium-zztvw\" (UID: \"266a1e42-d052-408e-a36e-7da75f55f69f\") " pod="kube-system/cilium-zztvw" Mar 17 18:42:39.044572 kubelet[2258]: I0317 18:42:39.044419 2258 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9c6c0b70-8b17-4bd2-989b-3c9a278429e1-kube-proxy\") pod \"kube-proxy-xgsxq\" (UID: \"9c6c0b70-8b17-4bd2-989b-3c9a278429e1\") " pod="kube-system/kube-proxy-xgsxq" Mar 17 18:42:39.044572 kubelet[2258]: I0317 18:42:39.044473 2258 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/266a1e42-d052-408e-a36e-7da75f55f69f-cilium-run\") pod \"cilium-zztvw\" (UID: \"266a1e42-d052-408e-a36e-7da75f55f69f\") " pod="kube-system/cilium-zztvw" Mar 17 18:42:39.044572 kubelet[2258]: I0317 18:42:39.044499 2258 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/266a1e42-d052-408e-a36e-7da75f55f69f-bpf-maps\") pod \"cilium-zztvw\" (UID: \"266a1e42-d052-408e-a36e-7da75f55f69f\") " pod="kube-system/cilium-zztvw" Mar 17 18:42:39.044572 kubelet[2258]: I0317 18:42:39.044545 2258 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/266a1e42-d052-408e-a36e-7da75f55f69f-etc-cni-netd\") pod \"cilium-zztvw\" (UID: 
\"266a1e42-d052-408e-a36e-7da75f55f69f\") " pod="kube-system/cilium-zztvw" Mar 17 18:42:39.044931 kubelet[2258]: I0317 18:42:39.044571 2258 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/266a1e42-d052-408e-a36e-7da75f55f69f-host-proc-sys-net\") pod \"cilium-zztvw\" (UID: \"266a1e42-d052-408e-a36e-7da75f55f69f\") " pod="kube-system/cilium-zztvw" Mar 17 18:42:39.044931 kubelet[2258]: I0317 18:42:39.044598 2258 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/266a1e42-d052-408e-a36e-7da75f55f69f-cilium-cgroup\") pod \"cilium-zztvw\" (UID: \"266a1e42-d052-408e-a36e-7da75f55f69f\") " pod="kube-system/cilium-zztvw" Mar 17 18:42:39.044931 kubelet[2258]: I0317 18:42:39.044645 2258 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/266a1e42-d052-408e-a36e-7da75f55f69f-cilium-config-path\") pod \"cilium-zztvw\" (UID: \"266a1e42-d052-408e-a36e-7da75f55f69f\") " pod="kube-system/cilium-zztvw" Mar 17 18:42:39.044931 kubelet[2258]: I0317 18:42:39.044677 2258 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/266a1e42-d052-408e-a36e-7da75f55f69f-host-proc-sys-kernel\") pod \"cilium-zztvw\" (UID: \"266a1e42-d052-408e-a36e-7da75f55f69f\") " pod="kube-system/cilium-zztvw" Mar 17 18:42:39.044931 kubelet[2258]: I0317 18:42:39.044727 2258 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66hps\" (UniqueName: \"kubernetes.io/projected/9c6c0b70-8b17-4bd2-989b-3c9a278429e1-kube-api-access-66hps\") pod \"kube-proxy-xgsxq\" (UID: \"9c6c0b70-8b17-4bd2-989b-3c9a278429e1\") " pod="kube-system/kube-proxy-xgsxq" Mar 
17 18:42:39.045210 kubelet[2258]: I0317 18:42:39.044755 2258 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/266a1e42-d052-408e-a36e-7da75f55f69f-xtables-lock\") pod \"cilium-zztvw\" (UID: \"266a1e42-d052-408e-a36e-7da75f55f69f\") " pod="kube-system/cilium-zztvw" Mar 17 18:42:39.045210 kubelet[2258]: I0317 18:42:39.044822 2258 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/266a1e42-d052-408e-a36e-7da75f55f69f-clustermesh-secrets\") pod \"cilium-zztvw\" (UID: \"266a1e42-d052-408e-a36e-7da75f55f69f\") " pod="kube-system/cilium-zztvw" Mar 17 18:42:39.045210 kubelet[2258]: I0317 18:42:39.045090 2258 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/266a1e42-d052-408e-a36e-7da75f55f69f-hubble-tls\") pod \"cilium-zztvw\" (UID: \"266a1e42-d052-408e-a36e-7da75f55f69f\") " pod="kube-system/cilium-zztvw" Mar 17 18:42:39.045210 kubelet[2258]: I0317 18:42:39.045121 2258 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/266a1e42-d052-408e-a36e-7da75f55f69f-hostproc\") pod \"cilium-zztvw\" (UID: \"266a1e42-d052-408e-a36e-7da75f55f69f\") " pod="kube-system/cilium-zztvw" Mar 17 18:42:39.045458 kubelet[2258]: I0317 18:42:39.045251 2258 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9c6c0b70-8b17-4bd2-989b-3c9a278429e1-xtables-lock\") pod \"kube-proxy-xgsxq\" (UID: \"9c6c0b70-8b17-4bd2-989b-3c9a278429e1\") " pod="kube-system/kube-proxy-xgsxq" Mar 17 18:42:39.045458 kubelet[2258]: I0317 18:42:39.045282 2258 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9c6c0b70-8b17-4bd2-989b-3c9a278429e1-lib-modules\") pod \"kube-proxy-xgsxq\" (UID: \"9c6c0b70-8b17-4bd2-989b-3c9a278429e1\") " pod="kube-system/kube-proxy-xgsxq" Mar 17 18:42:39.186495 kubelet[2258]: I0317 18:42:39.186324 2258 topology_manager.go:215] "Topology Admit Handler" podUID="04780724-7228-47d9-965c-fe435db91b1e" podNamespace="kube-system" podName="cilium-operator-599987898-xqxw5" Mar 17 18:42:39.246967 kubelet[2258]: I0317 18:42:39.246931 2258 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/04780724-7228-47d9-965c-fe435db91b1e-cilium-config-path\") pod \"cilium-operator-599987898-xqxw5\" (UID: \"04780724-7228-47d9-965c-fe435db91b1e\") " pod="kube-system/cilium-operator-599987898-xqxw5" Mar 17 18:42:39.247303 kubelet[2258]: I0317 18:42:39.247270 2258 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86mtb\" (UniqueName: \"kubernetes.io/projected/04780724-7228-47d9-965c-fe435db91b1e-kube-api-access-86mtb\") pod \"cilium-operator-599987898-xqxw5\" (UID: \"04780724-7228-47d9-965c-fe435db91b1e\") " pod="kube-system/cilium-operator-599987898-xqxw5" Mar 17 18:42:40.147316 kubelet[2258]: E0317 18:42:40.147249 2258 secret.go:194] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition Mar 17 18:42:40.148032 kubelet[2258]: E0317 18:42:40.147397 2258 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/266a1e42-d052-408e-a36e-7da75f55f69f-clustermesh-secrets podName:266a1e42-d052-408e-a36e-7da75f55f69f nodeName:}" failed. No retries permitted until 2025-03-17 18:42:40.647366782 +0000 UTC m=+16.881158156 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/266a1e42-d052-408e-a36e-7da75f55f69f-clustermesh-secrets") pod "cilium-zztvw" (UID: "266a1e42-d052-408e-a36e-7da75f55f69f") : failed to sync secret cache: timed out waiting for the condition Mar 17 18:42:40.148334 kubelet[2258]: E0317 18:42:40.148307 2258 configmap.go:199] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition Mar 17 18:42:40.148614 kubelet[2258]: E0317 18:42:40.148591 2258 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9c6c0b70-8b17-4bd2-989b-3c9a278429e1-kube-proxy podName:9c6c0b70-8b17-4bd2-989b-3c9a278429e1 nodeName:}" failed. No retries permitted until 2025-03-17 18:42:40.648565125 +0000 UTC m=+16.882356521 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/9c6c0b70-8b17-4bd2-989b-3c9a278429e1-kube-proxy") pod "kube-proxy-xgsxq" (UID: "9c6c0b70-8b17-4bd2-989b-3c9a278429e1") : failed to sync configmap cache: timed out waiting for the condition Mar 17 18:42:40.187034 kubelet[2258]: E0317 18:42:40.186971 2258 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Mar 17 18:42:40.187034 kubelet[2258]: E0317 18:42:40.187027 2258 projected.go:200] Error preparing data for projected volume kube-api-access-66hps for pod kube-system/kube-proxy-xgsxq: failed to sync configmap cache: timed out waiting for the condition Mar 17 18:42:40.188497 kubelet[2258]: E0317 18:42:40.187519 2258 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9c6c0b70-8b17-4bd2-989b-3c9a278429e1-kube-api-access-66hps podName:9c6c0b70-8b17-4bd2-989b-3c9a278429e1 nodeName:}" failed. No retries permitted until 2025-03-17 18:42:40.687129333 +0000 UTC m=+16.920920720 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-66hps" (UniqueName: "kubernetes.io/projected/9c6c0b70-8b17-4bd2-989b-3c9a278429e1-kube-api-access-66hps") pod "kube-proxy-xgsxq" (UID: "9c6c0b70-8b17-4bd2-989b-3c9a278429e1") : failed to sync configmap cache: timed out waiting for the condition Mar 17 18:42:40.205546 kubelet[2258]: E0317 18:42:40.205495 2258 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Mar 17 18:42:40.205546 kubelet[2258]: E0317 18:42:40.205557 2258 projected.go:200] Error preparing data for projected volume kube-api-access-qphnd for pod kube-system/cilium-zztvw: failed to sync configmap cache: timed out waiting for the condition Mar 17 18:42:40.205878 kubelet[2258]: E0317 18:42:40.205645 2258 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/266a1e42-d052-408e-a36e-7da75f55f69f-kube-api-access-qphnd podName:266a1e42-d052-408e-a36e-7da75f55f69f nodeName:}" failed. No retries permitted until 2025-03-17 18:42:40.705620386 +0000 UTC m=+16.939411771 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-qphnd" (UniqueName: "kubernetes.io/projected/266a1e42-d052-408e-a36e-7da75f55f69f-kube-api-access-qphnd") pod "cilium-zztvw" (UID: "266a1e42-d052-408e-a36e-7da75f55f69f") : failed to sync configmap cache: timed out waiting for the condition Mar 17 18:42:40.392385 env[1335]: time="2025-03-17T18:42:40.392308217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-xqxw5,Uid:04780724-7228-47d9-965c-fe435db91b1e,Namespace:kube-system,Attempt:0,}" Mar 17 18:42:40.429826 env[1335]: time="2025-03-17T18:42:40.428901536Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:42:40.429826 env[1335]: time="2025-03-17T18:42:40.428975507Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:42:40.430106 env[1335]: time="2025-03-17T18:42:40.428994192Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:42:40.430391 env[1335]: time="2025-03-17T18:42:40.430318344Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ae62f2624b719dc761785ca349c6118b61da12d569a7030f86c60e2d56e26cc7 pid=2356 runtime=io.containerd.runc.v2 Mar 17 18:42:40.530209 env[1335]: time="2025-03-17T18:42:40.530154174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-xqxw5,Uid:04780724-7228-47d9-965c-fe435db91b1e,Namespace:kube-system,Attempt:0,} returns sandbox id \"ae62f2624b719dc761785ca349c6118b61da12d569a7030f86c60e2d56e26cc7\"" Mar 17 18:42:40.533647 env[1335]: time="2025-03-17T18:42:40.533594738Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 17 18:42:41.012716 systemd[1]: run-containerd-runc-k8s.io-ae62f2624b719dc761785ca349c6118b61da12d569a7030f86c60e2d56e26cc7-runc.suuM9L.mount: Deactivated successfully. 
Mar 17 18:42:41.040331 env[1335]: time="2025-03-17T18:42:41.040243041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zztvw,Uid:266a1e42-d052-408e-a36e-7da75f55f69f,Namespace:kube-system,Attempt:0,}" Mar 17 18:42:41.048721 env[1335]: time="2025-03-17T18:42:41.048642260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xgsxq,Uid:9c6c0b70-8b17-4bd2-989b-3c9a278429e1,Namespace:kube-system,Attempt:0,}" Mar 17 18:42:41.074687 env[1335]: time="2025-03-17T18:42:41.074561419Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:42:41.074914 env[1335]: time="2025-03-17T18:42:41.074676606Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:42:41.074914 env[1335]: time="2025-03-17T18:42:41.074696766Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:42:41.079357 env[1335]: time="2025-03-17T18:42:41.075122211Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f987a26c2f04950d2a98d39f5571c9e0b410a5ff38a62994c715e52d062c1991 pid=2405 runtime=io.containerd.runc.v2 Mar 17 18:42:41.102966 env[1335]: time="2025-03-17T18:42:41.102858481Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:42:41.103206 env[1335]: time="2025-03-17T18:42:41.102918916Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:42:41.103206 env[1335]: time="2025-03-17T18:42:41.102953997Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:42:41.103371 env[1335]: time="2025-03-17T18:42:41.103294499Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3ff22b7cdff4737f172d0b3befae50448d78d61cc6db26f340fa551ebffc74ae pid=2425 runtime=io.containerd.runc.v2 Mar 17 18:42:41.179836 env[1335]: time="2025-03-17T18:42:41.179758536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zztvw,Uid:266a1e42-d052-408e-a36e-7da75f55f69f,Namespace:kube-system,Attempt:0,} returns sandbox id \"f987a26c2f04950d2a98d39f5571c9e0b410a5ff38a62994c715e52d062c1991\"" Mar 17 18:42:41.206905 env[1335]: time="2025-03-17T18:42:41.206838030Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xgsxq,Uid:9c6c0b70-8b17-4bd2-989b-3c9a278429e1,Namespace:kube-system,Attempt:0,} returns sandbox id \"3ff22b7cdff4737f172d0b3befae50448d78d61cc6db26f340fa551ebffc74ae\"" Mar 17 18:42:41.212762 env[1335]: time="2025-03-17T18:42:41.212707548Z" level=info msg="CreateContainer within sandbox \"3ff22b7cdff4737f172d0b3befae50448d78d61cc6db26f340fa551ebffc74ae\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 17 18:42:41.248155 env[1335]: time="2025-03-17T18:42:41.248098452Z" level=info msg="CreateContainer within sandbox \"3ff22b7cdff4737f172d0b3befae50448d78d61cc6db26f340fa551ebffc74ae\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"510499fe7c76d45a541590e702a03d4f6a86f916cdee172aaebdf4e02768fdc2\"" Mar 17 18:42:41.251105 env[1335]: time="2025-03-17T18:42:41.249284776Z" level=info msg="StartContainer for \"510499fe7c76d45a541590e702a03d4f6a86f916cdee172aaebdf4e02768fdc2\"" Mar 17 18:42:41.339470 env[1335]: time="2025-03-17T18:42:41.334653497Z" level=info msg="StartContainer for \"510499fe7c76d45a541590e702a03d4f6a86f916cdee172aaebdf4e02768fdc2\" returns successfully" Mar 17 18:42:42.459808 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount3835558668.mount: Deactivated successfully. Mar 17 18:42:43.294876 env[1335]: time="2025-03-17T18:42:43.294807185Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:42:43.298473 env[1335]: time="2025-03-17T18:42:43.298395129Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:42:43.300577 env[1335]: time="2025-03-17T18:42:43.300535219Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:42:43.301270 env[1335]: time="2025-03-17T18:42:43.301223577Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Mar 17 18:42:43.304512 env[1335]: time="2025-03-17T18:42:43.304470507Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 17 18:42:43.305954 env[1335]: time="2025-03-17T18:42:43.305893552Z" level=info msg="CreateContainer within sandbox \"ae62f2624b719dc761785ca349c6118b61da12d569a7030f86c60e2d56e26cc7\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Mar 17 18:42:43.329455 env[1335]: time="2025-03-17T18:42:43.329357039Z" level=info msg="CreateContainer within sandbox \"ae62f2624b719dc761785ca349c6118b61da12d569a7030f86c60e2d56e26cc7\" for 
&ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"c9e3055e44826b21dc44cbda81d7e60f8170a7f5e6c1ff5dc5ba7965eacc3440\"" Mar 17 18:42:43.331908 env[1335]: time="2025-03-17T18:42:43.330880768Z" level=info msg="StartContainer for \"c9e3055e44826b21dc44cbda81d7e60f8170a7f5e6c1ff5dc5ba7965eacc3440\"" Mar 17 18:42:43.410788 env[1335]: time="2025-03-17T18:42:43.410725847Z" level=info msg="StartContainer for \"c9e3055e44826b21dc44cbda81d7e60f8170a7f5e6c1ff5dc5ba7965eacc3440\" returns successfully" Mar 17 18:42:44.136261 kubelet[2258]: I0317 18:42:44.136179 2258 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-xgsxq" podStartSLOduration=6.136153381 podStartE2EDuration="6.136153381s" podCreationTimestamp="2025-03-17 18:42:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:42:42.073922133 +0000 UTC m=+18.307713538" watchObservedRunningTime="2025-03-17 18:42:44.136153381 +0000 UTC m=+20.369944776" Mar 17 18:42:49.870068 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3244970096.mount: Deactivated successfully. 
Mar 17 18:42:53.325596 env[1335]: time="2025-03-17T18:42:53.325519289Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:42:53.330405 env[1335]: time="2025-03-17T18:42:53.330305887Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:42:53.337853 env[1335]: time="2025-03-17T18:42:53.337788292Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Mar 17 18:42:53.338695 env[1335]: time="2025-03-17T18:42:53.338330531Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:42:53.342808 env[1335]: time="2025-03-17T18:42:53.342751812Z" level=info msg="CreateContainer within sandbox \"f987a26c2f04950d2a98d39f5571c9e0b410a5ff38a62994c715e52d062c1991\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 17 18:42:53.364270 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3630244194.mount: Deactivated successfully. Mar 17 18:42:53.378153 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount577078035.mount: Deactivated successfully. 
Mar 17 18:42:53.382522 env[1335]: time="2025-03-17T18:42:53.382412818Z" level=info msg="CreateContainer within sandbox \"f987a26c2f04950d2a98d39f5571c9e0b410a5ff38a62994c715e52d062c1991\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0fe5673a18272aeec044a914b0033f7a98603b3e0ead4a4a5f4420bd6490a817\"" Mar 17 18:42:53.383655 env[1335]: time="2025-03-17T18:42:53.383583658Z" level=info msg="StartContainer for \"0fe5673a18272aeec044a914b0033f7a98603b3e0ead4a4a5f4420bd6490a817\"" Mar 17 18:42:53.472986 env[1335]: time="2025-03-17T18:42:53.468544196Z" level=info msg="StartContainer for \"0fe5673a18272aeec044a914b0033f7a98603b3e0ead4a4a5f4420bd6490a817\" returns successfully" Mar 17 18:42:54.109980 kubelet[2258]: I0317 18:42:54.109901 2258 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-xqxw5" podStartSLOduration=12.339581372 podStartE2EDuration="15.109848062s" podCreationTimestamp="2025-03-17 18:42:39 +0000 UTC" firstStartedPulling="2025-03-17 18:42:40.532635169 +0000 UTC m=+16.766426556" lastFinishedPulling="2025-03-17 18:42:43.30290185 +0000 UTC m=+19.536693246" observedRunningTime="2025-03-17 18:42:44.350167535 +0000 UTC m=+20.583958929" watchObservedRunningTime="2025-03-17 18:42:54.109848062 +0000 UTC m=+30.343639458" Mar 17 18:42:54.223267 env[1335]: time="2025-03-17T18:42:54.223140186Z" level=error msg="collecting metrics for 0fe5673a18272aeec044a914b0033f7a98603b3e0ead4a4a5f4420bd6490a817" error="cgroups: cgroup deleted: unknown" Mar 17 18:42:54.357113 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0fe5673a18272aeec044a914b0033f7a98603b3e0ead4a4a5f4420bd6490a817-rootfs.mount: Deactivated successfully. 
Mar 17 18:42:55.870779 env[1335]: time="2025-03-17T18:42:55.870702409Z" level=info msg="shim disconnected" id=0fe5673a18272aeec044a914b0033f7a98603b3e0ead4a4a5f4420bd6490a817 Mar 17 18:42:55.870779 env[1335]: time="2025-03-17T18:42:55.870778022Z" level=warning msg="cleaning up after shim disconnected" id=0fe5673a18272aeec044a914b0033f7a98603b3e0ead4a4a5f4420bd6490a817 namespace=k8s.io Mar 17 18:42:55.871629 env[1335]: time="2025-03-17T18:42:55.870792465Z" level=info msg="cleaning up dead shim" Mar 17 18:42:55.882660 env[1335]: time="2025-03-17T18:42:55.882591132Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:42:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2724 runtime=io.containerd.runc.v2\n" Mar 17 18:42:56.100081 env[1335]: time="2025-03-17T18:42:56.098639447Z" level=info msg="CreateContainer within sandbox \"f987a26c2f04950d2a98d39f5571c9e0b410a5ff38a62994c715e52d062c1991\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 17 18:42:56.122042 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2362192505.mount: Deactivated successfully. Mar 17 18:42:56.148966 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount584262615.mount: Deactivated successfully. 
Mar 17 18:42:56.151602 env[1335]: time="2025-03-17T18:42:56.151535469Z" level=info msg="CreateContainer within sandbox \"f987a26c2f04950d2a98d39f5571c9e0b410a5ff38a62994c715e52d062c1991\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c2a9232b709bd6bac3d86b0fb6877bbfb19e7ff156903f319b9c429271dd49a5\"" Mar 17 18:42:56.152510 env[1335]: time="2025-03-17T18:42:56.152418711Z" level=info msg="StartContainer for \"c2a9232b709bd6bac3d86b0fb6877bbfb19e7ff156903f319b9c429271dd49a5\"" Mar 17 18:42:56.244473 env[1335]: time="2025-03-17T18:42:56.243912252Z" level=info msg="StartContainer for \"c2a9232b709bd6bac3d86b0fb6877bbfb19e7ff156903f319b9c429271dd49a5\" returns successfully" Mar 17 18:42:56.248592 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 17 18:42:56.249114 systemd[1]: Stopped systemd-sysctl.service. Mar 17 18:42:56.249364 systemd[1]: Stopping systemd-sysctl.service... Mar 17 18:42:56.253273 systemd[1]: Starting systemd-sysctl.service... Mar 17 18:42:56.279375 systemd[1]: Finished systemd-sysctl.service. 
Mar 17 18:42:56.297948 env[1335]: time="2025-03-17T18:42:56.297838770Z" level=info msg="shim disconnected" id=c2a9232b709bd6bac3d86b0fb6877bbfb19e7ff156903f319b9c429271dd49a5 Mar 17 18:42:56.297948 env[1335]: time="2025-03-17T18:42:56.297907620Z" level=warning msg="cleaning up after shim disconnected" id=c2a9232b709bd6bac3d86b0fb6877bbfb19e7ff156903f319b9c429271dd49a5 namespace=k8s.io Mar 17 18:42:56.297948 env[1335]: time="2025-03-17T18:42:56.297925361Z" level=info msg="cleaning up dead shim" Mar 17 18:42:56.311042 env[1335]: time="2025-03-17T18:42:56.310974940Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:42:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2793 runtime=io.containerd.runc.v2\n" Mar 17 18:42:57.104769 env[1335]: time="2025-03-17T18:42:57.104613997Z" level=info msg="CreateContainer within sandbox \"f987a26c2f04950d2a98d39f5571c9e0b410a5ff38a62994c715e52d062c1991\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 17 18:42:57.118266 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c2a9232b709bd6bac3d86b0fb6877bbfb19e7ff156903f319b9c429271dd49a5-rootfs.mount: Deactivated successfully. 
Mar 17 18:42:57.140697 env[1335]: time="2025-03-17T18:42:57.140621555Z" level=info msg="CreateContainer within sandbox \"f987a26c2f04950d2a98d39f5571c9e0b410a5ff38a62994c715e52d062c1991\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"65fa6cd50ecb208cedf0d423c5b9f9f985099acec4ae295b2341527c151d1a3e\"" Mar 17 18:42:57.144753 env[1335]: time="2025-03-17T18:42:57.144702449Z" level=info msg="StartContainer for \"65fa6cd50ecb208cedf0d423c5b9f9f985099acec4ae295b2341527c151d1a3e\"" Mar 17 18:42:57.250690 env[1335]: time="2025-03-17T18:42:57.250638025Z" level=info msg="StartContainer for \"65fa6cd50ecb208cedf0d423c5b9f9f985099acec4ae295b2341527c151d1a3e\" returns successfully" Mar 17 18:42:57.280650 env[1335]: time="2025-03-17T18:42:57.280582175Z" level=info msg="shim disconnected" id=65fa6cd50ecb208cedf0d423c5b9f9f985099acec4ae295b2341527c151d1a3e Mar 17 18:42:57.280650 env[1335]: time="2025-03-17T18:42:57.280666912Z" level=warning msg="cleaning up after shim disconnected" id=65fa6cd50ecb208cedf0d423c5b9f9f985099acec4ae295b2341527c151d1a3e namespace=k8s.io Mar 17 18:42:57.281086 env[1335]: time="2025-03-17T18:42:57.280682444Z" level=info msg="cleaning up dead shim" Mar 17 18:42:57.292819 env[1335]: time="2025-03-17T18:42:57.292753931Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:42:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2854 runtime=io.containerd.runc.v2\n" Mar 17 18:42:58.109626 env[1335]: time="2025-03-17T18:42:58.108570956Z" level=info msg="CreateContainer within sandbox \"f987a26c2f04950d2a98d39f5571c9e0b410a5ff38a62994c715e52d062c1991\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 17 18:42:58.117993 systemd[1]: run-containerd-runc-k8s.io-65fa6cd50ecb208cedf0d423c5b9f9f985099acec4ae295b2341527c151d1a3e-runc.ocVY8b.mount: Deactivated successfully. 
Mar 17 18:42:58.118230 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-65fa6cd50ecb208cedf0d423c5b9f9f985099acec4ae295b2341527c151d1a3e-rootfs.mount: Deactivated successfully. Mar 17 18:42:58.152615 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3589826898.mount: Deactivated successfully. Mar 17 18:42:58.165035 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2540210662.mount: Deactivated successfully. Mar 17 18:42:58.170797 env[1335]: time="2025-03-17T18:42:58.170731973Z" level=info msg="CreateContainer within sandbox \"f987a26c2f04950d2a98d39f5571c9e0b410a5ff38a62994c715e52d062c1991\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7ef45c8d4d16d236243014d5b75bf0b638940a1b6ddfdf150c5b22d005b49816\"" Mar 17 18:42:58.173401 env[1335]: time="2025-03-17T18:42:58.172206123Z" level=info msg="StartContainer for \"7ef45c8d4d16d236243014d5b75bf0b638940a1b6ddfdf150c5b22d005b49816\"" Mar 17 18:42:58.248756 env[1335]: time="2025-03-17T18:42:58.248686957Z" level=info msg="StartContainer for \"7ef45c8d4d16d236243014d5b75bf0b638940a1b6ddfdf150c5b22d005b49816\" returns successfully" Mar 17 18:42:58.277255 env[1335]: time="2025-03-17T18:42:58.277191002Z" level=info msg="shim disconnected" id=7ef45c8d4d16d236243014d5b75bf0b638940a1b6ddfdf150c5b22d005b49816 Mar 17 18:42:58.277255 env[1335]: time="2025-03-17T18:42:58.277259528Z" level=warning msg="cleaning up after shim disconnected" id=7ef45c8d4d16d236243014d5b75bf0b638940a1b6ddfdf150c5b22d005b49816 namespace=k8s.io Mar 17 18:42:58.277723 env[1335]: time="2025-03-17T18:42:58.277274093Z" level=info msg="cleaning up dead shim" Mar 17 18:42:58.290339 env[1335]: time="2025-03-17T18:42:58.290228005Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:42:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2908 runtime=io.containerd.runc.v2\n" Mar 17 18:42:59.126485 env[1335]: time="2025-03-17T18:42:59.119823420Z" level=info msg="CreateContainer 
within sandbox \"f987a26c2f04950d2a98d39f5571c9e0b410a5ff38a62994c715e52d062c1991\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 17 18:42:59.146929 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1755567432.mount: Deactivated successfully. Mar 17 18:42:59.157167 env[1335]: time="2025-03-17T18:42:59.157095905Z" level=info msg="CreateContainer within sandbox \"f987a26c2f04950d2a98d39f5571c9e0b410a5ff38a62994c715e52d062c1991\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9c2fcc2b9b57945a91775b9ed5f66ee4ad6e9b6bcce13cc2f9325da53b36aaff\"" Mar 17 18:42:59.159551 env[1335]: time="2025-03-17T18:42:59.158175134Z" level=info msg="StartContainer for \"9c2fcc2b9b57945a91775b9ed5f66ee4ad6e9b6bcce13cc2f9325da53b36aaff\"" Mar 17 18:42:59.253477 env[1335]: time="2025-03-17T18:42:59.252383915Z" level=info msg="StartContainer for \"9c2fcc2b9b57945a91775b9ed5f66ee4ad6e9b6bcce13cc2f9325da53b36aaff\" returns successfully" Mar 17 18:42:59.398572 kubelet[2258]: I0317 18:42:59.397028 2258 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Mar 17 18:42:59.436920 kubelet[2258]: I0317 18:42:59.436868 2258 topology_manager.go:215] "Topology Admit Handler" podUID="b88b0ea8-799f-4e82-b1f4-cbda1599e71c" podNamespace="kube-system" podName="coredns-7db6d8ff4d-rkbrm" Mar 17 18:42:59.443387 kubelet[2258]: I0317 18:42:59.443343 2258 topology_manager.go:215] "Topology Admit Handler" podUID="64900f25-9631-4f91-baa0-679f6bd9bee7" podNamespace="kube-system" podName="coredns-7db6d8ff4d-kjmms" Mar 17 18:42:59.543199 kubelet[2258]: I0317 18:42:59.543153 2258 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2dw6\" (UniqueName: \"kubernetes.io/projected/64900f25-9631-4f91-baa0-679f6bd9bee7-kube-api-access-k2dw6\") pod \"coredns-7db6d8ff4d-kjmms\" (UID: \"64900f25-9631-4f91-baa0-679f6bd9bee7\") " pod="kube-system/coredns-7db6d8ff4d-kjmms" Mar 17 
18:42:59.543615 kubelet[2258]: I0317 18:42:59.543567 2258 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cs4w\" (UniqueName: \"kubernetes.io/projected/b88b0ea8-799f-4e82-b1f4-cbda1599e71c-kube-api-access-9cs4w\") pod \"coredns-7db6d8ff4d-rkbrm\" (UID: \"b88b0ea8-799f-4e82-b1f4-cbda1599e71c\") " pod="kube-system/coredns-7db6d8ff4d-rkbrm" Mar 17 18:42:59.543873 kubelet[2258]: I0317 18:42:59.543828 2258 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/64900f25-9631-4f91-baa0-679f6bd9bee7-config-volume\") pod \"coredns-7db6d8ff4d-kjmms\" (UID: \"64900f25-9631-4f91-baa0-679f6bd9bee7\") " pod="kube-system/coredns-7db6d8ff4d-kjmms" Mar 17 18:42:59.544139 kubelet[2258]: I0317 18:42:59.544074 2258 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b88b0ea8-799f-4e82-b1f4-cbda1599e71c-config-volume\") pod \"coredns-7db6d8ff4d-rkbrm\" (UID: \"b88b0ea8-799f-4e82-b1f4-cbda1599e71c\") " pod="kube-system/coredns-7db6d8ff4d-rkbrm" Mar 17 18:42:59.742758 env[1335]: time="2025-03-17T18:42:59.742612727Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-rkbrm,Uid:b88b0ea8-799f-4e82-b1f4-cbda1599e71c,Namespace:kube-system,Attempt:0,}" Mar 17 18:42:59.759272 env[1335]: time="2025-03-17T18:42:59.759214572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-kjmms,Uid:64900f25-9631-4f91-baa0-679f6bd9bee7,Namespace:kube-system,Attempt:0,}" Mar 17 18:43:01.577834 systemd-networkd[1078]: cilium_host: Link UP Mar 17 18:43:01.578036 systemd-networkd[1078]: cilium_net: Link UP Mar 17 18:43:01.578043 systemd-networkd[1078]: cilium_net: Gained carrier Mar 17 18:43:01.585541 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Mar 17 18:43:01.586325 
systemd-networkd[1078]: cilium_host: Gained carrier Mar 17 18:43:01.588650 systemd-networkd[1078]: cilium_host: Gained IPv6LL Mar 17 18:43:01.746200 systemd-networkd[1078]: cilium_vxlan: Link UP Mar 17 18:43:01.746212 systemd-networkd[1078]: cilium_vxlan: Gained carrier Mar 17 18:43:01.946708 systemd-networkd[1078]: cilium_net: Gained IPv6LL Mar 17 18:43:02.039471 kernel: NET: Registered PF_ALG protocol family Mar 17 18:43:02.946676 systemd-networkd[1078]: lxc_health: Link UP Mar 17 18:43:02.990544 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Mar 17 18:43:02.991145 systemd-networkd[1078]: lxc_health: Gained carrier Mar 17 18:43:03.078682 kubelet[2258]: I0317 18:43:03.078603 2258 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-zztvw" podStartSLOduration=12.920937273 podStartE2EDuration="25.078577725s" podCreationTimestamp="2025-03-17 18:42:38 +0000 UTC" firstStartedPulling="2025-03-17 18:42:41.18211846 +0000 UTC m=+17.415909837" lastFinishedPulling="2025-03-17 18:42:53.339758898 +0000 UTC m=+29.573550289" observedRunningTime="2025-03-17 18:43:00.148797225 +0000 UTC m=+36.382588633" watchObservedRunningTime="2025-03-17 18:43:03.078577725 +0000 UTC m=+39.312369122" Mar 17 18:43:03.235134 systemd-networkd[1078]: cilium_vxlan: Gained IPv6LL Mar 17 18:43:03.316710 systemd-networkd[1078]: lxc53c9a2b2f2b5: Link UP Mar 17 18:43:03.328486 kernel: eth0: renamed from tmpe0e3f Mar 17 18:43:03.344883 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc53c9a2b2f2b5: link becomes ready Mar 17 18:43:03.346756 systemd-networkd[1078]: lxc53c9a2b2f2b5: Gained carrier Mar 17 18:43:03.374043 systemd-networkd[1078]: lxcfe6ce70e9498: Link UP Mar 17 18:43:03.384326 kernel: eth0: renamed from tmp87483 Mar 17 18:43:03.411675 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcfe6ce70e9498: link becomes ready Mar 17 18:43:03.420942 systemd-networkd[1078]: lxcfe6ce70e9498: Gained carrier Mar 17 18:43:04.515212 systemd-networkd[1078]: 
lxcfe6ce70e9498: Gained IPv6LL
Mar 17 18:43:04.772207 systemd-networkd[1078]: lxc53c9a2b2f2b5: Gained IPv6LL
Mar 17 18:43:04.835050 systemd-networkd[1078]: lxc_health: Gained IPv6LL
Mar 17 18:43:08.606609 env[1335]: time="2025-03-17T18:43:08.606521281Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 18:43:08.607346 env[1335]: time="2025-03-17T18:43:08.607251624Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 18:43:08.607561 env[1335]: time="2025-03-17T18:43:08.607524634Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 18:43:08.607952 env[1335]: time="2025-03-17T18:43:08.607899363Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/874835bb8754bc37418afa1ab402b8896f8e2b8f7b677c54c29874d8cd432d62 pid=3443 runtime=io.containerd.runc.v2
Mar 17 18:43:08.645125 env[1335]: time="2025-03-17T18:43:08.644826436Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 18:43:08.645125 env[1335]: time="2025-03-17T18:43:08.644887578Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 18:43:08.645125 env[1335]: time="2025-03-17T18:43:08.644907443Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 18:43:08.645754 env[1335]: time="2025-03-17T18:43:08.645654054Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e0e3f0bd0a127359e05c4e3072fa82e082d10a570b683b266d6febef8c4b24e1 pid=3458 runtime=io.containerd.runc.v2
Mar 17 18:43:08.803371 env[1335]: time="2025-03-17T18:43:08.803312340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-kjmms,Uid:64900f25-9631-4f91-baa0-679f6bd9bee7,Namespace:kube-system,Attempt:0,} returns sandbox id \"874835bb8754bc37418afa1ab402b8896f8e2b8f7b677c54c29874d8cd432d62\""
Mar 17 18:43:08.809320 env[1335]: time="2025-03-17T18:43:08.809262523Z" level=info msg="CreateContainer within sandbox \"874835bb8754bc37418afa1ab402b8896f8e2b8f7b677c54c29874d8cd432d62\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 17 18:43:08.846759 env[1335]: time="2025-03-17T18:43:08.846687741Z" level=info msg="CreateContainer within sandbox \"874835bb8754bc37418afa1ab402b8896f8e2b8f7b677c54c29874d8cd432d62\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"763929b1b0c66542d5e76e7db4e2f51b2a18555edf3578f055e0f0d3cd71b9b0\""
Mar 17 18:43:08.848022 env[1335]: time="2025-03-17T18:43:08.847975723Z" level=info msg="StartContainer for \"763929b1b0c66542d5e76e7db4e2f51b2a18555edf3578f055e0f0d3cd71b9b0\""
Mar 17 18:43:08.908593 env[1335]: time="2025-03-17T18:43:08.907663567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-rkbrm,Uid:b88b0ea8-799f-4e82-b1f4-cbda1599e71c,Namespace:kube-system,Attempt:0,} returns sandbox id \"e0e3f0bd0a127359e05c4e3072fa82e082d10a570b683b266d6febef8c4b24e1\""
Mar 17 18:43:08.945702 env[1335]: time="2025-03-17T18:43:08.945645339Z" level=info msg="CreateContainer within sandbox \"e0e3f0bd0a127359e05c4e3072fa82e082d10a570b683b266d6febef8c4b24e1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 17 18:43:08.977902 env[1335]: time="2025-03-17T18:43:08.977829436Z" level=info msg="CreateContainer within sandbox \"e0e3f0bd0a127359e05c4e3072fa82e082d10a570b683b266d6febef8c4b24e1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b688295b198280e277dfb96bfc2b8c3d97c31a657762a07b69658e1fc372d6b1\""
Mar 17 18:43:08.981537 env[1335]: time="2025-03-17T18:43:08.981331633Z" level=info msg="StartContainer for \"b688295b198280e277dfb96bfc2b8c3d97c31a657762a07b69658e1fc372d6b1\""
Mar 17 18:43:09.002862 env[1335]: time="2025-03-17T18:43:09.002797097Z" level=info msg="StartContainer for \"763929b1b0c66542d5e76e7db4e2f51b2a18555edf3578f055e0f0d3cd71b9b0\" returns successfully"
Mar 17 18:43:09.110113 env[1335]: time="2025-03-17T18:43:09.110043186Z" level=info msg="StartContainer for \"b688295b198280e277dfb96bfc2b8c3d97c31a657762a07b69658e1fc372d6b1\" returns successfully"
Mar 17 18:43:09.196048 kubelet[2258]: I0317 18:43:09.195855 2258 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-kjmms" podStartSLOduration=30.195830068 podStartE2EDuration="30.195830068s" podCreationTimestamp="2025-03-17 18:42:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:43:09.172326297 +0000 UTC m=+45.406117693" watchObservedRunningTime="2025-03-17 18:43:09.195830068 +0000 UTC m=+45.429621465"
Mar 17 18:43:09.220054 kubelet[2258]: I0317 18:43:09.219953 2258 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-rkbrm" podStartSLOduration=30.219931034 podStartE2EDuration="30.219931034s" podCreationTimestamp="2025-03-17 18:42:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:43:09.196238199 +0000 UTC m=+45.430029594" watchObservedRunningTime="2025-03-17 18:43:09.219931034 +0000 UTC m=+45.453722430"
Mar 17 18:43:09.622663 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount894193394.mount: Deactivated successfully.
Mar 17 18:43:16.598816 systemd[1]: Started sshd@6-10.128.0.50:22-139.178.89.65:54842.service.
Mar 17 18:43:16.889732 sshd[3605]: Accepted publickey for core from 139.178.89.65 port 54842 ssh2: RSA SHA256:MwXPWHAmIHbbkjBOl9game4w0Y2Rjfi7lGZx9rtQRJk
Mar 17 18:43:16.891622 sshd[3605]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:43:16.899178 systemd-logind[1317]: New session 6 of user core.
Mar 17 18:43:16.900138 systemd[1]: Started session-6.scope.
Mar 17 18:43:17.199894 sshd[3605]: pam_unix(sshd:session): session closed for user core
Mar 17 18:43:17.205070 systemd[1]: sshd@6-10.128.0.50:22-139.178.89.65:54842.service: Deactivated successfully.
Mar 17 18:43:17.206764 systemd[1]: session-6.scope: Deactivated successfully.
Mar 17 18:43:17.207735 systemd-logind[1317]: Session 6 logged out. Waiting for processes to exit.
Mar 17 18:43:17.210203 systemd-logind[1317]: Removed session 6.
Mar 17 18:43:22.244296 systemd[1]: Started sshd@7-10.128.0.50:22-139.178.89.65:44950.service.
Mar 17 18:43:22.535157 sshd[3620]: Accepted publickey for core from 139.178.89.65 port 44950 ssh2: RSA SHA256:MwXPWHAmIHbbkjBOl9game4w0Y2Rjfi7lGZx9rtQRJk
Mar 17 18:43:22.537974 sshd[3620]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:43:22.545186 systemd[1]: Started session-7.scope.
Mar 17 18:43:22.547228 systemd-logind[1317]: New session 7 of user core.
Mar 17 18:43:22.830293 sshd[3620]: pam_unix(sshd:session): session closed for user core
Mar 17 18:43:22.835628 systemd[1]: sshd@7-10.128.0.50:22-139.178.89.65:44950.service: Deactivated successfully.
Mar 17 18:43:22.836995 systemd[1]: session-7.scope: Deactivated successfully.
Mar 17 18:43:22.838476 systemd-logind[1317]: Session 7 logged out. Waiting for processes to exit.
Mar 17 18:43:22.840155 systemd-logind[1317]: Removed session 7.
Mar 17 18:43:27.877610 systemd[1]: Started sshd@8-10.128.0.50:22-139.178.89.65:44954.service.
Mar 17 18:43:28.174121 sshd[3636]: Accepted publickey for core from 139.178.89.65 port 44954 ssh2: RSA SHA256:MwXPWHAmIHbbkjBOl9game4w0Y2Rjfi7lGZx9rtQRJk
Mar 17 18:43:28.176315 sshd[3636]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:43:28.183743 systemd[1]: Started session-8.scope.
Mar 17 18:43:28.185063 systemd-logind[1317]: New session 8 of user core.
Mar 17 18:43:28.463881 sshd[3636]: pam_unix(sshd:session): session closed for user core
Mar 17 18:43:28.468946 systemd[1]: sshd@8-10.128.0.50:22-139.178.89.65:44954.service: Deactivated successfully.
Mar 17 18:43:28.471234 systemd[1]: session-8.scope: Deactivated successfully.
Mar 17 18:43:28.472431 systemd-logind[1317]: Session 8 logged out. Waiting for processes to exit.
Mar 17 18:43:28.474144 systemd-logind[1317]: Removed session 8.
Mar 17 18:43:33.507643 systemd[1]: Started sshd@9-10.128.0.50:22-139.178.89.65:37160.service.
Mar 17 18:43:33.796429 sshd[3649]: Accepted publickey for core from 139.178.89.65 port 37160 ssh2: RSA SHA256:MwXPWHAmIHbbkjBOl9game4w0Y2Rjfi7lGZx9rtQRJk
Mar 17 18:43:33.798604 sshd[3649]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:43:33.806311 systemd[1]: Started session-9.scope.
Mar 17 18:43:33.806982 systemd-logind[1317]: New session 9 of user core.
Mar 17 18:43:34.084743 sshd[3649]: pam_unix(sshd:session): session closed for user core
Mar 17 18:43:34.089720 systemd[1]: sshd@9-10.128.0.50:22-139.178.89.65:37160.service: Deactivated successfully.
Mar 17 18:43:34.091972 systemd[1]: session-9.scope: Deactivated successfully.
Mar 17 18:43:34.092688 systemd-logind[1317]: Session 9 logged out. Waiting for processes to exit.
Mar 17 18:43:34.095047 systemd-logind[1317]: Removed session 9.
Mar 17 18:43:39.130215 systemd[1]: Started sshd@10-10.128.0.50:22-139.178.89.65:37176.service.
Mar 17 18:43:39.425388 sshd[3663]: Accepted publickey for core from 139.178.89.65 port 37176 ssh2: RSA SHA256:MwXPWHAmIHbbkjBOl9game4w0Y2Rjfi7lGZx9rtQRJk
Mar 17 18:43:39.427271 sshd[3663]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:43:39.434667 systemd[1]: Started session-10.scope.
Mar 17 18:43:39.435587 systemd-logind[1317]: New session 10 of user core.
Mar 17 18:43:39.716668 sshd[3663]: pam_unix(sshd:session): session closed for user core
Mar 17 18:43:39.721704 systemd[1]: sshd@10-10.128.0.50:22-139.178.89.65:37176.service: Deactivated successfully.
Mar 17 18:43:39.724222 systemd[1]: session-10.scope: Deactivated successfully.
Mar 17 18:43:39.724931 systemd-logind[1317]: Session 10 logged out. Waiting for processes to exit.
Mar 17 18:43:39.727564 systemd-logind[1317]: Removed session 10.
Mar 17 18:43:39.762549 systemd[1]: Started sshd@11-10.128.0.50:22-139.178.89.65:37184.service.
Mar 17 18:43:40.058536 sshd[3676]: Accepted publickey for core from 139.178.89.65 port 37184 ssh2: RSA SHA256:MwXPWHAmIHbbkjBOl9game4w0Y2Rjfi7lGZx9rtQRJk
Mar 17 18:43:40.059740 sshd[3676]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:43:40.067957 systemd[1]: Started session-11.scope.
Mar 17 18:43:40.068545 systemd-logind[1317]: New session 11 of user core.
Mar 17 18:43:40.400766 sshd[3676]: pam_unix(sshd:session): session closed for user core
Mar 17 18:43:40.407099 systemd-logind[1317]: Session 11 logged out. Waiting for processes to exit.
Mar 17 18:43:40.407591 systemd[1]: sshd@11-10.128.0.50:22-139.178.89.65:37184.service: Deactivated successfully.
Mar 17 18:43:40.409021 systemd[1]: session-11.scope: Deactivated successfully.
Mar 17 18:43:40.410890 systemd-logind[1317]: Removed session 11.
Mar 17 18:43:40.443656 systemd[1]: Started sshd@12-10.128.0.50:22-139.178.89.65:37188.service.
Mar 17 18:43:40.734724 sshd[3687]: Accepted publickey for core from 139.178.89.65 port 37188 ssh2: RSA SHA256:MwXPWHAmIHbbkjBOl9game4w0Y2Rjfi7lGZx9rtQRJk
Mar 17 18:43:40.737100 sshd[3687]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:43:40.743879 systemd-logind[1317]: New session 12 of user core.
Mar 17 18:43:40.744453 systemd[1]: Started session-12.scope.
Mar 17 18:43:41.025940 sshd[3687]: pam_unix(sshd:session): session closed for user core
Mar 17 18:43:41.030759 systemd[1]: sshd@12-10.128.0.50:22-139.178.89.65:37188.service: Deactivated successfully.
Mar 17 18:43:41.033337 systemd[1]: session-12.scope: Deactivated successfully.
Mar 17 18:43:41.034512 systemd-logind[1317]: Session 12 logged out. Waiting for processes to exit.
Mar 17 18:43:41.036540 systemd-logind[1317]: Removed session 12.
Mar 17 18:43:46.071535 systemd[1]: Started sshd@13-10.128.0.50:22-139.178.89.65:58516.service.
Mar 17 18:43:46.366725 sshd[3702]: Accepted publickey for core from 139.178.89.65 port 58516 ssh2: RSA SHA256:MwXPWHAmIHbbkjBOl9game4w0Y2Rjfi7lGZx9rtQRJk
Mar 17 18:43:46.368598 sshd[3702]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:43:46.376360 systemd[1]: Started session-13.scope.
Mar 17 18:43:46.377645 systemd-logind[1317]: New session 13 of user core.
Mar 17 18:43:46.663939 sshd[3702]: pam_unix(sshd:session): session closed for user core
Mar 17 18:43:46.669292 systemd[1]: sshd@13-10.128.0.50:22-139.178.89.65:58516.service: Deactivated successfully.
Mar 17 18:43:46.671858 systemd[1]: session-13.scope: Deactivated successfully.
Mar 17 18:43:46.672675 systemd-logind[1317]: Session 13 logged out. Waiting for processes to exit.
Mar 17 18:43:46.674894 systemd-logind[1317]: Removed session 13.
Mar 17 18:43:51.709974 systemd[1]: Started sshd@14-10.128.0.50:22-139.178.89.65:33052.service.
Mar 17 18:43:52.005398 sshd[3715]: Accepted publickey for core from 139.178.89.65 port 33052 ssh2: RSA SHA256:MwXPWHAmIHbbkjBOl9game4w0Y2Rjfi7lGZx9rtQRJk
Mar 17 18:43:52.007685 sshd[3715]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:43:52.015310 systemd[1]: Started session-14.scope.
Mar 17 18:43:52.015669 systemd-logind[1317]: New session 14 of user core.
Mar 17 18:43:52.298667 sshd[3715]: pam_unix(sshd:session): session closed for user core
Mar 17 18:43:52.303584 systemd[1]: sshd@14-10.128.0.50:22-139.178.89.65:33052.service: Deactivated successfully.
Mar 17 18:43:52.304947 systemd[1]: session-14.scope: Deactivated successfully.
Mar 17 18:43:52.305753 systemd-logind[1317]: Session 14 logged out. Waiting for processes to exit.
Mar 17 18:43:52.307038 systemd-logind[1317]: Removed session 14.
Mar 17 18:43:52.342937 systemd[1]: Started sshd@15-10.128.0.50:22-139.178.89.65:33062.service.
Mar 17 18:43:52.636255 sshd[3728]: Accepted publickey for core from 139.178.89.65 port 33062 ssh2: RSA SHA256:MwXPWHAmIHbbkjBOl9game4w0Y2Rjfi7lGZx9rtQRJk
Mar 17 18:43:52.638330 sshd[3728]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:43:52.645632 systemd-logind[1317]: New session 15 of user core.
Mar 17 18:43:52.646655 systemd[1]: Started session-15.scope.
Mar 17 18:43:52.994204 sshd[3728]: pam_unix(sshd:session): session closed for user core
Mar 17 18:43:52.999636 systemd[1]: sshd@15-10.128.0.50:22-139.178.89.65:33062.service: Deactivated successfully.
Mar 17 18:43:53.000945 systemd[1]: session-15.scope: Deactivated successfully.
Mar 17 18:43:53.001698 systemd-logind[1317]: Session 15 logged out. Waiting for processes to exit.
Mar 17 18:43:53.003302 systemd-logind[1317]: Removed session 15.
Mar 17 18:43:53.040216 systemd[1]: Started sshd@16-10.128.0.50:22-139.178.89.65:33068.service.
Mar 17 18:43:53.333537 sshd[3739]: Accepted publickey for core from 139.178.89.65 port 33068 ssh2: RSA SHA256:MwXPWHAmIHbbkjBOl9game4w0Y2Rjfi7lGZx9rtQRJk
Mar 17 18:43:53.335627 sshd[3739]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:43:53.342406 systemd-logind[1317]: New session 16 of user core.
Mar 17 18:43:53.343254 systemd[1]: Started session-16.scope.
Mar 17 18:43:55.254794 sshd[3739]: pam_unix(sshd:session): session closed for user core
Mar 17 18:43:55.259854 systemd[1]: sshd@16-10.128.0.50:22-139.178.89.65:33068.service: Deactivated successfully.
Mar 17 18:43:55.261649 systemd[1]: session-16.scope: Deactivated successfully.
Mar 17 18:43:55.261722 systemd-logind[1317]: Session 16 logged out. Waiting for processes to exit.
Mar 17 18:43:55.265676 systemd-logind[1317]: Removed session 16.
Mar 17 18:43:55.299054 systemd[1]: Started sshd@17-10.128.0.50:22-139.178.89.65:33084.service.
Mar 17 18:43:55.597517 sshd[3757]: Accepted publickey for core from 139.178.89.65 port 33084 ssh2: RSA SHA256:MwXPWHAmIHbbkjBOl9game4w0Y2Rjfi7lGZx9rtQRJk
Mar 17 18:43:55.595878 sshd[3757]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:43:55.605872 systemd[1]: Started session-17.scope.
Mar 17 18:43:55.606508 systemd-logind[1317]: New session 17 of user core.
Mar 17 18:43:56.030238 sshd[3757]: pam_unix(sshd:session): session closed for user core
Mar 17 18:43:56.035048 systemd[1]: sshd@17-10.128.0.50:22-139.178.89.65:33084.service: Deactivated successfully.
Mar 17 18:43:56.037371 systemd[1]: session-17.scope: Deactivated successfully.
Mar 17 18:43:56.038501 systemd-logind[1317]: Session 17 logged out. Waiting for processes to exit.
Mar 17 18:43:56.041014 systemd-logind[1317]: Removed session 17.
Mar 17 18:43:56.077826 systemd[1]: Started sshd@18-10.128.0.50:22-139.178.89.65:33098.service.
Mar 17 18:43:56.378702 sshd[3768]: Accepted publickey for core from 139.178.89.65 port 33098 ssh2: RSA SHA256:MwXPWHAmIHbbkjBOl9game4w0Y2Rjfi7lGZx9rtQRJk
Mar 17 18:43:56.380605 sshd[3768]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:43:56.388031 systemd[1]: Started session-18.scope.
Mar 17 18:43:56.389267 systemd-logind[1317]: New session 18 of user core.
Mar 17 18:43:56.665601 sshd[3768]: pam_unix(sshd:session): session closed for user core
Mar 17 18:43:56.670839 systemd[1]: sshd@18-10.128.0.50:22-139.178.89.65:33098.service: Deactivated successfully.
Mar 17 18:43:56.672225 systemd[1]: session-18.scope: Deactivated successfully.
Mar 17 18:43:56.675020 systemd-logind[1317]: Session 18 logged out. Waiting for processes to exit.
Mar 17 18:43:56.677561 systemd-logind[1317]: Removed session 18.
Mar 17 18:44:01.709582 systemd[1]: Started sshd@19-10.128.0.50:22-139.178.89.65:60116.service.
Mar 17 18:44:02.001306 sshd[3784]: Accepted publickey for core from 139.178.89.65 port 60116 ssh2: RSA SHA256:MwXPWHAmIHbbkjBOl9game4w0Y2Rjfi7lGZx9rtQRJk
Mar 17 18:44:02.003391 sshd[3784]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:44:02.010808 systemd[1]: Started session-19.scope.
Mar 17 18:44:02.011930 systemd-logind[1317]: New session 19 of user core.
Mar 17 18:44:02.289706 sshd[3784]: pam_unix(sshd:session): session closed for user core
Mar 17 18:44:02.294235 systemd[1]: sshd@19-10.128.0.50:22-139.178.89.65:60116.service: Deactivated successfully.
Mar 17 18:44:02.296201 systemd[1]: session-19.scope: Deactivated successfully.
Mar 17 18:44:02.296643 systemd-logind[1317]: Session 19 logged out. Waiting for processes to exit.
Mar 17 18:44:02.298708 systemd-logind[1317]: Removed session 19.
Mar 17 18:44:05.272862 systemd[1]: Started sshd@20-10.128.0.50:22-165.232.147.130:42262.service.
Mar 17 18:44:05.594317 sshd[3797]: Failed password for root from 165.232.147.130 port 42262 ssh2
Mar 17 18:44:05.637795 sshd[3797]: Received disconnect from 165.232.147.130 port 42262:11: Bye Bye [preauth]
Mar 17 18:44:05.638051 sshd[3797]: Disconnected from authenticating user root 165.232.147.130 port 42262 [preauth]
Mar 17 18:44:05.639798 systemd[1]: sshd@20-10.128.0.50:22-165.232.147.130:42262.service: Deactivated successfully.
Mar 17 18:44:07.337299 systemd[1]: Started sshd@21-10.128.0.50:22-139.178.89.65:60118.service.
Mar 17 18:44:07.638397 sshd[3801]: Accepted publickey for core from 139.178.89.65 port 60118 ssh2: RSA SHA256:MwXPWHAmIHbbkjBOl9game4w0Y2Rjfi7lGZx9rtQRJk
Mar 17 18:44:07.640264 sshd[3801]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:44:07.647658 systemd-logind[1317]: New session 20 of user core.
Mar 17 18:44:07.647819 systemd[1]: Started session-20.scope.
Mar 17 18:44:07.933804 sshd[3801]: pam_unix(sshd:session): session closed for user core
Mar 17 18:44:07.938908 systemd[1]: sshd@21-10.128.0.50:22-139.178.89.65:60118.service: Deactivated successfully.
Mar 17 18:44:07.940258 systemd[1]: session-20.scope: Deactivated successfully.
Mar 17 18:44:07.943421 systemd-logind[1317]: Session 20 logged out. Waiting for processes to exit.
Mar 17 18:44:07.944943 systemd-logind[1317]: Removed session 20.
Mar 17 18:44:12.977939 systemd[1]: Started sshd@22-10.128.0.50:22-139.178.89.65:42634.service.
Mar 17 18:44:13.270581 sshd[3816]: Accepted publickey for core from 139.178.89.65 port 42634 ssh2: RSA SHA256:MwXPWHAmIHbbkjBOl9game4w0Y2Rjfi7lGZx9rtQRJk
Mar 17 18:44:13.272701 sshd[3816]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:44:13.280604 systemd[1]: Started session-21.scope.
Mar 17 18:44:13.281638 systemd-logind[1317]: New session 21 of user core.
Mar 17 18:44:13.564733 sshd[3816]: pam_unix(sshd:session): session closed for user core
Mar 17 18:44:13.569555 systemd-logind[1317]: Session 21 logged out. Waiting for processes to exit.
Mar 17 18:44:13.570017 systemd[1]: sshd@22-10.128.0.50:22-139.178.89.65:42634.service: Deactivated successfully.
Mar 17 18:44:13.571391 systemd[1]: session-21.scope: Deactivated successfully.
Mar 17 18:44:13.572185 systemd-logind[1317]: Removed session 21.
Mar 17 18:44:13.611412 systemd[1]: Started sshd@23-10.128.0.50:22-139.178.89.65:42644.service.
Mar 17 18:44:13.909982 sshd[3829]: Accepted publickey for core from 139.178.89.65 port 42644 ssh2: RSA SHA256:MwXPWHAmIHbbkjBOl9game4w0Y2Rjfi7lGZx9rtQRJk
Mar 17 18:44:13.911982 sshd[3829]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:44:13.919490 systemd[1]: Started session-22.scope.
Mar 17 18:44:13.920771 systemd-logind[1317]: New session 22 of user core.
Mar 17 18:44:15.846229 env[1335]: time="2025-03-17T18:44:15.846166219Z" level=info msg="StopContainer for \"c9e3055e44826b21dc44cbda81d7e60f8170a7f5e6c1ff5dc5ba7965eacc3440\" with timeout 30 (s)"
Mar 17 18:44:15.853945 env[1335]: time="2025-03-17T18:44:15.853857411Z" level=info msg="Stop container \"c9e3055e44826b21dc44cbda81d7e60f8170a7f5e6c1ff5dc5ba7965eacc3440\" with signal terminated"
Mar 17 18:44:15.862560 systemd[1]: run-containerd-runc-k8s.io-9c2fcc2b9b57945a91775b9ed5f66ee4ad6e9b6bcce13cc2f9325da53b36aaff-runc.KCAfSo.mount: Deactivated successfully.
Mar 17 18:44:15.913173 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c9e3055e44826b21dc44cbda81d7e60f8170a7f5e6c1ff5dc5ba7965eacc3440-rootfs.mount: Deactivated successfully.
Mar 17 18:44:15.916225 env[1335]: time="2025-03-17T18:44:15.915335785Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 17 18:44:15.923250 env[1335]: time="2025-03-17T18:44:15.923189448Z" level=info msg="StopContainer for \"9c2fcc2b9b57945a91775b9ed5f66ee4ad6e9b6bcce13cc2f9325da53b36aaff\" with timeout 2 (s)"
Mar 17 18:44:15.923890 env[1335]: time="2025-03-17T18:44:15.923849832Z" level=info msg="Stop container \"9c2fcc2b9b57945a91775b9ed5f66ee4ad6e9b6bcce13cc2f9325da53b36aaff\" with signal terminated"
Mar 17 18:44:15.935932 systemd-networkd[1078]: lxc_health: Link DOWN
Mar 17 18:44:15.935944 systemd-networkd[1078]: lxc_health: Lost carrier
Mar 17 18:44:15.939032 env[1335]: time="2025-03-17T18:44:15.938973419Z" level=info msg="shim disconnected" id=c9e3055e44826b21dc44cbda81d7e60f8170a7f5e6c1ff5dc5ba7965eacc3440
Mar 17 18:44:15.939240 env[1335]: time="2025-03-17T18:44:15.939039354Z" level=warning msg="cleaning up after shim disconnected" id=c9e3055e44826b21dc44cbda81d7e60f8170a7f5e6c1ff5dc5ba7965eacc3440 namespace=k8s.io
Mar 17 18:44:15.939240 env[1335]: time="2025-03-17T18:44:15.939055323Z" level=info msg="cleaning up dead shim"
Mar 17 18:44:15.974671 env[1335]: time="2025-03-17T18:44:15.974615048Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:44:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3881 runtime=io.containerd.runc.v2\n"
Mar 17 18:44:15.978074 env[1335]: time="2025-03-17T18:44:15.978020546Z" level=info msg="StopContainer for \"c9e3055e44826b21dc44cbda81d7e60f8170a7f5e6c1ff5dc5ba7965eacc3440\" returns successfully"
Mar 17 18:44:15.979245 env[1335]: time="2025-03-17T18:44:15.979199385Z" level=info msg="StopPodSandbox for \"ae62f2624b719dc761785ca349c6118b61da12d569a7030f86c60e2d56e26cc7\""
Mar 17 18:44:15.979781 env[1335]: time="2025-03-17T18:44:15.979743572Z" level=info msg="Container to stop \"c9e3055e44826b21dc44cbda81d7e60f8170a7f5e6c1ff5dc5ba7965eacc3440\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 18:44:15.984086 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ae62f2624b719dc761785ca349c6118b61da12d569a7030f86c60e2d56e26cc7-shm.mount: Deactivated successfully.
Mar 17 18:44:16.015746 env[1335]: time="2025-03-17T18:44:16.015681643Z" level=info msg="shim disconnected" id=9c2fcc2b9b57945a91775b9ed5f66ee4ad6e9b6bcce13cc2f9325da53b36aaff
Mar 17 18:44:16.016227 env[1335]: time="2025-03-17T18:44:16.016180388Z" level=warning msg="cleaning up after shim disconnected" id=9c2fcc2b9b57945a91775b9ed5f66ee4ad6e9b6bcce13cc2f9325da53b36aaff namespace=k8s.io
Mar 17 18:44:16.016408 env[1335]: time="2025-03-17T18:44:16.016383723Z" level=info msg="cleaning up dead shim"
Mar 17 18:44:16.032378 env[1335]: time="2025-03-17T18:44:16.032320809Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:44:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3922 runtime=io.containerd.runc.v2\n"
Mar 17 18:44:16.035721 env[1335]: time="2025-03-17T18:44:16.035663928Z" level=info msg="StopContainer for \"9c2fcc2b9b57945a91775b9ed5f66ee4ad6e9b6bcce13cc2f9325da53b36aaff\" returns successfully"
Mar 17 18:44:16.036838 env[1335]: time="2025-03-17T18:44:16.036790212Z" level=info msg="StopPodSandbox for \"f987a26c2f04950d2a98d39f5571c9e0b410a5ff38a62994c715e52d062c1991\""
Mar 17 18:44:16.037341 env[1335]: time="2025-03-17T18:44:16.037302523Z" level=info msg="Container to stop \"c2a9232b709bd6bac3d86b0fb6877bbfb19e7ff156903f319b9c429271dd49a5\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 18:44:16.037691 env[1335]: time="2025-03-17T18:44:16.037644965Z" level=info msg="Container to stop \"65fa6cd50ecb208cedf0d423c5b9f9f985099acec4ae295b2341527c151d1a3e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 18:44:16.037836 env[1335]: time="2025-03-17T18:44:16.037806049Z" level=info msg="Container to stop \"0fe5673a18272aeec044a914b0033f7a98603b3e0ead4a4a5f4420bd6490a817\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 18:44:16.037982 env[1335]: time="2025-03-17T18:44:16.037951453Z" level=info msg="Container to stop \"7ef45c8d4d16d236243014d5b75bf0b638940a1b6ddfdf150c5b22d005b49816\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 18:44:16.038131 env[1335]: time="2025-03-17T18:44:16.038100719Z" level=info msg="Container to stop \"9c2fcc2b9b57945a91775b9ed5f66ee4ad6e9b6bcce13cc2f9325da53b36aaff\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 18:44:16.046960 env[1335]: time="2025-03-17T18:44:16.046905037Z" level=info msg="shim disconnected" id=ae62f2624b719dc761785ca349c6118b61da12d569a7030f86c60e2d56e26cc7
Mar 17 18:44:16.047309 env[1335]: time="2025-03-17T18:44:16.047277196Z" level=warning msg="cleaning up after shim disconnected" id=ae62f2624b719dc761785ca349c6118b61da12d569a7030f86c60e2d56e26cc7 namespace=k8s.io
Mar 17 18:44:16.047497 env[1335]: time="2025-03-17T18:44:16.047471372Z" level=info msg="cleaning up dead shim"
Mar 17 18:44:16.067268 env[1335]: time="2025-03-17T18:44:16.067191364Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:44:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3946 runtime=io.containerd.runc.v2\n"
Mar 17 18:44:16.067732 env[1335]: time="2025-03-17T18:44:16.067683793Z" level=info msg="TearDown network for sandbox \"ae62f2624b719dc761785ca349c6118b61da12d569a7030f86c60e2d56e26cc7\" successfully"
Mar 17 18:44:16.067732 env[1335]: time="2025-03-17T18:44:16.067730436Z" level=info msg="StopPodSandbox for \"ae62f2624b719dc761785ca349c6118b61da12d569a7030f86c60e2d56e26cc7\" returns successfully"
Mar 17 18:44:16.103246 env[1335]: time="2025-03-17T18:44:16.101467327Z" level=info msg="shim disconnected" id=f987a26c2f04950d2a98d39f5571c9e0b410a5ff38a62994c715e52d062c1991
Mar 17 18:44:16.103246 env[1335]: time="2025-03-17T18:44:16.101536605Z" level=warning msg="cleaning up after shim disconnected" id=f987a26c2f04950d2a98d39f5571c9e0b410a5ff38a62994c715e52d062c1991 namespace=k8s.io
Mar 17 18:44:16.103246 env[1335]: time="2025-03-17T18:44:16.101553084Z" level=info msg="cleaning up dead shim"
Mar 17 18:44:16.118215 env[1335]: time="2025-03-17T18:44:16.118147355Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:44:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3979 runtime=io.containerd.runc.v2\ntime=\"2025-03-17T18:44:16Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n"
Mar 17 18:44:16.118682 env[1335]: time="2025-03-17T18:44:16.118620858Z" level=info msg="TearDown network for sandbox \"f987a26c2f04950d2a98d39f5571c9e0b410a5ff38a62994c715e52d062c1991\" successfully"
Mar 17 18:44:16.118682 env[1335]: time="2025-03-17T18:44:16.118656862Z" level=info msg="StopPodSandbox for \"f987a26c2f04950d2a98d39f5571c9e0b410a5ff38a62994c715e52d062c1991\" returns successfully"
Mar 17 18:44:16.194152 kubelet[2258]: I0317 18:44:16.194057 2258 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/266a1e42-d052-408e-a36e-7da75f55f69f-hostproc\") pod \"266a1e42-d052-408e-a36e-7da75f55f69f\" (UID: \"266a1e42-d052-408e-a36e-7da75f55f69f\") "
Mar 17 18:44:16.194936 kubelet[2258]: I0317 18:44:16.194905 2258 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/266a1e42-d052-408e-a36e-7da75f55f69f-etc-cni-netd\") pod \"266a1e42-d052-408e-a36e-7da75f55f69f\" (UID: \"266a1e42-d052-408e-a36e-7da75f55f69f\") "
Mar 17 18:44:16.195115 kubelet[2258]: I0317 18:44:16.194189 2258 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/266a1e42-d052-408e-a36e-7da75f55f69f-hostproc" (OuterVolumeSpecName: "hostproc") pod "266a1e42-d052-408e-a36e-7da75f55f69f" (UID: "266a1e42-d052-408e-a36e-7da75f55f69f"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:44:16.195204 kubelet[2258]: I0317 18:44:16.194952 2258 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/266a1e42-d052-408e-a36e-7da75f55f69f-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "266a1e42-d052-408e-a36e-7da75f55f69f" (UID: "266a1e42-d052-408e-a36e-7da75f55f69f"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:44:16.195204 kubelet[2258]: I0317 18:44:16.195091 2258 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/266a1e42-d052-408e-a36e-7da75f55f69f-cilium-config-path\") pod \"266a1e42-d052-408e-a36e-7da75f55f69f\" (UID: \"266a1e42-d052-408e-a36e-7da75f55f69f\") "
Mar 17 18:44:16.195369 kubelet[2258]: I0317 18:44:16.195206 2258 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/266a1e42-d052-408e-a36e-7da75f55f69f-host-proc-sys-kernel\") pod \"266a1e42-d052-408e-a36e-7da75f55f69f\" (UID: \"266a1e42-d052-408e-a36e-7da75f55f69f\") "
Mar 17 18:44:16.195369 kubelet[2258]: I0317 18:44:16.195237 2258 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/266a1e42-d052-408e-a36e-7da75f55f69f-xtables-lock\") pod \"266a1e42-d052-408e-a36e-7da75f55f69f\" (UID: \"266a1e42-d052-408e-a36e-7da75f55f69f\") "
Mar 17 18:44:16.195369 kubelet[2258]: I0317 18:44:16.195291 2258 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-86mtb\" (UniqueName: \"kubernetes.io/projected/04780724-7228-47d9-965c-fe435db91b1e-kube-api-access-86mtb\") pod \"04780724-7228-47d9-965c-fe435db91b1e\" (UID: \"04780724-7228-47d9-965c-fe435db91b1e\") "
Mar 17 18:44:16.195369 kubelet[2258]: I0317 18:44:16.195323 2258 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/266a1e42-d052-408e-a36e-7da75f55f69f-host-proc-sys-net\") pod \"266a1e42-d052-408e-a36e-7da75f55f69f\" (UID: \"266a1e42-d052-408e-a36e-7da75f55f69f\") "
Mar 17 18:44:16.195369 kubelet[2258]: I0317 18:44:16.195349 2258 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/266a1e42-d052-408e-a36e-7da75f55f69f-cilium-cgroup\") pod \"266a1e42-d052-408e-a36e-7da75f55f69f\" (UID: \"266a1e42-d052-408e-a36e-7da75f55f69f\") "
Mar 17 18:44:16.195672 kubelet[2258]: I0317 18:44:16.195380 2258 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qphnd\" (UniqueName: \"kubernetes.io/projected/266a1e42-d052-408e-a36e-7da75f55f69f-kube-api-access-qphnd\") pod \"266a1e42-d052-408e-a36e-7da75f55f69f\" (UID: \"266a1e42-d052-408e-a36e-7da75f55f69f\") "
Mar 17 18:44:16.195672 kubelet[2258]: I0317 18:44:16.195408 2258 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/266a1e42-d052-408e-a36e-7da75f55f69f-cni-path\") pod \"266a1e42-d052-408e-a36e-7da75f55f69f\" (UID: \"266a1e42-d052-408e-a36e-7da75f55f69f\") "
Mar 17 18:44:16.195672 kubelet[2258]: I0317 18:44:16.195476 2258 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/266a1e42-d052-408e-a36e-7da75f55f69f-hubble-tls\") pod \"266a1e42-d052-408e-a36e-7da75f55f69f\" (UID: \"266a1e42-d052-408e-a36e-7da75f55f69f\") "
Mar 17 18:44:16.195672 kubelet[2258]: I0317 18:44:16.195503 2258 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/266a1e42-d052-408e-a36e-7da75f55f69f-lib-modules\") pod \"266a1e42-d052-408e-a36e-7da75f55f69f\" (UID: \"266a1e42-d052-408e-a36e-7da75f55f69f\") "
Mar 17 18:44:16.195672 kubelet[2258]: I0317 18:44:16.195540 2258 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/266a1e42-d052-408e-a36e-7da75f55f69f-cilium-run\") pod \"266a1e42-d052-408e-a36e-7da75f55f69f\" (UID: \"266a1e42-d052-408e-a36e-7da75f55f69f\") "
Mar 17 18:44:16.195672 kubelet[2258]: I0317 18:44:16.195565 2258 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/266a1e42-d052-408e-a36e-7da75f55f69f-bpf-maps\") pod \"266a1e42-d052-408e-a36e-7da75f55f69f\" (UID: \"266a1e42-d052-408e-a36e-7da75f55f69f\") "
Mar 17 18:44:16.195989 kubelet[2258]: I0317 18:44:16.195596 2258 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/04780724-7228-47d9-965c-fe435db91b1e-cilium-config-path\") pod \"04780724-7228-47d9-965c-fe435db91b1e\" (UID: \"04780724-7228-47d9-965c-fe435db91b1e\") "
Mar 17 18:44:16.195989 kubelet[2258]: I0317 18:44:16.195656 2258 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/266a1e42-d052-408e-a36e-7da75f55f69f-clustermesh-secrets\") pod \"266a1e42-d052-408e-a36e-7da75f55f69f\" (UID: \"266a1e42-d052-408e-a36e-7da75f55f69f\") "
Mar 17 18:44:16.195989 kubelet[2258]: I0317 18:44:16.195715 2258 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/266a1e42-d052-408e-a36e-7da75f55f69f-hostproc\") on node \"ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal\" DevicePath \"\""
Mar 17 18:44:16.195989 kubelet[2258]: I0317 18:44:16.195735 2258 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/266a1e42-d052-408e-a36e-7da75f55f69f-etc-cni-netd\") on node \"ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal\" DevicePath \"\""
Mar 17 18:44:16.198247 kubelet[2258]: I0317 18:44:16.198203 2258 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/266a1e42-d052-408e-a36e-7da75f55f69f-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "266a1e42-d052-408e-a36e-7da75f55f69f" (UID: "266a1e42-d052-408e-a36e-7da75f55f69f"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:44:16.198492 kubelet[2258]: I0317 18:44:16.198304 2258 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/266a1e42-d052-408e-a36e-7da75f55f69f-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "266a1e42-d052-408e-a36e-7da75f55f69f" (UID: "266a1e42-d052-408e-a36e-7da75f55f69f"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:44:16.198592 kubelet[2258]: I0317 18:44:16.198548 2258 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/266a1e42-d052-408e-a36e-7da75f55f69f-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "266a1e42-d052-408e-a36e-7da75f55f69f" (UID: "266a1e42-d052-408e-a36e-7da75f55f69f"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:44:16.198666 kubelet[2258]: I0317 18:44:16.198590 2258 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/266a1e42-d052-408e-a36e-7da75f55f69f-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "266a1e42-d052-408e-a36e-7da75f55f69f" (UID: "266a1e42-d052-408e-a36e-7da75f55f69f"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:44:16.198666 kubelet[2258]: I0317 18:44:16.198617 2258 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/266a1e42-d052-408e-a36e-7da75f55f69f-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "266a1e42-d052-408e-a36e-7da75f55f69f" (UID: "266a1e42-d052-408e-a36e-7da75f55f69f"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:44:16.199383 kubelet[2258]: I0317 18:44:16.199348 2258 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/266a1e42-d052-408e-a36e-7da75f55f69f-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "266a1e42-d052-408e-a36e-7da75f55f69f" (UID: "266a1e42-d052-408e-a36e-7da75f55f69f"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:44:16.199559 kubelet[2258]: I0317 18:44:16.199409 2258 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/266a1e42-d052-408e-a36e-7da75f55f69f-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "266a1e42-d052-408e-a36e-7da75f55f69f" (UID: "266a1e42-d052-408e-a36e-7da75f55f69f"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:44:16.203112 kubelet[2258]: I0317 18:44:16.203074 2258 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/266a1e42-d052-408e-a36e-7da75f55f69f-cni-path" (OuterVolumeSpecName: "cni-path") pod "266a1e42-d052-408e-a36e-7da75f55f69f" (UID: "266a1e42-d052-408e-a36e-7da75f55f69f"). InnerVolumeSpecName "cni-path".
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:44:16.205007 kubelet[2258]: I0317 18:44:16.204618 2258 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/266a1e42-d052-408e-a36e-7da75f55f69f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "266a1e42-d052-408e-a36e-7da75f55f69f" (UID: "266a1e42-d052-408e-a36e-7da75f55f69f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 17 18:44:16.205230 kubelet[2258]: I0317 18:44:16.205170 2258 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/266a1e42-d052-408e-a36e-7da75f55f69f-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "266a1e42-d052-408e-a36e-7da75f55f69f" (UID: "266a1e42-d052-408e-a36e-7da75f55f69f"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 17 18:44:16.209054 kubelet[2258]: I0317 18:44:16.208980 2258 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04780724-7228-47d9-965c-fe435db91b1e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "04780724-7228-47d9-965c-fe435db91b1e" (UID: "04780724-7228-47d9-965c-fe435db91b1e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 17 18:44:16.209616 kubelet[2258]: I0317 18:44:16.209549 2258 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/266a1e42-d052-408e-a36e-7da75f55f69f-kube-api-access-qphnd" (OuterVolumeSpecName: "kube-api-access-qphnd") pod "266a1e42-d052-408e-a36e-7da75f55f69f" (UID: "266a1e42-d052-408e-a36e-7da75f55f69f"). InnerVolumeSpecName "kube-api-access-qphnd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 18:44:16.209790 kubelet[2258]: I0317 18:44:16.209420 2258 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/266a1e42-d052-408e-a36e-7da75f55f69f-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "266a1e42-d052-408e-a36e-7da75f55f69f" (UID: "266a1e42-d052-408e-a36e-7da75f55f69f"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 18:44:16.213093 kubelet[2258]: I0317 18:44:16.213045 2258 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04780724-7228-47d9-965c-fe435db91b1e-kube-api-access-86mtb" (OuterVolumeSpecName: "kube-api-access-86mtb") pod "04780724-7228-47d9-965c-fe435db91b1e" (UID: "04780724-7228-47d9-965c-fe435db91b1e"). InnerVolumeSpecName "kube-api-access-86mtb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 18:44:16.296639 kubelet[2258]: I0317 18:44:16.296580 2258 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/266a1e42-d052-408e-a36e-7da75f55f69f-lib-modules\") on node \"ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal\" DevicePath \"\"" Mar 17 18:44:16.296639 kubelet[2258]: I0317 18:44:16.296633 2258 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/266a1e42-d052-408e-a36e-7da75f55f69f-cilium-run\") on node \"ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal\" DevicePath \"\"" Mar 17 18:44:16.296639 kubelet[2258]: I0317 18:44:16.296649 2258 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/266a1e42-d052-408e-a36e-7da75f55f69f-bpf-maps\") on node \"ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal\" DevicePath \"\"" Mar 17 18:44:16.296992 kubelet[2258]: I0317 18:44:16.296665 2258 reconciler_common.go:289] "Volume detached for volume 
\"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/04780724-7228-47d9-965c-fe435db91b1e-cilium-config-path\") on node \"ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal\" DevicePath \"\"" Mar 17 18:44:16.296992 kubelet[2258]: I0317 18:44:16.296688 2258 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/266a1e42-d052-408e-a36e-7da75f55f69f-clustermesh-secrets\") on node \"ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal\" DevicePath \"\"" Mar 17 18:44:16.296992 kubelet[2258]: I0317 18:44:16.296702 2258 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/266a1e42-d052-408e-a36e-7da75f55f69f-host-proc-sys-kernel\") on node \"ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal\" DevicePath \"\"" Mar 17 18:44:16.296992 kubelet[2258]: I0317 18:44:16.296716 2258 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/266a1e42-d052-408e-a36e-7da75f55f69f-cilium-config-path\") on node \"ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal\" DevicePath \"\"" Mar 17 18:44:16.296992 kubelet[2258]: I0317 18:44:16.296732 2258 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/266a1e42-d052-408e-a36e-7da75f55f69f-cilium-cgroup\") on node \"ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal\" DevicePath \"\"" Mar 17 18:44:16.296992 kubelet[2258]: I0317 18:44:16.296747 2258 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/266a1e42-d052-408e-a36e-7da75f55f69f-xtables-lock\") on node \"ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal\" DevicePath \"\"" Mar 17 18:44:16.296992 kubelet[2258]: I0317 18:44:16.296761 2258 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-86mtb\" (UniqueName: 
\"kubernetes.io/projected/04780724-7228-47d9-965c-fe435db91b1e-kube-api-access-86mtb\") on node \"ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal\" DevicePath \"\"" Mar 17 18:44:16.297234 kubelet[2258]: I0317 18:44:16.296776 2258 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/266a1e42-d052-408e-a36e-7da75f55f69f-host-proc-sys-net\") on node \"ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal\" DevicePath \"\"" Mar 17 18:44:16.297234 kubelet[2258]: I0317 18:44:16.296791 2258 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/266a1e42-d052-408e-a36e-7da75f55f69f-cni-path\") on node \"ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal\" DevicePath \"\"" Mar 17 18:44:16.297234 kubelet[2258]: I0317 18:44:16.296808 2258 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-qphnd\" (UniqueName: \"kubernetes.io/projected/266a1e42-d052-408e-a36e-7da75f55f69f-kube-api-access-qphnd\") on node \"ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal\" DevicePath \"\"" Mar 17 18:44:16.297234 kubelet[2258]: I0317 18:44:16.296823 2258 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/266a1e42-d052-408e-a36e-7da75f55f69f-hubble-tls\") on node \"ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal\" DevicePath \"\"" Mar 17 18:44:16.323893 kubelet[2258]: I0317 18:44:16.323855 2258 scope.go:117] "RemoveContainer" containerID="c9e3055e44826b21dc44cbda81d7e60f8170a7f5e6c1ff5dc5ba7965eacc3440" Mar 17 18:44:16.327230 env[1335]: time="2025-03-17T18:44:16.326507804Z" level=info msg="RemoveContainer for \"c9e3055e44826b21dc44cbda81d7e60f8170a7f5e6c1ff5dc5ba7965eacc3440\"" Mar 17 18:44:16.334332 env[1335]: time="2025-03-17T18:44:16.334104552Z" level=info msg="RemoveContainer for \"c9e3055e44826b21dc44cbda81d7e60f8170a7f5e6c1ff5dc5ba7965eacc3440\" returns 
successfully" Mar 17 18:44:16.334654 kubelet[2258]: I0317 18:44:16.334476 2258 scope.go:117] "RemoveContainer" containerID="c9e3055e44826b21dc44cbda81d7e60f8170a7f5e6c1ff5dc5ba7965eacc3440" Mar 17 18:44:16.336176 env[1335]: time="2025-03-17T18:44:16.336071100Z" level=error msg="ContainerStatus for \"c9e3055e44826b21dc44cbda81d7e60f8170a7f5e6c1ff5dc5ba7965eacc3440\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c9e3055e44826b21dc44cbda81d7e60f8170a7f5e6c1ff5dc5ba7965eacc3440\": not found" Mar 17 18:44:16.339701 kubelet[2258]: E0317 18:44:16.339665 2258 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c9e3055e44826b21dc44cbda81d7e60f8170a7f5e6c1ff5dc5ba7965eacc3440\": not found" containerID="c9e3055e44826b21dc44cbda81d7e60f8170a7f5e6c1ff5dc5ba7965eacc3440" Mar 17 18:44:16.340227 kubelet[2258]: I0317 18:44:16.340111 2258 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c9e3055e44826b21dc44cbda81d7e60f8170a7f5e6c1ff5dc5ba7965eacc3440"} err="failed to get container status \"c9e3055e44826b21dc44cbda81d7e60f8170a7f5e6c1ff5dc5ba7965eacc3440\": rpc error: code = NotFound desc = an error occurred when try to find container \"c9e3055e44826b21dc44cbda81d7e60f8170a7f5e6c1ff5dc5ba7965eacc3440\": not found" Mar 17 18:44:16.340366 kubelet[2258]: I0317 18:44:16.340230 2258 scope.go:117] "RemoveContainer" containerID="9c2fcc2b9b57945a91775b9ed5f66ee4ad6e9b6bcce13cc2f9325da53b36aaff" Mar 17 18:44:16.341909 env[1335]: time="2025-03-17T18:44:16.341749478Z" level=info msg="RemoveContainer for \"9c2fcc2b9b57945a91775b9ed5f66ee4ad6e9b6bcce13cc2f9325da53b36aaff\"" Mar 17 18:44:16.350157 env[1335]: time="2025-03-17T18:44:16.350006473Z" level=info msg="RemoveContainer for \"9c2fcc2b9b57945a91775b9ed5f66ee4ad6e9b6bcce13cc2f9325da53b36aaff\" returns successfully" Mar 17 18:44:16.350557 
kubelet[2258]: I0317 18:44:16.350528 2258 scope.go:117] "RemoveContainer" containerID="7ef45c8d4d16d236243014d5b75bf0b638940a1b6ddfdf150c5b22d005b49816" Mar 17 18:44:16.359561 env[1335]: time="2025-03-17T18:44:16.353302686Z" level=info msg="RemoveContainer for \"7ef45c8d4d16d236243014d5b75bf0b638940a1b6ddfdf150c5b22d005b49816\"" Mar 17 18:44:16.368231 env[1335]: time="2025-03-17T18:44:16.368171965Z" level=info msg="RemoveContainer for \"7ef45c8d4d16d236243014d5b75bf0b638940a1b6ddfdf150c5b22d005b49816\" returns successfully" Mar 17 18:44:16.369662 kubelet[2258]: I0317 18:44:16.369624 2258 scope.go:117] "RemoveContainer" containerID="65fa6cd50ecb208cedf0d423c5b9f9f985099acec4ae295b2341527c151d1a3e" Mar 17 18:44:16.371450 env[1335]: time="2025-03-17T18:44:16.371384665Z" level=info msg="RemoveContainer for \"65fa6cd50ecb208cedf0d423c5b9f9f985099acec4ae295b2341527c151d1a3e\"" Mar 17 18:44:16.376255 env[1335]: time="2025-03-17T18:44:16.376204419Z" level=info msg="RemoveContainer for \"65fa6cd50ecb208cedf0d423c5b9f9f985099acec4ae295b2341527c151d1a3e\" returns successfully" Mar 17 18:44:16.376519 kubelet[2258]: I0317 18:44:16.376478 2258 scope.go:117] "RemoveContainer" containerID="c2a9232b709bd6bac3d86b0fb6877bbfb19e7ff156903f319b9c429271dd49a5" Mar 17 18:44:16.377984 env[1335]: time="2025-03-17T18:44:16.377894271Z" level=info msg="RemoveContainer for \"c2a9232b709bd6bac3d86b0fb6877bbfb19e7ff156903f319b9c429271dd49a5\"" Mar 17 18:44:16.382670 env[1335]: time="2025-03-17T18:44:16.382613398Z" level=info msg="RemoveContainer for \"c2a9232b709bd6bac3d86b0fb6877bbfb19e7ff156903f319b9c429271dd49a5\" returns successfully" Mar 17 18:44:16.383044 kubelet[2258]: I0317 18:44:16.383009 2258 scope.go:117] "RemoveContainer" containerID="0fe5673a18272aeec044a914b0033f7a98603b3e0ead4a4a5f4420bd6490a817" Mar 17 18:44:16.384748 env[1335]: time="2025-03-17T18:44:16.384707514Z" level=info msg="RemoveContainer for \"0fe5673a18272aeec044a914b0033f7a98603b3e0ead4a4a5f4420bd6490a817\"" Mar 17 
18:44:16.389181 env[1335]: time="2025-03-17T18:44:16.389128604Z" level=info msg="RemoveContainer for \"0fe5673a18272aeec044a914b0033f7a98603b3e0ead4a4a5f4420bd6490a817\" returns successfully" Mar 17 18:44:16.389537 kubelet[2258]: I0317 18:44:16.389506 2258 scope.go:117] "RemoveContainer" containerID="9c2fcc2b9b57945a91775b9ed5f66ee4ad6e9b6bcce13cc2f9325da53b36aaff" Mar 17 18:44:16.389984 env[1335]: time="2025-03-17T18:44:16.389886437Z" level=error msg="ContainerStatus for \"9c2fcc2b9b57945a91775b9ed5f66ee4ad6e9b6bcce13cc2f9325da53b36aaff\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9c2fcc2b9b57945a91775b9ed5f66ee4ad6e9b6bcce13cc2f9325da53b36aaff\": not found" Mar 17 18:44:16.390242 kubelet[2258]: E0317 18:44:16.390201 2258 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9c2fcc2b9b57945a91775b9ed5f66ee4ad6e9b6bcce13cc2f9325da53b36aaff\": not found" containerID="9c2fcc2b9b57945a91775b9ed5f66ee4ad6e9b6bcce13cc2f9325da53b36aaff" Mar 17 18:44:16.390360 kubelet[2258]: I0317 18:44:16.390263 2258 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9c2fcc2b9b57945a91775b9ed5f66ee4ad6e9b6bcce13cc2f9325da53b36aaff"} err="failed to get container status \"9c2fcc2b9b57945a91775b9ed5f66ee4ad6e9b6bcce13cc2f9325da53b36aaff\": rpc error: code = NotFound desc = an error occurred when try to find container \"9c2fcc2b9b57945a91775b9ed5f66ee4ad6e9b6bcce13cc2f9325da53b36aaff\": not found" Mar 17 18:44:16.390360 kubelet[2258]: I0317 18:44:16.390296 2258 scope.go:117] "RemoveContainer" containerID="7ef45c8d4d16d236243014d5b75bf0b638940a1b6ddfdf150c5b22d005b49816" Mar 17 18:44:16.390679 env[1335]: time="2025-03-17T18:44:16.390599489Z" level=error msg="ContainerStatus for \"7ef45c8d4d16d236243014d5b75bf0b638940a1b6ddfdf150c5b22d005b49816\" failed" error="rpc error: code = NotFound desc 
= an error occurred when try to find container \"7ef45c8d4d16d236243014d5b75bf0b638940a1b6ddfdf150c5b22d005b49816\": not found" Mar 17 18:44:16.390865 kubelet[2258]: E0317 18:44:16.390833 2258 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7ef45c8d4d16d236243014d5b75bf0b638940a1b6ddfdf150c5b22d005b49816\": not found" containerID="7ef45c8d4d16d236243014d5b75bf0b638940a1b6ddfdf150c5b22d005b49816" Mar 17 18:44:16.390969 kubelet[2258]: I0317 18:44:16.390871 2258 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7ef45c8d4d16d236243014d5b75bf0b638940a1b6ddfdf150c5b22d005b49816"} err="failed to get container status \"7ef45c8d4d16d236243014d5b75bf0b638940a1b6ddfdf150c5b22d005b49816\": rpc error: code = NotFound desc = an error occurred when try to find container \"7ef45c8d4d16d236243014d5b75bf0b638940a1b6ddfdf150c5b22d005b49816\": not found" Mar 17 18:44:16.390969 kubelet[2258]: I0317 18:44:16.390907 2258 scope.go:117] "RemoveContainer" containerID="65fa6cd50ecb208cedf0d423c5b9f9f985099acec4ae295b2341527c151d1a3e" Mar 17 18:44:16.391224 env[1335]: time="2025-03-17T18:44:16.391130340Z" level=error msg="ContainerStatus for \"65fa6cd50ecb208cedf0d423c5b9f9f985099acec4ae295b2341527c151d1a3e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"65fa6cd50ecb208cedf0d423c5b9f9f985099acec4ae295b2341527c151d1a3e\": not found" Mar 17 18:44:16.391371 kubelet[2258]: E0317 18:44:16.391341 2258 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"65fa6cd50ecb208cedf0d423c5b9f9f985099acec4ae295b2341527c151d1a3e\": not found" containerID="65fa6cd50ecb208cedf0d423c5b9f9f985099acec4ae295b2341527c151d1a3e" Mar 17 18:44:16.391502 kubelet[2258]: I0317 18:44:16.391381 2258 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"65fa6cd50ecb208cedf0d423c5b9f9f985099acec4ae295b2341527c151d1a3e"} err="failed to get container status \"65fa6cd50ecb208cedf0d423c5b9f9f985099acec4ae295b2341527c151d1a3e\": rpc error: code = NotFound desc = an error occurred when try to find container \"65fa6cd50ecb208cedf0d423c5b9f9f985099acec4ae295b2341527c151d1a3e\": not found" Mar 17 18:44:16.391502 kubelet[2258]: I0317 18:44:16.391406 2258 scope.go:117] "RemoveContainer" containerID="c2a9232b709bd6bac3d86b0fb6877bbfb19e7ff156903f319b9c429271dd49a5" Mar 17 18:44:16.391864 env[1335]: time="2025-03-17T18:44:16.391671966Z" level=error msg="ContainerStatus for \"c2a9232b709bd6bac3d86b0fb6877bbfb19e7ff156903f319b9c429271dd49a5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c2a9232b709bd6bac3d86b0fb6877bbfb19e7ff156903f319b9c429271dd49a5\": not found" Mar 17 18:44:16.392065 kubelet[2258]: E0317 18:44:16.392033 2258 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c2a9232b709bd6bac3d86b0fb6877bbfb19e7ff156903f319b9c429271dd49a5\": not found" containerID="c2a9232b709bd6bac3d86b0fb6877bbfb19e7ff156903f319b9c429271dd49a5" Mar 17 18:44:16.392178 kubelet[2258]: I0317 18:44:16.392073 2258 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c2a9232b709bd6bac3d86b0fb6877bbfb19e7ff156903f319b9c429271dd49a5"} err="failed to get container status \"c2a9232b709bd6bac3d86b0fb6877bbfb19e7ff156903f319b9c429271dd49a5\": rpc error: code = NotFound desc = an error occurred when try to find container \"c2a9232b709bd6bac3d86b0fb6877bbfb19e7ff156903f319b9c429271dd49a5\": not found" Mar 17 18:44:16.392178 kubelet[2258]: I0317 18:44:16.392097 2258 scope.go:117] "RemoveContainer" 
containerID="0fe5673a18272aeec044a914b0033f7a98603b3e0ead4a4a5f4420bd6490a817" Mar 17 18:44:16.392512 env[1335]: time="2025-03-17T18:44:16.392410037Z" level=error msg="ContainerStatus for \"0fe5673a18272aeec044a914b0033f7a98603b3e0ead4a4a5f4420bd6490a817\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0fe5673a18272aeec044a914b0033f7a98603b3e0ead4a4a5f4420bd6490a817\": not found" Mar 17 18:44:16.392671 kubelet[2258]: E0317 18:44:16.392642 2258 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0fe5673a18272aeec044a914b0033f7a98603b3e0ead4a4a5f4420bd6490a817\": not found" containerID="0fe5673a18272aeec044a914b0033f7a98603b3e0ead4a4a5f4420bd6490a817" Mar 17 18:44:16.392785 kubelet[2258]: I0317 18:44:16.392680 2258 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0fe5673a18272aeec044a914b0033f7a98603b3e0ead4a4a5f4420bd6490a817"} err="failed to get container status \"0fe5673a18272aeec044a914b0033f7a98603b3e0ead4a4a5f4420bd6490a817\": rpc error: code = NotFound desc = an error occurred when try to find container \"0fe5673a18272aeec044a914b0033f7a98603b3e0ead4a4a5f4420bd6490a817\": not found" Mar 17 18:44:16.838160 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9c2fcc2b9b57945a91775b9ed5f66ee4ad6e9b6bcce13cc2f9325da53b36aaff-rootfs.mount: Deactivated successfully. Mar 17 18:44:16.838407 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f987a26c2f04950d2a98d39f5571c9e0b410a5ff38a62994c715e52d062c1991-rootfs.mount: Deactivated successfully. Mar 17 18:44:16.838602 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f987a26c2f04950d2a98d39f5571c9e0b410a5ff38a62994c715e52d062c1991-shm.mount: Deactivated successfully. 
Mar 17 18:44:16.838792 systemd[1]: var-lib-kubelet-pods-266a1e42\x2dd052\x2d408e\x2da36e\x2d7da75f55f69f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqphnd.mount: Deactivated successfully. Mar 17 18:44:16.838979 systemd[1]: var-lib-kubelet-pods-266a1e42\x2dd052\x2d408e\x2da36e\x2d7da75f55f69f-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 17 18:44:16.839219 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ae62f2624b719dc761785ca349c6118b61da12d569a7030f86c60e2d56e26cc7-rootfs.mount: Deactivated successfully. Mar 17 18:44:16.839404 systemd[1]: var-lib-kubelet-pods-04780724\x2d7228\x2d47d9\x2d965c\x2dfe435db91b1e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d86mtb.mount: Deactivated successfully. Mar 17 18:44:16.839613 systemd[1]: var-lib-kubelet-pods-266a1e42\x2dd052\x2d408e\x2da36e\x2d7da75f55f69f-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 17 18:44:17.805831 sshd[3829]: pam_unix(sshd:session): session closed for user core Mar 17 18:44:17.812105 systemd-logind[1317]: Session 22 logged out. Waiting for processes to exit. Mar 17 18:44:17.813624 systemd[1]: sshd@23-10.128.0.50:22-139.178.89.65:42644.service: Deactivated successfully. Mar 17 18:44:17.814869 systemd[1]: session-22.scope: Deactivated successfully. Mar 17 18:44:17.817041 systemd-logind[1317]: Removed session 22. Mar 17 18:44:17.850120 systemd[1]: Started sshd@24-10.128.0.50:22-139.178.89.65:42660.service. 
Mar 17 18:44:17.972251 kubelet[2258]: I0317 18:44:17.972183 2258 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04780724-7228-47d9-965c-fe435db91b1e" path="/var/lib/kubelet/pods/04780724-7228-47d9-965c-fe435db91b1e/volumes" Mar 17 18:44:17.973052 kubelet[2258]: I0317 18:44:17.972981 2258 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="266a1e42-d052-408e-a36e-7da75f55f69f" path="/var/lib/kubelet/pods/266a1e42-d052-408e-a36e-7da75f55f69f/volumes" Mar 17 18:44:18.141996 sshd[3998]: Accepted publickey for core from 139.178.89.65 port 42660 ssh2: RSA SHA256:MwXPWHAmIHbbkjBOl9game4w0Y2Rjfi7lGZx9rtQRJk Mar 17 18:44:18.144557 sshd[3998]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:44:18.151173 systemd-logind[1317]: New session 23 of user core. Mar 17 18:44:18.151989 systemd[1]: Started session-23.scope. Mar 17 18:44:19.092471 kubelet[2258]: E0317 18:44:19.092410 2258 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 18:44:19.314812 kubelet[2258]: I0317 18:44:19.314747 2258 topology_manager.go:215] "Topology Admit Handler" podUID="f63bc1b2-2137-4ae9-a2a6-ad98635bbf71" podNamespace="kube-system" podName="cilium-rv4wb" Mar 17 18:44:19.315195 kubelet[2258]: E0317 18:44:19.315167 2258 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="266a1e42-d052-408e-a36e-7da75f55f69f" containerName="mount-cgroup" Mar 17 18:44:19.315385 kubelet[2258]: E0317 18:44:19.315366 2258 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="266a1e42-d052-408e-a36e-7da75f55f69f" containerName="mount-bpf-fs" Mar 17 18:44:19.315554 kubelet[2258]: E0317 18:44:19.315535 2258 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="266a1e42-d052-408e-a36e-7da75f55f69f" containerName="clean-cilium-state" Mar 17 18:44:19.315698 kubelet[2258]: E0317 
18:44:19.315681 2258 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="266a1e42-d052-408e-a36e-7da75f55f69f" containerName="cilium-agent" Mar 17 18:44:19.315854 kubelet[2258]: E0317 18:44:19.315836 2258 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="04780724-7228-47d9-965c-fe435db91b1e" containerName="cilium-operator" Mar 17 18:44:19.315990 kubelet[2258]: E0317 18:44:19.315973 2258 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="266a1e42-d052-408e-a36e-7da75f55f69f" containerName="apply-sysctl-overwrites" Mar 17 18:44:19.316176 kubelet[2258]: I0317 18:44:19.316145 2258 memory_manager.go:354] "RemoveStaleState removing state" podUID="04780724-7228-47d9-965c-fe435db91b1e" containerName="cilium-operator" Mar 17 18:44:19.316311 kubelet[2258]: I0317 18:44:19.316293 2258 memory_manager.go:354] "RemoveStaleState removing state" podUID="266a1e42-d052-408e-a36e-7da75f55f69f" containerName="cilium-agent" Mar 17 18:44:19.338424 sshd[3998]: pam_unix(sshd:session): session closed for user core Mar 17 18:44:19.344067 systemd[1]: sshd@24-10.128.0.50:22-139.178.89.65:42660.service: Deactivated successfully. Mar 17 18:44:19.345524 systemd[1]: session-23.scope: Deactivated successfully. Mar 17 18:44:19.354937 systemd-logind[1317]: Session 23 logged out. Waiting for processes to exit. Mar 17 18:44:19.359226 systemd-logind[1317]: Removed session 23. Mar 17 18:44:19.394944 systemd[1]: Started sshd@25-10.128.0.50:22-139.178.89.65:42674.service. 
Mar 17 18:44:19.424823 kubelet[2258]: I0317 18:44:19.424199 2258 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f63bc1b2-2137-4ae9-a2a6-ad98635bbf71-clustermesh-secrets\") pod \"cilium-rv4wb\" (UID: \"f63bc1b2-2137-4ae9-a2a6-ad98635bbf71\") " pod="kube-system/cilium-rv4wb" Mar 17 18:44:19.424823 kubelet[2258]: I0317 18:44:19.424255 2258 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f63bc1b2-2137-4ae9-a2a6-ad98635bbf71-host-proc-sys-kernel\") pod \"cilium-rv4wb\" (UID: \"f63bc1b2-2137-4ae9-a2a6-ad98635bbf71\") " pod="kube-system/cilium-rv4wb" Mar 17 18:44:19.424823 kubelet[2258]: I0317 18:44:19.424284 2258 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f63bc1b2-2137-4ae9-a2a6-ad98635bbf71-xtables-lock\") pod \"cilium-rv4wb\" (UID: \"f63bc1b2-2137-4ae9-a2a6-ad98635bbf71\") " pod="kube-system/cilium-rv4wb" Mar 17 18:44:19.424823 kubelet[2258]: I0317 18:44:19.424317 2258 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f63bc1b2-2137-4ae9-a2a6-ad98635bbf71-bpf-maps\") pod \"cilium-rv4wb\" (UID: \"f63bc1b2-2137-4ae9-a2a6-ad98635bbf71\") " pod="kube-system/cilium-rv4wb" Mar 17 18:44:19.424823 kubelet[2258]: I0317 18:44:19.424344 2258 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f63bc1b2-2137-4ae9-a2a6-ad98635bbf71-hostproc\") pod \"cilium-rv4wb\" (UID: \"f63bc1b2-2137-4ae9-a2a6-ad98635bbf71\") " pod="kube-system/cilium-rv4wb" Mar 17 18:44:19.424823 kubelet[2258]: I0317 18:44:19.424371 2258 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f63bc1b2-2137-4ae9-a2a6-ad98635bbf71-cni-path\") pod \"cilium-rv4wb\" (UID: \"f63bc1b2-2137-4ae9-a2a6-ad98635bbf71\") " pod="kube-system/cilium-rv4wb" Mar 17 18:44:19.425358 kubelet[2258]: I0317 18:44:19.424397 2258 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f63bc1b2-2137-4ae9-a2a6-ad98635bbf71-cilium-config-path\") pod \"cilium-rv4wb\" (UID: \"f63bc1b2-2137-4ae9-a2a6-ad98635bbf71\") " pod="kube-system/cilium-rv4wb" Mar 17 18:44:19.425358 kubelet[2258]: I0317 18:44:19.424430 2258 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f63bc1b2-2137-4ae9-a2a6-ad98635bbf71-cilium-cgroup\") pod \"cilium-rv4wb\" (UID: \"f63bc1b2-2137-4ae9-a2a6-ad98635bbf71\") " pod="kube-system/cilium-rv4wb" Mar 17 18:44:19.425358 kubelet[2258]: I0317 18:44:19.424478 2258 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f63bc1b2-2137-4ae9-a2a6-ad98635bbf71-etc-cni-netd\") pod \"cilium-rv4wb\" (UID: \"f63bc1b2-2137-4ae9-a2a6-ad98635bbf71\") " pod="kube-system/cilium-rv4wb" Mar 17 18:44:19.425358 kubelet[2258]: I0317 18:44:19.424504 2258 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f63bc1b2-2137-4ae9-a2a6-ad98635bbf71-lib-modules\") pod \"cilium-rv4wb\" (UID: \"f63bc1b2-2137-4ae9-a2a6-ad98635bbf71\") " pod="kube-system/cilium-rv4wb" Mar 17 18:44:19.425358 kubelet[2258]: I0317 18:44:19.424531 2258 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/f63bc1b2-2137-4ae9-a2a6-ad98635bbf71-host-proc-sys-net\") pod \"cilium-rv4wb\" (UID: \"f63bc1b2-2137-4ae9-a2a6-ad98635bbf71\") " pod="kube-system/cilium-rv4wb" Mar 17 18:44:19.425358 kubelet[2258]: I0317 18:44:19.424562 2258 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f63bc1b2-2137-4ae9-a2a6-ad98635bbf71-cilium-run\") pod \"cilium-rv4wb\" (UID: \"f63bc1b2-2137-4ae9-a2a6-ad98635bbf71\") " pod="kube-system/cilium-rv4wb" Mar 17 18:44:19.425689 kubelet[2258]: I0317 18:44:19.424590 2258 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f63bc1b2-2137-4ae9-a2a6-ad98635bbf71-hubble-tls\") pod \"cilium-rv4wb\" (UID: \"f63bc1b2-2137-4ae9-a2a6-ad98635bbf71\") " pod="kube-system/cilium-rv4wb" Mar 17 18:44:19.425689 kubelet[2258]: I0317 18:44:19.424616 2258 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f63bc1b2-2137-4ae9-a2a6-ad98635bbf71-cilium-ipsec-secrets\") pod \"cilium-rv4wb\" (UID: \"f63bc1b2-2137-4ae9-a2a6-ad98635bbf71\") " pod="kube-system/cilium-rv4wb" Mar 17 18:44:19.425689 kubelet[2258]: I0317 18:44:19.424641 2258 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24jg6\" (UniqueName: \"kubernetes.io/projected/f63bc1b2-2137-4ae9-a2a6-ad98635bbf71-kube-api-access-24jg6\") pod \"cilium-rv4wb\" (UID: \"f63bc1b2-2137-4ae9-a2a6-ad98635bbf71\") " pod="kube-system/cilium-rv4wb" Mar 17 18:44:19.632064 env[1335]: time="2025-03-17T18:44:19.630994896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rv4wb,Uid:f63bc1b2-2137-4ae9-a2a6-ad98635bbf71,Namespace:kube-system,Attempt:0,}" Mar 17 18:44:19.659365 env[1335]: time="2025-03-17T18:44:19.659246776Z" level=info 
msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:44:19.659651 env[1335]: time="2025-03-17T18:44:19.659307384Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:44:19.659651 env[1335]: time="2025-03-17T18:44:19.659353034Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:44:19.659901 env[1335]: time="2025-03-17T18:44:19.659665035Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/71811b596fa0753def1d12300b83ed71413dbc8f23fbf3b99377630bdd7cbede pid=4023 runtime=io.containerd.runc.v2 Mar 17 18:44:19.699872 sshd[4009]: Accepted publickey for core from 139.178.89.65 port 42674 ssh2: RSA SHA256:MwXPWHAmIHbbkjBOl9game4w0Y2Rjfi7lGZx9rtQRJk Mar 17 18:44:19.702307 sshd[4009]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:44:19.711699 systemd[1]: Started session-24.scope. Mar 17 18:44:19.712609 systemd-logind[1317]: New session 24 of user core. 
Mar 17 18:44:19.739182 env[1335]: time="2025-03-17T18:44:19.739122834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rv4wb,Uid:f63bc1b2-2137-4ae9-a2a6-ad98635bbf71,Namespace:kube-system,Attempt:0,} returns sandbox id \"71811b596fa0753def1d12300b83ed71413dbc8f23fbf3b99377630bdd7cbede\"" Mar 17 18:44:19.744775 env[1335]: time="2025-03-17T18:44:19.744655119Z" level=info msg="CreateContainer within sandbox \"71811b596fa0753def1d12300b83ed71413dbc8f23fbf3b99377630bdd7cbede\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 17 18:44:19.763644 env[1335]: time="2025-03-17T18:44:19.763573991Z" level=info msg="CreateContainer within sandbox \"71811b596fa0753def1d12300b83ed71413dbc8f23fbf3b99377630bdd7cbede\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"da3e7e2cbfe19c37b7e4539715b445cd55335b73e1fc4ce29e7154b206bfd73d\"" Mar 17 18:44:19.764782 env[1335]: time="2025-03-17T18:44:19.764740157Z" level=info msg="StartContainer for \"da3e7e2cbfe19c37b7e4539715b445cd55335b73e1fc4ce29e7154b206bfd73d\"" Mar 17 18:44:19.848799 env[1335]: time="2025-03-17T18:44:19.848729966Z" level=info msg="StartContainer for \"da3e7e2cbfe19c37b7e4539715b445cd55335b73e1fc4ce29e7154b206bfd73d\" returns successfully" Mar 17 18:44:19.907716 env[1335]: time="2025-03-17T18:44:19.907559218Z" level=info msg="shim disconnected" id=da3e7e2cbfe19c37b7e4539715b445cd55335b73e1fc4ce29e7154b206bfd73d Mar 17 18:44:19.908123 env[1335]: time="2025-03-17T18:44:19.908090753Z" level=warning msg="cleaning up after shim disconnected" id=da3e7e2cbfe19c37b7e4539715b445cd55335b73e1fc4ce29e7154b206bfd73d namespace=k8s.io Mar 17 18:44:19.908288 env[1335]: time="2025-03-17T18:44:19.908264909Z" level=info msg="cleaning up dead shim" Mar 17 18:44:19.922556 env[1335]: time="2025-03-17T18:44:19.922455455Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:44:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4118 
runtime=io.containerd.runc.v2\n" Mar 17 18:44:20.045062 sshd[4009]: pam_unix(sshd:session): session closed for user core Mar 17 18:44:20.051357 systemd-logind[1317]: Session 24 logged out. Waiting for processes to exit. Mar 17 18:44:20.052079 systemd[1]: sshd@25-10.128.0.50:22-139.178.89.65:42674.service: Deactivated successfully. Mar 17 18:44:20.053389 systemd[1]: session-24.scope: Deactivated successfully. Mar 17 18:44:20.056249 systemd-logind[1317]: Removed session 24. Mar 17 18:44:20.096134 systemd[1]: Started sshd@26-10.128.0.50:22-139.178.89.65:42676.service. Mar 17 18:44:20.357987 env[1335]: time="2025-03-17T18:44:20.356748096Z" level=info msg="StopPodSandbox for \"71811b596fa0753def1d12300b83ed71413dbc8f23fbf3b99377630bdd7cbede\"" Mar 17 18:44:20.357987 env[1335]: time="2025-03-17T18:44:20.356854659Z" level=info msg="Container to stop \"da3e7e2cbfe19c37b7e4539715b445cd55335b73e1fc4ce29e7154b206bfd73d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 18:44:20.399660 sshd[4134]: Accepted publickey for core from 139.178.89.65 port 42676 ssh2: RSA SHA256:MwXPWHAmIHbbkjBOl9game4w0Y2Rjfi7lGZx9rtQRJk Mar 17 18:44:20.401997 sshd[4134]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:44:20.412719 systemd[1]: Started session-25.scope. Mar 17 18:44:20.414258 systemd-logind[1317]: New session 25 of user core. 
Mar 17 18:44:20.432370 env[1335]: time="2025-03-17T18:44:20.432278207Z" level=info msg="shim disconnected" id=71811b596fa0753def1d12300b83ed71413dbc8f23fbf3b99377630bdd7cbede Mar 17 18:44:20.432793 env[1335]: time="2025-03-17T18:44:20.432760216Z" level=warning msg="cleaning up after shim disconnected" id=71811b596fa0753def1d12300b83ed71413dbc8f23fbf3b99377630bdd7cbede namespace=k8s.io Mar 17 18:44:20.432965 env[1335]: time="2025-03-17T18:44:20.432943334Z" level=info msg="cleaning up dead shim" Mar 17 18:44:20.446770 env[1335]: time="2025-03-17T18:44:20.446705348Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:44:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4159 runtime=io.containerd.runc.v2\n" Mar 17 18:44:20.447202 env[1335]: time="2025-03-17T18:44:20.447158682Z" level=info msg="TearDown network for sandbox \"71811b596fa0753def1d12300b83ed71413dbc8f23fbf3b99377630bdd7cbede\" successfully" Mar 17 18:44:20.447340 env[1335]: time="2025-03-17T18:44:20.447201292Z" level=info msg="StopPodSandbox for \"71811b596fa0753def1d12300b83ed71413dbc8f23fbf3b99377630bdd7cbede\" returns successfully" Mar 17 18:44:20.532548 kubelet[2258]: I0317 18:44:20.532473 2258 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f63bc1b2-2137-4ae9-a2a6-ad98635bbf71-etc-cni-netd\") pod \"f63bc1b2-2137-4ae9-a2a6-ad98635bbf71\" (UID: \"f63bc1b2-2137-4ae9-a2a6-ad98635bbf71\") " Mar 17 18:44:20.532548 kubelet[2258]: I0317 18:44:20.532536 2258 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f63bc1b2-2137-4ae9-a2a6-ad98635bbf71-cilium-run\") pod \"f63bc1b2-2137-4ae9-a2a6-ad98635bbf71\" (UID: \"f63bc1b2-2137-4ae9-a2a6-ad98635bbf71\") " Mar 17 18:44:20.533391 kubelet[2258]: I0317 18:44:20.532565 2258 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/f63bc1b2-2137-4ae9-a2a6-ad98635bbf71-cni-path\") pod \"f63bc1b2-2137-4ae9-a2a6-ad98635bbf71\" (UID: \"f63bc1b2-2137-4ae9-a2a6-ad98635bbf71\") " Mar 17 18:44:20.533391 kubelet[2258]: I0317 18:44:20.532637 2258 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24jg6\" (UniqueName: \"kubernetes.io/projected/f63bc1b2-2137-4ae9-a2a6-ad98635bbf71-kube-api-access-24jg6\") pod \"f63bc1b2-2137-4ae9-a2a6-ad98635bbf71\" (UID: \"f63bc1b2-2137-4ae9-a2a6-ad98635bbf71\") " Mar 17 18:44:20.533391 kubelet[2258]: I0317 18:44:20.532663 2258 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f63bc1b2-2137-4ae9-a2a6-ad98635bbf71-xtables-lock\") pod \"f63bc1b2-2137-4ae9-a2a6-ad98635bbf71\" (UID: \"f63bc1b2-2137-4ae9-a2a6-ad98635bbf71\") " Mar 17 18:44:20.533391 kubelet[2258]: I0317 18:44:20.532692 2258 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f63bc1b2-2137-4ae9-a2a6-ad98635bbf71-bpf-maps\") pod \"f63bc1b2-2137-4ae9-a2a6-ad98635bbf71\" (UID: \"f63bc1b2-2137-4ae9-a2a6-ad98635bbf71\") " Mar 17 18:44:20.533391 kubelet[2258]: I0317 18:44:20.532722 2258 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f63bc1b2-2137-4ae9-a2a6-ad98635bbf71-cilium-ipsec-secrets\") pod \"f63bc1b2-2137-4ae9-a2a6-ad98635bbf71\" (UID: \"f63bc1b2-2137-4ae9-a2a6-ad98635bbf71\") " Mar 17 18:44:20.533391 kubelet[2258]: I0317 18:44:20.532749 2258 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f63bc1b2-2137-4ae9-a2a6-ad98635bbf71-lib-modules\") pod \"f63bc1b2-2137-4ae9-a2a6-ad98635bbf71\" (UID: \"f63bc1b2-2137-4ae9-a2a6-ad98635bbf71\") " Mar 17 18:44:20.533804 kubelet[2258]: I0317 18:44:20.532777 2258 
reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f63bc1b2-2137-4ae9-a2a6-ad98635bbf71-host-proc-sys-kernel\") pod \"f63bc1b2-2137-4ae9-a2a6-ad98635bbf71\" (UID: \"f63bc1b2-2137-4ae9-a2a6-ad98635bbf71\") " Mar 17 18:44:20.533804 kubelet[2258]: I0317 18:44:20.532804 2258 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f63bc1b2-2137-4ae9-a2a6-ad98635bbf71-hostproc\") pod \"f63bc1b2-2137-4ae9-a2a6-ad98635bbf71\" (UID: \"f63bc1b2-2137-4ae9-a2a6-ad98635bbf71\") " Mar 17 18:44:20.533804 kubelet[2258]: I0317 18:44:20.532831 2258 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f63bc1b2-2137-4ae9-a2a6-ad98635bbf71-cilium-config-path\") pod \"f63bc1b2-2137-4ae9-a2a6-ad98635bbf71\" (UID: \"f63bc1b2-2137-4ae9-a2a6-ad98635bbf71\") " Mar 17 18:44:20.533804 kubelet[2258]: I0317 18:44:20.532860 2258 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f63bc1b2-2137-4ae9-a2a6-ad98635bbf71-hubble-tls\") pod \"f63bc1b2-2137-4ae9-a2a6-ad98635bbf71\" (UID: \"f63bc1b2-2137-4ae9-a2a6-ad98635bbf71\") " Mar 17 18:44:20.533804 kubelet[2258]: I0317 18:44:20.532905 2258 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f63bc1b2-2137-4ae9-a2a6-ad98635bbf71-clustermesh-secrets\") pod \"f63bc1b2-2137-4ae9-a2a6-ad98635bbf71\" (UID: \"f63bc1b2-2137-4ae9-a2a6-ad98635bbf71\") " Mar 17 18:44:20.533804 kubelet[2258]: I0317 18:44:20.532950 2258 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f63bc1b2-2137-4ae9-a2a6-ad98635bbf71-host-proc-sys-net\") pod 
\"f63bc1b2-2137-4ae9-a2a6-ad98635bbf71\" (UID: \"f63bc1b2-2137-4ae9-a2a6-ad98635bbf71\") " Mar 17 18:44:20.534166 kubelet[2258]: I0317 18:44:20.532983 2258 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f63bc1b2-2137-4ae9-a2a6-ad98635bbf71-cilium-cgroup\") pod \"f63bc1b2-2137-4ae9-a2a6-ad98635bbf71\" (UID: \"f63bc1b2-2137-4ae9-a2a6-ad98635bbf71\") " Mar 17 18:44:20.534166 kubelet[2258]: I0317 18:44:20.533114 2258 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f63bc1b2-2137-4ae9-a2a6-ad98635bbf71-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "f63bc1b2-2137-4ae9-a2a6-ad98635bbf71" (UID: "f63bc1b2-2137-4ae9-a2a6-ad98635bbf71"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:44:20.534166 kubelet[2258]: I0317 18:44:20.533168 2258 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f63bc1b2-2137-4ae9-a2a6-ad98635bbf71-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "f63bc1b2-2137-4ae9-a2a6-ad98635bbf71" (UID: "f63bc1b2-2137-4ae9-a2a6-ad98635bbf71"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:44:20.534166 kubelet[2258]: I0317 18:44:20.533198 2258 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f63bc1b2-2137-4ae9-a2a6-ad98635bbf71-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "f63bc1b2-2137-4ae9-a2a6-ad98635bbf71" (UID: "f63bc1b2-2137-4ae9-a2a6-ad98635bbf71"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:44:20.534166 kubelet[2258]: I0317 18:44:20.533221 2258 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f63bc1b2-2137-4ae9-a2a6-ad98635bbf71-cni-path" (OuterVolumeSpecName: "cni-path") pod "f63bc1b2-2137-4ae9-a2a6-ad98635bbf71" (UID: "f63bc1b2-2137-4ae9-a2a6-ad98635bbf71"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:44:20.534606 kubelet[2258]: I0317 18:44:20.534541 2258 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f63bc1b2-2137-4ae9-a2a6-ad98635bbf71-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "f63bc1b2-2137-4ae9-a2a6-ad98635bbf71" (UID: "f63bc1b2-2137-4ae9-a2a6-ad98635bbf71"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:44:20.535715 kubelet[2258]: I0317 18:44:20.534688 2258 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f63bc1b2-2137-4ae9-a2a6-ad98635bbf71-hostproc" (OuterVolumeSpecName: "hostproc") pod "f63bc1b2-2137-4ae9-a2a6-ad98635bbf71" (UID: "f63bc1b2-2137-4ae9-a2a6-ad98635bbf71"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:44:20.535908 kubelet[2258]: I0317 18:44:20.534779 2258 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f63bc1b2-2137-4ae9-a2a6-ad98635bbf71-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "f63bc1b2-2137-4ae9-a2a6-ad98635bbf71" (UID: "f63bc1b2-2137-4ae9-a2a6-ad98635bbf71"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:44:20.536110 kubelet[2258]: I0317 18:44:20.534805 2258 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f63bc1b2-2137-4ae9-a2a6-ad98635bbf71-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "f63bc1b2-2137-4ae9-a2a6-ad98635bbf71" (UID: "f63bc1b2-2137-4ae9-a2a6-ad98635bbf71"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:44:20.536293 kubelet[2258]: I0317 18:44:20.535643 2258 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f63bc1b2-2137-4ae9-a2a6-ad98635bbf71-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f63bc1b2-2137-4ae9-a2a6-ad98635bbf71" (UID: "f63bc1b2-2137-4ae9-a2a6-ad98635bbf71"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:44:20.537933 kubelet[2258]: I0317 18:44:20.537889 2258 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f63bc1b2-2137-4ae9-a2a6-ad98635bbf71-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f63bc1b2-2137-4ae9-a2a6-ad98635bbf71" (UID: "f63bc1b2-2137-4ae9-a2a6-ad98635bbf71"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 17 18:44:20.538072 kubelet[2258]: I0317 18:44:20.537957 2258 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f63bc1b2-2137-4ae9-a2a6-ad98635bbf71-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "f63bc1b2-2137-4ae9-a2a6-ad98635bbf71" (UID: "f63bc1b2-2137-4ae9-a2a6-ad98635bbf71"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:44:20.548585 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-71811b596fa0753def1d12300b83ed71413dbc8f23fbf3b99377630bdd7cbede-rootfs.mount: Deactivated successfully. 
Mar 17 18:44:20.549976 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-71811b596fa0753def1d12300b83ed71413dbc8f23fbf3b99377630bdd7cbede-shm.mount: Deactivated successfully. Mar 17 18:44:20.550215 systemd[1]: var-lib-kubelet-pods-f63bc1b2\x2d2137\x2d4ae9\x2da2a6\x2dad98635bbf71-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Mar 17 18:44:20.550904 kubelet[2258]: I0317 18:44:20.550858 2258 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f63bc1b2-2137-4ae9-a2a6-ad98635bbf71-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "f63bc1b2-2137-4ae9-a2a6-ad98635bbf71" (UID: "f63bc1b2-2137-4ae9-a2a6-ad98635bbf71"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 17 18:44:20.551215 kubelet[2258]: I0317 18:44:20.551180 2258 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f63bc1b2-2137-4ae9-a2a6-ad98635bbf71-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "f63bc1b2-2137-4ae9-a2a6-ad98635bbf71" (UID: "f63bc1b2-2137-4ae9-a2a6-ad98635bbf71"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 17 18:44:20.552064 kubelet[2258]: I0317 18:44:20.552031 2258 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f63bc1b2-2137-4ae9-a2a6-ad98635bbf71-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "f63bc1b2-2137-4ae9-a2a6-ad98635bbf71" (UID: "f63bc1b2-2137-4ae9-a2a6-ad98635bbf71"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 18:44:20.555795 kubelet[2258]: I0317 18:44:20.555756 2258 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f63bc1b2-2137-4ae9-a2a6-ad98635bbf71-kube-api-access-24jg6" (OuterVolumeSpecName: "kube-api-access-24jg6") pod "f63bc1b2-2137-4ae9-a2a6-ad98635bbf71" (UID: "f63bc1b2-2137-4ae9-a2a6-ad98635bbf71"). InnerVolumeSpecName "kube-api-access-24jg6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 18:44:20.560838 systemd[1]: var-lib-kubelet-pods-f63bc1b2\x2d2137\x2d4ae9\x2da2a6\x2dad98635bbf71-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d24jg6.mount: Deactivated successfully. Mar 17 18:44:20.561107 systemd[1]: var-lib-kubelet-pods-f63bc1b2\x2d2137\x2d4ae9\x2da2a6\x2dad98635bbf71-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 17 18:44:20.561277 systemd[1]: var-lib-kubelet-pods-f63bc1b2\x2d2137\x2d4ae9\x2da2a6\x2dad98635bbf71-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Mar 17 18:44:20.634356 kubelet[2258]: I0317 18:44:20.634209 2258 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f63bc1b2-2137-4ae9-a2a6-ad98635bbf71-host-proc-sys-net\") on node \"ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal\" DevicePath \"\"" Mar 17 18:44:20.634689 kubelet[2258]: I0317 18:44:20.634656 2258 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f63bc1b2-2137-4ae9-a2a6-ad98635bbf71-cilium-cgroup\") on node \"ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal\" DevicePath \"\"" Mar 17 18:44:20.634820 kubelet[2258]: I0317 18:44:20.634807 2258 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f63bc1b2-2137-4ae9-a2a6-ad98635bbf71-etc-cni-netd\") on node \"ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal\" DevicePath \"\"" Mar 17 18:44:20.634934 kubelet[2258]: I0317 18:44:20.634921 2258 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f63bc1b2-2137-4ae9-a2a6-ad98635bbf71-cilium-run\") on node \"ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal\" DevicePath \"\"" Mar 17 18:44:20.635032 kubelet[2258]: I0317 18:44:20.635020 2258 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f63bc1b2-2137-4ae9-a2a6-ad98635bbf71-cni-path\") on node \"ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal\" DevicePath \"\"" Mar 17 18:44:20.635127 kubelet[2258]: I0317 18:44:20.635115 2258 reconciler_common.go:289] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f63bc1b2-2137-4ae9-a2a6-ad98635bbf71-cilium-ipsec-secrets\") on node \"ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal\" DevicePath \"\"" Mar 17 18:44:20.635230 kubelet[2258]: I0317 18:44:20.635218 2258 reconciler_common.go:289] 
"Volume detached for volume \"kube-api-access-24jg6\" (UniqueName: \"kubernetes.io/projected/f63bc1b2-2137-4ae9-a2a6-ad98635bbf71-kube-api-access-24jg6\") on node \"ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal\" DevicePath \"\"" Mar 17 18:44:20.635325 kubelet[2258]: I0317 18:44:20.635313 2258 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f63bc1b2-2137-4ae9-a2a6-ad98635bbf71-xtables-lock\") on node \"ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal\" DevicePath \"\"" Mar 17 18:44:20.635415 kubelet[2258]: I0317 18:44:20.635403 2258 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f63bc1b2-2137-4ae9-a2a6-ad98635bbf71-bpf-maps\") on node \"ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal\" DevicePath \"\"" Mar 17 18:44:20.635555 kubelet[2258]: I0317 18:44:20.635539 2258 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f63bc1b2-2137-4ae9-a2a6-ad98635bbf71-hostproc\") on node \"ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal\" DevicePath \"\"" Mar 17 18:44:20.635686 kubelet[2258]: I0317 18:44:20.635672 2258 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f63bc1b2-2137-4ae9-a2a6-ad98635bbf71-lib-modules\") on node \"ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal\" DevicePath \"\"" Mar 17 18:44:20.635796 kubelet[2258]: I0317 18:44:20.635779 2258 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f63bc1b2-2137-4ae9-a2a6-ad98635bbf71-host-proc-sys-kernel\") on node \"ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal\" DevicePath \"\"" Mar 17 18:44:20.635893 kubelet[2258]: I0317 18:44:20.635881 2258 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/f63bc1b2-2137-4ae9-a2a6-ad98635bbf71-clustermesh-secrets\") on node \"ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal\" DevicePath \"\"" Mar 17 18:44:20.635985 kubelet[2258]: I0317 18:44:20.635971 2258 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f63bc1b2-2137-4ae9-a2a6-ad98635bbf71-cilium-config-path\") on node \"ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal\" DevicePath \"\"" Mar 17 18:44:20.636084 kubelet[2258]: I0317 18:44:20.636072 2258 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f63bc1b2-2137-4ae9-a2a6-ad98635bbf71-hubble-tls\") on node \"ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal\" DevicePath \"\"" Mar 17 18:44:21.360240 kubelet[2258]: I0317 18:44:21.360201 2258 scope.go:117] "RemoveContainer" containerID="da3e7e2cbfe19c37b7e4539715b445cd55335b73e1fc4ce29e7154b206bfd73d" Mar 17 18:44:21.361951 env[1335]: time="2025-03-17T18:44:21.361903279Z" level=info msg="RemoveContainer for \"da3e7e2cbfe19c37b7e4539715b445cd55335b73e1fc4ce29e7154b206bfd73d\"" Mar 17 18:44:21.367226 env[1335]: time="2025-03-17T18:44:21.367171684Z" level=info msg="RemoveContainer for \"da3e7e2cbfe19c37b7e4539715b445cd55335b73e1fc4ce29e7154b206bfd73d\" returns successfully" Mar 17 18:44:21.408757 kubelet[2258]: I0317 18:44:21.408698 2258 topology_manager.go:215] "Topology Admit Handler" podUID="0c643464-b993-4539-9af5-1cba5313b063" podNamespace="kube-system" podName="cilium-694mp" Mar 17 18:44:21.409067 kubelet[2258]: E0317 18:44:21.409046 2258 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f63bc1b2-2137-4ae9-a2a6-ad98635bbf71" containerName="mount-cgroup" Mar 17 18:44:21.409238 kubelet[2258]: I0317 18:44:21.409219 2258 memory_manager.go:354] "RemoveStaleState removing state" podUID="f63bc1b2-2137-4ae9-a2a6-ad98635bbf71" containerName="mount-cgroup" Mar 17 18:44:21.543615 
kubelet[2258]: I0317 18:44:21.543559 2258 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4z28\" (UniqueName: \"kubernetes.io/projected/0c643464-b993-4539-9af5-1cba5313b063-kube-api-access-g4z28\") pod \"cilium-694mp\" (UID: \"0c643464-b993-4539-9af5-1cba5313b063\") " pod="kube-system/cilium-694mp" Mar 17 18:44:21.544271 kubelet[2258]: I0317 18:44:21.543641 2258 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0c643464-b993-4539-9af5-1cba5313b063-hostproc\") pod \"cilium-694mp\" (UID: \"0c643464-b993-4539-9af5-1cba5313b063\") " pod="kube-system/cilium-694mp" Mar 17 18:44:21.544271 kubelet[2258]: I0317 18:44:21.543675 2258 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0c643464-b993-4539-9af5-1cba5313b063-clustermesh-secrets\") pod \"cilium-694mp\" (UID: \"0c643464-b993-4539-9af5-1cba5313b063\") " pod="kube-system/cilium-694mp" Mar 17 18:44:21.544271 kubelet[2258]: I0317 18:44:21.543700 2258 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0c643464-b993-4539-9af5-1cba5313b063-cilium-config-path\") pod \"cilium-694mp\" (UID: \"0c643464-b993-4539-9af5-1cba5313b063\") " pod="kube-system/cilium-694mp" Mar 17 18:44:21.544271 kubelet[2258]: I0317 18:44:21.543728 2258 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0c643464-b993-4539-9af5-1cba5313b063-host-proc-sys-net\") pod \"cilium-694mp\" (UID: \"0c643464-b993-4539-9af5-1cba5313b063\") " pod="kube-system/cilium-694mp" Mar 17 18:44:21.544271 kubelet[2258]: I0317 18:44:21.543755 2258 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0c643464-b993-4539-9af5-1cba5313b063-etc-cni-netd\") pod \"cilium-694mp\" (UID: \"0c643464-b993-4539-9af5-1cba5313b063\") " pod="kube-system/cilium-694mp" Mar 17 18:44:21.544271 kubelet[2258]: I0317 18:44:21.543783 2258 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0c643464-b993-4539-9af5-1cba5313b063-cilium-ipsec-secrets\") pod \"cilium-694mp\" (UID: \"0c643464-b993-4539-9af5-1cba5313b063\") " pod="kube-system/cilium-694mp" Mar 17 18:44:21.544657 kubelet[2258]: I0317 18:44:21.543812 2258 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0c643464-b993-4539-9af5-1cba5313b063-hubble-tls\") pod \"cilium-694mp\" (UID: \"0c643464-b993-4539-9af5-1cba5313b063\") " pod="kube-system/cilium-694mp" Mar 17 18:44:21.544657 kubelet[2258]: I0317 18:44:21.543842 2258 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0c643464-b993-4539-9af5-1cba5313b063-cilium-run\") pod \"cilium-694mp\" (UID: \"0c643464-b993-4539-9af5-1cba5313b063\") " pod="kube-system/cilium-694mp" Mar 17 18:44:21.544657 kubelet[2258]: I0317 18:44:21.543867 2258 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0c643464-b993-4539-9af5-1cba5313b063-cilium-cgroup\") pod \"cilium-694mp\" (UID: \"0c643464-b993-4539-9af5-1cba5313b063\") " pod="kube-system/cilium-694mp" Mar 17 18:44:21.544657 kubelet[2258]: I0317 18:44:21.543892 2258 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/0c643464-b993-4539-9af5-1cba5313b063-cni-path\") pod \"cilium-694mp\" (UID: \"0c643464-b993-4539-9af5-1cba5313b063\") " pod="kube-system/cilium-694mp" Mar 17 18:44:21.544657 kubelet[2258]: I0317 18:44:21.543955 2258 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0c643464-b993-4539-9af5-1cba5313b063-xtables-lock\") pod \"cilium-694mp\" (UID: \"0c643464-b993-4539-9af5-1cba5313b063\") " pod="kube-system/cilium-694mp" Mar 17 18:44:21.544657 kubelet[2258]: I0317 18:44:21.543994 2258 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0c643464-b993-4539-9af5-1cba5313b063-host-proc-sys-kernel\") pod \"cilium-694mp\" (UID: \"0c643464-b993-4539-9af5-1cba5313b063\") " pod="kube-system/cilium-694mp" Mar 17 18:44:21.544976 kubelet[2258]: I0317 18:44:21.544019 2258 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0c643464-b993-4539-9af5-1cba5313b063-bpf-maps\") pod \"cilium-694mp\" (UID: \"0c643464-b993-4539-9af5-1cba5313b063\") " pod="kube-system/cilium-694mp" Mar 17 18:44:21.544976 kubelet[2258]: I0317 18:44:21.544055 2258 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0c643464-b993-4539-9af5-1cba5313b063-lib-modules\") pod \"cilium-694mp\" (UID: \"0c643464-b993-4539-9af5-1cba5313b063\") " pod="kube-system/cilium-694mp" Mar 17 18:44:21.723998 env[1335]: time="2025-03-17T18:44:21.723842294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-694mp,Uid:0c643464-b993-4539-9af5-1cba5313b063,Namespace:kube-system,Attempt:0,}" Mar 17 18:44:21.752870 env[1335]: time="2025-03-17T18:44:21.752534376Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:44:21.752870 env[1335]: time="2025-03-17T18:44:21.752612555Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:44:21.752870 env[1335]: time="2025-03-17T18:44:21.752633596Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:44:21.753419 env[1335]: time="2025-03-17T18:44:21.753350624Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/025a9b1c7327e813c4a30ab013118c101222eb221c22be0fc35bb7e8d93302b4 pid=4191 runtime=io.containerd.runc.v2 Mar 17 18:44:21.810352 env[1335]: time="2025-03-17T18:44:21.810289901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-694mp,Uid:0c643464-b993-4539-9af5-1cba5313b063,Namespace:kube-system,Attempt:0,} returns sandbox id \"025a9b1c7327e813c4a30ab013118c101222eb221c22be0fc35bb7e8d93302b4\"" Mar 17 18:44:21.815323 env[1335]: time="2025-03-17T18:44:21.815244287Z" level=info msg="CreateContainer within sandbox \"025a9b1c7327e813c4a30ab013118c101222eb221c22be0fc35bb7e8d93302b4\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 17 18:44:21.831251 env[1335]: time="2025-03-17T18:44:21.831178890Z" level=info msg="CreateContainer within sandbox \"025a9b1c7327e813c4a30ab013118c101222eb221c22be0fc35bb7e8d93302b4\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"170df6863429705dd84d5fbcd109e6b02f2a523e2d3e18876135fc748894c7a0\"" Mar 17 18:44:21.833664 env[1335]: time="2025-03-17T18:44:21.832129444Z" level=info msg="StartContainer for \"170df6863429705dd84d5fbcd109e6b02f2a523e2d3e18876135fc748894c7a0\"" Mar 17 18:44:21.900607 env[1335]: time="2025-03-17T18:44:21.900547880Z" level=info msg="StartContainer for 
\"170df6863429705dd84d5fbcd109e6b02f2a523e2d3e18876135fc748894c7a0\" returns successfully" Mar 17 18:44:21.944569 env[1335]: time="2025-03-17T18:44:21.944493110Z" level=info msg="shim disconnected" id=170df6863429705dd84d5fbcd109e6b02f2a523e2d3e18876135fc748894c7a0 Mar 17 18:44:21.945360 env[1335]: time="2025-03-17T18:44:21.945297986Z" level=warning msg="cleaning up after shim disconnected" id=170df6863429705dd84d5fbcd109e6b02f2a523e2d3e18876135fc748894c7a0 namespace=k8s.io Mar 17 18:44:21.945360 env[1335]: time="2025-03-17T18:44:21.945334572Z" level=info msg="cleaning up dead shim" Mar 17 18:44:21.958420 env[1335]: time="2025-03-17T18:44:21.958243385Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:44:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4275 runtime=io.containerd.runc.v2\n" Mar 17 18:44:21.971942 kubelet[2258]: I0317 18:44:21.971880 2258 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f63bc1b2-2137-4ae9-a2a6-ad98635bbf71" path="/var/lib/kubelet/pods/f63bc1b2-2137-4ae9-a2a6-ad98635bbf71/volumes" Mar 17 18:44:22.369829 env[1335]: time="2025-03-17T18:44:22.369756284Z" level=info msg="CreateContainer within sandbox \"025a9b1c7327e813c4a30ab013118c101222eb221c22be0fc35bb7e8d93302b4\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 17 18:44:22.388060 env[1335]: time="2025-03-17T18:44:22.387978982Z" level=info msg="CreateContainer within sandbox \"025a9b1c7327e813c4a30ab013118c101222eb221c22be0fc35bb7e8d93302b4\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f09a288796571bb3d4572daf3560a6dc0a8f5a1ae0dfc9b226c377792bb49433\"" Mar 17 18:44:22.389054 env[1335]: time="2025-03-17T18:44:22.389001605Z" level=info msg="StartContainer for \"f09a288796571bb3d4572daf3560a6dc0a8f5a1ae0dfc9b226c377792bb49433\"" Mar 17 18:44:22.468134 env[1335]: time="2025-03-17T18:44:22.468080994Z" level=info msg="StartContainer for 
\"f09a288796571bb3d4572daf3560a6dc0a8f5a1ae0dfc9b226c377792bb49433\" returns successfully" Mar 17 18:44:22.497717 env[1335]: time="2025-03-17T18:44:22.497646513Z" level=info msg="shim disconnected" id=f09a288796571bb3d4572daf3560a6dc0a8f5a1ae0dfc9b226c377792bb49433 Mar 17 18:44:22.498088 env[1335]: time="2025-03-17T18:44:22.498052912Z" level=warning msg="cleaning up after shim disconnected" id=f09a288796571bb3d4572daf3560a6dc0a8f5a1ae0dfc9b226c377792bb49433 namespace=k8s.io Mar 17 18:44:22.498314 env[1335]: time="2025-03-17T18:44:22.498288278Z" level=info msg="cleaning up dead shim" Mar 17 18:44:22.526587 env[1335]: time="2025-03-17T18:44:22.526212607Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:44:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4338 runtime=io.containerd.runc.v2\n" Mar 17 18:44:23.386481 env[1335]: time="2025-03-17T18:44:23.383533935Z" level=info msg="CreateContainer within sandbox \"025a9b1c7327e813c4a30ab013118c101222eb221c22be0fc35bb7e8d93302b4\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 17 18:44:23.434323 env[1335]: time="2025-03-17T18:44:23.432777007Z" level=info msg="CreateContainer within sandbox \"025a9b1c7327e813c4a30ab013118c101222eb221c22be0fc35bb7e8d93302b4\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"04cfa1b89729353fe023f954bad5a6d9c693497279f860a8b6243340fedc6082\"" Mar 17 18:44:23.433305 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2208615876.mount: Deactivated successfully. 
Mar 17 18:44:23.435708 env[1335]: time="2025-03-17T18:44:23.435629308Z" level=info msg="StartContainer for \"04cfa1b89729353fe023f954bad5a6d9c693497279f860a8b6243340fedc6082\"" Mar 17 18:44:23.556928 env[1335]: time="2025-03-17T18:44:23.556861174Z" level=info msg="StartContainer for \"04cfa1b89729353fe023f954bad5a6d9c693497279f860a8b6243340fedc6082\" returns successfully" Mar 17 18:44:23.595134 env[1335]: time="2025-03-17T18:44:23.595048011Z" level=info msg="shim disconnected" id=04cfa1b89729353fe023f954bad5a6d9c693497279f860a8b6243340fedc6082 Mar 17 18:44:23.595134 env[1335]: time="2025-03-17T18:44:23.595115776Z" level=warning msg="cleaning up after shim disconnected" id=04cfa1b89729353fe023f954bad5a6d9c693497279f860a8b6243340fedc6082 namespace=k8s.io Mar 17 18:44:23.595134 env[1335]: time="2025-03-17T18:44:23.595133060Z" level=info msg="cleaning up dead shim" Mar 17 18:44:23.610627 env[1335]: time="2025-03-17T18:44:23.610557030Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:44:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4399 runtime=io.containerd.runc.v2\n" Mar 17 18:44:23.668158 systemd[1]: run-containerd-runc-k8s.io-04cfa1b89729353fe023f954bad5a6d9c693497279f860a8b6243340fedc6082-runc.0ueBp5.mount: Deactivated successfully. Mar 17 18:44:23.668384 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-04cfa1b89729353fe023f954bad5a6d9c693497279f860a8b6243340fedc6082-rootfs.mount: Deactivated successfully. 
Mar 17 18:44:23.948582 env[1335]: time="2025-03-17T18:44:23.948185085Z" level=info msg="StopPodSandbox for \"ae62f2624b719dc761785ca349c6118b61da12d569a7030f86c60e2d56e26cc7\"" Mar 17 18:44:23.948582 env[1335]: time="2025-03-17T18:44:23.948319339Z" level=info msg="TearDown network for sandbox \"ae62f2624b719dc761785ca349c6118b61da12d569a7030f86c60e2d56e26cc7\" successfully" Mar 17 18:44:23.948582 env[1335]: time="2025-03-17T18:44:23.948370815Z" level=info msg="StopPodSandbox for \"ae62f2624b719dc761785ca349c6118b61da12d569a7030f86c60e2d56e26cc7\" returns successfully" Mar 17 18:44:23.949226 env[1335]: time="2025-03-17T18:44:23.949185702Z" level=info msg="RemovePodSandbox for \"ae62f2624b719dc761785ca349c6118b61da12d569a7030f86c60e2d56e26cc7\"" Mar 17 18:44:23.949379 env[1335]: time="2025-03-17T18:44:23.949230960Z" level=info msg="Forcibly stopping sandbox \"ae62f2624b719dc761785ca349c6118b61da12d569a7030f86c60e2d56e26cc7\"" Mar 17 18:44:23.949379 env[1335]: time="2025-03-17T18:44:23.949342403Z" level=info msg="TearDown network for sandbox \"ae62f2624b719dc761785ca349c6118b61da12d569a7030f86c60e2d56e26cc7\" successfully" Mar 17 18:44:23.953905 env[1335]: time="2025-03-17T18:44:23.953846366Z" level=info msg="RemovePodSandbox \"ae62f2624b719dc761785ca349c6118b61da12d569a7030f86c60e2d56e26cc7\" returns successfully" Mar 17 18:44:23.954428 env[1335]: time="2025-03-17T18:44:23.954391722Z" level=info msg="StopPodSandbox for \"f987a26c2f04950d2a98d39f5571c9e0b410a5ff38a62994c715e52d062c1991\"" Mar 17 18:44:23.954734 env[1335]: time="2025-03-17T18:44:23.954667999Z" level=info msg="TearDown network for sandbox \"f987a26c2f04950d2a98d39f5571c9e0b410a5ff38a62994c715e52d062c1991\" successfully" Mar 17 18:44:23.954734 env[1335]: time="2025-03-17T18:44:23.954719387Z" level=info msg="StopPodSandbox for \"f987a26c2f04950d2a98d39f5571c9e0b410a5ff38a62994c715e52d062c1991\" returns successfully" Mar 17 18:44:23.955116 env[1335]: time="2025-03-17T18:44:23.955083477Z" level=info 
msg="RemovePodSandbox for \"f987a26c2f04950d2a98d39f5571c9e0b410a5ff38a62994c715e52d062c1991\"" Mar 17 18:44:23.955213 env[1335]: time="2025-03-17T18:44:23.955123215Z" level=info msg="Forcibly stopping sandbox \"f987a26c2f04950d2a98d39f5571c9e0b410a5ff38a62994c715e52d062c1991\"" Mar 17 18:44:23.955282 env[1335]: time="2025-03-17T18:44:23.955227647Z" level=info msg="TearDown network for sandbox \"f987a26c2f04950d2a98d39f5571c9e0b410a5ff38a62994c715e52d062c1991\" successfully" Mar 17 18:44:23.959528 env[1335]: time="2025-03-17T18:44:23.959473660Z" level=info msg="RemovePodSandbox \"f987a26c2f04950d2a98d39f5571c9e0b410a5ff38a62994c715e52d062c1991\" returns successfully" Mar 17 18:44:23.959986 env[1335]: time="2025-03-17T18:44:23.959949776Z" level=info msg="StopPodSandbox for \"71811b596fa0753def1d12300b83ed71413dbc8f23fbf3b99377630bdd7cbede\"" Mar 17 18:44:23.960128 env[1335]: time="2025-03-17T18:44:23.960063010Z" level=info msg="TearDown network for sandbox \"71811b596fa0753def1d12300b83ed71413dbc8f23fbf3b99377630bdd7cbede\" successfully" Mar 17 18:44:23.960214 env[1335]: time="2025-03-17T18:44:23.960127558Z" level=info msg="StopPodSandbox for \"71811b596fa0753def1d12300b83ed71413dbc8f23fbf3b99377630bdd7cbede\" returns successfully" Mar 17 18:44:23.960588 env[1335]: time="2025-03-17T18:44:23.960542891Z" level=info msg="RemovePodSandbox for \"71811b596fa0753def1d12300b83ed71413dbc8f23fbf3b99377630bdd7cbede\"" Mar 17 18:44:23.960700 env[1335]: time="2025-03-17T18:44:23.960579956Z" level=info msg="Forcibly stopping sandbox \"71811b596fa0753def1d12300b83ed71413dbc8f23fbf3b99377630bdd7cbede\"" Mar 17 18:44:23.960700 env[1335]: time="2025-03-17T18:44:23.960683384Z" level=info msg="TearDown network for sandbox \"71811b596fa0753def1d12300b83ed71413dbc8f23fbf3b99377630bdd7cbede\" successfully" Mar 17 18:44:23.964929 env[1335]: time="2025-03-17T18:44:23.964867860Z" level=info msg="RemovePodSandbox \"71811b596fa0753def1d12300b83ed71413dbc8f23fbf3b99377630bdd7cbede\" returns 
successfully" Mar 17 18:44:24.094860 kubelet[2258]: E0317 18:44:24.094778 2258 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 18:44:24.384487 env[1335]: time="2025-03-17T18:44:24.384285870Z" level=info msg="CreateContainer within sandbox \"025a9b1c7327e813c4a30ab013118c101222eb221c22be0fc35bb7e8d93302b4\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 17 18:44:24.405099 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount446228374.mount: Deactivated successfully. Mar 17 18:44:24.415345 env[1335]: time="2025-03-17T18:44:24.415287138Z" level=info msg="CreateContainer within sandbox \"025a9b1c7327e813c4a30ab013118c101222eb221c22be0fc35bb7e8d93302b4\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"97ae9fe30c4401d3de25419b001100c01c522abaca71469056187a0e045e1625\"" Mar 17 18:44:24.420244 env[1335]: time="2025-03-17T18:44:24.420198247Z" level=info msg="StartContainer for \"97ae9fe30c4401d3de25419b001100c01c522abaca71469056187a0e045e1625\"" Mar 17 18:44:24.507804 env[1335]: time="2025-03-17T18:44:24.507742733Z" level=info msg="StartContainer for \"97ae9fe30c4401d3de25419b001100c01c522abaca71469056187a0e045e1625\" returns successfully" Mar 17 18:44:24.547522 env[1335]: time="2025-03-17T18:44:24.547378832Z" level=error msg="collecting metrics for 97ae9fe30c4401d3de25419b001100c01c522abaca71469056187a0e045e1625" error="cgroups: cgroup deleted: unknown" Mar 17 18:44:24.549571 env[1335]: time="2025-03-17T18:44:24.549396597Z" level=info msg="shim disconnected" id=97ae9fe30c4401d3de25419b001100c01c522abaca71469056187a0e045e1625 Mar 17 18:44:24.549571 env[1335]: time="2025-03-17T18:44:24.549471837Z" level=warning msg="cleaning up after shim disconnected" id=97ae9fe30c4401d3de25419b001100c01c522abaca71469056187a0e045e1625 namespace=k8s.io Mar 17 18:44:24.549571 env[1335]: 
time="2025-03-17T18:44:24.549487953Z" level=info msg="cleaning up dead shim" Mar 17 18:44:24.575517 env[1335]: time="2025-03-17T18:44:24.575428393Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:44:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4460 runtime=io.containerd.runc.v2\n" Mar 17 18:44:24.668241 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-97ae9fe30c4401d3de25419b001100c01c522abaca71469056187a0e045e1625-rootfs.mount: Deactivated successfully. Mar 17 18:44:25.391026 env[1335]: time="2025-03-17T18:44:25.390960536Z" level=info msg="CreateContainer within sandbox \"025a9b1c7327e813c4a30ab013118c101222eb221c22be0fc35bb7e8d93302b4\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 17 18:44:25.417302 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4074087927.mount: Deactivated successfully. Mar 17 18:44:25.434526 env[1335]: time="2025-03-17T18:44:25.434411179Z" level=info msg="CreateContainer within sandbox \"025a9b1c7327e813c4a30ab013118c101222eb221c22be0fc35bb7e8d93302b4\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"89de985afe266db94b92f68bad8d921f3eb7448cb2325781db37aa014324fe11\"" Mar 17 18:44:25.436978 env[1335]: time="2025-03-17T18:44:25.435729615Z" level=info msg="StartContainer for \"89de985afe266db94b92f68bad8d921f3eb7448cb2325781db37aa014324fe11\"" Mar 17 18:44:25.523470 env[1335]: time="2025-03-17T18:44:25.521432754Z" level=info msg="StartContainer for \"89de985afe266db94b92f68bad8d921f3eb7448cb2325781db37aa014324fe11\" returns successfully" Mar 17 18:44:26.011505 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Mar 17 18:44:26.414581 kubelet[2258]: I0317 18:44:26.414476 2258 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-694mp" podStartSLOduration=5.414414318 podStartE2EDuration="5.414414318s" podCreationTimestamp="2025-03-17 18:44:21 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:44:26.412968207 +0000 UTC m=+122.646759602" watchObservedRunningTime="2025-03-17 18:44:26.414414318 +0000 UTC m=+122.648205713" Mar 17 18:44:26.435648 kubelet[2258]: I0317 18:44:26.435564 2258 setters.go:580] "Node became not ready" node="ci-3510-3-7-347b294db6c45ef4d774.c.flatcar-212911.internal" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-03-17T18:44:26Z","lastTransitionTime":"2025-03-17T18:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Mar 17 18:44:26.828104 systemd[1]: run-containerd-runc-k8s.io-89de985afe266db94b92f68bad8d921f3eb7448cb2325781db37aa014324fe11-runc.tuLvKe.mount: Deactivated successfully. Mar 17 18:44:29.034339 systemd[1]: run-containerd-runc-k8s.io-89de985afe266db94b92f68bad8d921f3eb7448cb2325781db37aa014324fe11-runc.Pyxu2X.mount: Deactivated successfully. Mar 17 18:44:29.252784 systemd-networkd[1078]: lxc_health: Link UP Mar 17 18:44:29.268475 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Mar 17 18:44:29.269476 systemd-networkd[1078]: lxc_health: Gained carrier Mar 17 18:44:30.531631 systemd-networkd[1078]: lxc_health: Gained IPv6LL Mar 17 18:44:31.468117 systemd[1]: run-containerd-runc-k8s.io-89de985afe266db94b92f68bad8d921f3eb7448cb2325781db37aa014324fe11-runc.gEMV5l.mount: Deactivated successfully. Mar 17 18:44:33.844420 systemd[1]: run-containerd-runc-k8s.io-89de985afe266db94b92f68bad8d921f3eb7448cb2325781db37aa014324fe11-runc.pYlcFf.mount: Deactivated successfully. Mar 17 18:44:36.058269 systemd[1]: run-containerd-runc-k8s.io-89de985afe266db94b92f68bad8d921f3eb7448cb2325781db37aa014324fe11-runc.wwT0Op.mount: Deactivated successfully. 
Mar 17 18:44:36.176711 sshd[4134]: pam_unix(sshd:session): session closed for user core Mar 17 18:44:36.180924 systemd[1]: sshd@26-10.128.0.50:22-139.178.89.65:42676.service: Deactivated successfully. Mar 17 18:44:36.182241 systemd[1]: session-25.scope: Deactivated successfully. Mar 17 18:44:36.184292 systemd-logind[1317]: Session 25 logged out. Waiting for processes to exit. Mar 17 18:44:36.186198 systemd-logind[1317]: Removed session 25.