Dec 13 02:14:28.087389 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Thu Dec 12 23:50:37 -00 2024
Dec 13 02:14:28.087428 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c
Dec 13 02:14:28.087446 kernel: BIOS-provided physical RAM map:
Dec 13 02:14:28.087464 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved
Dec 13 02:14:28.087476 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable
Dec 13 02:14:28.087490 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved
Dec 13 02:14:28.087510 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable
Dec 13 02:14:28.087525 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved
Dec 13 02:14:28.087539 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bd276fff] usable
Dec 13 02:14:28.087561 kernel: BIOS-e820: [mem 0x00000000bd277000-0x00000000bd280fff] ACPI data
Dec 13 02:14:28.087576 kernel: BIOS-e820: [mem 0x00000000bd281000-0x00000000bf8ecfff] usable
Dec 13 02:14:28.087589 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bfb6cfff] reserved
Dec 13 02:14:28.087603 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data
Dec 13 02:14:28.087617 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS
Dec 13 02:14:28.087641 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable
Dec 13 02:14:28.087657 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved
Dec 13 02:14:28.087672 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable
Dec 13 02:14:28.087687 kernel: NX (Execute Disable) protection: active
Dec 13 02:14:28.087702 kernel: efi: EFI v2.70 by EDK II
Dec 13 02:14:28.087717 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000 RNG=0xbfb73018 TPMEventLog=0xbd277018
Dec 13 02:14:28.087733 kernel: random: crng init done
Dec 13 02:14:28.087748 kernel: SMBIOS 2.4 present.
Dec 13 02:14:28.087767 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024
Dec 13 02:14:28.087782 kernel: Hypervisor detected: KVM
Dec 13 02:14:28.087795 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 13 02:14:28.087809 kernel: kvm-clock: cpu 0, msr 18419b001, primary cpu clock
Dec 13 02:14:28.087823 kernel: kvm-clock: using sched offset of 12724350522 cycles
Dec 13 02:14:28.087838 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 13 02:14:28.087854 kernel: tsc: Detected 2299.998 MHz processor
Dec 13 02:14:28.087869 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 02:14:28.087885 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 02:14:28.087901 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000
Dec 13 02:14:28.087920 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 02:14:28.087936 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000
Dec 13 02:14:28.087951 kernel: Using GB pages for direct mapping
Dec 13 02:14:28.087967 kernel: Secure boot disabled
Dec 13 02:14:28.087982 kernel: ACPI: Early table checksum verification disabled
Dec 13 02:14:28.087997 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google)
Dec 13 02:14:28.088012 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013)
Dec 13 02:14:28.088029 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001)
Dec 13 02:14:28.088055 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001)
Dec 13 02:14:28.088071 kernel: ACPI: FACS 0x00000000BFBF2000 000040
Dec 13 02:14:28.088087 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20240322)
Dec 13 02:14:28.088104 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001)
Dec 13 02:14:28.088121 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001)
Dec 13 02:14:28.088138 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001)
Dec 13 02:14:28.088178 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001)
Dec 13 02:14:28.088195 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001)
Dec 13 02:14:28.088211 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3]
Dec 13 02:14:28.088227 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63]
Dec 13 02:14:28.088243 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f]
Dec 13 02:14:28.088260 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315]
Dec 13 02:14:28.088276 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033]
Dec 13 02:14:28.088293 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7]
Dec 13 02:14:28.088309 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075]
Dec 13 02:14:28.088330 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f]
Dec 13 02:14:28.088347 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027]
Dec 13 02:14:28.088367 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Dec 13 02:14:28.088383 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Dec 13 02:14:28.088399 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Dec 13 02:14:28.088416 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff]
Dec 13 02:14:28.088432 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff]
Dec 13 02:14:28.088449 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff]
Dec 13 02:14:28.088466 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff]
Dec 13 02:14:28.088487 kernel: NODE_DATA(0) allocated [mem 0x21fffa000-0x21fffffff]
Dec 13 02:14:28.088503 kernel: Zone ranges:
Dec 13 02:14:28.088519 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 02:14:28.088534 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Dec 13 02:14:28.088549 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff]
Dec 13 02:14:28.088573 kernel: Movable zone start for each node
Dec 13 02:14:28.088589 kernel: Early memory node ranges
Dec 13 02:14:28.088606 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff]
Dec 13 02:14:28.088623 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff]
Dec 13 02:14:28.088644 kernel: node 0: [mem 0x0000000000100000-0x00000000bd276fff]
Dec 13 02:14:28.088660 kernel: node 0: [mem 0x00000000bd281000-0x00000000bf8ecfff]
Dec 13 02:14:28.088676 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff]
Dec 13 02:14:28.088693 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff]
Dec 13 02:14:28.088710 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff]
Dec 13 02:14:28.088726 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 02:14:28.088743 kernel: On node 0, zone DMA: 11 pages in unavailable ranges
Dec 13 02:14:28.088760 kernel: On node 0, zone DMA: 104 pages in unavailable ranges
Dec 13 02:14:28.088776 kernel: On node 0, zone DMA32: 10 pages in unavailable ranges
Dec 13 02:14:28.088797 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Dec 13 02:14:28.088813 kernel: On node 0, zone Normal: 32 pages in unavailable ranges
Dec 13 02:14:28.088829 kernel: ACPI: PM-Timer IO Port: 0xb008
Dec 13 02:14:28.088846 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 13 02:14:28.088863 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 13 02:14:28.088879 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 13 02:14:28.088896 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 02:14:28.088913 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 13 02:14:28.088929 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 13 02:14:28.088948 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 02:14:28.088964 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Dec 13 02:14:28.088981 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Dec 13 02:14:28.088996 kernel: Booting paravirtualized kernel on KVM
Dec 13 02:14:28.089013 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 02:14:28.089030 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Dec 13 02:14:28.089046 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576
Dec 13 02:14:28.089063 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152
Dec 13 02:14:28.089079 kernel: pcpu-alloc: [0] 0 1
Dec 13 02:14:28.089099 kernel: kvm-guest: PV spinlocks enabled
Dec 13 02:14:28.089116 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 13 02:14:28.089133 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932270
Dec 13 02:14:28.089150 kernel: Policy zone: Normal
Dec 13 02:14:28.089184 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c
Dec 13 02:14:28.089201 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 02:14:28.089218 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Dec 13 02:14:28.089234 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 13 02:14:28.089250 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 02:14:28.089270 kernel: Memory: 7515408K/7860544K available (12294K kernel code, 2275K rwdata, 13716K rodata, 47476K init, 4108K bss, 344876K reserved, 0K cma-reserved)
Dec 13 02:14:28.089287 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 13 02:14:28.089303 kernel: Kernel/User page tables isolation: enabled
Dec 13 02:14:28.089320 kernel: ftrace: allocating 34549 entries in 135 pages
Dec 13 02:14:28.089337 kernel: ftrace: allocated 135 pages with 4 groups
Dec 13 02:14:28.089353 kernel: rcu: Hierarchical RCU implementation.
Dec 13 02:14:28.089371 kernel: rcu: RCU event tracing is enabled.
Dec 13 02:14:28.089388 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 13 02:14:28.089409 kernel: Rude variant of Tasks RCU enabled.
Dec 13 02:14:28.089439 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 02:14:28.089457 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 02:14:28.089477 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 13 02:14:28.089495 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Dec 13 02:14:28.089513 kernel: Console: colour dummy device 80x25
Dec 13 02:14:28.089530 kernel: printk: console [ttyS0] enabled
Dec 13 02:14:28.089548 kernel: ACPI: Core revision 20210730
Dec 13 02:14:28.089574 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 02:14:28.089592 kernel: x2apic enabled
Dec 13 02:14:28.089614 kernel: Switched APIC routing to physical x2apic.
Dec 13 02:14:28.089632 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1
Dec 13 02:14:28.089651 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Dec 13 02:14:28.089669 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998)
Dec 13 02:14:28.089687 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
Dec 13 02:14:28.089705 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
Dec 13 02:14:28.089723 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 02:14:28.089745 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Dec 13 02:14:28.089763 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Dec 13 02:14:28.089780 kernel: Spectre V2 : Mitigation: IBRS
Dec 13 02:14:28.089798 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 02:14:28.089815 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 13 02:14:28.089833 kernel: RETBleed: Mitigation: IBRS
Dec 13 02:14:28.089850 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 13 02:14:28.089868 kernel: Spectre V2 : User space: Mitigation: STIBP via seccomp and prctl
Dec 13 02:14:28.089886 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Dec 13 02:14:28.089907 kernel: MDS: Mitigation: Clear CPU buffers
Dec 13 02:14:28.089924 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Dec 13 02:14:28.089943 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 02:14:28.089961 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 02:14:28.089978 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 02:14:28.089996 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 13 02:14:28.090014 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Dec 13 02:14:28.090031 kernel: Freeing SMP alternatives memory: 32K
Dec 13 02:14:28.090049 kernel: pid_max: default: 32768 minimum: 301
Dec 13 02:14:28.090070 kernel: LSM: Security Framework initializing
Dec 13 02:14:28.090087 kernel: SELinux: Initializing.
Dec 13 02:14:28.090106 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 02:14:28.090123 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 02:14:28.090141 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0)
Dec 13 02:14:28.090172 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only.
Dec 13 02:14:28.090190 kernel: signal: max sigframe size: 1776
Dec 13 02:14:28.090207 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 02:14:28.090224 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Dec 13 02:14:28.090247 kernel: smp: Bringing up secondary CPUs ...
Dec 13 02:14:28.090265 kernel: x86: Booting SMP configuration:
Dec 13 02:14:28.090282 kernel: .... node #0, CPUs: #1
Dec 13 02:14:28.090299 kernel: kvm-clock: cpu 1, msr 18419b041, secondary cpu clock
Dec 13 02:14:28.090317 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Dec 13 02:14:28.090337 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Dec 13 02:14:28.090354 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 02:14:28.090373 kernel: smpboot: Max logical packages: 1
Dec 13 02:14:28.090394 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Dec 13 02:14:28.090411 kernel: devtmpfs: initialized
Dec 13 02:14:28.090429 kernel: x86/mm: Memory block size: 128MB
Dec 13 02:14:28.090446 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes)
Dec 13 02:14:28.090464 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 02:14:28.090482 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 13 02:14:28.090500 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 02:14:28.090517 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 02:14:28.090535 kernel: audit: initializing netlink subsys (disabled)
Dec 13 02:14:28.090563 kernel: audit: type=2000 audit(1734056066.713:1): state=initialized audit_enabled=0 res=1
Dec 13 02:14:28.090579 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 02:14:28.090597 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 02:14:28.090614 kernel: cpuidle: using governor menu
Dec 13 02:14:28.090631 kernel: ACPI: bus type PCI registered
Dec 13 02:14:28.090648 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 02:14:28.090666 kernel: dca service started, version 1.12.1
Dec 13 02:14:28.090684 kernel: PCI: Using configuration type 1 for base access
Dec 13 02:14:28.090702 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 02:14:28.090724 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 02:14:28.090742 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 02:14:28.090759 kernel: ACPI: Added _OSI(Module Device)
Dec 13 02:14:28.090777 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 02:14:28.090794 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 02:14:28.090812 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 02:14:28.090830 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Dec 13 02:14:28.090847 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Dec 13 02:14:28.090865 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Dec 13 02:14:28.090888 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Dec 13 02:14:28.090906 kernel: ACPI: Interpreter enabled
Dec 13 02:14:28.090924 kernel: ACPI: PM: (supports S0 S3 S5)
Dec 13 02:14:28.090941 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 02:14:28.090959 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 02:14:28.090976 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Dec 13 02:14:28.090994 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 02:14:28.091234 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 02:14:28.091414 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
Dec 13 02:14:28.091438 kernel: PCI host bridge to bus 0000:00
Dec 13 02:14:28.091601 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 13 02:14:28.091744 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 13 02:14:28.091885 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 02:14:28.092023 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window]
Dec 13 02:14:28.092179 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 02:14:28.092368 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Dec 13 02:14:28.092540 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100
Dec 13 02:14:28.104403 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Dec 13 02:14:28.104590 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Dec 13 02:14:28.104770 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000
Dec 13 02:14:28.104981 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Dec 13 02:14:28.110408 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f]
Dec 13 02:14:28.110619 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Dec 13 02:14:28.110807 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f]
Dec 13 02:14:28.110987 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f]
Dec 13 02:14:28.111204 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00
Dec 13 02:14:28.111386 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f]
Dec 13 02:14:28.111560 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f]
Dec 13 02:14:28.111590 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 13 02:14:28.111609 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 13 02:14:28.111627 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 02:14:28.111644 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 13 02:14:28.111662 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Dec 13 02:14:28.111680 kernel: iommu: Default domain type: Translated
Dec 13 02:14:28.111698 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 02:14:28.111717 kernel: vgaarb: loaded
Dec 13 02:14:28.111735 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 13 02:14:28.111757 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Dec 13 02:14:28.111774 kernel: PTP clock support registered
Dec 13 02:14:28.111792 kernel: Registered efivars operations
Dec 13 02:14:28.111818 kernel: PCI: Using ACPI for IRQ routing
Dec 13 02:14:28.111836 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 02:14:28.111853 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff]
Dec 13 02:14:28.111871 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff]
Dec 13 02:14:28.111888 kernel: e820: reserve RAM buffer [mem 0xbd277000-0xbfffffff]
Dec 13 02:14:28.111905 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff]
Dec 13 02:14:28.111926 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff]
Dec 13 02:14:28.111944 kernel: clocksource: Switched to clocksource kvm-clock
Dec 13 02:14:28.111963 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 02:14:28.111981 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 02:14:28.111999 kernel: pnp: PnP ACPI init
Dec 13 02:14:28.112016 kernel: pnp: PnP ACPI: found 7 devices
Dec 13 02:14:28.112034 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 02:14:28.112052 kernel: NET: Registered PF_INET protocol family
Dec 13 02:14:28.112070 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 13 02:14:28.112092 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Dec 13 02:14:28.112110 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 02:14:28.112128 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 02:14:28.112146 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Dec 13 02:14:28.112177 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Dec 13 02:14:28.112194 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec 13 02:14:28.112212 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec 13 02:14:28.112230 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 02:14:28.112252 kernel: NET: Registered PF_XDP protocol family
Dec 13 02:14:28.112421 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 13 02:14:28.112582 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 13 02:14:28.112738 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 02:14:28.112930 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window]
Dec 13 02:14:28.113119 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Dec 13 02:14:28.113150 kernel: PCI: CLS 0 bytes, default 64
Dec 13 02:14:28.116296 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Dec 13 02:14:28.116326 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB)
Dec 13 02:14:28.116344 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Dec 13 02:14:28.116363 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Dec 13 02:14:28.116380 kernel: clocksource: Switched to clocksource tsc
Dec 13 02:14:28.116397 kernel: Initialise system trusted keyrings
Dec 13 02:14:28.116414 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Dec 13 02:14:28.116430 kernel: Key type asymmetric registered
Dec 13 02:14:28.116447 kernel: Asymmetric key parser 'x509' registered
Dec 13 02:14:28.116468 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Dec 13 02:14:28.116484 kernel: io scheduler mq-deadline registered
Dec 13 02:14:28.116502 kernel: io scheduler kyber registered
Dec 13 02:14:28.116519 kernel: io scheduler bfq registered
Dec 13 02:14:28.116536 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 13 02:14:28.116554 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Dec 13 02:14:28.116751 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
Dec 13 02:14:28.116776 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Dec 13 02:14:28.116957 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
Dec 13 02:14:28.116985 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Dec 13 02:14:28.117146 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
Dec 13 02:14:28.120239 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 02:14:28.120265 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 13 02:14:28.120283 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Dec 13 02:14:28.120300 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A
Dec 13 02:14:28.120317 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A
Dec 13 02:14:28.120525 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0)
Dec 13 02:14:28.120557 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 13 02:14:28.120574 kernel: i8042: Warning: Keylock active
Dec 13 02:14:28.120591 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 13 02:14:28.120608 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 13 02:14:28.120781 kernel: rtc_cmos 00:00: RTC can wake from S4
Dec 13 02:14:28.120933 kernel: rtc_cmos 00:00: registered as rtc0
Dec 13 02:14:28.121086 kernel: rtc_cmos 00:00: setting system clock to 2024-12-13T02:14:27 UTC (1734056067)
Dec 13 02:14:28.121254 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Dec 13 02:14:28.121282 kernel: intel_pstate: CPU model not supported
Dec 13 02:14:28.121299 kernel: pstore: Registered efi as persistent store backend
Dec 13 02:14:28.121316 kernel: NET: Registered PF_INET6 protocol family
Dec 13 02:14:28.121332 kernel: Segment Routing with IPv6
Dec 13 02:14:28.121350 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 02:14:28.121367 kernel: NET: Registered PF_PACKET protocol family
Dec 13 02:14:28.121384 kernel: Key type dns_resolver registered
Dec 13 02:14:28.121401 kernel: IPI shorthand broadcast: enabled
Dec 13 02:14:28.121418 kernel: sched_clock: Marking stable (750035564, 183404028)->(965607353, -32167761)
Dec 13 02:14:28.121438 kernel: registered taskstats version 1
Dec 13 02:14:28.121456 kernel: Loading compiled-in X.509 certificates
Dec 13 02:14:28.121473 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 13 02:14:28.121491 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: d9defb0205602bee9bb670636cbe5c74194fdb5e'
Dec 13 02:14:28.121507 kernel: Key type .fscrypt registered
Dec 13 02:14:28.121524 kernel: Key type fscrypt-provisioning registered
Dec 13 02:14:28.121542 kernel: pstore: Using crash dump compression: deflate
Dec 13 02:14:28.121560 kernel: ima: Allocated hash algorithm: sha1
Dec 13 02:14:28.121578 kernel: ima: No architecture policies found
Dec 13 02:14:28.121598 kernel: clk: Disabling unused clocks
Dec 13 02:14:28.121615 kernel: Freeing unused kernel image (initmem) memory: 47476K
Dec 13 02:14:28.121631 kernel: Write protecting the kernel read-only data: 28672k
Dec 13 02:14:28.121647 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Dec 13 02:14:28.121665 kernel: Freeing unused kernel image (rodata/data gap) memory: 620K
Dec 13 02:14:28.121692 kernel: Run /init as init process
Dec 13 02:14:28.121710 kernel: with arguments:
Dec 13 02:14:28.121728 kernel: /init
Dec 13 02:14:28.121744 kernel: with environment:
Dec 13 02:14:28.121764 kernel: HOME=/
Dec 13 02:14:28.121780 kernel: TERM=linux
Dec 13 02:14:28.121798 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 02:14:28.121820 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 02:14:28.121843 systemd[1]: Detected virtualization kvm.
Dec 13 02:14:28.121862 systemd[1]: Detected architecture x86-64.
Dec 13 02:14:28.121880 systemd[1]: Running in initrd.
Dec 13 02:14:28.121903 systemd[1]: No hostname configured, using default hostname.
Dec 13 02:14:28.121921 systemd[1]: Hostname set to .
Dec 13 02:14:28.121941 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 02:14:28.121960 systemd[1]: Queued start job for default target initrd.target.
Dec 13 02:14:28.121979 systemd[1]: Started systemd-ask-password-console.path.
Dec 13 02:14:28.121995 systemd[1]: Reached target cryptsetup.target.
Dec 13 02:14:28.122011 systemd[1]: Reached target paths.target.
Dec 13 02:14:28.122027 systemd[1]: Reached target slices.target.
Dec 13 02:14:28.122049 systemd[1]: Reached target swap.target.
Dec 13 02:14:28.122067 systemd[1]: Reached target timers.target.
Dec 13 02:14:28.122087 systemd[1]: Listening on iscsid.socket.
Dec 13 02:14:28.122106 systemd[1]: Listening on iscsiuio.socket.
Dec 13 02:14:28.122124 systemd[1]: Listening on systemd-journald-audit.socket.
Dec 13 02:14:28.122141 systemd[1]: Listening on systemd-journald-dev-log.socket.
Dec 13 02:14:28.122174 systemd[1]: Listening on systemd-journald.socket.
Dec 13 02:14:28.129447 systemd[1]: Listening on systemd-networkd.socket.
Dec 13 02:14:28.129605 systemd[1]: Listening on systemd-udevd-control.socket.
Dec 13 02:14:28.129625 systemd[1]: Listening on systemd-udevd-kernel.socket.
Dec 13 02:14:28.129661 systemd[1]: Reached target sockets.target.
Dec 13 02:14:28.129806 systemd[1]: Starting kmod-static-nodes.service...
Dec 13 02:14:28.129825 systemd[1]: Finished network-cleanup.service.
Dec 13 02:14:28.129843 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 02:14:28.129860 systemd[1]: Starting systemd-journald.service...
Dec 13 02:14:28.129881 systemd[1]: Starting systemd-modules-load.service...
Dec 13 02:14:28.129899 systemd[1]: Starting systemd-resolved.service...
Dec 13 02:14:28.130043 systemd[1]: Starting systemd-vconsole-setup.service...
Dec 13 02:14:28.130061 systemd[1]: Finished kmod-static-nodes.service.
Dec 13 02:14:28.130080 kernel: audit: type=1130 audit(1734056068.124:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:28.130098 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 02:14:28.130122 systemd-journald[189]: Journal started
Dec 13 02:14:28.130448 systemd-journald[189]: Runtime Journal (/run/log/journal/a3ddbb5e27dd6c52958b0dffd3199ee6) is 8.0M, max 148.8M, 140.8M free.
Dec 13 02:14:28.124000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:28.129000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:28.106380 systemd-modules-load[190]: Inserted module 'overlay'
Dec 13 02:14:28.139409 kernel: audit: type=1130 audit(1734056068.129:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:28.139448 systemd[1]: Started systemd-journald.service.
Dec 13 02:14:28.149176 kernel: audit: type=1130 audit(1734056068.142:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:28.142000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:28.143717 systemd[1]: Finished systemd-vconsole-setup.service.
Dec 13 02:14:28.184571 kernel: audit: type=1130 audit(1734056068.152:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:28.184609 kernel: audit: type=1130 audit(1734056068.171:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:28.152000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:28.171000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:28.154538 systemd[1]: Starting dracut-cmdline-ask.service...
Dec 13 02:14:28.158454 systemd-resolved[191]: Positive Trust Anchors:
Dec 13 02:14:28.158467 systemd-resolved[191]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 02:14:28.158528 systemd-resolved[191]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Dec 13 02:14:28.163313 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Dec 13 02:14:28.164939 systemd-resolved[191]: Defaulting to hostname 'linux'.
Dec 13 02:14:28.168476 systemd[1]: Started systemd-resolved.service.
Dec 13 02:14:28.172369 systemd[1]: Reached target nss-lookup.target.
Dec 13 02:14:28.180512 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Dec 13 02:14:28.183000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:28.191206 kernel: audit: type=1130 audit(1734056068.183:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:28.201184 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 02:14:28.210572 systemd-modules-load[190]: Inserted module 'br_netfilter'
Dec 13 02:14:28.211188 kernel: Bridge firewalling registered
Dec 13 02:14:28.216129 systemd[1]: Finished dracut-cmdline-ask.service.
Dec 13 02:14:28.232287 kernel: audit: type=1130 audit(1734056068.218:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:28.218000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:28.220624 systemd[1]: Starting dracut-cmdline.service...
Dec 13 02:14:28.239017 dracut-cmdline[205]: dracut-dracut-053
Dec 13 02:14:28.242953 dracut-cmdline[205]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c
Dec 13 02:14:28.252266 kernel: SCSI subsystem initialized
Dec 13 02:14:28.271188 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 02:14:28.271253 kernel: device-mapper: uevent: version 1.0.3
Dec 13 02:14:28.275187 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Dec 13 02:14:28.279246 systemd-modules-load[190]: Inserted module 'dm_multipath'
Dec 13 02:14:28.280414 systemd[1]: Finished systemd-modules-load.service.
Dec 13 02:14:28.300297 kernel: audit: type=1130 audit(1734056068.286:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:28.286000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:28.288438 systemd[1]: Starting systemd-sysctl.service...
Dec 13 02:14:28.303016 systemd[1]: Finished systemd-sysctl.service.
Dec 13 02:14:28.313000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:28.318197 kernel: audit: type=1130 audit(1734056068.313:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:28.343197 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 02:14:28.363198 kernel: iscsi: registered transport (tcp)
Dec 13 02:14:28.390441 kernel: iscsi: registered transport (qla4xxx)
Dec 13 02:14:28.390531 kernel: QLogic iSCSI HBA Driver
Dec 13 02:14:28.434323 systemd[1]: Finished dracut-cmdline.service.
Dec 13 02:14:28.433000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:28.436743 systemd[1]: Starting dracut-pre-udev.service...
Dec 13 02:14:28.493211 kernel: raid6: avx2x4 gen() 18273 MB/s
Dec 13 02:14:28.510206 kernel: raid6: avx2x4 xor() 7598 MB/s
Dec 13 02:14:28.527198 kernel: raid6: avx2x2 gen() 18237 MB/s
Dec 13 02:14:28.545201 kernel: raid6: avx2x2 xor() 18647 MB/s
Dec 13 02:14:28.562206 kernel: raid6: avx2x1 gen() 14099 MB/s
Dec 13 02:14:28.580201 kernel: raid6: avx2x1 xor() 16182 MB/s
Dec 13 02:14:28.597201 kernel: raid6: sse2x4 gen() 11061 MB/s
Dec 13 02:14:28.614195 kernel: raid6: sse2x4 xor() 6689 MB/s
Dec 13 02:14:28.631197 kernel: raid6: sse2x2 gen() 12124 MB/s
Dec 13 02:14:28.648198 kernel: raid6: sse2x2 xor() 7451 MB/s
Dec 13 02:14:28.665196 kernel: raid6: sse2x1 gen() 10589 MB/s
Dec 13 02:14:28.683367 kernel: raid6: sse2x1 xor() 5198 MB/s
Dec 13 02:14:28.683398 kernel: raid6: using algorithm avx2x4 gen() 18273 MB/s
Dec 13 02:14:28.683421 kernel: raid6: .... xor() 7598 MB/s, rmw enabled
Dec 13 02:14:28.684472 kernel: raid6: using avx2x2 recovery algorithm
Dec 13 02:14:28.700198 kernel: xor: automatically using best checksumming function avx
Dec 13 02:14:28.807198 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
Dec 13 02:14:28.818873 systemd[1]: Finished dracut-pre-udev.service.
Dec 13 02:14:28.817000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:28.819000 audit: BPF prog-id=7 op=LOAD
Dec 13 02:14:28.819000 audit: BPF prog-id=8 op=LOAD
Dec 13 02:14:28.821184 systemd[1]: Starting systemd-udevd.service...
Dec 13 02:14:28.836935 systemd-udevd[388]: Using default interface naming scheme 'v252'.
Dec 13 02:14:28.843962 systemd[1]: Started systemd-udevd.service.
Dec 13 02:14:28.846000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:28.849325 systemd[1]: Starting dracut-pre-trigger.service...
Dec 13 02:14:28.871673 dracut-pre-trigger[401]: rd.md=0: removing MD RAID activation
Dec 13 02:14:28.910781 systemd[1]: Finished dracut-pre-trigger.service.
Dec 13 02:14:28.913000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:28.915489 systemd[1]: Starting systemd-udev-trigger.service...
Dec 13 02:14:28.978587 systemd[1]: Finished systemd-udev-trigger.service.
Dec 13 02:14:28.979000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:29.060183 kernel: cryptd: max_cpu_qlen set to 1000
Dec 13 02:14:29.089185 kernel: scsi host0: Virtio SCSI HBA
Dec 13 02:14:29.102189 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6
Dec 13 02:14:29.104253 kernel: AVX2 version of gcm_enc/dec engaged.
Dec 13 02:14:29.104305 kernel: AES CTR mode by8 optimization enabled
Dec 13 02:14:29.187920 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB)
Dec 13 02:14:29.206072 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks
Dec 13 02:14:29.206274 kernel: sd 0:0:1:0: [sda] Write Protect is off
Dec 13 02:14:29.206438 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08
Dec 13 02:14:29.206579 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Dec 13 02:14:29.206720 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 13 02:14:29.206736 kernel: GPT:17805311 != 25165823
Dec 13 02:14:29.206749 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 13 02:14:29.206763 kernel: GPT:17805311 != 25165823
Dec 13 02:14:29.206776 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 02:14:29.206790 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 02:14:29.206809 kernel: sd 0:0:1:0: [sda] Attached SCSI disk
Dec 13 02:14:29.250276 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Dec 13 02:14:29.281449 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (437)
Dec 13 02:14:29.269433 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Dec 13 02:14:29.296264 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Dec 13 02:14:29.320123 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Dec 13 02:14:29.329504 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Dec 13 02:14:29.346416 systemd[1]: Starting disk-uuid.service...
Dec 13 02:14:29.370446 disk-uuid[517]: Primary Header is updated.
Dec 13 02:14:29.370446 disk-uuid[517]: Secondary Entries is updated.
Dec 13 02:14:29.370446 disk-uuid[517]: Secondary Header is updated.
Dec 13 02:14:29.396253 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 02:14:29.407196 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 02:14:29.432192 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 02:14:30.424186 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 02:14:30.424260 disk-uuid[518]: The operation has completed successfully.
Dec 13 02:14:30.486110 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 02:14:30.492000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:30.492000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:30.486279 systemd[1]: Finished disk-uuid.service.
Dec 13 02:14:30.508073 systemd[1]: Starting verity-setup.service...
Dec 13 02:14:30.534244 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Dec 13 02:14:30.605191 systemd[1]: Found device dev-mapper-usr.device.
Dec 13 02:14:30.606704 systemd[1]: Mounting sysusr-usr.mount...
Dec 13 02:14:30.632875 systemd[1]: Finished verity-setup.service.
Dec 13 02:14:30.653000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:30.705182 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Dec 13 02:14:30.706142 systemd[1]: Mounted sysusr-usr.mount.
Dec 13 02:14:30.718485 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Dec 13 02:14:30.719595 systemd[1]: Starting ignition-setup.service...
Dec 13 02:14:30.756126 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 02:14:30.756192 kernel: BTRFS info (device sda6): using free space tree
Dec 13 02:14:30.756218 kernel: BTRFS info (device sda6): has skinny extents
Dec 13 02:14:30.768210 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 13 02:14:30.768563 systemd[1]: Starting parse-ip-for-networkd.service...
Dec 13 02:14:30.785526 systemd[1]: mnt-oem.mount: Deactivated successfully.
Dec 13 02:14:30.799959 systemd[1]: Finished ignition-setup.service.
Dec 13 02:14:30.807000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:30.809406 systemd[1]: Starting ignition-fetch-offline.service...
Dec 13 02:14:30.886990 systemd[1]: Finished parse-ip-for-networkd.service.
Dec 13 02:14:30.886000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:30.887000 audit: BPF prog-id=9 op=LOAD
Dec 13 02:14:30.889107 systemd[1]: Starting systemd-networkd.service...
Dec 13 02:14:30.921504 systemd-networkd[692]: lo: Link UP
Dec 13 02:14:30.921515 systemd-networkd[692]: lo: Gained carrier
Dec 13 02:14:30.922314 systemd-networkd[692]: Enumeration completed
Dec 13 02:14:30.922686 systemd-networkd[692]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 02:14:30.922894 systemd[1]: Started systemd-networkd.service.
Dec 13 02:14:30.924945 systemd-networkd[692]: eth0: Link UP
Dec 13 02:14:30.962000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:30.924953 systemd-networkd[692]: eth0: Gained carrier
Dec 13 02:14:30.934819 systemd-networkd[692]: eth0: DHCPv4 address 10.128.0.35/32, gateway 10.128.0.1 acquired from 169.254.169.254
Dec 13 02:14:30.963671 systemd[1]: Reached target network.target.
Dec 13 02:14:31.010399 systemd[1]: Starting iscsiuio.service...
Dec 13 02:14:31.020035 systemd[1]: Started iscsiuio.service.
Dec 13 02:14:31.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:31.032566 systemd[1]: Starting iscsid.service...
Dec 13 02:14:31.044412 systemd[1]: Started iscsid.service.
Dec 13 02:14:31.057000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:31.065559 iscsid[702]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Dec 13 02:14:31.065559 iscsid[702]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log
Dec 13 02:14:31.065559 iscsid[702]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Dec 13 02:14:31.065559 iscsid[702]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Dec 13 02:14:31.065559 iscsid[702]: If using hardware iscsi like qla4xxx this message can be ignored.
Dec 13 02:14:31.065559 iscsid[702]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Dec 13 02:14:31.065559 iscsid[702]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Dec 13 02:14:31.109000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:31.135000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:31.059566 systemd[1]: Starting dracut-initqueue.service...
Dec 13 02:14:31.089818 ignition[614]: Ignition 2.14.0
Dec 13 02:14:31.079225 systemd[1]: Finished dracut-initqueue.service.
Dec 13 02:14:31.238000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:31.089832 ignition[614]: Stage: fetch-offline
Dec 13 02:14:31.254000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:31.110778 systemd[1]: Finished ignition-fetch-offline.service.
Dec 13 02:14:31.089904 ignition[614]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 02:14:31.136689 systemd[1]: Reached target remote-fs-pre.target.
Dec 13 02:14:31.089942 ignition[614]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6
Dec 13 02:14:31.151443 systemd[1]: Reached target remote-cryptsetup.target.
Dec 13 02:14:31.303000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:31.107484 ignition[614]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Dec 13 02:14:31.168460 systemd[1]: Reached target remote-fs.target.
Dec 13 02:14:31.336000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:31.107679 ignition[614]: parsed url from cmdline: ""
Dec 13 02:14:31.187451 systemd[1]: Starting dracut-pre-mount.service...
Dec 13 02:14:31.107684 ignition[614]: no config URL provided
Dec 13 02:14:31.211407 systemd[1]: Starting ignition-fetch.service...
Dec 13 02:14:31.107691 ignition[614]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 02:14:31.225732 systemd[1]: Finished dracut-pre-mount.service.
Dec 13 02:14:31.107701 ignition[614]: no config at "/usr/lib/ignition/user.ign"
Dec 13 02:14:31.242426 unknown[717]: fetched base config from "system"
Dec 13 02:14:31.107710 ignition[614]: failed to fetch config: resource requires networking
Dec 13 02:14:31.242438 unknown[717]: fetched base config from "system"
Dec 13 02:14:31.108119 ignition[614]: Ignition finished successfully
Dec 13 02:14:31.242448 unknown[717]: fetched user config from "gcp"
Dec 13 02:14:31.223131 ignition[717]: Ignition 2.14.0
Dec 13 02:14:31.244743 systemd[1]: Finished ignition-fetch.service.
Dec 13 02:14:31.223142 ignition[717]: Stage: fetch
Dec 13 02:14:31.256675 systemd[1]: Starting ignition-kargs.service...
Dec 13 02:14:31.223292 ignition[717]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 02:14:31.288634 systemd[1]: Finished ignition-kargs.service.
Dec 13 02:14:31.223322 ignition[717]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6
Dec 13 02:14:31.305493 systemd[1]: Starting ignition-disks.service...
Dec 13 02:14:31.230949 ignition[717]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Dec 13 02:14:31.327666 systemd[1]: Finished ignition-disks.service.
Dec 13 02:14:31.231184 ignition[717]: parsed url from cmdline: ""
Dec 13 02:14:31.337437 systemd[1]: Reached target initrd-root-device.target.
Dec 13 02:14:31.231193 ignition[717]: no config URL provided
Dec 13 02:14:31.352291 systemd[1]: Reached target local-fs-pre.target.
Dec 13 02:14:31.231277 ignition[717]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 02:14:31.366308 systemd[1]: Reached target local-fs.target.
Dec 13 02:14:31.231295 ignition[717]: no config at "/usr/lib/ignition/user.ign"
Dec 13 02:14:31.366431 systemd[1]: Reached target sysinit.target.
Dec 13 02:14:31.231355 ignition[717]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1
Dec 13 02:14:31.387288 systemd[1]: Reached target basic.target.
Dec 13 02:14:31.240100 ignition[717]: GET result: OK
Dec 13 02:14:31.401468 systemd[1]: Starting systemd-fsck-root.service...
Dec 13 02:14:31.240195 ignition[717]: parsing config with SHA512: 1ed48d80c8ffb51a79b5f8e65f68060d8f1435b9e1bcb750e45a294dde0b1abbdc0d48b7bb02771fb6d05f0b5153533bae133c77eb8cea146db5c4f7f4dbc404
Dec 13 02:14:31.243019 ignition[717]: fetch: fetch complete
Dec 13 02:14:31.243026 ignition[717]: fetch: fetch passed
Dec 13 02:14:31.243075 ignition[717]: Ignition finished successfully
Dec 13 02:14:31.270539 ignition[723]: Ignition 2.14.0
Dec 13 02:14:31.270547 ignition[723]: Stage: kargs
Dec 13 02:14:31.270678 ignition[723]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 02:14:31.270707 ignition[723]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6
Dec 13 02:14:31.277713 ignition[723]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Dec 13 02:14:31.278798 ignition[723]: kargs: kargs passed
Dec 13 02:14:31.278848 ignition[723]: Ignition finished successfully
Dec 13 02:14:31.317204 ignition[729]: Ignition 2.14.0
Dec 13 02:14:31.317215 ignition[729]: Stage: disks
Dec 13 02:14:31.317345 ignition[729]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 02:14:31.317377 ignition[729]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6
Dec 13 02:14:31.325464 ignition[729]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Dec 13 02:14:31.326728 ignition[729]: disks: disks passed
Dec 13 02:14:31.326776 ignition[729]: Ignition finished successfully
Dec 13 02:14:31.444238 systemd-fsck[737]: ROOT: clean, 621/1628000 files, 124058/1617920 blocks
Dec 13 02:14:31.665088 systemd[1]: Finished systemd-fsck-root.service.
Dec 13 02:14:31.672000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:31.674534 systemd[1]: Mounting sysroot.mount...
Dec 13 02:14:31.703419 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Dec 13 02:14:31.700754 systemd[1]: Mounted sysroot.mount.
Dec 13 02:14:31.710528 systemd[1]: Reached target initrd-root-fs.target.
Dec 13 02:14:31.728418 systemd[1]: Mounting sysroot-usr.mount...
Dec 13 02:14:31.740858 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Dec 13 02:14:31.740908 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 02:14:31.740940 systemd[1]: Reached target ignition-diskful.target.
Dec 13 02:14:31.756526 systemd[1]: Mounted sysroot-usr.mount.
Dec 13 02:14:31.844335 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (743)
Dec 13 02:14:31.844376 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 02:14:31.844399 kernel: BTRFS info (device sda6): using free space tree
Dec 13 02:14:31.844422 kernel: BTRFS info (device sda6): has skinny extents
Dec 13 02:14:31.844450 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 13 02:14:31.779642 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Dec 13 02:14:31.838481 systemd[1]: Starting initrd-setup-root.service...
Dec 13 02:14:31.878298 initrd-setup-root[766]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 02:14:31.853926 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Dec 13 02:14:31.896303 initrd-setup-root[774]: cut: /sysroot/etc/group: No such file or directory
Dec 13 02:14:31.907288 initrd-setup-root[782]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 02:14:31.917300 initrd-setup-root[790]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 02:14:31.940154 systemd[1]: Finished initrd-setup-root.service.
Dec 13 02:14:31.939000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:31.941395 systemd[1]: Starting ignition-mount.service...
Dec 13 02:14:31.962226 systemd[1]: Starting sysroot-boot.service...
Dec 13 02:14:31.970353 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
Dec 13 02:14:31.970483 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
Dec 13 02:14:32.005393 ignition[808]: INFO : Ignition 2.14.0
Dec 13 02:14:32.005393 ignition[808]: INFO : Stage: mount
Dec 13 02:14:32.005393 ignition[808]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 02:14:32.005393 ignition[808]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6
Dec 13 02:14:32.005393 ignition[808]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Dec 13 02:14:32.005393 ignition[808]: INFO : mount: mount passed
Dec 13 02:14:32.005393 ignition[808]: INFO : Ignition finished successfully
Dec 13 02:14:32.146260 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (818)
Dec 13 02:14:32.146296 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 02:14:32.146314 kernel: BTRFS info (device sda6): using free space tree
Dec 13 02:14:32.146329 kernel: BTRFS info (device sda6): has skinny extents
Dec 13 02:14:32.146350 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 13 02:14:32.012000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:32.027000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:32.001761 systemd[1]: Finished sysroot-boot.service.
Dec 13 02:14:32.013574 systemd[1]: Finished ignition-mount.service.
Dec 13 02:14:32.029332 systemd[1]: Starting ignition-files.service...
Dec 13 02:14:32.039601 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Dec 13 02:14:32.186318 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (839)
Dec 13 02:14:32.186395 ignition[837]: INFO : Ignition 2.14.0
Dec 13 02:14:32.186395 ignition[837]: INFO : Stage: files
Dec 13 02:14:32.186395 ignition[837]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 02:14:32.186395 ignition[837]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6
Dec 13 02:14:32.186395 ignition[837]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Dec 13 02:14:32.186395 ignition[837]: DEBUG : files: compiled without relabeling support, skipping
Dec 13 02:14:32.186395 ignition[837]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 13 02:14:32.186395 ignition[837]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 02:14:32.186395 ignition[837]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 02:14:32.186395 ignition[837]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 13 02:14:32.186395 ignition[837]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 02:14:32.186395 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/hosts"
Dec 13 02:14:32.186395 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(3): oem config not found in "/usr/share/oem", looking on oem partition
Dec 13 02:14:32.186395 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2636516590"
Dec 13 02:14:32.186395 ignition[837]: CRITICAL : files: createFilesystemsFiles: createFiles: op(3): op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2636516590": device or resource busy
Dec 13 02:14:32.186395 ignition[837]: ERROR : files: createFilesystemsFiles: createFiles: op(3): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2636516590", trying btrfs: device or resource busy
Dec 13 02:14:32.186395 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2636516590"
Dec 13 02:14:32.186395 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2636516590"
Dec 13 02:14:32.186395 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(6): [started] unmounting "/mnt/oem2636516590"
Dec 13 02:14:32.096380 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Dec 13 02:14:32.453338 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(6): [finished] unmounting "/mnt/oem2636516590"
Dec 13 02:14:32.453338 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/hosts"
Dec 13 02:14:32.453338 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/etc/profile.d/google-cloud-sdk.sh"
Dec 13 02:14:32.453338 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(7): oem config not found in "/usr/share/oem", looking on oem partition
Dec 13 02:14:32.453338 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(7): op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem584748134"
Dec 13 02:14:32.453338 ignition[837]: CRITICAL : files: createFilesystemsFiles: createFiles: op(7): op(8): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem584748134": device or resource busy
Dec 13 02:14:32.453338 ignition[837]: ERROR : files: createFilesystemsFiles: createFiles: op(7): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem584748134", trying btrfs: device or resource busy
Dec 13 02:14:32.453338 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(7): op(9): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem584748134"
Dec 13 02:14:32.453338 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(7): op(9): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem584748134"
Dec 13 02:14:32.453338 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(7): op(a): [started] unmounting "/mnt/oem584748134"
Dec 13 02:14:32.453338 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(7): op(a): [finished] unmounting "/mnt/oem584748134"
Dec 13 02:14:32.453338 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/etc/profile.d/google-cloud-sdk.sh"
Dec 13 02:14:32.453338 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/install.sh"
Dec 13 02:14:32.453338 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 02:14:32.453338 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 02:14:32.157319 unknown[837]: wrote ssh authorized keys file for user: core
Dec 13 02:14:32.705355 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 02:14:32.705355 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Dec 13 02:14:32.705355 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Dec 13 02:14:32.705355 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/etc/systemd/system/oem-gce.service"
Dec 13 02:14:32.705355 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(e): oem config not found in "/usr/share/oem", looking on oem partition
Dec 13 02:14:32.705355 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(e): op(f): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4095705250"
Dec 13 02:14:32.705355 ignition[837]: CRITICAL : files: createFilesystemsFiles: createFiles: op(e): op(f): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4095705250": device or resource busy
Dec 13 02:14:32.705355 ignition[837]: ERROR : files: createFilesystemsFiles: createFiles: op(e): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem4095705250", trying btrfs: device or resource busy
Dec 13 02:14:32.705355 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(e): op(10): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4095705250"
Dec 13 02:14:32.705355 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(e): op(10): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4095705250"
Dec 13 02:14:32.705355 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(e): op(11): [started] unmounting "/mnt/oem4095705250"
Dec 13 02:14:32.705355 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(e): op(11): [finished] unmounting "/mnt/oem4095705250"
Dec 13 02:14:32.705355 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/etc/systemd/system/oem-gce.service"
Dec 13 02:14:32.705355 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(12): [started] writing file "/sysroot/etc/systemd/system/oem-gce-enable-oslogin.service"
Dec 13 02:14:32.281294 systemd-networkd[692]: eth0: Gained IPv6LL
Dec 13 02:14:32.958418 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(12): oem config not found in "/usr/share/oem", looking on oem partition
Dec 13 02:14:32.958418 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(12): op(13): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2507133330"
Dec 13 02:14:32.958418 ignition[837]: CRITICAL : files: createFilesystemsFiles: createFiles: op(12): op(13): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2507133330": device or resource busy
Dec 13 02:14:32.958418 ignition[837]: ERROR : files: createFilesystemsFiles: createFiles: op(12): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2507133330", trying btrfs: device or resource busy
Dec 13 02:14:32.958418 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(12): op(14): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2507133330"
Dec 13 02:14:32.958418 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(12): op(14): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2507133330"
Dec 13 02:14:32.958418 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(12): op(15): [started] unmounting "/mnt/oem2507133330"
Dec 13 02:14:32.958418 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(12): op(15): [finished] unmounting "/mnt/oem2507133330"
Dec 13 02:14:32.958418 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(12): [finished] writing file "/sysroot/etc/systemd/system/oem-gce-enable-oslogin.service"
Dec 13 02:14:32.958418 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(16): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Dec 13 02:14:32.958418 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(16): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Dec 13 02:14:32.958418 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(16): GET result: OK
Dec 13 02:14:33.238418 kernel: kauditd_printk_skb: 26 callbacks suppressed
Dec 13 02:14:33.238467 kernel: audit: type=1130 audit(1734056073.097:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:33.238493 kernel: audit: type=1130 audit(1734056073.196:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:33.097000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:33.196000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:33.093451 systemd[1]: Finished ignition-files.service.
Dec 13 02:14:33.293340 kernel: audit: type=1130 audit(1734056073.245:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:33.293385 kernel: audit: type=1131 audit(1734056073.245:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:33.245000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:33.245000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:33.293545 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(16): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Dec 13 02:14:33.293545 ignition[837]: INFO : files: op(17): [started] processing unit "coreos-metadata-sshkeys@.service"
Dec 13 02:14:33.293545 ignition[837]: INFO : files: op(17): [finished] processing unit "coreos-metadata-sshkeys@.service"
Dec 13 02:14:33.293545 ignition[837]: INFO : files: op(18): [started] processing unit "oem-gce.service"
Dec 13 02:14:33.293545 ignition[837]: INFO : files: op(18): [finished] processing unit "oem-gce.service"
Dec 13 02:14:33.293545 ignition[837]: INFO : files: op(19): [started] processing unit "oem-gce-enable-oslogin.service"
Dec 13 02:14:33.293545 ignition[837]: INFO : files: op(19): [finished] processing unit "oem-gce-enable-oslogin.service"
Dec 13 02:14:33.293545 ignition[837]: INFO : files: op(1a): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Dec 13 02:14:33.293545 ignition[837]: INFO : files: op(1a): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Dec 13 02:14:33.293545 ignition[837]: INFO : files: op(1b): [started] setting preset to enabled for "oem-gce.service"
Dec 13 02:14:33.293545 ignition[837]: INFO : files: op(1b): [finished] setting preset to enabled for "oem-gce.service"
Dec 13 02:14:33.293545 ignition[837]: INFO : files: op(1c): [started] setting preset to enabled for "oem-gce-enable-oslogin.service"
Dec 13 02:14:33.293545 ignition[837]: INFO : files: op(1c): [finished] setting preset to enabled for "oem-gce-enable-oslogin.service"
Dec 13 02:14:33.293545 ignition[837]: INFO : files: createResultFile: createFiles: op(1d): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 02:14:33.293545 ignition[837]: INFO : files: createResultFile: createFiles: op(1d): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 02:14:33.293545 ignition[837]: INFO : files: files passed
Dec 13 02:14:33.293545 ignition[837]: INFO : Ignition finished successfully
Dec 13 02:14:33.679313 kernel: audit: type=1130 audit(1734056073.360:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:33.679395 kernel: audit: type=1131 audit(1734056073.381:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:33.679422 kernel: audit: type=1130 audit(1734056073.477:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:33.679445 kernel: audit: type=1131 audit(1734056073.589:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:33.360000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:33.381000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:33.477000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:33.589000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:33.108030 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Dec 13 02:14:33.147502 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Dec 13 02:14:33.722339 initrd-setup-root-after-ignition[860]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 02:14:33.148565 systemd[1]: Starting ignition-quench.service...
Dec 13 02:14:33.172812 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Dec 13 02:14:33.197924 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 02:14:33.198081 systemd[1]: Finished ignition-quench.service.
Dec 13 02:14:33.246660 systemd[1]: Reached target ignition-complete.target.
Dec 13 02:14:33.302408 systemd[1]: Starting initrd-parse-etc.service...
Dec 13 02:14:33.857342 kernel: audit: type=1131 audit(1734056073.828:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:33.828000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:33.340544 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 02:14:33.340668 systemd[1]: Finished initrd-parse-etc.service.
Dec 13 02:14:33.908348 kernel: audit: type=1131 audit(1734056073.873:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:33.873000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:33.382968 systemd[1]: Reached target initrd-fs.target.
Dec 13 02:14:33.917000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:33.429445 systemd[1]: Reached target initrd.target.
Dec 13 02:14:33.932000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:33.441596 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Dec 13 02:14:33.951319 ignition[875]: INFO : Ignition 2.14.0
Dec 13 02:14:33.951319 ignition[875]: INFO : Stage: umount
Dec 13 02:14:33.951319 ignition[875]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 02:14:33.951319 ignition[875]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6
Dec 13 02:14:33.996000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:33.442764 systemd[1]: Starting dracut-pre-pivot.service...
Dec 13 02:14:34.031453 iscsid[702]: iscsid shutting down.
Dec 13 02:14:34.038000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:34.046481 ignition[875]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Dec 13 02:14:34.046481 ignition[875]: INFO : umount: umount passed
Dec 13 02:14:34.046481 ignition[875]: INFO : Ignition finished successfully
Dec 13 02:14:34.053000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:34.071000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:34.086000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:33.459786 systemd[1]: Finished dracut-pre-pivot.service.
Dec 13 02:14:34.102000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:33.479974 systemd[1]: Starting initrd-cleanup.service...
Dec 13 02:14:34.117000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:33.524765 systemd[1]: Stopped target nss-lookup.target.
Dec 13 02:14:34.132000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:33.534578 systemd[1]: Stopped target remote-cryptsetup.target.
Dec 13 02:14:34.147000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:33.552594 systemd[1]: Stopped target timers.target.
Dec 13 02:14:34.163000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:33.571570 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 02:14:33.571757 systemd[1]: Stopped dracut-pre-pivot.service.
Dec 13 02:14:33.590796 systemd[1]: Stopped target initrd.target.
Dec 13 02:14:33.631607 systemd[1]: Stopped target basic.target.
Dec 13 02:14:33.652588 systemd[1]: Stopped target ignition-complete.target.
Dec 13 02:14:33.664590 systemd[1]: Stopped target ignition-diskful.target.
Dec 13 02:14:33.687586 systemd[1]: Stopped target initrd-root-device.target.
Dec 13 02:14:34.253000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:33.705617 systemd[1]: Stopped target remote-fs.target.
Dec 13 02:14:34.269000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:33.730601 systemd[1]: Stopped target remote-fs-pre.target.
Dec 13 02:14:33.752595 systemd[1]: Stopped target sysinit.target.
Dec 13 02:14:34.299000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:33.767586 systemd[1]: Stopped target local-fs.target.
Dec 13 02:14:34.315000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:34.315000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:33.783588 systemd[1]: Stopped target local-fs-pre.target.
Dec 13 02:14:33.799561 systemd[1]: Stopped target swap.target.
Dec 13 02:14:33.813489 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 02:14:33.813674 systemd[1]: Stopped dracut-pre-mount.service.
Dec 13 02:14:33.829733 systemd[1]: Stopped target cryptsetup.target.
Dec 13 02:14:34.391000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:33.865511 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 02:14:34.406000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:34.406000 audit: BPF prog-id=6 op=UNLOAD
Dec 13 02:14:33.865701 systemd[1]: Stopped dracut-initqueue.service.
Dec 13 02:14:33.874777 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 02:14:33.875033 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Dec 13 02:14:34.454000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:33.918658 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 02:14:34.470000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:33.918835 systemd[1]: Stopped ignition-files.service.
Dec 13 02:14:34.486000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:33.935040 systemd[1]: Stopping ignition-mount.service...
Dec 13 02:14:33.959811 systemd[1]: Stopping iscsid.service...
Dec 13 02:14:34.508000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:33.978288 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 02:14:33.978548 systemd[1]: Stopped kmod-static-nodes.service.
Dec 13 02:14:33.998652 systemd[1]: Stopping sysroot-boot.service...
Dec 13 02:14:34.023289 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 02:14:34.565000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:34.023543 systemd[1]: Stopped systemd-udev-trigger.service.
Dec 13 02:14:34.583000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:34.039536 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 02:14:34.599000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:34.039718 systemd[1]: Stopped dracut-pre-trigger.service.
Dec 13 02:14:34.058142 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 02:14:34.632000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:34.059355 systemd[1]: iscsid.service: Deactivated successfully.
Dec 13 02:14:34.648000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:34.059467 systemd[1]: Stopped iscsid.service.
Dec 13 02:14:34.665000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:34.665000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:34.072979 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 02:14:34.073087 systemd[1]: Stopped ignition-mount.service.
Dec 13 02:14:34.087910 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 02:14:34.088016 systemd[1]: Stopped sysroot-boot.service.
Dec 13 02:14:34.104055 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 02:14:34.104213 systemd[1]: Stopped ignition-disks.service.
Dec 13 02:14:34.740316 systemd-journald[189]: Received SIGTERM from PID 1 (systemd).
Dec 13 02:14:34.118442 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 02:14:34.118501 systemd[1]: Stopped ignition-kargs.service.
Dec 13 02:14:34.133431 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 13 02:14:34.133487 systemd[1]: Stopped ignition-fetch.service.
Dec 13 02:14:34.148469 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 02:14:34.148532 systemd[1]: Stopped ignition-fetch-offline.service.
Dec 13 02:14:34.164494 systemd[1]: Stopped target paths.target.
Dec 13 02:14:34.178372 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 02:14:34.182262 systemd[1]: Stopped systemd-ask-password-console.path.
Dec 13 02:14:34.186477 systemd[1]: Stopped target slices.target.
Dec 13 02:14:34.200529 systemd[1]: Stopped target sockets.target.
Dec 13 02:14:34.225451 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 02:14:34.225506 systemd[1]: Closed iscsid.socket.
Dec 13 02:14:34.233499 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 02:14:34.233565 systemd[1]: Stopped ignition-setup.service.
Dec 13 02:14:34.254484 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 02:14:34.254550 systemd[1]: Stopped initrd-setup-root.service.
Dec 13 02:14:34.270614 systemd[1]: Stopping iscsiuio.service...
Dec 13 02:14:34.285723 systemd[1]: iscsiuio.service: Deactivated successfully.
Dec 13 02:14:34.285846 systemd[1]: Stopped iscsiuio.service.
Dec 13 02:14:34.300639 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 02:14:34.300750 systemd[1]: Finished initrd-cleanup.service.
Dec 13 02:14:34.317237 systemd[1]: Stopped target network.target.
Dec 13 02:14:34.331370 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 02:14:34.331447 systemd[1]: Closed iscsiuio.socket.
Dec 13 02:14:34.345530 systemd[1]: Stopping systemd-networkd.service...
Dec 13 02:14:34.349228 systemd-networkd[692]: eth0: DHCPv6 lease lost
Dec 13 02:14:34.749000 audit: BPF prog-id=9 op=UNLOAD
Dec 13 02:14:34.360510 systemd[1]: Stopping systemd-resolved.service...
Dec 13 02:14:34.377602 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 02:14:34.377721 systemd[1]: Stopped systemd-resolved.service.
Dec 13 02:14:34.392964 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 02:14:34.393092 systemd[1]: Stopped systemd-networkd.service.
Dec 13 02:14:34.407934 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 02:14:34.407974 systemd[1]: Closed systemd-networkd.socket.
Dec 13 02:14:34.425273 systemd[1]: Stopping network-cleanup.service...
Dec 13 02:14:34.438244 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 02:14:34.438332 systemd[1]: Stopped parse-ip-for-networkd.service.
Dec 13 02:14:34.455386 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 02:14:34.455458 systemd[1]: Stopped systemd-sysctl.service.
Dec 13 02:14:34.471521 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 02:14:34.471574 systemd[1]: Stopped systemd-modules-load.service.
Dec 13 02:14:34.487555 systemd[1]: Stopping systemd-udevd.service...
Dec 13 02:14:34.503841 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 13 02:14:34.504514 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 02:14:34.504659 systemd[1]: Stopped systemd-udevd.service.
Dec 13 02:14:34.510678 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 02:14:34.510908 systemd[1]: Closed systemd-udevd-control.socket.
Dec 13 02:14:34.529438 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 02:14:34.529485 systemd[1]: Closed systemd-udevd-kernel.socket.
Dec 13 02:14:34.547396 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 02:14:34.547459 systemd[1]: Stopped dracut-pre-udev.service.
Dec 13 02:14:34.566492 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 02:14:34.566550 systemd[1]: Stopped dracut-cmdline.service.
Dec 13 02:14:34.584435 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 02:14:34.584491 systemd[1]: Stopped dracut-cmdline-ask.service.
Dec 13 02:14:34.601452 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Dec 13 02:14:34.618257 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 02:14:34.618448 systemd[1]: Stopped systemd-vconsole-setup.service.
Dec 13 02:14:34.633838 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 02:14:34.633955 systemd[1]: Stopped network-cleanup.service.
Dec 13 02:14:34.649592 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 02:14:34.649693 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Dec 13 02:14:34.666503 systemd[1]: Reached target initrd-switch-root.target.
Dec 13 02:14:34.682447 systemd[1]: Starting initrd-switch-root.service...
Dec 13 02:14:34.704250 systemd[1]: Switching root.
Dec 13 02:14:34.751868 systemd-journald[189]: Journal stopped
Dec 13 02:14:39.295624 kernel: SELinux: Class mctp_socket not defined in policy.
Dec 13 02:14:39.295739 kernel: SELinux: Class anon_inode not defined in policy.
Dec 13 02:14:39.295770 kernel: SELinux: the above unknown classes and permissions will be allowed
Dec 13 02:14:39.295799 kernel: SELinux: policy capability network_peer_controls=1
Dec 13 02:14:39.295824 kernel: SELinux: policy capability open_perms=1
Dec 13 02:14:39.295846 kernel: SELinux: policy capability extended_socket_class=1
Dec 13 02:14:39.295867 kernel: SELinux: policy capability always_check_network=0
Dec 13 02:14:39.295889 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 13 02:14:39.295912 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 13 02:14:39.295933 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 13 02:14:39.295956 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 13 02:14:39.295981 systemd[1]: Successfully loaded SELinux policy in 107.943ms.
Dec 13 02:14:39.296044 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.936ms.
Dec 13 02:14:39.296071 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 02:14:39.296096 systemd[1]: Detected virtualization kvm.
Dec 13 02:14:39.296119 systemd[1]: Detected architecture x86-64.
Dec 13 02:14:39.296143 systemd[1]: Detected first boot.
Dec 13 02:14:39.296191 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 02:14:39.297134 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Dec 13 02:14:39.297190 systemd[1]: Populated /etc with preset unit settings.
Dec 13 02:14:39.297219 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 02:14:39.297252 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 02:14:39.297279 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 02:14:39.297307 kernel: kauditd_printk_skb: 48 callbacks suppressed
Dec 13 02:14:39.297330 kernel: audit: type=1334 audit(1734056078.413:88): prog-id=12 op=LOAD
Dec 13 02:14:39.297353 kernel: audit: type=1334 audit(1734056078.413:89): prog-id=3 op=UNLOAD
Dec 13 02:14:39.297375 kernel: audit: type=1334 audit(1734056078.418:90): prog-id=13 op=LOAD
Dec 13 02:14:39.297402 kernel: audit: type=1334 audit(1734056078.432:91): prog-id=14 op=LOAD
Dec 13 02:14:39.297423 kernel: audit: type=1334 audit(1734056078.432:92): prog-id=4 op=UNLOAD
Dec 13 02:14:39.297446 kernel: audit: type=1334 audit(1734056078.432:93): prog-id=5 op=UNLOAD
Dec 13 02:14:39.297470 kernel: audit: type=1334 audit(1734056078.446:94): prog-id=15 op=LOAD
Dec 13 02:14:39.297492 kernel: audit: type=1334 audit(1734056078.446:95): prog-id=12 op=UNLOAD
Dec 13 02:14:39.297516 kernel: audit: type=1334 audit(1734056078.474:96): prog-id=16 op=LOAD
Dec 13 02:14:39.297538 kernel: audit: type=1334 audit(1734056078.481:97): prog-id=17 op=LOAD
Dec 13 02:14:39.297561 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 13 02:14:39.297586 systemd[1]: Stopped initrd-switch-root.service.
Dec 13 02:14:39.297614 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 13 02:14:39.298525 systemd[1]: Created slice system-addon\x2dconfig.slice.
Dec 13 02:14:39.298567 systemd[1]: Created slice system-addon\x2drun.slice.
Dec 13 02:14:39.298593 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
Dec 13 02:14:39.298619 systemd[1]: Created slice system-getty.slice.
Dec 13 02:14:39.298643 systemd[1]: Created slice system-modprobe.slice.
Dec 13 02:14:39.298667 systemd[1]: Created slice system-serial\x2dgetty.slice.
Dec 13 02:14:39.298697 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Dec 13 02:14:39.298722 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Dec 13 02:14:39.298746 systemd[1]: Created slice user.slice.
Dec 13 02:14:39.298778 systemd[1]: Started systemd-ask-password-console.path. Dec 13 02:14:39.298804 systemd[1]: Started systemd-ask-password-wall.path. Dec 13 02:14:39.298828 systemd[1]: Set up automount boot.automount. Dec 13 02:14:39.298852 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Dec 13 02:14:39.298876 systemd[1]: Stopped target initrd-switch-root.target. Dec 13 02:14:39.298899 systemd[1]: Stopped target initrd-fs.target. Dec 13 02:14:39.298927 systemd[1]: Stopped target initrd-root-fs.target. Dec 13 02:14:39.298952 systemd[1]: Reached target integritysetup.target. Dec 13 02:14:39.298976 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 02:14:39.299000 systemd[1]: Reached target remote-fs.target. Dec 13 02:14:39.299021 systemd[1]: Reached target slices.target. Dec 13 02:14:39.299045 systemd[1]: Reached target swap.target. Dec 13 02:14:39.299069 systemd[1]: Reached target torcx.target. Dec 13 02:14:39.299093 systemd[1]: Reached target veritysetup.target. Dec 13 02:14:39.299117 systemd[1]: Listening on systemd-coredump.socket. Dec 13 02:14:39.299140 systemd[1]: Listening on systemd-initctl.socket. Dec 13 02:14:39.299182 systemd[1]: Listening on systemd-networkd.socket. Dec 13 02:14:39.299206 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 02:14:39.299230 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 02:14:39.299252 systemd[1]: Listening on systemd-userdbd.socket. Dec 13 02:14:39.299276 systemd[1]: Mounting dev-hugepages.mount... Dec 13 02:14:39.299300 systemd[1]: Mounting dev-mqueue.mount... Dec 13 02:14:39.299325 systemd[1]: Mounting media.mount... Dec 13 02:14:39.299353 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:14:39.299376 systemd[1]: Mounting sys-kernel-debug.mount... Dec 13 02:14:39.299402 systemd[1]: Mounting sys-kernel-tracing.mount... Dec 13 02:14:39.299425 systemd[1]: Mounting tmp.mount... 
Dec 13 02:14:39.299449 systemd[1]: Starting flatcar-tmpfiles.service... Dec 13 02:14:39.299472 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 02:14:39.299496 systemd[1]: Starting kmod-static-nodes.service... Dec 13 02:14:39.299518 systemd[1]: Starting modprobe@configfs.service... Dec 13 02:14:39.299542 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 02:14:39.299565 systemd[1]: Starting modprobe@drm.service... Dec 13 02:14:39.299589 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 02:14:39.299615 systemd[1]: Starting modprobe@fuse.service... Dec 13 02:14:39.299639 systemd[1]: Starting modprobe@loop.service... Dec 13 02:14:39.299662 kernel: fuse: init (API version 7.34) Dec 13 02:14:39.299686 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 02:14:39.299709 kernel: loop: module loaded Dec 13 02:14:39.299733 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 02:14:39.299762 systemd[1]: Stopped systemd-fsck-root.service. Dec 13 02:14:39.299792 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 02:14:39.299816 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 02:14:39.299844 systemd[1]: Stopped systemd-journald.service. Dec 13 02:14:39.299867 systemd[1]: Starting systemd-journald.service... Dec 13 02:14:39.299891 systemd[1]: Starting systemd-modules-load.service... Dec 13 02:14:39.299914 systemd[1]: Starting systemd-network-generator.service... Dec 13 02:14:39.299937 systemd[1]: Starting systemd-remount-fs.service... Dec 13 02:14:39.299966 systemd-journald[998]: Journal started Dec 13 02:14:39.300059 systemd-journald[998]: Runtime Journal (/run/log/journal/a3ddbb5e27dd6c52958b0dffd3199ee6) is 8.0M, max 148.8M, 140.8M free. 
Dec 13 02:14:35.029000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 02:14:35.176000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 02:14:35.176000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 02:14:35.176000 audit: BPF prog-id=10 op=LOAD Dec 13 02:14:35.176000 audit: BPF prog-id=10 op=UNLOAD Dec 13 02:14:35.177000 audit: BPF prog-id=11 op=LOAD Dec 13 02:14:35.177000 audit: BPF prog-id=11 op=UNLOAD Dec 13 02:14:35.325000 audit[908]: AVC avc: denied { associate } for pid=908 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Dec 13 02:14:35.325000 audit[908]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c00014d8b2 a1=c0000cede0 a2=c0000d70c0 a3=32 items=0 ppid=891 pid=908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:14:35.325000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 02:14:35.336000 audit[908]: AVC avc: denied { associate } for pid=908 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Dec 13 02:14:35.336000 audit[908]: SYSCALL 
arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c00014d989 a2=1ed a3=0 items=2 ppid=891 pid=908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:14:35.336000 audit: CWD cwd="/" Dec 13 02:14:35.336000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:35.336000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:35.336000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 02:14:38.413000 audit: BPF prog-id=12 op=LOAD Dec 13 02:14:38.413000 audit: BPF prog-id=3 op=UNLOAD Dec 13 02:14:38.418000 audit: BPF prog-id=13 op=LOAD Dec 13 02:14:38.432000 audit: BPF prog-id=14 op=LOAD Dec 13 02:14:38.432000 audit: BPF prog-id=4 op=UNLOAD Dec 13 02:14:38.432000 audit: BPF prog-id=5 op=UNLOAD Dec 13 02:14:38.446000 audit: BPF prog-id=15 op=LOAD Dec 13 02:14:38.446000 audit: BPF prog-id=12 op=UNLOAD Dec 13 02:14:38.474000 audit: BPF prog-id=16 op=LOAD Dec 13 02:14:38.481000 audit: BPF prog-id=17 op=LOAD Dec 13 02:14:38.481000 audit: BPF prog-id=13 op=UNLOAD Dec 13 02:14:38.481000 audit: BPF prog-id=14 op=UNLOAD Dec 13 02:14:38.483000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:14:38.504000 audit: BPF prog-id=15 op=UNLOAD Dec 13 02:14:38.511000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:14:38.511000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:14:39.212000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:14:39.233000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:14:39.247000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:14:39.247000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:14:39.248000 audit: BPF prog-id=18 op=LOAD Dec 13 02:14:39.248000 audit: BPF prog-id=19 op=LOAD Dec 13 02:14:39.248000 audit: BPF prog-id=20 op=LOAD Dec 13 02:14:39.248000 audit: BPF prog-id=16 op=UNLOAD Dec 13 02:14:39.248000 audit: BPF prog-id=17 op=UNLOAD Dec 13 02:14:39.289000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 13 02:14:39.289000 audit[998]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffec447f340 a2=4000 a3=7ffec447f3dc items=0 ppid=1 pid=998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:14:39.289000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 13 02:14:35.318280 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2024-12-13T02:14:35Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 02:14:38.412731 systemd[1]: Queued start job for default target multi-user.target. Dec 13 02:14:35.319543 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2024-12-13T02:14:35Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Dec 13 02:14:38.484077 systemd[1]: systemd-journald.service: Deactivated successfully. 
Dec 13 02:14:35.319580 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2024-12-13T02:14:35Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Dec 13 02:14:35.319635 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2024-12-13T02:14:35Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Dec 13 02:14:35.319657 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2024-12-13T02:14:35Z" level=debug msg="skipped missing lower profile" missing profile=oem Dec 13 02:14:35.319715 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2024-12-13T02:14:35Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Dec 13 02:14:35.319741 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2024-12-13T02:14:35Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Dec 13 02:14:35.320042 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2024-12-13T02:14:35Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Dec 13 02:14:35.320112 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2024-12-13T02:14:35Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Dec 13 02:14:35.320139 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2024-12-13T02:14:35Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Dec 13 02:14:35.322113 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2024-12-13T02:14:35Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Dec 13 02:14:35.322957 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2024-12-13T02:14:35Z" level=debug msg="new archive/reference added to cache" format=tgz 
name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Dec 13 02:14:35.322995 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2024-12-13T02:14:35Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.6: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.6 Dec 13 02:14:35.323023 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2024-12-13T02:14:35Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Dec 13 02:14:35.323056 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2024-12-13T02:14:35Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.6: no such file or directory" path=/var/lib/torcx/store/3510.3.6 Dec 13 02:14:35.323083 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2024-12-13T02:14:35Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Dec 13 02:14:37.822337 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2024-12-13T02:14:37Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 02:14:37.822640 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2024-12-13T02:14:37Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 02:14:37.822794 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2024-12-13T02:14:37Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 
02:14:37.823020 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2024-12-13T02:14:37Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 02:14:37.823081 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2024-12-13T02:14:37Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Dec 13 02:14:37.823151 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2024-12-13T02:14:37Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Dec 13 02:14:39.319206 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 02:14:39.339662 systemd[1]: verity-setup.service: Deactivated successfully. Dec 13 02:14:39.339763 systemd[1]: Stopped verity-setup.service. Dec 13 02:14:39.346000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:14:39.360235 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:14:39.369216 systemd[1]: Started systemd-journald.service. Dec 13 02:14:39.376000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:14:39.378777 systemd[1]: Mounted dev-hugepages.mount. 
Dec 13 02:14:39.387525 systemd[1]: Mounted dev-mqueue.mount. Dec 13 02:14:39.394464 systemd[1]: Mounted media.mount. Dec 13 02:14:39.401438 systemd[1]: Mounted sys-kernel-debug.mount. Dec 13 02:14:39.410448 systemd[1]: Mounted sys-kernel-tracing.mount. Dec 13 02:14:39.419413 systemd[1]: Mounted tmp.mount. Dec 13 02:14:39.426507 systemd[1]: Finished flatcar-tmpfiles.service. Dec 13 02:14:39.434000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:14:39.435595 systemd[1]: Finished kmod-static-nodes.service. Dec 13 02:14:39.443000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:14:39.444638 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 02:14:39.444847 systemd[1]: Finished modprobe@configfs.service. Dec 13 02:14:39.453000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:14:39.453000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:14:39.454682 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 02:14:39.454907 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 02:14:39.462000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:14:39.462000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:14:39.463667 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 02:14:39.463878 systemd[1]: Finished modprobe@drm.service. Dec 13 02:14:39.471000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:14:39.471000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:14:39.472693 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 02:14:39.472986 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 02:14:39.480000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:14:39.480000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:14:39.481629 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 02:14:39.481837 systemd[1]: Finished modprobe@fuse.service. Dec 13 02:14:39.489000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:14:39.489000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:14:39.490634 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 02:14:39.490832 systemd[1]: Finished modprobe@loop.service. Dec 13 02:14:39.498000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:14:39.498000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:14:39.499649 systemd[1]: Finished systemd-modules-load.service. Dec 13 02:14:39.507000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:14:39.508630 systemd[1]: Finished systemd-network-generator.service. Dec 13 02:14:39.516000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:14:39.517608 systemd[1]: Finished systemd-remount-fs.service. Dec 13 02:14:39.525000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:14:39.526608 systemd[1]: Finished systemd-udev-trigger.service. 
Dec 13 02:14:39.534000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:14:39.536009 systemd[1]: Reached target network-pre.target. Dec 13 02:14:39.545677 systemd[1]: Mounting sys-fs-fuse-connections.mount... Dec 13 02:14:39.555742 systemd[1]: Mounting sys-kernel-config.mount... Dec 13 02:14:39.563300 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 02:14:39.566057 systemd[1]: Starting systemd-hwdb-update.service... Dec 13 02:14:39.574772 systemd[1]: Starting systemd-journal-flush.service... Dec 13 02:14:39.583316 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 02:14:39.584924 systemd[1]: Starting systemd-random-seed.service... Dec 13 02:14:39.592335 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 02:14:39.594014 systemd[1]: Starting systemd-sysctl.service... Dec 13 02:14:39.594739 systemd-journald[998]: Time spent on flushing to /var/log/journal/a3ddbb5e27dd6c52958b0dffd3199ee6 is 62.496ms for 1140 entries. Dec 13 02:14:39.594739 systemd-journald[998]: System Journal (/var/log/journal/a3ddbb5e27dd6c52958b0dffd3199ee6) is 8.0M, max 584.8M, 576.8M free. Dec 13 02:14:39.677908 systemd-journald[998]: Received client request to flush runtime journal. Dec 13 02:14:39.657000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:14:39.666000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:14:39.610988 systemd[1]: Starting systemd-sysusers.service... Dec 13 02:14:39.619877 systemd[1]: Starting systemd-udev-settle.service... Dec 13 02:14:39.630609 systemd[1]: Mounted sys-fs-fuse-connections.mount. Dec 13 02:14:39.640469 systemd[1]: Mounted sys-kernel-config.mount. Dec 13 02:14:39.649667 systemd[1]: Finished systemd-random-seed.service. Dec 13 02:14:39.658685 systemd[1]: Finished systemd-sysctl.service. Dec 13 02:14:39.670829 systemd[1]: Reached target first-boot-complete.target. Dec 13 02:14:39.679964 systemd[1]: Finished systemd-journal-flush.service. Dec 13 02:14:39.687000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:14:39.688983 systemd[1]: Finished systemd-sysusers.service. Dec 13 02:14:39.696000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:14:39.700076 udevadm[1012]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Dec 13 02:14:40.259315 systemd[1]: Finished systemd-hwdb-update.service. Dec 13 02:14:40.266000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:14:40.266000 audit: BPF prog-id=21 op=LOAD Dec 13 02:14:40.266000 audit: BPF prog-id=22 op=LOAD Dec 13 02:14:40.267000 audit: BPF prog-id=7 op=UNLOAD Dec 13 02:14:40.267000 audit: BPF prog-id=8 op=UNLOAD Dec 13 02:14:40.269094 systemd[1]: Starting systemd-udevd.service... Dec 13 02:14:40.292448 systemd-udevd[1015]: Using default interface naming scheme 'v252'. Dec 13 02:14:40.336297 systemd[1]: Started systemd-udevd.service. Dec 13 02:14:40.343000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:14:40.345000 audit: BPF prog-id=23 op=LOAD Dec 13 02:14:40.347995 systemd[1]: Starting systemd-networkd.service... Dec 13 02:14:40.362000 audit: BPF prog-id=24 op=LOAD Dec 13 02:14:40.362000 audit: BPF prog-id=25 op=LOAD Dec 13 02:14:40.362000 audit: BPF prog-id=26 op=LOAD Dec 13 02:14:40.364739 systemd[1]: Starting systemd-userdbd.service... Dec 13 02:14:40.409306 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Dec 13 02:14:40.445477 systemd[1]: Started systemd-userdbd.service. Dec 13 02:14:40.453000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:14:40.503191 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Dec 13 02:14:40.581221 kernel: ACPI: button: Power Button [PWRF] Dec 13 02:14:40.592258 systemd-networkd[1028]: lo: Link UP Dec 13 02:14:40.592271 systemd-networkd[1028]: lo: Gained carrier Dec 13 02:14:40.593041 systemd-networkd[1028]: Enumeration completed Dec 13 02:14:40.593214 systemd[1]: Started systemd-networkd.service. Dec 13 02:14:40.594125 systemd-networkd[1028]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Dec 13 02:14:40.596388 systemd-networkd[1028]: eth0: Link UP Dec 13 02:14:40.596400 systemd-networkd[1028]: eth0: Gained carrier Dec 13 02:14:40.601000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:14:40.607350 systemd-networkd[1028]: eth0: DHCPv4 address 10.128.0.35/32, gateway 10.128.0.1 acquired from 169.254.169.254 Dec 13 02:14:40.635197 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3 Dec 13 02:14:40.649199 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1016) Dec 13 02:14:40.634000 audit[1044]: AVC avc: denied { confidentiality } for pid=1044 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Dec 13 02:14:40.661193 kernel: ACPI: button: Sleep Button [SLPF] Dec 13 02:14:40.634000 audit[1044]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=555806291390 a1=337fc a2=7fd5278e0bc5 a3=5 items=110 ppid=1015 pid=1044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:14:40.634000 audit: CWD cwd="/" Dec 13 02:14:40.634000 audit: PATH item=0 name=(null) inode=40 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:40.634000 audit: PATH item=1 name=(null) inode=14633 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:40.634000 audit: PATH item=2 name=(null) inode=14633 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:40.634000 audit: PATH item=3 name=(null) inode=14634 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:40.634000 audit: PATH item=4 name=(null) inode=14633 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:40.634000 audit: PATH item=5 name=(null) inode=14635 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:40.634000 audit: PATH item=6 name=(null) inode=14633 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:40.634000 audit: PATH item=7 name=(null) inode=14636 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:40.634000 audit: PATH item=8 name=(null) inode=14636 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:40.634000 audit: PATH item=9 name=(null) inode=14637 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:40.634000 audit: PATH item=10 name=(null) inode=14636 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:40.634000 audit: PATH item=11 name=(null) inode=14638 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:40.634000 audit: PATH item=12 name=(null) inode=14636 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:40.634000 audit: PATH item=13 name=(null) inode=14639 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:40.634000 audit: PATH item=14 name=(null) inode=14636 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:40.634000 audit: PATH item=15 name=(null) inode=14640 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:40.680195 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Dec 13 02:14:40.704706 kernel: EDAC MC: Ver: 3.0.0 Dec 13 02:14:40.634000 audit: PATH item=16 name=(null) inode=14636 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:40.634000 audit: PATH item=17 name=(null) inode=14641 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:40.634000 audit: PATH item=18 name=(null) inode=14633 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:40.634000 audit: PATH item=19 name=(null) inode=14642 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 
02:14:40.634000 audit: PATH item=20 name=(null) inode=14642 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:40.634000 audit: PATH item=21 name=(null) inode=14643 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:40.634000 audit: PATH item=22 name=(null) inode=14642 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:40.634000 audit: PATH item=23 name=(null) inode=14644 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:40.634000 audit: PATH item=24 name=(null) inode=14642 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:40.634000 audit: PATH item=25 name=(null) inode=14645 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:40.634000 audit: PATH item=26 name=(null) inode=14642 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:40.634000 audit: PATH item=27 name=(null) inode=14646 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:40.634000 audit: PATH item=28 name=(null) inode=14642 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:40.634000 audit: PATH item=29 
name=(null) inode=14647 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:40.634000 audit: PATH item=30 name=(null) inode=14633 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:40.634000 audit: PATH item=31 name=(null) inode=14648 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:40.634000 audit: PATH item=32 name=(null) inode=14648 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:40.634000 audit: PATH item=33 name=(null) inode=14649 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:40.634000 audit: PATH item=34 name=(null) inode=14648 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:40.634000 audit: PATH item=35 name=(null) inode=14650 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:40.634000 audit: PATH item=36 name=(null) inode=14648 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:40.634000 audit: PATH item=37 name=(null) inode=14651 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:40.634000 audit: PATH item=38 name=(null) inode=14648 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:40.634000 audit: PATH item=39 name=(null) inode=14652 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:40.634000 audit: PATH item=40 name=(null) inode=14648 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:40.634000 audit: PATH item=41 name=(null) inode=14653 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:40.634000 audit: PATH item=42 name=(null) inode=14633 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:40.634000 audit: PATH item=43 name=(null) inode=14654 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:40.634000 audit: PATH item=44 name=(null) inode=14654 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:40.634000 audit: PATH item=45 name=(null) inode=14655 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:40.634000 audit: PATH item=46 name=(null) inode=14654 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:40.634000 audit: PATH item=47 name=(null) inode=14656 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:40.634000 audit: PATH item=48 name=(null) inode=14654 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:40.634000 audit: PATH item=49 name=(null) inode=14657 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:40.634000 audit: PATH item=50 name=(null) inode=14654 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:40.634000 audit: PATH item=51 name=(null) inode=14658 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:40.634000 audit: PATH item=52 name=(null) inode=14654 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:40.634000 audit: PATH item=53 name=(null) inode=14659 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:40.634000 audit: PATH item=54 name=(null) inode=40 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:40.634000 audit: PATH item=55 name=(null) inode=14660 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:40.634000 audit: PATH item=56 name=(null) inode=14660 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:40.634000 audit: PATH item=57 name=(null) inode=14661 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:40.634000 audit: PATH item=58 name=(null) inode=14660 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:40.634000 audit: PATH item=59 name=(null) inode=14662 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:40.634000 audit: PATH item=60 name=(null) inode=14660 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:40.634000 audit: PATH item=61 name=(null) inode=14663 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:40.634000 audit: PATH item=62 name=(null) inode=14663 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:40.634000 audit: PATH item=63 name=(null) inode=14664 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:40.634000 audit: PATH item=64 name=(null) inode=14663 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:40.634000 audit: PATH item=65 name=(null) inode=14665 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Dec 13 02:14:40.634000 audit: PATH item=66 name=(null) inode=14663 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:40.634000 audit: PATH item=67 name=(null) inode=14666 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:40.634000 audit: PATH item=68 name=(null) inode=14663 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:40.634000 audit: PATH item=69 name=(null) inode=14667 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:40.634000 audit: PATH item=70 name=(null) inode=14663 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:40.634000 audit: PATH item=71 name=(null) inode=14668 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:40.634000 audit: PATH item=72 name=(null) inode=14660 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:40.634000 audit: PATH item=73 name=(null) inode=14669 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:40.634000 audit: PATH item=74 name=(null) inode=14669 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 
02:14:40.634000 audit: PATH item=75 name=(null) inode=14670 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:40.634000 audit: PATH item=76 name=(null) inode=14669 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:40.634000 audit: PATH item=77 name=(null) inode=14671 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:40.634000 audit: PATH item=78 name=(null) inode=14669 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:40.634000 audit: PATH item=79 name=(null) inode=14672 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:40.634000 audit: PATH item=80 name=(null) inode=14669 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:40.634000 audit: PATH item=81 name=(null) inode=14673 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:40.634000 audit: PATH item=82 name=(null) inode=14669 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:40.634000 audit: PATH item=83 name=(null) inode=14674 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:40.634000 audit: PATH item=84 
name=(null) inode=14660 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:40.634000 audit: PATH item=85 name=(null) inode=14675 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:40.634000 audit: PATH item=86 name=(null) inode=14675 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:40.634000 audit: PATH item=87 name=(null) inode=14676 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:40.634000 audit: PATH item=88 name=(null) inode=14675 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:40.634000 audit: PATH item=89 name=(null) inode=14677 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:40.634000 audit: PATH item=90 name=(null) inode=14675 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:40.634000 audit: PATH item=91 name=(null) inode=14678 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:40.634000 audit: PATH item=92 name=(null) inode=14675 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:40.634000 audit: PATH item=93 name=(null) inode=14679 dev=00:0b 
mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:40.634000 audit: PATH item=94 name=(null) inode=14675 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:40.634000 audit: PATH item=95 name=(null) inode=14680 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:40.634000 audit: PATH item=96 name=(null) inode=14660 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:40.634000 audit: PATH item=97 name=(null) inode=14681 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:40.634000 audit: PATH item=98 name=(null) inode=14681 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:40.634000 audit: PATH item=99 name=(null) inode=14682 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:40.634000 audit: PATH item=100 name=(null) inode=14681 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:40.634000 audit: PATH item=101 name=(null) inode=14683 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:40.634000 audit: PATH item=102 name=(null) inode=14681 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:40.634000 audit: PATH item=103 name=(null) inode=14684 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:40.634000 audit: PATH item=104 name=(null) inode=14681 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:40.634000 audit: PATH item=105 name=(null) inode=14685 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:40.634000 audit: PATH item=106 name=(null) inode=14681 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:40.634000 audit: PATH item=107 name=(null) inode=14686 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:40.634000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:40.634000 audit: PATH item=109 name=(null) inode=14687 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:14:40.634000 audit: PROCTITLE proctitle="(udev-worker)" Dec 13 02:14:40.727208 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Dec 13 02:14:40.728578 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. 
Dec 13 02:14:40.742185 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 02:14:40.757674 systemd[1]: Finished systemd-udev-settle.service. Dec 13 02:14:40.765000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:14:40.767815 systemd[1]: Starting lvm2-activation-early.service... Dec 13 02:14:40.795607 lvm[1052]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 02:14:40.824426 systemd[1]: Finished lvm2-activation-early.service. Dec 13 02:14:40.832000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:14:40.833537 systemd[1]: Reached target cryptsetup.target. Dec 13 02:14:40.843813 systemd[1]: Starting lvm2-activation.service... Dec 13 02:14:40.850192 lvm[1053]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 02:14:40.882561 systemd[1]: Finished lvm2-activation.service. Dec 13 02:14:40.890000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:14:40.891491 systemd[1]: Reached target local-fs-pre.target. Dec 13 02:14:40.900322 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 02:14:40.900372 systemd[1]: Reached target local-fs.target. Dec 13 02:14:40.908322 systemd[1]: Reached target machines.target. Dec 13 02:14:40.917818 systemd[1]: Starting ldconfig.service... Dec 13 02:14:40.926356 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Dec 13 02:14:40.926452 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 02:14:40.928152 systemd[1]: Starting systemd-boot-update.service... Dec 13 02:14:40.936879 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Dec 13 02:14:40.948101 systemd[1]: Starting systemd-machine-id-commit.service... Dec 13 02:14:40.950107 systemd[1]: Starting systemd-sysext.service... Dec 13 02:14:40.950860 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1055 (bootctl) Dec 13 02:14:40.953535 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Dec 13 02:14:40.972000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:14:40.973313 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Dec 13 02:14:40.981082 systemd[1]: Unmounting usr-share-oem.mount... Dec 13 02:14:40.987838 systemd[1]: usr-share-oem.mount: Deactivated successfully. Dec 13 02:14:40.988585 systemd[1]: Unmounted usr-share-oem.mount. Dec 13 02:14:41.010215 kernel: loop0: detected capacity change from 0 to 210664 Dec 13 02:14:41.106379 systemd-fsck[1067]: fsck.fat 4.2 (2021-01-31) Dec 13 02:14:41.106379 systemd-fsck[1067]: /dev/sda1: 789 files, 119291/258078 clusters Dec 13 02:14:41.110887 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Dec 13 02:14:41.119000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:14:41.125332 systemd[1]: Mounting boot.mount... 
Dec 13 02:14:41.161092 systemd[1]: Mounted boot.mount. Dec 13 02:14:41.184558 systemd[1]: Finished systemd-boot-update.service. Dec 13 02:14:41.192000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:14:41.329706 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 02:14:41.330527 systemd[1]: Finished systemd-machine-id-commit.service. Dec 13 02:14:41.338000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:14:41.388189 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 02:14:41.417275 kernel: loop1: detected capacity change from 0 to 210664 Dec 13 02:14:41.439846 (sd-sysext)[1071]: Using extensions 'kubernetes'. Dec 13 02:14:41.440527 (sd-sysext)[1071]: Merged extensions into '/usr'. Dec 13 02:14:41.465108 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:14:41.468237 systemd[1]: Mounting usr-share-oem.mount... Dec 13 02:14:41.475690 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 02:14:41.480328 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 02:14:41.489500 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 02:14:41.498373 systemd[1]: Starting modprobe@loop.service... Dec 13 02:14:41.506360 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 02:14:41.506594 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Dec 13 02:14:41.506799 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:14:41.511419 systemd[1]: Mounted usr-share-oem.mount. Dec 13 02:14:41.518769 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 02:14:41.518974 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 02:14:41.526000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:14:41.526000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:14:41.527917 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 02:14:41.528106 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 02:14:41.533384 ldconfig[1054]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 02:14:41.536837 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 02:14:41.537044 systemd[1]: Finished modprobe@loop.service. Dec 13 02:14:41.535000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:14:41.535000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:14:41.544000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:14:41.544000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:14:41.545906 systemd[1]: Finished ldconfig.service. Dec 13 02:14:41.552000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:14:41.553962 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 02:14:41.554175 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 02:14:41.555744 systemd[1]: Finished systemd-sysext.service. Dec 13 02:14:41.563000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:14:41.565808 systemd[1]: Starting ensure-sysext.service... Dec 13 02:14:41.574565 systemd[1]: Starting systemd-tmpfiles-setup.service... Dec 13 02:14:41.585495 systemd[1]: Reloading. Dec 13 02:14:41.607416 systemd-tmpfiles[1078]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Dec 13 02:14:41.619777 systemd-tmpfiles[1078]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 02:14:41.630682 systemd-tmpfiles[1078]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
Dec 13 02:14:41.686665 /usr/lib/systemd/system-generators/torcx-generator[1097]: time="2024-12-13T02:14:41Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 02:14:41.686713 /usr/lib/systemd/system-generators/torcx-generator[1097]: time="2024-12-13T02:14:41Z" level=info msg="torcx already run" Dec 13 02:14:41.845541 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 02:14:41.845575 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 02:14:41.884798 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Dec 13 02:14:41.960000 audit: BPF prog-id=27 op=LOAD Dec 13 02:14:41.960000 audit: BPF prog-id=18 op=UNLOAD Dec 13 02:14:41.960000 audit: BPF prog-id=28 op=LOAD Dec 13 02:14:41.960000 audit: BPF prog-id=29 op=LOAD Dec 13 02:14:41.960000 audit: BPF prog-id=19 op=UNLOAD Dec 13 02:14:41.960000 audit: BPF prog-id=20 op=UNLOAD Dec 13 02:14:41.964000 audit: BPF prog-id=30 op=LOAD Dec 13 02:14:41.964000 audit: BPF prog-id=24 op=UNLOAD Dec 13 02:14:41.964000 audit: BPF prog-id=31 op=LOAD Dec 13 02:14:41.964000 audit: BPF prog-id=32 op=LOAD Dec 13 02:14:41.964000 audit: BPF prog-id=25 op=UNLOAD Dec 13 02:14:41.964000 audit: BPF prog-id=26 op=UNLOAD Dec 13 02:14:41.965000 audit: BPF prog-id=33 op=LOAD Dec 13 02:14:41.965000 audit: BPF prog-id=34 op=LOAD Dec 13 02:14:41.965000 audit: BPF prog-id=21 op=UNLOAD Dec 13 02:14:41.965000 audit: BPF prog-id=22 op=UNLOAD Dec 13 02:14:41.967000 audit: BPF prog-id=35 op=LOAD Dec 13 02:14:41.967000 audit: BPF prog-id=23 op=UNLOAD Dec 13 02:14:41.972271 systemd[1]: Finished systemd-tmpfiles-setup.service. Dec 13 02:14:41.979000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:14:41.986865 systemd[1]: Starting audit-rules.service... Dec 13 02:14:41.995659 systemd[1]: Starting clean-ca-certificates.service... Dec 13 02:14:42.006116 systemd[1]: Starting oem-gce-enable-oslogin.service... Dec 13 02:14:42.016221 systemd[1]: Starting systemd-journal-catalog-update.service... Dec 13 02:14:42.024000 audit: BPF prog-id=36 op=LOAD Dec 13 02:14:42.027140 systemd[1]: Starting systemd-resolved.service... Dec 13 02:14:42.035000 audit: BPF prog-id=37 op=LOAD Dec 13 02:14:42.037991 systemd[1]: Starting systemd-timesyncd.service... Dec 13 02:14:42.046702 systemd[1]: Starting systemd-update-utmp.service... 
Dec 13 02:14:42.054000 audit[1166]: SYSTEM_BOOT pid=1166 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 13 02:14:42.057129 systemd[1]: Finished clean-ca-certificates.service. Dec 13 02:14:42.065985 systemd[1]: oem-gce-enable-oslogin.service: Deactivated successfully. Dec 13 02:14:42.066247 systemd[1]: Finished oem-gce-enable-oslogin.service. Dec 13 02:14:42.064000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:14:42.070000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 13 02:14:42.070000 audit[1172]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe81e246b0 a2=420 a3=0 items=0 ppid=1142 pid=1172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:14:42.070000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 13 02:14:42.072403 augenrules[1172]: No rules Dec 13 02:14:42.075659 systemd[1]: Finished audit-rules.service. Dec 13 02:14:42.083596 systemd[1]: Finished systemd-journal-catalog-update.service. Dec 13 02:14:42.102780 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:14:42.105687 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 02:14:42.107941 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 02:14:42.117117 systemd[1]: Starting modprobe@efi_pstore.service... 
Dec 13 02:14:42.126232 systemd[1]: Starting modprobe@loop.service... Dec 13 02:14:42.135107 systemd[1]: Starting oem-gce-enable-oslogin.service... Dec 13 02:14:42.141592 enable-oslogin[1180]: /etc/pam.d/sshd already exists. Not enabling OS Login Dec 13 02:14:42.143360 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 02:14:42.143610 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 02:14:42.145918 systemd[1]: Starting systemd-update-done.service... Dec 13 02:14:42.153278 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 02:14:42.153500 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:14:42.156247 systemd[1]: Finished systemd-update-utmp.service. Dec 13 02:14:42.165010 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 02:14:42.165234 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 02:14:42.173951 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 02:14:42.174148 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 02:14:42.183102 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 02:14:42.183359 systemd[1]: Finished modprobe@loop.service. Dec 13 02:14:42.192053 systemd[1]: oem-gce-enable-oslogin.service: Deactivated successfully. Dec 13 02:14:42.192302 systemd[1]: Finished oem-gce-enable-oslogin.service. Dec 13 02:14:42.200918 systemd[1]: Finished systemd-update-done.service. Dec 13 02:14:42.211516 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Dec 13 02:14:42.211740 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 02:14:42.214577 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:14:42.215019 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 02:14:42.219934 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 02:14:42.226365 systemd-resolved[1159]: Positive Trust Anchors: Dec 13 02:14:42.226384 systemd-resolved[1159]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 02:14:42.226438 systemd-resolved[1159]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 02:14:42.228998 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 02:14:42.230827 systemd-timesyncd[1163]: Contacted time server 169.254.169.254:123 (169.254.169.254). Dec 13 02:14:42.231310 systemd-timesyncd[1163]: Initial clock synchronization to Fri 2024-12-13 02:14:42.101905 UTC. Dec 13 02:14:42.237837 systemd[1]: Starting modprobe@loop.service... Dec 13 02:14:42.246785 systemd[1]: Starting oem-gce-enable-oslogin.service... Dec 13 02:14:42.252692 enable-oslogin[1186]: /etc/pam.d/sshd already exists. Not enabling OS Login Dec 13 02:14:42.255343 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Dec 13 02:14:42.255577 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 02:14:42.255772 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 02:14:42.255925 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:14:42.257720 systemd[1]: Started systemd-timesyncd.service. Dec 13 02:14:42.264377 systemd-resolved[1159]: Defaulting to hostname 'linux'. Dec 13 02:14:42.267396 systemd[1]: Started systemd-resolved.service. Dec 13 02:14:42.275821 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 02:14:42.276038 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 02:14:42.284827 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 02:14:42.285032 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 02:14:42.293781 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 02:14:42.293995 systemd[1]: Finished modprobe@loop.service. Dec 13 02:14:42.302758 systemd[1]: oem-gce-enable-oslogin.service: Deactivated successfully. Dec 13 02:14:42.302995 systemd[1]: Finished oem-gce-enable-oslogin.service. Dec 13 02:14:42.311999 systemd[1]: Reached target network.target. Dec 13 02:14:42.320423 systemd[1]: Reached target nss-lookup.target. Dec 13 02:14:42.329463 systemd[1]: Reached target time-set.target. Dec 13 02:14:42.338427 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 02:14:42.338708 systemd[1]: Reached target sysinit.target. Dec 13 02:14:42.347591 systemd[1]: Started motdgen.path. Dec 13 02:14:42.354536 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. 
Dec 13 02:14:42.364736 systemd[1]: Started logrotate.timer. Dec 13 02:14:42.371633 systemd[1]: Started mdadm.timer. Dec 13 02:14:42.378480 systemd[1]: Started systemd-tmpfiles-clean.timer. Dec 13 02:14:42.387393 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 02:14:42.387635 systemd[1]: Reached target paths.target. Dec 13 02:14:42.394415 systemd[1]: Reached target timers.target. Dec 13 02:14:42.401875 systemd[1]: Listening on dbus.socket. Dec 13 02:14:42.412037 systemd[1]: Starting docker.socket... Dec 13 02:14:42.423292 systemd[1]: Listening on sshd.socket. Dec 13 02:14:42.430562 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 02:14:42.430870 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 02:14:42.433742 systemd[1]: Listening on docker.socket. Dec 13 02:14:42.442845 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 02:14:42.443135 systemd[1]: Reached target sockets.target. Dec 13 02:14:42.451408 systemd[1]: Reached target basic.target. Dec 13 02:14:42.458372 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 02:14:42.458612 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 02:14:42.460696 systemd[1]: Starting containerd.service... Dec 13 02:14:42.469124 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Dec 13 02:14:42.481807 systemd[1]: Starting dbus.service... Dec 13 02:14:42.490072 systemd[1]: Starting enable-oem-cloudinit.service... Dec 13 02:14:42.500594 systemd[1]: Starting extend-filesystems.service... 
Dec 13 02:14:42.507499 jq[1192]: false Dec 13 02:14:42.507306 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Dec 13 02:14:42.509526 systemd[1]: Starting modprobe@drm.service... Dec 13 02:14:42.518190 systemd[1]: Starting motdgen.service... Dec 13 02:14:42.530232 systemd[1]: Starting ssh-key-proc-cmdline.service... Dec 13 02:14:42.539320 systemd[1]: Starting sshd-keygen.service... Dec 13 02:14:42.549429 systemd[1]: Starting systemd-networkd-wait-online.service... Dec 13 02:14:42.558293 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 02:14:42.558556 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionSecurity=!tpm2). Dec 13 02:14:42.559415 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 02:14:42.561459 systemd[1]: Starting update-engine.service... Dec 13 02:14:42.571222 systemd[1]: Starting update-ssh-keys-after-ignition.service... Dec 13 02:14:42.584459 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 02:14:42.584764 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Dec 13 02:14:42.585388 systemd-networkd[1028]: eth0: Gained IPv6LL Dec 13 02:14:42.585716 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 02:14:42.587359 systemd[1]: Finished modprobe@drm.service. Dec 13 02:14:42.589474 jq[1213]: true Dec 13 02:14:42.597098 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 02:14:42.597418 systemd[1]: Finished ssh-key-proc-cmdline.service. 
Dec 13 02:14:42.609265 extend-filesystems[1194]: Found loop1 Dec 13 02:14:42.609265 extend-filesystems[1194]: Found sda Dec 13 02:14:42.609265 extend-filesystems[1194]: Found sda1 Dec 13 02:14:42.609265 extend-filesystems[1194]: Found sda2 Dec 13 02:14:42.609265 extend-filesystems[1194]: Found sda3 Dec 13 02:14:42.609265 extend-filesystems[1194]: Found usr Dec 13 02:14:42.609265 extend-filesystems[1194]: Found sda4 Dec 13 02:14:42.609265 extend-filesystems[1194]: Found sda6 Dec 13 02:14:42.609265 extend-filesystems[1194]: Found sda7 Dec 13 02:14:42.609265 extend-filesystems[1194]: Found sda9 Dec 13 02:14:42.609265 extend-filesystems[1194]: Checking size of /dev/sda9 Dec 13 02:14:42.913368 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks Dec 13 02:14:42.913443 kernel: EXT4-fs (sda9): resized filesystem to 2538491 Dec 13 02:14:42.913486 kernel: loop2: detected capacity change from 0 to 2097152 Dec 13 02:14:42.619054 systemd[1]: Starting systemd-logind.service... Dec 13 02:14:42.913676 update_engine[1208]: I1213 02:14:42.715612 1208 main.cc:92] Flatcar Update Engine starting Dec 13 02:14:42.913676 update_engine[1208]: I1213 02:14:42.743925 1208 update_check_scheduler.cc:74] Next update check in 4m54s Dec 13 02:14:42.715135 dbus-daemon[1191]: [system] SELinux support is enabled Dec 13 02:14:42.917655 extend-filesystems[1194]: Resized partition /dev/sda9 Dec 13 02:14:42.630887 systemd[1]: Finished ensure-sysext.service. 
Dec 13 02:14:42.735926 dbus-daemon[1191]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1028 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Dec 13 02:14:42.926908 extend-filesystems[1236]: resize2fs 1.46.5 (30-Dec-2021) Dec 13 02:14:42.926908 extend-filesystems[1236]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Dec 13 02:14:42.926908 extend-filesystems[1236]: old_desc_blocks = 1, new_desc_blocks = 2 Dec 13 02:14:42.926908 extend-filesystems[1236]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long. Dec 13 02:14:42.980372 kernel: EXT4-fs (loop2): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Dec 13 02:14:42.639876 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 02:14:42.766686 dbus-daemon[1191]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 13 02:14:42.980703 jq[1216]: true Dec 13 02:14:42.981180 extend-filesystems[1194]: Resized filesystem in /dev/sda9 Dec 13 02:14:42.640126 systemd[1]: Finished motdgen.service. Dec 13 02:14:42.651119 systemd[1]: Finished systemd-networkd-wait-online.service. Dec 13 02:14:42.673903 systemd[1]: Reached target network-online.target. Dec 13 02:14:42.990337 bash[1253]: Updated "/home/core/.ssh/authorized_keys" Dec 13 02:14:42.684599 systemd[1]: Starting kubelet.service... Dec 13 02:14:42.693127 systemd[1]: Starting oem-gce.service... Dec 13 02:14:42.715398 systemd[1]: Started dbus.service. Dec 13 02:14:42.734130 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Dec 13 02:14:42.992008 mkfs.ext4[1234]: mke2fs 1.46.5 (30-Dec-2021) Dec 13 02:14:42.992008 mkfs.ext4[1234]: Discarding device blocks: done Dec 13 02:14:42.992008 mkfs.ext4[1234]: Creating filesystem with 262144 4k blocks and 65536 inodes Dec 13 02:14:42.992008 mkfs.ext4[1234]: Filesystem UUID: a15091e9-7f87-4218-9f65-f693cdb6b9f9 Dec 13 02:14:42.992008 mkfs.ext4[1234]: Superblock backups stored on blocks: Dec 13 02:14:42.992008 mkfs.ext4[1234]: 32768, 98304, 163840, 229376 Dec 13 02:14:42.992008 mkfs.ext4[1234]: Allocating group tables: done Dec 13 02:14:42.992008 mkfs.ext4[1234]: Writing inode tables: done Dec 13 02:14:42.992008 mkfs.ext4[1234]: Creating journal (8192 blocks): done Dec 13 02:14:42.992008 mkfs.ext4[1234]: Writing superblocks and filesystem accounting information: done Dec 13 02:14:42.734202 systemd[1]: Reached target system-config.target. Dec 13 02:14:42.742519 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 02:14:42.742553 systemd[1]: Reached target user-config.target. Dec 13 02:14:42.993446 umount[1245]: umount: /var/lib/flatcar-oem-gce.img: not mounted. Dec 13 02:14:42.770014 systemd[1]: Started update-engine.service. Dec 13 02:14:42.788305 systemd[1]: Started locksmithd.service. Dec 13 02:14:42.836666 systemd[1]: Starting systemd-hostnamed.service... Dec 13 02:14:42.843908 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 02:14:42.844216 systemd[1]: Finished extend-filesystems.service. Dec 13 02:14:42.853918 systemd[1]: Finished update-ssh-keys-after-ignition.service. 
Dec 13 02:14:42.879279 systemd-logind[1219]: Watching system buttons on /dev/input/event1 (Power Button) Dec 13 02:14:42.879314 systemd-logind[1219]: Watching system buttons on /dev/input/event2 (Sleep Button) Dec 13 02:14:42.879347 systemd-logind[1219]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 02:14:42.879599 systemd-logind[1219]: New seat seat0. Dec 13 02:14:42.882622 systemd[1]: Started systemd-logind.service. Dec 13 02:14:43.023806 coreos-metadata[1190]: Dec 13 02:14:43.004 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Dec 13 02:14:43.037761 coreos-metadata[1190]: Dec 13 02:14:43.037 INFO Fetch failed with 404: resource not found Dec 13 02:14:43.037936 coreos-metadata[1190]: Dec 13 02:14:43.037 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Dec 13 02:14:43.039044 coreos-metadata[1190]: Dec 13 02:14:43.038 INFO Fetch successful Dec 13 02:14:43.039044 coreos-metadata[1190]: Dec 13 02:14:43.038 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Dec 13 02:14:43.039537 coreos-metadata[1190]: Dec 13 02:14:43.039 INFO Fetch failed with 404: resource not found Dec 13 02:14:43.039636 coreos-metadata[1190]: Dec 13 02:14:43.039 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Dec 13 02:14:43.040340 coreos-metadata[1190]: Dec 13 02:14:43.040 INFO Fetch failed with 404: resource not found Dec 13 02:14:43.040340 coreos-metadata[1190]: Dec 13 02:14:43.040 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Dec 13 02:14:43.041513 coreos-metadata[1190]: Dec 13 02:14:43.041 INFO Fetch successful Dec 13 02:14:43.044329 unknown[1190]: wrote ssh authorized keys file for user: core Dec 13 02:14:43.059151 env[1218]: time="2024-12-13T02:14:43.059041273Z" level=info msg="starting containerd" 
revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Dec 13 02:14:43.084869 update-ssh-keys[1265]: Updated "/home/core/.ssh/authorized_keys" Dec 13 02:14:43.085359 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Dec 13 02:14:43.223927 dbus-daemon[1191]: [system] Successfully activated service 'org.freedesktop.hostname1' Dec 13 02:14:43.224120 systemd[1]: Started systemd-hostnamed.service. Dec 13 02:14:43.225003 dbus-daemon[1191]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1255 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Dec 13 02:14:43.229510 env[1218]: time="2024-12-13T02:14:43.229449966Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 02:14:43.229659 env[1218]: time="2024-12-13T02:14:43.229629728Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 02:14:43.235923 systemd[1]: Starting polkit.service... Dec 13 02:14:43.238619 env[1218]: time="2024-12-13T02:14:43.238564437Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 02:14:43.238619 env[1218]: time="2024-12-13T02:14:43.238618192Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 02:14:43.247186 env[1218]: time="2024-12-13T02:14:43.243059502Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 02:14:43.247186 env[1218]: time="2024-12-13T02:14:43.243109012Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 02:14:43.247186 env[1218]: time="2024-12-13T02:14:43.243134694Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Dec 13 02:14:43.247186 env[1218]: time="2024-12-13T02:14:43.243197329Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 02:14:43.247186 env[1218]: time="2024-12-13T02:14:43.243347433Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 02:14:43.247186 env[1218]: time="2024-12-13T02:14:43.243691726Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 02:14:43.247186 env[1218]: time="2024-12-13T02:14:43.243953820Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 02:14:43.247186 env[1218]: time="2024-12-13T02:14:43.243984602Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Dec 13 02:14:43.247186 env[1218]: time="2024-12-13T02:14:43.244064782Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Dec 13 02:14:43.247186 env[1218]: time="2024-12-13T02:14:43.244085012Z" level=info msg="metadata content store policy set" policy=shared Dec 13 02:14:43.258511 env[1218]: time="2024-12-13T02:14:43.258444804Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 02:14:43.258632 env[1218]: time="2024-12-13T02:14:43.258523614Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 02:14:43.258632 env[1218]: time="2024-12-13T02:14:43.258548774Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 02:14:43.258632 env[1218]: time="2024-12-13T02:14:43.258612129Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 02:14:43.258784 env[1218]: time="2024-12-13T02:14:43.258634681Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 02:14:43.258784 env[1218]: time="2024-12-13T02:14:43.258761859Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 02:14:43.258889 env[1218]: time="2024-12-13T02:14:43.258787392Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 02:14:43.258889 env[1218]: time="2024-12-13T02:14:43.258834065Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 02:14:43.258889 env[1218]: time="2024-12-13T02:14:43.258860213Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." 
type=io.containerd.service.v1 Dec 13 02:14:43.259024 env[1218]: time="2024-12-13T02:14:43.258908453Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 02:14:43.259024 env[1218]: time="2024-12-13T02:14:43.258934203Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 02:14:43.259024 env[1218]: time="2024-12-13T02:14:43.258955404Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 02:14:43.259219 env[1218]: time="2024-12-13T02:14:43.259188156Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 02:14:43.259552 env[1218]: time="2024-12-13T02:14:43.259518759Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 02:14:43.260176 env[1218]: time="2024-12-13T02:14:43.260117830Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 02:14:43.260269 env[1218]: time="2024-12-13T02:14:43.260199333Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 02:14:43.260269 env[1218]: time="2024-12-13T02:14:43.260229321Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 02:14:43.260443 env[1218]: time="2024-12-13T02:14:43.260395513Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 02:14:43.260516 env[1218]: time="2024-12-13T02:14:43.260452008Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 02:14:43.260516 env[1218]: time="2024-12-13T02:14:43.260474971Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 Dec 13 02:14:43.260618 env[1218]: time="2024-12-13T02:14:43.260513799Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 02:14:43.260618 env[1218]: time="2024-12-13T02:14:43.260536866Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 02:14:43.260618 env[1218]: time="2024-12-13T02:14:43.260559238Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 02:14:43.260618 env[1218]: time="2024-12-13T02:14:43.260601243Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 02:14:43.260784 env[1218]: time="2024-12-13T02:14:43.260621816Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 02:14:43.260784 env[1218]: time="2024-12-13T02:14:43.260670651Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 02:14:43.260939 env[1218]: time="2024-12-13T02:14:43.260912488Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 02:14:43.261005 env[1218]: time="2024-12-13T02:14:43.260948908Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 02:14:43.261005 env[1218]: time="2024-12-13T02:14:43.260992407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 02:14:43.261120 env[1218]: time="2024-12-13T02:14:43.261014821Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 02:14:43.261120 env[1218]: time="2024-12-13T02:14:43.261041950Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Dec 13 02:14:43.261120 env[1218]: time="2024-12-13T02:14:43.261079352Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 02:14:43.261301 env[1218]: time="2024-12-13T02:14:43.261130208Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Dec 13 02:14:43.261301 env[1218]: time="2024-12-13T02:14:43.261245224Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Dec 13 02:14:43.261782 env[1218]: time="2024-12-13T02:14:43.261672921Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 
SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 02:14:43.264607 env[1218]: time="2024-12-13T02:14:43.261808460Z" level=info msg="Connect containerd service" Dec 13 02:14:43.264607 env[1218]: time="2024-12-13T02:14:43.261857464Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 02:14:43.264607 env[1218]: time="2024-12-13T02:14:43.263653504Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 02:14:43.264607 env[1218]: time="2024-12-13T02:14:43.263804407Z" level=info msg="Start subscribing containerd event" Dec 13 02:14:43.264607 env[1218]: time="2024-12-13T02:14:43.263870917Z" level=info msg="Start recovering state" Dec 13 02:14:43.264607 env[1218]: time="2024-12-13T02:14:43.263944552Z" level=info msg="Start event monitor" Dec 13 02:14:43.264607 env[1218]: time="2024-12-13T02:14:43.263965006Z" level=info msg="Start snapshots syncer" Dec 13 02:14:43.264607 env[1218]: time="2024-12-13T02:14:43.263978342Z" level=info msg="Start cni network conf syncer for default" Dec 13 02:14:43.264607 env[1218]: 
time="2024-12-13T02:14:43.263991633Z" level=info msg="Start streaming server" Dec 13 02:14:43.265495 env[1218]: time="2024-12-13T02:14:43.265465315Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 02:14:43.265636 env[1218]: time="2024-12-13T02:14:43.265612521Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 02:14:43.292321 systemd[1]: Started containerd.service. Dec 13 02:14:43.292692 env[1218]: time="2024-12-13T02:14:43.292537544Z" level=info msg="containerd successfully booted in 0.412612s" Dec 13 02:14:43.300275 polkitd[1267]: Started polkitd version 121 Dec 13 02:14:43.326042 polkitd[1267]: Loading rules from directory /etc/polkit-1/rules.d Dec 13 02:14:43.326177 polkitd[1267]: Loading rules from directory /usr/share/polkit-1/rules.d Dec 13 02:14:43.329133 polkitd[1267]: Finished loading, compiling and executing 2 rules Dec 13 02:14:43.332672 dbus-daemon[1191]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Dec 13 02:14:43.333405 polkitd[1267]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Dec 13 02:14:43.334072 systemd[1]: Started polkit.service. Dec 13 02:14:43.358351 systemd-hostnamed[1255]: Hostname set to (transient) Dec 13 02:14:43.362724 systemd-resolved[1159]: System hostname changed to 'ci-3510-3-6-2826bf8d81ff35b3c585.c.flatcar-212911.internal'. Dec 13 02:14:44.256703 sshd_keygen[1214]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 02:14:44.300215 systemd[1]: Finished sshd-keygen.service. Dec 13 02:14:44.309773 systemd[1]: Starting issuegen.service... Dec 13 02:14:44.325947 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 02:14:44.326215 systemd[1]: Finished issuegen.service. Dec 13 02:14:44.335566 systemd[1]: Starting systemd-user-sessions.service... Dec 13 02:14:44.358623 systemd[1]: Finished systemd-user-sessions.service. Dec 13 02:14:44.370270 systemd[1]: Started getty@tty1.service. 
Dec 13 02:14:44.380400 systemd[1]: Started serial-getty@ttyS0.service. Dec 13 02:14:44.389668 systemd[1]: Reached target getty.target. Dec 13 02:14:44.428381 locksmithd[1249]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 02:14:44.625034 systemd[1]: Started kubelet.service. Dec 13 02:14:45.631233 kubelet[1295]: E1213 02:14:45.631181 1295 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 02:14:45.634123 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 02:14:45.634390 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 02:14:45.634802 systemd[1]: kubelet.service: Consumed 1.374s CPU time. Dec 13 02:14:47.778824 systemd[1]: var-lib-flatcar\x2doem\x2dgce.mount: Deactivated successfully. Dec 13 02:14:50.005289 kernel: loop2: detected capacity change from 0 to 2097152 Dec 13 02:14:50.026285 systemd-nspawn[1302]: Spawning container oem-gce on /var/lib/flatcar-oem-gce.img. Dec 13 02:14:50.026285 systemd-nspawn[1302]: Press ^] three times within 1s to kill container. Dec 13 02:14:50.040213 kernel: EXT4-fs (loop2): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Dec 13 02:14:50.124325 systemd[1]: Started oem-gce.service. Dec 13 02:14:50.124856 systemd[1]: Reached target multi-user.target. Dec 13 02:14:50.127097 systemd[1]: Starting systemd-update-utmp-runlevel.service... Dec 13 02:14:50.138112 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Dec 13 02:14:50.138388 systemd[1]: Finished systemd-update-utmp-runlevel.service. Dec 13 02:14:50.138656 systemd[1]: Startup finished in 1.030s (kernel) + 7.106s (initrd) + 15.231s (userspace) = 23.368s. 
Dec 13 02:14:50.183698 systemd-nspawn[1302]: + '[' -e /etc/default/instance_configs.cfg.template ']' Dec 13 02:14:50.183884 systemd-nspawn[1302]: + echo -e '[InstanceSetup]\nset_host_keys = false' Dec 13 02:14:50.183983 systemd-nspawn[1302]: + /usr/bin/google_instance_setup Dec 13 02:14:50.762790 instance-setup[1308]: INFO Running google_set_multiqueue. Dec 13 02:14:50.777068 instance-setup[1308]: INFO Set channels for eth0 to 2. Dec 13 02:14:50.780770 instance-setup[1308]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1. Dec 13 02:14:50.782272 instance-setup[1308]: INFO /proc/irq/31/smp_affinity_list: real affinity 0 Dec 13 02:14:50.782616 instance-setup[1308]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1. Dec 13 02:14:50.783994 instance-setup[1308]: INFO /proc/irq/32/smp_affinity_list: real affinity 0 Dec 13 02:14:50.784411 instance-setup[1308]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1. Dec 13 02:14:50.785814 instance-setup[1308]: INFO /proc/irq/33/smp_affinity_list: real affinity 1 Dec 13 02:14:50.786265 instance-setup[1308]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1. Dec 13 02:14:50.787612 instance-setup[1308]: INFO /proc/irq/34/smp_affinity_list: real affinity 1 Dec 13 02:14:50.798695 instance-setup[1308]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Dec 13 02:14:50.798857 instance-setup[1308]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Dec 13 02:14:50.837589 systemd-nspawn[1302]: + /usr/bin/google_metadata_script_runner --script-type startup Dec 13 02:14:51.004528 systemd[1]: Created slice system-sshd.slice. Dec 13 02:14:51.007778 systemd[1]: Started sshd@0-10.128.0.35:22-139.178.68.195:45554.service. Dec 13 02:14:51.176399 startup-script[1339]: INFO Starting startup scripts. Dec 13 02:14:51.188623 startup-script[1339]: INFO No startup scripts found in metadata. 
Dec 13 02:14:51.188783 startup-script[1339]: INFO Finished running startup scripts. Dec 13 02:14:51.222024 systemd-nspawn[1302]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Dec 13 02:14:51.222732 systemd-nspawn[1302]: + daemon_pids=() Dec 13 02:14:51.222958 systemd-nspawn[1302]: + for d in accounts clock_skew network Dec 13 02:14:51.223440 systemd-nspawn[1302]: + daemon_pids+=($!) Dec 13 02:14:51.223669 systemd-nspawn[1302]: + for d in accounts clock_skew network Dec 13 02:14:51.224094 systemd-nspawn[1302]: + daemon_pids+=($!) Dec 13 02:14:51.224334 systemd-nspawn[1302]: + for d in accounts clock_skew network Dec 13 02:14:51.224781 systemd-nspawn[1302]: + daemon_pids+=($!) Dec 13 02:14:51.224911 systemd-nspawn[1302]: + NOTIFY_SOCKET=/run/systemd/notify Dec 13 02:14:51.224911 systemd-nspawn[1302]: + /usr/bin/systemd-notify --ready Dec 13 02:14:51.225730 systemd-nspawn[1302]: + /usr/bin/google_network_daemon Dec 13 02:14:51.226009 systemd-nspawn[1302]: + /usr/bin/google_clock_skew_daemon Dec 13 02:14:51.233260 systemd-nspawn[1302]: + /usr/bin/google_accounts_daemon Dec 13 02:14:51.306660 systemd-nspawn[1302]: + wait -n 36 37 38 Dec 13 02:14:51.337712 sshd[1341]: Accepted publickey for core from 139.178.68.195 port 45554 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw Dec 13 02:14:51.341302 sshd[1341]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:14:51.360625 systemd[1]: Created slice user-500.slice. Dec 13 02:14:51.362637 systemd[1]: Starting user-runtime-dir@500.service... Dec 13 02:14:51.371001 systemd-logind[1219]: New session 1 of user core. Dec 13 02:14:51.381220 systemd[1]: Finished user-runtime-dir@500.service. Dec 13 02:14:51.384713 systemd[1]: Starting user@500.service... Dec 13 02:14:51.419973 (systemd)[1350]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:14:51.648245 systemd[1350]: Queued start job for default target default.target. 
Dec 13 02:14:51.649085 systemd[1350]: Reached target paths.target. Dec 13 02:14:51.649121 systemd[1350]: Reached target sockets.target. Dec 13 02:14:51.649143 systemd[1350]: Reached target timers.target. Dec 13 02:14:51.649187 systemd[1350]: Reached target basic.target. Dec 13 02:14:51.649268 systemd[1350]: Reached target default.target. Dec 13 02:14:51.649322 systemd[1350]: Startup finished in 207ms. Dec 13 02:14:51.649361 systemd[1]: Started user@500.service. Dec 13 02:14:51.650881 systemd[1]: Started session-1.scope. Dec 13 02:14:51.876706 systemd[1]: Started sshd@1-10.128.0.35:22-139.178.68.195:45566.service. Dec 13 02:14:52.066201 google-networking[1347]: INFO Starting Google Networking daemon. Dec 13 02:14:52.169009 google-clock-skew[1346]: INFO Starting Google Clock Skew daemon. Dec 13 02:14:52.174699 groupadd[1368]: group added to /etc/group: name=google-sudoers, GID=1000 Dec 13 02:14:52.180089 groupadd[1368]: group added to /etc/gshadow: name=google-sudoers Dec 13 02:14:52.182778 google-clock-skew[1346]: INFO Clock drift token has changed: 0. Dec 13 02:14:52.185258 sshd[1361]: Accepted publickey for core from 139.178.68.195 port 45566 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw Dec 13 02:14:52.186770 sshd[1361]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:14:52.188995 groupadd[1368]: new group: name=google-sudoers, GID=1000 Dec 13 02:14:52.192808 google-clock-skew[1346]: WARNING Failed to sync system time with hardware clock. Dec 13 02:14:52.193732 systemd-nspawn[1302]: hwclock: Cannot access the Hardware Clock via any known method. Dec 13 02:14:52.193732 systemd-nspawn[1302]: hwclock: Use the --verbose option to see the details of our search for an access method. Dec 13 02:14:52.195883 systemd[1]: Started session-2.scope. Dec 13 02:14:52.196964 systemd-logind[1219]: New session 2 of user core. Dec 13 02:14:52.216508 google-accounts[1345]: INFO Starting Google Accounts daemon. 
Dec 13 02:14:52.242245 google-accounts[1345]: WARNING OS Login not installed. Dec 13 02:14:52.243435 google-accounts[1345]: INFO Creating a new user account for 0. Dec 13 02:14:52.248867 systemd-nspawn[1302]: useradd: invalid user name '0': use --badname to ignore Dec 13 02:14:52.249802 google-accounts[1345]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. Dec 13 02:14:52.399193 sshd[1361]: pam_unix(sshd:session): session closed for user core Dec 13 02:14:52.404869 systemd-logind[1219]: Session 2 logged out. Waiting for processes to exit. Dec 13 02:14:52.405148 systemd[1]: sshd@1-10.128.0.35:22-139.178.68.195:45566.service: Deactivated successfully. Dec 13 02:14:52.406327 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 02:14:52.407511 systemd-logind[1219]: Removed session 2. Dec 13 02:14:52.445711 systemd[1]: Started sshd@2-10.128.0.35:22-139.178.68.195:45582.service. Dec 13 02:14:52.735378 sshd[1384]: Accepted publickey for core from 139.178.68.195 port 45582 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw Dec 13 02:14:52.737557 sshd[1384]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:14:52.743553 systemd-logind[1219]: New session 3 of user core. Dec 13 02:14:52.744313 systemd[1]: Started session-3.scope. Dec 13 02:14:52.945615 sshd[1384]: pam_unix(sshd:session): session closed for user core Dec 13 02:14:52.949680 systemd[1]: sshd@2-10.128.0.35:22-139.178.68.195:45582.service: Deactivated successfully. Dec 13 02:14:52.950705 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 02:14:52.951608 systemd-logind[1219]: Session 3 logged out. Waiting for processes to exit. Dec 13 02:14:52.952867 systemd-logind[1219]: Removed session 3. Dec 13 02:14:52.990699 systemd[1]: Started sshd@3-10.128.0.35:22-139.178.68.195:45596.service. 
Dec 13 02:14:53.277683 sshd[1390]: Accepted publickey for core from 139.178.68.195 port 45596 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw Dec 13 02:14:53.279633 sshd[1390]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:14:53.285230 systemd-logind[1219]: New session 4 of user core. Dec 13 02:14:53.286398 systemd[1]: Started session-4.scope. Dec 13 02:14:53.491462 sshd[1390]: pam_unix(sshd:session): session closed for user core Dec 13 02:14:53.495543 systemd[1]: sshd@3-10.128.0.35:22-139.178.68.195:45596.service: Deactivated successfully. Dec 13 02:14:53.496658 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 02:14:53.497687 systemd-logind[1219]: Session 4 logged out. Waiting for processes to exit. Dec 13 02:14:53.499118 systemd-logind[1219]: Removed session 4. Dec 13 02:14:53.539472 systemd[1]: Started sshd@4-10.128.0.35:22-139.178.68.195:45600.service. Dec 13 02:14:53.834377 sshd[1396]: Accepted publickey for core from 139.178.68.195 port 45600 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw Dec 13 02:14:53.836201 sshd[1396]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:14:53.842749 systemd[1]: Started session-5.scope. Dec 13 02:14:53.843566 systemd-logind[1219]: New session 5 of user core. Dec 13 02:14:54.032344 sudo[1399]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 02:14:54.032758 sudo[1399]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 02:14:54.050194 systemd[1]: Starting coreos-metadata.service... 
Dec 13 02:14:54.100210 coreos-metadata[1403]: Dec 13 02:14:54.100 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1 Dec 13 02:14:54.101680 coreos-metadata[1403]: Dec 13 02:14:54.101 INFO Fetch successful Dec 13 02:14:54.101817 coreos-metadata[1403]: Dec 13 02:14:54.101 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1 Dec 13 02:14:54.102671 coreos-metadata[1403]: Dec 13 02:14:54.102 INFO Fetch successful Dec 13 02:14:54.102671 coreos-metadata[1403]: Dec 13 02:14:54.102 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1 Dec 13 02:14:54.103780 coreos-metadata[1403]: Dec 13 02:14:54.103 INFO Fetch successful Dec 13 02:14:54.103972 coreos-metadata[1403]: Dec 13 02:14:54.103 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1 Dec 13 02:14:54.104894 coreos-metadata[1403]: Dec 13 02:14:54.104 INFO Fetch successful Dec 13 02:14:54.115096 systemd[1]: Finished coreos-metadata.service. Dec 13 02:14:55.138033 systemd[1]: Stopped kubelet.service. Dec 13 02:14:55.138765 systemd[1]: kubelet.service: Consumed 1.374s CPU time. Dec 13 02:14:55.141778 systemd[1]: Starting kubelet.service... Dec 13 02:14:55.176395 systemd[1]: Reloading. Dec 13 02:14:55.300213 /usr/lib/systemd/system-generators/torcx-generator[1463]: time="2024-12-13T02:14:55Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 02:14:55.300256 /usr/lib/systemd/system-generators/torcx-generator[1463]: time="2024-12-13T02:14:55Z" level=info msg="torcx already run" Dec 13 02:14:55.430716 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. 
Support for CPUShares= will be removed soon. Dec 13 02:14:55.430742 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 02:14:55.454558 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 02:14:55.621022 systemd[1]: Started kubelet.service. Dec 13 02:14:55.624496 systemd[1]: Stopping kubelet.service... Dec 13 02:14:55.625255 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 02:14:55.625532 systemd[1]: Stopped kubelet.service. Dec 13 02:14:55.628383 systemd[1]: Starting kubelet.service... Dec 13 02:14:55.825353 systemd[1]: Started kubelet.service. Dec 13 02:14:55.884608 kubelet[1511]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 02:14:55.884608 kubelet[1511]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 02:14:55.884608 kubelet[1511]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 13 02:14:55.885300 kubelet[1511]: I1213 02:14:55.884745 1511 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 02:14:56.555224 kubelet[1511]: I1213 02:14:56.555152 1511 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Dec 13 02:14:56.555224 kubelet[1511]: I1213 02:14:56.555204 1511 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 02:14:56.555528 kubelet[1511]: I1213 02:14:56.555494 1511 server.go:927] "Client rotation is on, will bootstrap in background" Dec 13 02:14:56.578385 kubelet[1511]: I1213 02:14:56.577751 1511 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 02:14:56.601839 kubelet[1511]: I1213 02:14:56.601794 1511 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 02:14:56.602452 kubelet[1511]: I1213 02:14:56.602410 1511 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 02:14:56.603029 kubelet[1511]: I1213 02:14:56.602609 1511 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"10.128.0.35","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 02:14:56.604440 kubelet[1511]: I1213 02:14:56.604414 1511 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 02:14:56.604628 kubelet[1511]: I1213 02:14:56.604612 1511 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 02:14:56.606717 kubelet[1511]: I1213 02:14:56.606659 1511 state_mem.go:36] "Initialized new in-memory state store" Dec 13 02:14:56.608145 kubelet[1511]: I1213 02:14:56.608110 1511 kubelet.go:400] "Attempting to sync node 
with API server" Dec 13 02:14:56.608145 kubelet[1511]: I1213 02:14:56.608142 1511 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 02:14:56.608334 kubelet[1511]: I1213 02:14:56.608195 1511 kubelet.go:312] "Adding apiserver pod source" Dec 13 02:14:56.608334 kubelet[1511]: I1213 02:14:56.608217 1511 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 02:14:56.608795 kubelet[1511]: E1213 02:14:56.608743 1511 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:14:56.608884 kubelet[1511]: E1213 02:14:56.608843 1511 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:14:56.614501 kubelet[1511]: I1213 02:14:56.614479 1511 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 02:14:56.625619 kubelet[1511]: I1213 02:14:56.625569 1511 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 02:14:56.625745 kubelet[1511]: W1213 02:14:56.625656 1511 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Dec 13 02:14:56.626354 kubelet[1511]: I1213 02:14:56.626319 1511 server.go:1264] "Started kubelet" Dec 13 02:14:56.627627 kubelet[1511]: I1213 02:14:56.627579 1511 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 02:14:56.629081 kubelet[1511]: I1213 02:14:56.629039 1511 server.go:455] "Adding debug handlers to kubelet server" Dec 13 02:14:56.634227 kubelet[1511]: I1213 02:14:56.634139 1511 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 02:14:56.634609 kubelet[1511]: I1213 02:14:56.634590 1511 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 02:14:56.640867 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Dec 13 02:14:56.641761 kubelet[1511]: I1213 02:14:56.641735 1511 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 02:14:56.646947 kubelet[1511]: I1213 02:14:56.646926 1511 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 02:14:56.647838 kubelet[1511]: I1213 02:14:56.647816 1511 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Dec 13 02:14:56.648058 kubelet[1511]: I1213 02:14:56.648043 1511 reconciler.go:26] "Reconciler: start to sync state" Dec 13 02:14:56.649001 kubelet[1511]: I1213 02:14:56.648969 1511 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 02:14:56.651801 kubelet[1511]: I1213 02:14:56.651746 1511 factory.go:221] Registration of the containerd container factory successfully Dec 13 02:14:56.651983 kubelet[1511]: I1213 02:14:56.651966 1511 factory.go:221] Registration of the systemd container factory successfully Dec 13 02:14:56.661524 kubelet[1511]: E1213 02:14:56.661495 1511 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 02:14:56.662917 kubelet[1511]: E1213 02:14:56.662884 1511 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.128.0.35\" not found" node="10.128.0.35" Dec 13 02:14:56.673424 kubelet[1511]: I1213 02:14:56.673400 1511 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 02:14:56.673424 kubelet[1511]: I1213 02:14:56.673421 1511 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 02:14:56.673664 kubelet[1511]: I1213 02:14:56.673444 1511 state_mem.go:36] "Initialized new in-memory state store" Dec 13 02:14:56.678085 kubelet[1511]: I1213 02:14:56.678042 1511 policy_none.go:49] "None policy: Start" Dec 13 02:14:56.678796 kubelet[1511]: I1213 02:14:56.678765 1511 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 02:14:56.678902 kubelet[1511]: I1213 02:14:56.678804 1511 state_mem.go:35] "Initializing new in-memory state store" Dec 13 02:14:56.691058 systemd[1]: Created slice kubepods.slice. Dec 13 02:14:56.699348 systemd[1]: Created slice kubepods-burstable.slice. Dec 13 02:14:56.705480 systemd[1]: Created slice kubepods-besteffort.slice. 
Dec 13 02:14:56.714787 kubelet[1511]: I1213 02:14:56.714673 1511 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 02:14:56.716960 kubelet[1511]: I1213 02:14:56.714970 1511 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 02:14:56.716960 kubelet[1511]: I1213 02:14:56.715206 1511 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 02:14:56.724591 kubelet[1511]: E1213 02:14:56.724550 1511 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.128.0.35\" not found" Dec 13 02:14:56.748345 kubelet[1511]: I1213 02:14:56.748305 1511 kubelet_node_status.go:73] "Attempting to register node" node="10.128.0.35" Dec 13 02:14:56.756556 kubelet[1511]: I1213 02:14:56.756529 1511 kubelet_node_status.go:76] "Successfully registered node" node="10.128.0.35" Dec 13 02:14:56.773101 kubelet[1511]: I1213 02:14:56.773058 1511 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Dec 13 02:14:56.773802 env[1218]: time="2024-12-13T02:14:56.773674359Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 02:14:56.774467 kubelet[1511]: I1213 02:14:56.774446 1511 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Dec 13 02:14:56.810424 kubelet[1511]: I1213 02:14:56.810288 1511 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 02:14:56.815070 kubelet[1511]: I1213 02:14:56.815035 1511 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 02:14:56.815239 kubelet[1511]: I1213 02:14:56.815080 1511 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 02:14:56.815239 kubelet[1511]: I1213 02:14:56.815106 1511 kubelet.go:2337] "Starting kubelet main sync loop" Dec 13 02:14:56.815239 kubelet[1511]: E1213 02:14:56.815191 1511 kubelet.go:2361] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Dec 13 02:14:56.853688 sudo[1399]: pam_unix(sudo:session): session closed for user root Dec 13 02:14:56.898532 sshd[1396]: pam_unix(sshd:session): session closed for user core Dec 13 02:14:56.903188 systemd-logind[1219]: Session 5 logged out. Waiting for processes to exit. Dec 13 02:14:56.903466 systemd[1]: sshd@4-10.128.0.35:22-139.178.68.195:45600.service: Deactivated successfully. Dec 13 02:14:56.904566 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 02:14:56.905687 systemd-logind[1219]: Removed session 5. Dec 13 02:14:57.557528 kubelet[1511]: I1213 02:14:57.557459 1511 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Dec 13 02:14:57.558185 kubelet[1511]: W1213 02:14:57.557701 1511 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Dec 13 02:14:57.558185 kubelet[1511]: W1213 02:14:57.557750 1511 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Dec 13 02:14:57.558410 kubelet[1511]: W1213 02:14:57.558151 1511 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch 
lasted less than a second and no items received Dec 13 02:14:57.609514 kubelet[1511]: I1213 02:14:57.609468 1511 apiserver.go:52] "Watching apiserver" Dec 13 02:14:57.609735 kubelet[1511]: E1213 02:14:57.609489 1511 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:14:57.613527 kubelet[1511]: I1213 02:14:57.613479 1511 topology_manager.go:215] "Topology Admit Handler" podUID="370f0b5f-3ee0-43cb-a377-5793d1ec2c18" podNamespace="kube-system" podName="cilium-d9hlx" Dec 13 02:14:57.613716 kubelet[1511]: I1213 02:14:57.613692 1511 topology_manager.go:215] "Topology Admit Handler" podUID="f491cead-be48-4a53-a6dc-f3226f0ae6b8" podNamespace="kube-system" podName="kube-proxy-tfm4m" Dec 13 02:14:57.621770 systemd[1]: Created slice kubepods-burstable-pod370f0b5f_3ee0_43cb_a377_5793d1ec2c18.slice. Dec 13 02:14:57.632131 systemd[1]: Created slice kubepods-besteffort-podf491cead_be48_4a53_a6dc_f3226f0ae6b8.slice. Dec 13 02:14:57.649057 kubelet[1511]: I1213 02:14:57.649025 1511 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Dec 13 02:14:57.655684 kubelet[1511]: I1213 02:14:57.655647 1511 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/370f0b5f-3ee0-43cb-a377-5793d1ec2c18-clustermesh-secrets\") pod \"cilium-d9hlx\" (UID: \"370f0b5f-3ee0-43cb-a377-5793d1ec2c18\") " pod="kube-system/cilium-d9hlx" Dec 13 02:14:57.655833 kubelet[1511]: I1213 02:14:57.655762 1511 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/370f0b5f-3ee0-43cb-a377-5793d1ec2c18-host-proc-sys-net\") pod \"cilium-d9hlx\" (UID: \"370f0b5f-3ee0-43cb-a377-5793d1ec2c18\") " pod="kube-system/cilium-d9hlx" Dec 13 02:14:57.655905 kubelet[1511]: I1213 02:14:57.655837 1511 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f491cead-be48-4a53-a6dc-f3226f0ae6b8-kube-proxy\") pod \"kube-proxy-tfm4m\" (UID: \"f491cead-be48-4a53-a6dc-f3226f0ae6b8\") " pod="kube-system/kube-proxy-tfm4m" Dec 13 02:14:57.655905 kubelet[1511]: I1213 02:14:57.655870 1511 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f491cead-be48-4a53-a6dc-f3226f0ae6b8-xtables-lock\") pod \"kube-proxy-tfm4m\" (UID: \"f491cead-be48-4a53-a6dc-f3226f0ae6b8\") " pod="kube-system/kube-proxy-tfm4m" Dec 13 02:14:57.656032 kubelet[1511]: I1213 02:14:57.655972 1511 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/370f0b5f-3ee0-43cb-a377-5793d1ec2c18-cilium-run\") pod \"cilium-d9hlx\" (UID: \"370f0b5f-3ee0-43cb-a377-5793d1ec2c18\") " pod="kube-system/cilium-d9hlx" Dec 13 02:14:57.656095 kubelet[1511]: I1213 02:14:57.656049 1511 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/370f0b5f-3ee0-43cb-a377-5793d1ec2c18-hostproc\") pod \"cilium-d9hlx\" (UID: \"370f0b5f-3ee0-43cb-a377-5793d1ec2c18\") " pod="kube-system/cilium-d9hlx" Dec 13 02:14:57.656151 kubelet[1511]: I1213 02:14:57.656113 1511 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/370f0b5f-3ee0-43cb-a377-5793d1ec2c18-cilium-config-path\") pod \"cilium-d9hlx\" (UID: \"370f0b5f-3ee0-43cb-a377-5793d1ec2c18\") " pod="kube-system/cilium-d9hlx" Dec 13 02:14:57.656151 kubelet[1511]: I1213 02:14:57.656143 1511 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/370f0b5f-3ee0-43cb-a377-5793d1ec2c18-hubble-tls\") pod \"cilium-d9hlx\" (UID: \"370f0b5f-3ee0-43cb-a377-5793d1ec2c18\") " pod="kube-system/cilium-d9hlx" Dec 13 02:14:57.656296 kubelet[1511]: I1213 02:14:57.656223 1511 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spthn\" (UniqueName: \"kubernetes.io/projected/f491cead-be48-4a53-a6dc-f3226f0ae6b8-kube-api-access-spthn\") pod \"kube-proxy-tfm4m\" (UID: \"f491cead-be48-4a53-a6dc-f3226f0ae6b8\") " pod="kube-system/kube-proxy-tfm4m" Dec 13 02:14:57.656353 kubelet[1511]: I1213 02:14:57.656304 1511 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/370f0b5f-3ee0-43cb-a377-5793d1ec2c18-cilium-cgroup\") pod \"cilium-d9hlx\" (UID: \"370f0b5f-3ee0-43cb-a377-5793d1ec2c18\") " pod="kube-system/cilium-d9hlx" Dec 13 02:14:57.656414 kubelet[1511]: I1213 02:14:57.656385 1511 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/370f0b5f-3ee0-43cb-a377-5793d1ec2c18-cni-path\") pod \"cilium-d9hlx\" (UID: \"370f0b5f-3ee0-43cb-a377-5793d1ec2c18\") " pod="kube-system/cilium-d9hlx" Dec 13 02:14:57.656474 kubelet[1511]: I1213 02:14:57.656416 1511 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bv2mq\" (UniqueName: \"kubernetes.io/projected/370f0b5f-3ee0-43cb-a377-5793d1ec2c18-kube-api-access-bv2mq\") pod \"cilium-d9hlx\" (UID: \"370f0b5f-3ee0-43cb-a377-5793d1ec2c18\") " pod="kube-system/cilium-d9hlx" Dec 13 02:14:57.656530 kubelet[1511]: I1213 02:14:57.656498 1511 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f491cead-be48-4a53-a6dc-f3226f0ae6b8-lib-modules\") pod \"kube-proxy-tfm4m\" (UID: 
\"f491cead-be48-4a53-a6dc-f3226f0ae6b8\") " pod="kube-system/kube-proxy-tfm4m" Dec 13 02:14:57.656593 kubelet[1511]: I1213 02:14:57.656526 1511 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/370f0b5f-3ee0-43cb-a377-5793d1ec2c18-bpf-maps\") pod \"cilium-d9hlx\" (UID: \"370f0b5f-3ee0-43cb-a377-5793d1ec2c18\") " pod="kube-system/cilium-d9hlx" Dec 13 02:14:57.656701 kubelet[1511]: I1213 02:14:57.656607 1511 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/370f0b5f-3ee0-43cb-a377-5793d1ec2c18-xtables-lock\") pod \"cilium-d9hlx\" (UID: \"370f0b5f-3ee0-43cb-a377-5793d1ec2c18\") " pod="kube-system/cilium-d9hlx" Dec 13 02:14:57.656774 kubelet[1511]: I1213 02:14:57.656758 1511 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/370f0b5f-3ee0-43cb-a377-5793d1ec2c18-host-proc-sys-kernel\") pod \"cilium-d9hlx\" (UID: \"370f0b5f-3ee0-43cb-a377-5793d1ec2c18\") " pod="kube-system/cilium-d9hlx" Dec 13 02:14:57.656833 kubelet[1511]: I1213 02:14:57.656819 1511 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/370f0b5f-3ee0-43cb-a377-5793d1ec2c18-etc-cni-netd\") pod \"cilium-d9hlx\" (UID: \"370f0b5f-3ee0-43cb-a377-5793d1ec2c18\") " pod="kube-system/cilium-d9hlx" Dec 13 02:14:57.656904 kubelet[1511]: I1213 02:14:57.656848 1511 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/370f0b5f-3ee0-43cb-a377-5793d1ec2c18-lib-modules\") pod \"cilium-d9hlx\" (UID: \"370f0b5f-3ee0-43cb-a377-5793d1ec2c18\") " pod="kube-system/cilium-d9hlx" Dec 13 02:14:57.931083 env[1218]: 
time="2024-12-13T02:14:57.930920462Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-d9hlx,Uid:370f0b5f-3ee0-43cb-a377-5793d1ec2c18,Namespace:kube-system,Attempt:0,}" Dec 13 02:14:57.940609 env[1218]: time="2024-12-13T02:14:57.940559551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tfm4m,Uid:f491cead-be48-4a53-a6dc-f3226f0ae6b8,Namespace:kube-system,Attempt:0,}" Dec 13 02:14:58.490192 env[1218]: time="2024-12-13T02:14:58.490120548Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:14:58.491516 env[1218]: time="2024-12-13T02:14:58.491457640Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:14:58.494456 env[1218]: time="2024-12-13T02:14:58.494412718Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:14:58.496436 env[1218]: time="2024-12-13T02:14:58.496394119Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:14:58.502927 env[1218]: time="2024-12-13T02:14:58.502872792Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:14:58.504988 env[1218]: time="2024-12-13T02:14:58.504915413Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:14:58.506925 env[1218]: time="2024-12-13T02:14:58.506870334Z" level=info msg="ImageCreate 
event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:14:58.509060 env[1218]: time="2024-12-13T02:14:58.509007566Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:14:58.534398 env[1218]: time="2024-12-13T02:14:58.533870631Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:14:58.534398 env[1218]: time="2024-12-13T02:14:58.533934710Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:14:58.534398 env[1218]: time="2024-12-13T02:14:58.533955426Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:14:58.534674 env[1218]: time="2024-12-13T02:14:58.534545792Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/53be583280629f4c41f241767f0e1ceadcea4a395da1fd7db438fe74e1cf5bb0 pid=1561 runtime=io.containerd.runc.v2 Dec 13 02:14:58.544489 env[1218]: time="2024-12-13T02:14:58.544370703Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:14:58.544489 env[1218]: time="2024-12-13T02:14:58.544423877Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:14:58.544489 env[1218]: time="2024-12-13T02:14:58.544442863Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:14:58.545045 env[1218]: time="2024-12-13T02:14:58.544978540Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9898f69b6972fa1242399a00de32d58033d2fb263e6c90deb471b156c444def3 pid=1579 runtime=io.containerd.runc.v2 Dec 13 02:14:58.561751 systemd[1]: Started cri-containerd-53be583280629f4c41f241767f0e1ceadcea4a395da1fd7db438fe74e1cf5bb0.scope. Dec 13 02:14:58.573892 systemd[1]: Started cri-containerd-9898f69b6972fa1242399a00de32d58033d2fb263e6c90deb471b156c444def3.scope. Dec 13 02:14:58.610263 kubelet[1511]: E1213 02:14:58.610181 1511 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:14:58.625817 env[1218]: time="2024-12-13T02:14:58.625755389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-d9hlx,Uid:370f0b5f-3ee0-43cb-a377-5793d1ec2c18,Namespace:kube-system,Attempt:0,} returns sandbox id \"53be583280629f4c41f241767f0e1ceadcea4a395da1fd7db438fe74e1cf5bb0\"" Dec 13 02:14:58.629671 env[1218]: time="2024-12-13T02:14:58.629617449Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 13 02:14:58.634710 env[1218]: time="2024-12-13T02:14:58.634337380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tfm4m,Uid:f491cead-be48-4a53-a6dc-f3226f0ae6b8,Namespace:kube-system,Attempt:0,} returns sandbox id \"9898f69b6972fa1242399a00de32d58033d2fb263e6c90deb471b156c444def3\"" Dec 13 02:14:58.772104 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2181573803.mount: Deactivated successfully. 
Dec 13 02:14:59.610670 kubelet[1511]: E1213 02:14:59.610610 1511 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:15:00.611419 kubelet[1511]: E1213 02:15:00.611363 1511 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:15:01.612243 kubelet[1511]: E1213 02:15:01.612175 1511 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:15:02.613251 kubelet[1511]: E1213 02:15:02.613199 1511 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:15:03.613783 kubelet[1511]: E1213 02:15:03.613732 1511 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:15:04.613944 kubelet[1511]: E1213 02:15:04.613870 1511 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:15:05.439316 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount420227034.mount: Deactivated successfully. 
Dec 13 02:15:05.614252 kubelet[1511]: E1213 02:15:05.614119 1511 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:15:06.615416 kubelet[1511]: E1213 02:15:06.615321 1511 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:15:07.616306 kubelet[1511]: E1213 02:15:07.616211 1511 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:15:08.617052 kubelet[1511]: E1213 02:15:08.616970 1511 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:15:08.695304 env[1218]: time="2024-12-13T02:15:08.695220195Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:15:08.698485 env[1218]: time="2024-12-13T02:15:08.698421267Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:15:08.700657 env[1218]: time="2024-12-13T02:15:08.700615179Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:15:08.701644 env[1218]: time="2024-12-13T02:15:08.701587363Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Dec 13 02:15:08.704825 env[1218]: time="2024-12-13T02:15:08.704673045Z" level=info 
msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\"" Dec 13 02:15:08.706556 env[1218]: time="2024-12-13T02:15:08.706502831Z" level=info msg="CreateContainer within sandbox \"53be583280629f4c41f241767f0e1ceadcea4a395da1fd7db438fe74e1cf5bb0\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 02:15:08.722971 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount76284486.mount: Deactivated successfully. Dec 13 02:15:08.734434 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3335117958.mount: Deactivated successfully. Dec 13 02:15:08.740154 env[1218]: time="2024-12-13T02:15:08.740098473Z" level=info msg="CreateContainer within sandbox \"53be583280629f4c41f241767f0e1ceadcea4a395da1fd7db438fe74e1cf5bb0\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0f4f17bd9444ca6069d66e474d67915d2c018f6b371bf42cb34e3527e99ae5f7\"" Dec 13 02:15:08.741189 env[1218]: time="2024-12-13T02:15:08.741129428Z" level=info msg="StartContainer for \"0f4f17bd9444ca6069d66e474d67915d2c018f6b371bf42cb34e3527e99ae5f7\"" Dec 13 02:15:08.773421 systemd[1]: Started cri-containerd-0f4f17bd9444ca6069d66e474d67915d2c018f6b371bf42cb34e3527e99ae5f7.scope. Dec 13 02:15:08.815221 env[1218]: time="2024-12-13T02:15:08.814125386Z" level=info msg="StartContainer for \"0f4f17bd9444ca6069d66e474d67915d2c018f6b371bf42cb34e3527e99ae5f7\" returns successfully" Dec 13 02:15:08.834603 systemd[1]: cri-containerd-0f4f17bd9444ca6069d66e474d67915d2c018f6b371bf42cb34e3527e99ae5f7.scope: Deactivated successfully. Dec 13 02:15:09.617928 kubelet[1511]: E1213 02:15:09.617856 1511 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:15:09.718874 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0f4f17bd9444ca6069d66e474d67915d2c018f6b371bf42cb34e3527e99ae5f7-rootfs.mount: Deactivated successfully. 
Dec 13 02:15:10.617699 env[1218]: time="2024-12-13T02:15:10.617635230Z" level=info msg="shim disconnected" id=0f4f17bd9444ca6069d66e474d67915d2c018f6b371bf42cb34e3527e99ae5f7 Dec 13 02:15:10.617699 env[1218]: time="2024-12-13T02:15:10.617697784Z" level=warning msg="cleaning up after shim disconnected" id=0f4f17bd9444ca6069d66e474d67915d2c018f6b371bf42cb34e3527e99ae5f7 namespace=k8s.io Dec 13 02:15:10.618386 env[1218]: time="2024-12-13T02:15:10.617712185Z" level=info msg="cleaning up dead shim" Dec 13 02:15:10.618609 kubelet[1511]: E1213 02:15:10.618527 1511 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:15:10.630263 env[1218]: time="2024-12-13T02:15:10.630189922Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:15:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1687 runtime=io.containerd.runc.v2\n" Dec 13 02:15:10.853380 env[1218]: time="2024-12-13T02:15:10.853305710Z" level=info msg="CreateContainer within sandbox \"53be583280629f4c41f241767f0e1ceadcea4a395da1fd7db438fe74e1cf5bb0\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 02:15:10.868960 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2938347272.mount: Deactivated successfully. Dec 13 02:15:10.879816 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3457757985.mount: Deactivated successfully. 
Dec 13 02:15:10.883685 env[1218]: time="2024-12-13T02:15:10.883611247Z" level=info msg="CreateContainer within sandbox \"53be583280629f4c41f241767f0e1ceadcea4a395da1fd7db438fe74e1cf5bb0\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f71aca1e616f067bde523f48c2e96a25e3c1e3bfa9d23f15b67046a0a36b9808\"" Dec 13 02:15:10.884778 env[1218]: time="2024-12-13T02:15:10.884722018Z" level=info msg="StartContainer for \"f71aca1e616f067bde523f48c2e96a25e3c1e3bfa9d23f15b67046a0a36b9808\"" Dec 13 02:15:10.941348 systemd[1]: Started cri-containerd-f71aca1e616f067bde523f48c2e96a25e3c1e3bfa9d23f15b67046a0a36b9808.scope. Dec 13 02:15:11.035082 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 02:15:11.035616 systemd[1]: Stopped systemd-sysctl.service. Dec 13 02:15:11.036604 systemd[1]: Stopping systemd-sysctl.service... Dec 13 02:15:11.039279 systemd[1]: Starting systemd-sysctl.service... Dec 13 02:15:11.047343 env[1218]: time="2024-12-13T02:15:11.045925689Z" level=info msg="StartContainer for \"f71aca1e616f067bde523f48c2e96a25e3c1e3bfa9d23f15b67046a0a36b9808\" returns successfully" Dec 13 02:15:11.051177 systemd[1]: cri-containerd-f71aca1e616f067bde523f48c2e96a25e3c1e3bfa9d23f15b67046a0a36b9808.scope: Deactivated successfully. Dec 13 02:15:11.065454 systemd[1]: Finished systemd-sysctl.service. 
Dec 13 02:15:11.235772 env[1218]: time="2024-12-13T02:15:11.234919231Z" level=info msg="shim disconnected" id=f71aca1e616f067bde523f48c2e96a25e3c1e3bfa9d23f15b67046a0a36b9808 Dec 13 02:15:11.235772 env[1218]: time="2024-12-13T02:15:11.235210707Z" level=warning msg="cleaning up after shim disconnected" id=f71aca1e616f067bde523f48c2e96a25e3c1e3bfa9d23f15b67046a0a36b9808 namespace=k8s.io Dec 13 02:15:11.235772 env[1218]: time="2024-12-13T02:15:11.235249208Z" level=info msg="cleaning up dead shim" Dec 13 02:15:11.249428 env[1218]: time="2024-12-13T02:15:11.249359609Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:15:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1751 runtime=io.containerd.runc.v2\n" Dec 13 02:15:11.618979 kubelet[1511]: E1213 02:15:11.618875 1511 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:15:11.859645 env[1218]: time="2024-12-13T02:15:11.859590361Z" level=info msg="CreateContainer within sandbox \"53be583280629f4c41f241767f0e1ceadcea4a395da1fd7db438fe74e1cf5bb0\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 02:15:11.864580 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f71aca1e616f067bde523f48c2e96a25e3c1e3bfa9d23f15b67046a0a36b9808-rootfs.mount: Deactivated successfully. Dec 13 02:15:11.899146 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1435927698.mount: Deactivated successfully. 
Dec 13 02:15:11.913311 env[1218]: time="2024-12-13T02:15:11.913258140Z" level=info msg="CreateContainer within sandbox \"53be583280629f4c41f241767f0e1ceadcea4a395da1fd7db438fe74e1cf5bb0\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d37ada6602beeea5889312eff7e2ac3ce26aaa72e5367928c595ceffcba2035a\"" Dec 13 02:15:11.915239 env[1218]: time="2024-12-13T02:15:11.915203641Z" level=info msg="StartContainer for \"d37ada6602beeea5889312eff7e2ac3ce26aaa72e5367928c595ceffcba2035a\"" Dec 13 02:15:11.960564 systemd[1]: Started cri-containerd-d37ada6602beeea5889312eff7e2ac3ce26aaa72e5367928c595ceffcba2035a.scope. Dec 13 02:15:12.024887 env[1218]: time="2024-12-13T02:15:12.024830014Z" level=info msg="StartContainer for \"d37ada6602beeea5889312eff7e2ac3ce26aaa72e5367928c595ceffcba2035a\" returns successfully" Dec 13 02:15:12.030693 systemd[1]: cri-containerd-d37ada6602beeea5889312eff7e2ac3ce26aaa72e5367928c595ceffcba2035a.scope: Deactivated successfully. Dec 13 02:15:12.191487 env[1218]: time="2024-12-13T02:15:12.191339908Z" level=info msg="shim disconnected" id=d37ada6602beeea5889312eff7e2ac3ce26aaa72e5367928c595ceffcba2035a Dec 13 02:15:12.191805 env[1218]: time="2024-12-13T02:15:12.191773448Z" level=warning msg="cleaning up after shim disconnected" id=d37ada6602beeea5889312eff7e2ac3ce26aaa72e5367928c595ceffcba2035a namespace=k8s.io Dec 13 02:15:12.191927 env[1218]: time="2024-12-13T02:15:12.191905327Z" level=info msg="cleaning up dead shim" Dec 13 02:15:12.220483 env[1218]: time="2024-12-13T02:15:12.220423702Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:15:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1809 runtime=io.containerd.runc.v2\n" Dec 13 02:15:12.571454 env[1218]: time="2024-12-13T02:15:12.571379631Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:15:12.573941 env[1218]: 
time="2024-12-13T02:15:12.573892464Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:15:12.576287 env[1218]: time="2024-12-13T02:15:12.576238505Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:15:12.578405 env[1218]: time="2024-12-13T02:15:12.578361014Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:15:12.579067 env[1218]: time="2024-12-13T02:15:12.579011951Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\"" Dec 13 02:15:12.582024 env[1218]: time="2024-12-13T02:15:12.581973313Z" level=info msg="CreateContainer within sandbox \"9898f69b6972fa1242399a00de32d58033d2fb263e6c90deb471b156c444def3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 02:15:12.600389 env[1218]: time="2024-12-13T02:15:12.600328159Z" level=info msg="CreateContainer within sandbox \"9898f69b6972fa1242399a00de32d58033d2fb263e6c90deb471b156c444def3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"689dce916d3fc012be3c35c7b25ea0314001a2d90a60f395e531899829bb2b5a\"" Dec 13 02:15:12.601287 env[1218]: time="2024-12-13T02:15:12.601250273Z" level=info msg="StartContainer for \"689dce916d3fc012be3c35c7b25ea0314001a2d90a60f395e531899829bb2b5a\"" Dec 13 02:15:12.620147 kubelet[1511]: E1213 02:15:12.620040 1511 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:15:12.625716 
systemd[1]: Started cri-containerd-689dce916d3fc012be3c35c7b25ea0314001a2d90a60f395e531899829bb2b5a.scope. Dec 13 02:15:12.670855 env[1218]: time="2024-12-13T02:15:12.669959706Z" level=info msg="StartContainer for \"689dce916d3fc012be3c35c7b25ea0314001a2d90a60f395e531899829bb2b5a\" returns successfully" Dec 13 02:15:12.866850 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d37ada6602beeea5889312eff7e2ac3ce26aaa72e5367928c595ceffcba2035a-rootfs.mount: Deactivated successfully. Dec 13 02:15:12.877418 env[1218]: time="2024-12-13T02:15:12.874419036Z" level=info msg="CreateContainer within sandbox \"53be583280629f4c41f241767f0e1ceadcea4a395da1fd7db438fe74e1cf5bb0\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 02:15:12.907712 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2411656647.mount: Deactivated successfully. Dec 13 02:15:12.910138 kubelet[1511]: I1213 02:15:12.910062 1511 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-tfm4m" podStartSLOduration=2.965793743 podStartE2EDuration="16.910038692s" podCreationTimestamp="2024-12-13 02:14:56 +0000 UTC" firstStartedPulling="2024-12-13 02:14:58.635818413 +0000 UTC m=+2.804585999" lastFinishedPulling="2024-12-13 02:15:12.580063368 +0000 UTC m=+16.748830948" observedRunningTime="2024-12-13 02:15:12.881612134 +0000 UTC m=+17.050379726" watchObservedRunningTime="2024-12-13 02:15:12.910038692 +0000 UTC m=+17.078806285" Dec 13 02:15:12.925395 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1229174447.mount: Deactivated successfully. 
Dec 13 02:15:12.927492 env[1218]: time="2024-12-13T02:15:12.927427222Z" level=info msg="CreateContainer within sandbox \"53be583280629f4c41f241767f0e1ceadcea4a395da1fd7db438fe74e1cf5bb0\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0833b783a8fd94280c27311e548a0e5b002beb4e14ca0b9095712ec2b4e66a4e\"" Dec 13 02:15:12.928554 env[1218]: time="2024-12-13T02:15:12.928515461Z" level=info msg="StartContainer for \"0833b783a8fd94280c27311e548a0e5b002beb4e14ca0b9095712ec2b4e66a4e\"" Dec 13 02:15:12.959678 systemd[1]: Started cri-containerd-0833b783a8fd94280c27311e548a0e5b002beb4e14ca0b9095712ec2b4e66a4e.scope. Dec 13 02:15:13.016791 env[1218]: time="2024-12-13T02:15:13.016712518Z" level=info msg="StartContainer for \"0833b783a8fd94280c27311e548a0e5b002beb4e14ca0b9095712ec2b4e66a4e\" returns successfully" Dec 13 02:15:13.019080 systemd[1]: cri-containerd-0833b783a8fd94280c27311e548a0e5b002beb4e14ca0b9095712ec2b4e66a4e.scope: Deactivated successfully. Dec 13 02:15:13.130247 env[1218]: time="2024-12-13T02:15:13.130067435Z" level=info msg="shim disconnected" id=0833b783a8fd94280c27311e548a0e5b002beb4e14ca0b9095712ec2b4e66a4e Dec 13 02:15:13.130247 env[1218]: time="2024-12-13T02:15:13.130130394Z" level=warning msg="cleaning up after shim disconnected" id=0833b783a8fd94280c27311e548a0e5b002beb4e14ca0b9095712ec2b4e66a4e namespace=k8s.io Dec 13 02:15:13.130247 env[1218]: time="2024-12-13T02:15:13.130146137Z" level=info msg="cleaning up dead shim" Dec 13 02:15:13.144070 env[1218]: time="2024-12-13T02:15:13.143994545Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:15:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1998 runtime=io.containerd.runc.v2\n" Dec 13 02:15:13.376731 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Dec 13 02:15:13.620765 kubelet[1511]: E1213 02:15:13.620701 1511 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:15:13.874317 env[1218]: time="2024-12-13T02:15:13.874192933Z" level=info msg="CreateContainer within sandbox \"53be583280629f4c41f241767f0e1ceadcea4a395da1fd7db438fe74e1cf5bb0\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 02:15:13.896540 env[1218]: time="2024-12-13T02:15:13.896478775Z" level=info msg="CreateContainer within sandbox \"53be583280629f4c41f241767f0e1ceadcea4a395da1fd7db438fe74e1cf5bb0\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3311c5c670dd02ae2494ab911986187e4509817ae8e811324da56775efa78f31\"" Dec 13 02:15:13.897591 env[1218]: time="2024-12-13T02:15:13.897538101Z" level=info msg="StartContainer for \"3311c5c670dd02ae2494ab911986187e4509817ae8e811324da56775efa78f31\"" Dec 13 02:15:13.932605 systemd[1]: Started cri-containerd-3311c5c670dd02ae2494ab911986187e4509817ae8e811324da56775efa78f31.scope. Dec 13 02:15:13.977314 env[1218]: time="2024-12-13T02:15:13.977219089Z" level=info msg="StartContainer for \"3311c5c670dd02ae2494ab911986187e4509817ae8e811324da56775efa78f31\" returns successfully" Dec 13 02:15:14.177212 kubelet[1511]: I1213 02:15:14.177047 1511 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 02:15:14.514386 kernel: Initializing XFRM netlink socket Dec 13 02:15:14.621224 kubelet[1511]: E1213 02:15:14.621106 1511 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:15:14.864759 systemd[1]: run-containerd-runc-k8s.io-3311c5c670dd02ae2494ab911986187e4509817ae8e811324da56775efa78f31-runc.z2sYvx.mount: Deactivated successfully. 
Dec 13 02:15:14.895508 kubelet[1511]: I1213 02:15:14.895343 1511 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-d9hlx" podStartSLOduration=8.820309306 podStartE2EDuration="18.895318397s" podCreationTimestamp="2024-12-13 02:14:56 +0000 UTC" firstStartedPulling="2024-12-13 02:14:58.628509377 +0000 UTC m=+2.797276963" lastFinishedPulling="2024-12-13 02:15:08.703518483 +0000 UTC m=+12.872286054" observedRunningTime="2024-12-13 02:15:14.895304472 +0000 UTC m=+19.064072065" watchObservedRunningTime="2024-12-13 02:15:14.895318397 +0000 UTC m=+19.064085989"
Dec 13 02:15:15.076693 kubelet[1511]: I1213 02:15:15.076630 1511 topology_manager.go:215] "Topology Admit Handler" podUID="2499c293-bed6-4cdc-b314-7da36e3463f2" podNamespace="default" podName="nginx-deployment-85f456d6dd-rn5fn"
Dec 13 02:15:15.083633 systemd[1]: Created slice kubepods-besteffort-pod2499c293_bed6_4cdc_b314_7da36e3463f2.slice.
Dec 13 02:15:15.189819 kubelet[1511]: I1213 02:15:15.189648 1511 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j765w\" (UniqueName: \"kubernetes.io/projected/2499c293-bed6-4cdc-b314-7da36e3463f2-kube-api-access-j765w\") pod \"nginx-deployment-85f456d6dd-rn5fn\" (UID: \"2499c293-bed6-4cdc-b314-7da36e3463f2\") " pod="default/nginx-deployment-85f456d6dd-rn5fn"
Dec 13 02:15:15.387704 env[1218]: time="2024-12-13T02:15:15.387636729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-rn5fn,Uid:2499c293-bed6-4cdc-b314-7da36e3463f2,Namespace:default,Attempt:0,}"
Dec 13 02:15:15.621842 kubelet[1511]: E1213 02:15:15.621776 1511 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:15:16.183264 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Dec 13 02:15:16.176482 systemd-networkd[1028]: cilium_host: Link UP
Dec 13 02:15:16.176685 systemd-networkd[1028]: cilium_net: Link UP
Dec 13 02:15:16.176692 systemd-networkd[1028]: cilium_net: Gained carrier
Dec 13 02:15:16.185839 systemd-networkd[1028]: cilium_host: Gained carrier
Dec 13 02:15:16.328581 systemd-networkd[1028]: cilium_vxlan: Link UP
Dec 13 02:15:16.328593 systemd-networkd[1028]: cilium_vxlan: Gained carrier
Dec 13 02:15:16.590207 kernel: NET: Registered PF_ALG protocol family
Dec 13 02:15:16.608715 kubelet[1511]: E1213 02:15:16.608677 1511 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:15:16.622001 kubelet[1511]: E1213 02:15:16.621959 1511 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:15:17.081806 systemd-networkd[1028]: cilium_host: Gained IPv6LL
Dec 13 02:15:17.209399 systemd-networkd[1028]: cilium_net: Gained IPv6LL
Dec 13 02:15:17.396348 systemd-networkd[1028]: lxc_health: Link UP
Dec 13 02:15:17.430460 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Dec 13 02:15:17.432508 systemd-networkd[1028]: lxc_health: Gained carrier
Dec 13 02:15:17.622634 kubelet[1511]: E1213 02:15:17.622589 1511 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:15:17.941282 systemd-networkd[1028]: lxcb0b3fb0a82f5: Link UP
Dec 13 02:15:17.952191 kernel: eth0: renamed from tmp10ace
Dec 13 02:15:17.967196 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcb0b3fb0a82f5: link becomes ready
Dec 13 02:15:17.972050 systemd-networkd[1028]: lxcb0b3fb0a82f5: Gained carrier
Dec 13 02:15:18.169807 systemd-networkd[1028]: cilium_vxlan: Gained IPv6LL
Dec 13 02:15:18.623685 kubelet[1511]: E1213 02:15:18.623608 1511 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:15:19.321478 systemd-networkd[1028]: lxcb0b3fb0a82f5: Gained IPv6LL
Dec 13 02:15:19.449429 systemd-networkd[1028]: lxc_health: Gained IPv6LL
Dec 13 02:15:19.623964 kubelet[1511]: E1213 02:15:19.623821 1511 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:15:20.625399 kubelet[1511]: E1213 02:15:20.625336 1511 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:15:20.827357 kubelet[1511]: I1213 02:15:20.827315 1511 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 13 02:15:21.626060 kubelet[1511]: E1213 02:15:21.626008 1511 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:15:22.506020 env[1218]: time="2024-12-13T02:15:22.505763913Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 02:15:22.506020 env[1218]: time="2024-12-13T02:15:22.505809073Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 02:15:22.506020 env[1218]: time="2024-12-13T02:15:22.505821542Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 02:15:22.506829 env[1218]: time="2024-12-13T02:15:22.506082176Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/10aceefbeb4e908b578ab32c4c100fea21af32d21608dd1b06ef38e0e8acd65f pid=2547 runtime=io.containerd.runc.v2
Dec 13 02:15:22.534914 systemd[1]: run-containerd-runc-k8s.io-10aceefbeb4e908b578ab32c4c100fea21af32d21608dd1b06ef38e0e8acd65f-runc.bpXYJm.mount: Deactivated successfully.
Dec 13 02:15:22.538718 systemd[1]: Started cri-containerd-10aceefbeb4e908b578ab32c4c100fea21af32d21608dd1b06ef38e0e8acd65f.scope.
Dec 13 02:15:22.598730 env[1218]: time="2024-12-13T02:15:22.598657330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-rn5fn,Uid:2499c293-bed6-4cdc-b314-7da36e3463f2,Namespace:default,Attempt:0,} returns sandbox id \"10aceefbeb4e908b578ab32c4c100fea21af32d21608dd1b06ef38e0e8acd65f\""
Dec 13 02:15:22.601130 env[1218]: time="2024-12-13T02:15:22.601089640Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Dec 13 02:15:22.627659 kubelet[1511]: E1213 02:15:22.627594 1511 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:15:23.628046 kubelet[1511]: E1213 02:15:23.627958 1511 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:15:24.628773 kubelet[1511]: E1213 02:15:24.628695 1511 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:15:25.112572 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1898820328.mount: Deactivated successfully.
Dec 13 02:15:25.628956 kubelet[1511]: E1213 02:15:25.628865 1511 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:15:26.629440 kubelet[1511]: E1213 02:15:26.629325 1511 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:15:26.818192 env[1218]: time="2024-12-13T02:15:26.815864249Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:15:26.820923 env[1218]: time="2024-12-13T02:15:26.820881251Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:15:26.823203 env[1218]: time="2024-12-13T02:15:26.823142545Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:15:26.825457 env[1218]: time="2024-12-13T02:15:26.825422716Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:15:26.826366 env[1218]: time="2024-12-13T02:15:26.826324811Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\""
Dec 13 02:15:26.829615 env[1218]: time="2024-12-13T02:15:26.829560292Z" level=info msg="CreateContainer within sandbox \"10aceefbeb4e908b578ab32c4c100fea21af32d21608dd1b06ef38e0e8acd65f\" for container &ContainerMetadata{Name:nginx,Attempt:0,}"
Dec 13 02:15:26.845775 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3694449843.mount: Deactivated successfully.
Dec 13 02:15:26.854100 env[1218]: time="2024-12-13T02:15:26.854043870Z" level=info msg="CreateContainer within sandbox \"10aceefbeb4e908b578ab32c4c100fea21af32d21608dd1b06ef38e0e8acd65f\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"486ada24c4f93a9256ae04f4d628ddcdc09a7e68359428c44c4c979a476725e5\""
Dec 13 02:15:26.854902 env[1218]: time="2024-12-13T02:15:26.854865507Z" level=info msg="StartContainer for \"486ada24c4f93a9256ae04f4d628ddcdc09a7e68359428c44c4c979a476725e5\""
Dec 13 02:15:26.890397 systemd[1]: Started cri-containerd-486ada24c4f93a9256ae04f4d628ddcdc09a7e68359428c44c4c979a476725e5.scope.
Dec 13 02:15:26.935559 env[1218]: time="2024-12-13T02:15:26.934534050Z" level=info msg="StartContainer for \"486ada24c4f93a9256ae04f4d628ddcdc09a7e68359428c44c4c979a476725e5\" returns successfully"
Dec 13 02:15:27.630374 kubelet[1511]: E1213 02:15:27.630304 1511 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:15:27.772094 update_engine[1208]: I1213 02:15:27.772017 1208 update_attempter.cc:509] Updating boot flags...
Dec 13 02:15:27.840045 systemd[1]: run-containerd-runc-k8s.io-486ada24c4f93a9256ae04f4d628ddcdc09a7e68359428c44c4c979a476725e5-runc.KUutP5.mount: Deactivated successfully.
Dec 13 02:15:28.631517 kubelet[1511]: E1213 02:15:28.631439 1511 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:15:29.632678 kubelet[1511]: E1213 02:15:29.632607 1511 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:15:30.633534 kubelet[1511]: E1213 02:15:30.633464 1511 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:15:31.634420 kubelet[1511]: E1213 02:15:31.634349 1511 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:15:32.635493 kubelet[1511]: E1213 02:15:32.635421 1511 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:15:33.636420 kubelet[1511]: E1213 02:15:33.636335 1511 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:15:34.637155 kubelet[1511]: E1213 02:15:34.637072 1511 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:15:35.522801 kubelet[1511]: I1213 02:15:35.522718 1511 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-85f456d6dd-rn5fn" podStartSLOduration=16.295180213 podStartE2EDuration="20.522691511s" podCreationTimestamp="2024-12-13 02:15:15 +0000 UTC" firstStartedPulling="2024-12-13 02:15:22.600449878 +0000 UTC m=+26.769217461" lastFinishedPulling="2024-12-13 02:15:26.827961178 +0000 UTC m=+30.996728759" observedRunningTime="2024-12-13 02:15:27.932017594 +0000 UTC m=+32.100785187" watchObservedRunningTime="2024-12-13 02:15:35.522691511 +0000 UTC m=+39.691459187"
Dec 13 02:15:35.523121 kubelet[1511]: I1213 02:15:35.522914 1511 topology_manager.go:215] "Topology Admit Handler" podUID="77ca68b1-961d-4487-a3f5-50270e084236" podNamespace="default" podName="nfs-server-provisioner-0"
Dec 13 02:15:35.529458 systemd[1]: Created slice kubepods-besteffort-pod77ca68b1_961d_4487_a3f5_50270e084236.slice.
Dec 13 02:15:35.638341 kubelet[1511]: E1213 02:15:35.638276 1511 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:15:35.642599 kubelet[1511]: I1213 02:15:35.642533 1511 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/77ca68b1-961d-4487-a3f5-50270e084236-data\") pod \"nfs-server-provisioner-0\" (UID: \"77ca68b1-961d-4487-a3f5-50270e084236\") " pod="default/nfs-server-provisioner-0"
Dec 13 02:15:35.642599 kubelet[1511]: I1213 02:15:35.642591 1511 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5skb\" (UniqueName: \"kubernetes.io/projected/77ca68b1-961d-4487-a3f5-50270e084236-kube-api-access-t5skb\") pod \"nfs-server-provisioner-0\" (UID: \"77ca68b1-961d-4487-a3f5-50270e084236\") " pod="default/nfs-server-provisioner-0"
Dec 13 02:15:35.835976 env[1218]: time="2024-12-13T02:15:35.835380993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:77ca68b1-961d-4487-a3f5-50270e084236,Namespace:default,Attempt:0,}"
Dec 13 02:15:35.877823 systemd-networkd[1028]: lxc245694bbd57f: Link UP
Dec 13 02:15:35.887301 kernel: eth0: renamed from tmp782c6
Dec 13 02:15:35.909197 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Dec 13 02:15:35.909322 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc245694bbd57f: link becomes ready
Dec 13 02:15:35.914928 systemd-networkd[1028]: lxc245694bbd57f: Gained carrier
Dec 13 02:15:36.119293 env[1218]: time="2024-12-13T02:15:36.118620483Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 02:15:36.119293 env[1218]: time="2024-12-13T02:15:36.118676392Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 02:15:36.119293 env[1218]: time="2024-12-13T02:15:36.118695975Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 02:15:36.119293 env[1218]: time="2024-12-13T02:15:36.119012680Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/782c6917a26a51d3d1be1123deb7f269c86b10f33813601945e51fe12abb00c1 pid=2696 runtime=io.containerd.runc.v2
Dec 13 02:15:36.150752 systemd[1]: run-containerd-runc-k8s.io-782c6917a26a51d3d1be1123deb7f269c86b10f33813601945e51fe12abb00c1-runc.BcOiZC.mount: Deactivated successfully.
Dec 13 02:15:36.157919 systemd[1]: Started cri-containerd-782c6917a26a51d3d1be1123deb7f269c86b10f33813601945e51fe12abb00c1.scope.
Dec 13 02:15:36.213620 env[1218]: time="2024-12-13T02:15:36.213565237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:77ca68b1-961d-4487-a3f5-50270e084236,Namespace:default,Attempt:0,} returns sandbox id \"782c6917a26a51d3d1be1123deb7f269c86b10f33813601945e51fe12abb00c1\""
Dec 13 02:15:36.216355 env[1218]: time="2024-12-13T02:15:36.216319661Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\""
Dec 13 02:15:36.608922 kubelet[1511]: E1213 02:15:36.608855 1511 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:15:36.638600 kubelet[1511]: E1213 02:15:36.638544 1511 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:15:37.241555 systemd-networkd[1028]: lxc245694bbd57f: Gained IPv6LL
Dec 13 02:15:37.639551 kubelet[1511]: E1213 02:15:37.639503 1511 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:15:38.640207 kubelet[1511]: E1213 02:15:38.640141 1511 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:15:38.718626 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2941982207.mount: Deactivated successfully.
Dec 13 02:15:39.640380 kubelet[1511]: E1213 02:15:39.640323 1511 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:15:40.641039 kubelet[1511]: E1213 02:15:40.640975 1511 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:15:41.101691 env[1218]: time="2024-12-13T02:15:41.101596807Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:15:41.104472 env[1218]: time="2024-12-13T02:15:41.104409185Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:15:41.107127 env[1218]: time="2024-12-13T02:15:41.107088708Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:15:41.109812 env[1218]: time="2024-12-13T02:15:41.109774536Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:15:41.110994 env[1218]: time="2024-12-13T02:15:41.110950260Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\""
Dec 13 02:15:41.115436 env[1218]: time="2024-12-13T02:15:41.115392344Z" level=info msg="CreateContainer within sandbox \"782c6917a26a51d3d1be1123deb7f269c86b10f33813601945e51fe12abb00c1\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}"
Dec 13 02:15:41.134603 env[1218]: time="2024-12-13T02:15:41.134530147Z" level=info msg="CreateContainer within sandbox \"782c6917a26a51d3d1be1123deb7f269c86b10f33813601945e51fe12abb00c1\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"ec17dc5f2b3ebb5e706fdee1cf4420eb058663f48e822c48d73b333024c6fd37\""
Dec 13 02:15:41.135459 env[1218]: time="2024-12-13T02:15:41.135423289Z" level=info msg="StartContainer for \"ec17dc5f2b3ebb5e706fdee1cf4420eb058663f48e822c48d73b333024c6fd37\""
Dec 13 02:15:41.171570 systemd[1]: Started cri-containerd-ec17dc5f2b3ebb5e706fdee1cf4420eb058663f48e822c48d73b333024c6fd37.scope.
Dec 13 02:15:41.207790 env[1218]: time="2024-12-13T02:15:41.206841445Z" level=info msg="StartContainer for \"ec17dc5f2b3ebb5e706fdee1cf4420eb058663f48e822c48d73b333024c6fd37\" returns successfully"
Dec 13 02:15:41.641315 kubelet[1511]: E1213 02:15:41.641246 1511 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:15:42.000469 kubelet[1511]: I1213 02:15:42.000303 1511 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.102912944 podStartE2EDuration="7.000280058s" podCreationTimestamp="2024-12-13 02:15:35 +0000 UTC" firstStartedPulling="2024-12-13 02:15:36.215586684 +0000 UTC m=+40.384354262" lastFinishedPulling="2024-12-13 02:15:41.112953794 +0000 UTC m=+45.281721376" observedRunningTime="2024-12-13 02:15:41.998898439 +0000 UTC m=+46.167666031" watchObservedRunningTime="2024-12-13 02:15:42.000280058 +0000 UTC m=+46.169047650"
Dec 13 02:15:42.642189 kubelet[1511]: E1213 02:15:42.642099 1511 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:15:43.642828 kubelet[1511]: E1213 02:15:43.642760 1511 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:15:44.643419 kubelet[1511]: E1213 02:15:44.643341 1511 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:15:45.644506 kubelet[1511]: E1213 02:15:45.644429 1511 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:15:46.645320 kubelet[1511]: E1213 02:15:46.645248 1511 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:15:47.645720 kubelet[1511]: E1213 02:15:47.645647 1511 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:15:48.646327 kubelet[1511]: E1213 02:15:48.646240 1511 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:15:49.647309 kubelet[1511]: E1213 02:15:49.647227 1511 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:15:50.648431 kubelet[1511]: E1213 02:15:50.648350 1511 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:15:50.711667 kubelet[1511]: I1213 02:15:50.711611 1511 topology_manager.go:215] "Topology Admit Handler" podUID="c80c7899-9865-4fb8-b04e-6e7a0a13c2fc" podNamespace="default" podName="test-pod-1"
Dec 13 02:15:50.719828 systemd[1]: Created slice kubepods-besteffort-podc80c7899_9865_4fb8_b04e_6e7a0a13c2fc.slice.
Dec 13 02:15:50.747758 kubelet[1511]: I1213 02:15:50.747698 1511 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-9634a011-826a-4ef6-a7cc-85c03acfe513\" (UniqueName: \"kubernetes.io/nfs/c80c7899-9865-4fb8-b04e-6e7a0a13c2fc-pvc-9634a011-826a-4ef6-a7cc-85c03acfe513\") pod \"test-pod-1\" (UID: \"c80c7899-9865-4fb8-b04e-6e7a0a13c2fc\") " pod="default/test-pod-1"
Dec 13 02:15:50.748141 kubelet[1511]: I1213 02:15:50.748107 1511 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgnbk\" (UniqueName: \"kubernetes.io/projected/c80c7899-9865-4fb8-b04e-6e7a0a13c2fc-kube-api-access-vgnbk\") pod \"test-pod-1\" (UID: \"c80c7899-9865-4fb8-b04e-6e7a0a13c2fc\") " pod="default/test-pod-1"
Dec 13 02:15:50.892204 kernel: FS-Cache: Loaded
Dec 13 02:15:50.952707 kernel: RPC: Registered named UNIX socket transport module.
Dec 13 02:15:50.952893 kernel: RPC: Registered udp transport module.
Dec 13 02:15:50.952945 kernel: RPC: Registered tcp transport module.
Dec 13 02:15:50.957441 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Dec 13 02:15:51.044209 kernel: FS-Cache: Netfs 'nfs' registered for caching
Dec 13 02:15:51.270756 kernel: NFS: Registering the id_resolver key type
Dec 13 02:15:51.270880 kernel: Key type id_resolver registered
Dec 13 02:15:51.270937 kernel: Key type id_legacy registered
Dec 13 02:15:51.325297 nfsidmap[2815]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'c.flatcar-212911.internal'
Dec 13 02:15:51.337063 nfsidmap[2817]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'c.flatcar-212911.internal'
Dec 13 02:15:51.624570 env[1218]: time="2024-12-13T02:15:51.624013557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:c80c7899-9865-4fb8-b04e-6e7a0a13c2fc,Namespace:default,Attempt:0,}"
Dec 13 02:15:51.649439 kubelet[1511]: E1213 02:15:51.649404 1511 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:15:51.672203 systemd-networkd[1028]: lxc13a22e055847: Link UP
Dec 13 02:15:51.688198 kernel: eth0: renamed from tmp94572
Dec 13 02:15:51.709254 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Dec 13 02:15:51.709391 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc13a22e055847: link becomes ready
Dec 13 02:15:51.709724 systemd-networkd[1028]: lxc13a22e055847: Gained carrier
Dec 13 02:15:51.924707 env[1218]: time="2024-12-13T02:15:51.924511662Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 02:15:51.924707 env[1218]: time="2024-12-13T02:15:51.924569647Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 02:15:51.924707 env[1218]: time="2024-12-13T02:15:51.924588413Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 02:15:51.925457 env[1218]: time="2024-12-13T02:15:51.925387548Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/945720db84948b194b346a8035d82fde3b4b9b9934a4c21860716df41960fcec pid=2842 runtime=io.containerd.runc.v2
Dec 13 02:15:51.963305 systemd[1]: run-containerd-runc-k8s.io-945720db84948b194b346a8035d82fde3b4b9b9934a4c21860716df41960fcec-runc.N6B137.mount: Deactivated successfully.
Dec 13 02:15:51.971589 systemd[1]: Started cri-containerd-945720db84948b194b346a8035d82fde3b4b9b9934a4c21860716df41960fcec.scope.
Dec 13 02:15:52.028540 env[1218]: time="2024-12-13T02:15:52.028486475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:c80c7899-9865-4fb8-b04e-6e7a0a13c2fc,Namespace:default,Attempt:0,} returns sandbox id \"945720db84948b194b346a8035d82fde3b4b9b9934a4c21860716df41960fcec\""
Dec 13 02:15:52.031508 env[1218]: time="2024-12-13T02:15:52.031462744Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Dec 13 02:15:52.257868 env[1218]: time="2024-12-13T02:15:52.257701728Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:15:52.260048 env[1218]: time="2024-12-13T02:15:52.259999524Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:15:52.262490 env[1218]: time="2024-12-13T02:15:52.262453097Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:15:52.264720 env[1218]: time="2024-12-13T02:15:52.264676304Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:15:52.265696 env[1218]: time="2024-12-13T02:15:52.265645689Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\""
Dec 13 02:15:52.269437 env[1218]: time="2024-12-13T02:15:52.269386540Z" level=info msg="CreateContainer within sandbox \"945720db84948b194b346a8035d82fde3b4b9b9934a4c21860716df41960fcec\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Dec 13 02:15:52.295942 env[1218]: time="2024-12-13T02:15:52.295883275Z" level=info msg="CreateContainer within sandbox \"945720db84948b194b346a8035d82fde3b4b9b9934a4c21860716df41960fcec\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"34bd7726dbd8693d83bc0e7351d04266b233ef0a209aca3d09bcf6518041761c\""
Dec 13 02:15:52.296749 env[1218]: time="2024-12-13T02:15:52.296710408Z" level=info msg="StartContainer for \"34bd7726dbd8693d83bc0e7351d04266b233ef0a209aca3d09bcf6518041761c\""
Dec 13 02:15:52.319441 systemd[1]: Started cri-containerd-34bd7726dbd8693d83bc0e7351d04266b233ef0a209aca3d09bcf6518041761c.scope.
Dec 13 02:15:52.358206 env[1218]: time="2024-12-13T02:15:52.358112680Z" level=info msg="StartContainer for \"34bd7726dbd8693d83bc0e7351d04266b233ef0a209aca3d09bcf6518041761c\" returns successfully"
Dec 13 02:15:52.650473 kubelet[1511]: E1213 02:15:52.650399 1511 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:15:53.651339 kubelet[1511]: E1213 02:15:53.651274 1511 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:15:53.689511 systemd-networkd[1028]: lxc13a22e055847: Gained IPv6LL
Dec 13 02:15:54.651713 kubelet[1511]: E1213 02:15:54.651641 1511 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:15:55.652716 kubelet[1511]: E1213 02:15:55.652626 1511 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:15:56.080686 kubelet[1511]: I1213 02:15:56.080581 1511 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=19.843634857 podStartE2EDuration="20.080550846s" podCreationTimestamp="2024-12-13 02:15:36 +0000 UTC" firstStartedPulling="2024-12-13 02:15:52.030532821 +0000 UTC m=+56.199300389" lastFinishedPulling="2024-12-13 02:15:52.267448794 +0000 UTC m=+56.436216378" observedRunningTime="2024-12-13 02:15:53.016373419 +0000 UTC m=+57.185141004" watchObservedRunningTime="2024-12-13 02:15:56.080550846 +0000 UTC m=+60.249318436"
Dec 13 02:15:56.112272 systemd[1]: run-containerd-runc-k8s.io-3311c5c670dd02ae2494ab911986187e4509817ae8e811324da56775efa78f31-runc.dYEA6w.mount: Deactivated successfully.
Dec 13 02:15:56.140102 env[1218]: time="2024-12-13T02:15:56.140000898Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 02:15:56.150840 env[1218]: time="2024-12-13T02:15:56.150778350Z" level=info msg="StopContainer for \"3311c5c670dd02ae2494ab911986187e4509817ae8e811324da56775efa78f31\" with timeout 2 (s)"
Dec 13 02:15:56.151381 env[1218]: time="2024-12-13T02:15:56.151336845Z" level=info msg="Stop container \"3311c5c670dd02ae2494ab911986187e4509817ae8e811324da56775efa78f31\" with signal terminated"
Dec 13 02:15:56.163236 systemd-networkd[1028]: lxc_health: Link DOWN
Dec 13 02:15:56.163251 systemd-networkd[1028]: lxc_health: Lost carrier
Dec 13 02:15:56.189800 systemd[1]: cri-containerd-3311c5c670dd02ae2494ab911986187e4509817ae8e811324da56775efa78f31.scope: Deactivated successfully.
Dec 13 02:15:56.190282 systemd[1]: cri-containerd-3311c5c670dd02ae2494ab911986187e4509817ae8e811324da56775efa78f31.scope: Consumed 8.652s CPU time.
Dec 13 02:15:56.224966 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3311c5c670dd02ae2494ab911986187e4509817ae8e811324da56775efa78f31-rootfs.mount: Deactivated successfully.
Dec 13 02:15:56.609024 kubelet[1511]: E1213 02:15:56.608923 1511 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:15:56.652790 kubelet[1511]: E1213 02:15:56.652751 1511 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:15:56.737273 kubelet[1511]: E1213 02:15:56.737190 1511 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 13 02:15:57.654867 kubelet[1511]: E1213 02:15:57.654811 1511 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:15:57.762512 kubelet[1511]: I1213 02:15:57.762428 1511 setters.go:580] "Node became not ready" node="10.128.0.35" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T02:15:57Z","lastTransitionTime":"2024-12-13T02:15:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Dec 13 02:15:57.773192 env[1218]: time="2024-12-13T02:15:57.772829071Z" level=info msg="shim disconnected" id=3311c5c670dd02ae2494ab911986187e4509817ae8e811324da56775efa78f31
Dec 13 02:15:57.773192 env[1218]: time="2024-12-13T02:15:57.772908306Z" level=warning msg="cleaning up after shim disconnected" id=3311c5c670dd02ae2494ab911986187e4509817ae8e811324da56775efa78f31 namespace=k8s.io
Dec 13 02:15:57.773192 env[1218]: time="2024-12-13T02:15:57.772925650Z" level=info msg="cleaning up dead shim"
Dec 13 02:15:57.785406 env[1218]: time="2024-12-13T02:15:57.785343179Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:15:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2972 runtime=io.containerd.runc.v2\n"
Dec 13 02:15:57.788593 env[1218]: time="2024-12-13T02:15:57.788542646Z" level=info msg="StopContainer for \"3311c5c670dd02ae2494ab911986187e4509817ae8e811324da56775efa78f31\" returns successfully"
Dec 13 02:15:57.789347 env[1218]: time="2024-12-13T02:15:57.789299895Z" level=info msg="StopPodSandbox for \"53be583280629f4c41f241767f0e1ceadcea4a395da1fd7db438fe74e1cf5bb0\""
Dec 13 02:15:57.789482 env[1218]: time="2024-12-13T02:15:57.789377021Z" level=info msg="Container to stop \"f71aca1e616f067bde523f48c2e96a25e3c1e3bfa9d23f15b67046a0a36b9808\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 02:15:57.789482 env[1218]: time="2024-12-13T02:15:57.789402228Z" level=info msg="Container to stop \"d37ada6602beeea5889312eff7e2ac3ce26aaa72e5367928c595ceffcba2035a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 02:15:57.789482 env[1218]: time="2024-12-13T02:15:57.789420330Z" level=info msg="Container to stop \"0833b783a8fd94280c27311e548a0e5b002beb4e14ca0b9095712ec2b4e66a4e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 02:15:57.789482 env[1218]: time="2024-12-13T02:15:57.789439124Z" level=info msg="Container to stop \"0f4f17bd9444ca6069d66e474d67915d2c018f6b371bf42cb34e3527e99ae5f7\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 02:15:57.789482 env[1218]: time="2024-12-13T02:15:57.789456383Z" level=info msg="Container to stop \"3311c5c670dd02ae2494ab911986187e4509817ae8e811324da56775efa78f31\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 02:15:57.793318 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-53be583280629f4c41f241767f0e1ceadcea4a395da1fd7db438fe74e1cf5bb0-shm.mount: Deactivated successfully.
Dec 13 02:15:57.803567 systemd[1]: cri-containerd-53be583280629f4c41f241767f0e1ceadcea4a395da1fd7db438fe74e1cf5bb0.scope: Deactivated successfully.
Dec 13 02:15:57.832131 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-53be583280629f4c41f241767f0e1ceadcea4a395da1fd7db438fe74e1cf5bb0-rootfs.mount: Deactivated successfully.
Dec 13 02:15:57.837349 env[1218]: time="2024-12-13T02:15:57.837289002Z" level=info msg="shim disconnected" id=53be583280629f4c41f241767f0e1ceadcea4a395da1fd7db438fe74e1cf5bb0
Dec 13 02:15:57.837541 env[1218]: time="2024-12-13T02:15:57.837356153Z" level=warning msg="cleaning up after shim disconnected" id=53be583280629f4c41f241767f0e1ceadcea4a395da1fd7db438fe74e1cf5bb0 namespace=k8s.io
Dec 13 02:15:57.837541 env[1218]: time="2024-12-13T02:15:57.837371960Z" level=info msg="cleaning up dead shim"
Dec 13 02:15:57.849558 env[1218]: time="2024-12-13T02:15:57.849504735Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:15:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3004 runtime=io.containerd.runc.v2\n"
Dec 13 02:15:57.849944 env[1218]: time="2024-12-13T02:15:57.849906061Z" level=info msg="TearDown network for sandbox \"53be583280629f4c41f241767f0e1ceadcea4a395da1fd7db438fe74e1cf5bb0\" successfully"
Dec 13 02:15:57.850056 env[1218]: time="2024-12-13T02:15:57.849941959Z" level=info msg="StopPodSandbox for \"53be583280629f4c41f241767f0e1ceadcea4a395da1fd7db438fe74e1cf5bb0\" returns successfully"
Dec 13 02:15:57.896555 kubelet[1511]: I1213 02:15:57.896513 1511 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/370f0b5f-3ee0-43cb-a377-5793d1ec2c18-cilium-run\") pod \"370f0b5f-3ee0-43cb-a377-5793d1ec2c18\" (UID: \"370f0b5f-3ee0-43cb-a377-5793d1ec2c18\") "
Dec 13 02:15:57.896816 kubelet[1511]: I1213 02:15:57.896569 1511 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/370f0b5f-3ee0-43cb-a377-5793d1ec2c18-hubble-tls\") pod \"370f0b5f-3ee0-43cb-a377-5793d1ec2c18\" (UID: \"370f0b5f-3ee0-43cb-a377-5793d1ec2c18\") "
Dec 13 02:15:57.896816 kubelet[1511]: I1213 02:15:57.896605 1511 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/370f0b5f-3ee0-43cb-a377-5793d1ec2c18-clustermesh-secrets\") pod \"370f0b5f-3ee0-43cb-a377-5793d1ec2c18\" (UID: \"370f0b5f-3ee0-43cb-a377-5793d1ec2c18\") "
Dec 13 02:15:57.896816 kubelet[1511]: I1213 02:15:57.896630 1511 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/370f0b5f-3ee0-43cb-a377-5793d1ec2c18-host-proc-sys-net\") pod \"370f0b5f-3ee0-43cb-a377-5793d1ec2c18\" (UID: \"370f0b5f-3ee0-43cb-a377-5793d1ec2c18\") "
Dec 13 02:15:57.896816 kubelet[1511]: I1213 02:15:57.896654 1511 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/370f0b5f-3ee0-43cb-a377-5793d1ec2c18-host-proc-sys-kernel\") pod \"370f0b5f-3ee0-43cb-a377-5793d1ec2c18\" (UID: \"370f0b5f-3ee0-43cb-a377-5793d1ec2c18\") "
Dec 13 02:15:57.896816 kubelet[1511]: I1213 02:15:57.896675 1511 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/370f0b5f-3ee0-43cb-a377-5793d1ec2c18-etc-cni-netd\") pod \"370f0b5f-3ee0-43cb-a377-5793d1ec2c18\" (UID: \"370f0b5f-3ee0-43cb-a377-5793d1ec2c18\") "
Dec 13 02:15:57.896816 kubelet[1511]: I1213 02:15:57.896695 1511 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/370f0b5f-3ee0-43cb-a377-5793d1ec2c18-lib-modules\") pod \"370f0b5f-3ee0-43cb-a377-5793d1ec2c18\" (UID: \"370f0b5f-3ee0-43cb-a377-5793d1ec2c18\") "
Dec 13 02:15:57.897138 kubelet[1511]: I1213 02:15:57.896720 1511 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/370f0b5f-3ee0-43cb-a377-5793d1ec2c18-hostproc\") pod \"370f0b5f-3ee0-43cb-a377-5793d1ec2c18\" (UID: \"370f0b5f-3ee0-43cb-a377-5793d1ec2c18\") "
Dec 13 02:15:57.897138 kubelet[1511]: I1213 02:15:57.896752 1511 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bv2mq\" (UniqueName: \"kubernetes.io/projected/370f0b5f-3ee0-43cb-a377-5793d1ec2c18-kube-api-access-bv2mq\") pod \"370f0b5f-3ee0-43cb-a377-5793d1ec2c18\" (UID: \"370f0b5f-3ee0-43cb-a377-5793d1ec2c18\") "
Dec 13 02:15:57.897138 kubelet[1511]: I1213 02:15:57.896781 1511 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/370f0b5f-3ee0-43cb-a377-5793d1ec2c18-cilium-config-path\") pod \"370f0b5f-3ee0-43cb-a377-5793d1ec2c18\" (UID: \"370f0b5f-3ee0-43cb-a377-5793d1ec2c18\") "
Dec 13 02:15:57.897138 kubelet[1511]: I1213 02:15:57.896807 1511 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/370f0b5f-3ee0-43cb-a377-5793d1ec2c18-cilium-cgroup\") pod \"370f0b5f-3ee0-43cb-a377-5793d1ec2c18\" (UID: \"370f0b5f-3ee0-43cb-a377-5793d1ec2c18\") "
Dec 13 02:15:57.897138 kubelet[1511]: I1213 02:15:57.896832 1511 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/370f0b5f-3ee0-43cb-a377-5793d1ec2c18-cni-path\") pod \"370f0b5f-3ee0-43cb-a377-5793d1ec2c18\" (UID: \"370f0b5f-3ee0-43cb-a377-5793d1ec2c18\") "
Dec 13 02:15:57.897138 kubelet[1511]: I1213 02:15:57.896856 1511 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/370f0b5f-3ee0-43cb-a377-5793d1ec2c18-bpf-maps\") pod \"370f0b5f-3ee0-43cb-a377-5793d1ec2c18\" (UID: \"370f0b5f-3ee0-43cb-a377-5793d1ec2c18\") "
Dec 13 02:15:57.897507 kubelet[1511]: I1213 02:15:57.896883 1511
reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/370f0b5f-3ee0-43cb-a377-5793d1ec2c18-xtables-lock\") pod \"370f0b5f-3ee0-43cb-a377-5793d1ec2c18\" (UID: \"370f0b5f-3ee0-43cb-a377-5793d1ec2c18\") " Dec 13 02:15:57.897507 kubelet[1511]: I1213 02:15:57.896970 1511 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/370f0b5f-3ee0-43cb-a377-5793d1ec2c18-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "370f0b5f-3ee0-43cb-a377-5793d1ec2c18" (UID: "370f0b5f-3ee0-43cb-a377-5793d1ec2c18"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:15:57.897507 kubelet[1511]: I1213 02:15:57.897018 1511 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/370f0b5f-3ee0-43cb-a377-5793d1ec2c18-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "370f0b5f-3ee0-43cb-a377-5793d1ec2c18" (UID: "370f0b5f-3ee0-43cb-a377-5793d1ec2c18"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:15:57.900202 kubelet[1511]: I1213 02:15:57.897725 1511 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/370f0b5f-3ee0-43cb-a377-5793d1ec2c18-hostproc" (OuterVolumeSpecName: "hostproc") pod "370f0b5f-3ee0-43cb-a377-5793d1ec2c18" (UID: "370f0b5f-3ee0-43cb-a377-5793d1ec2c18"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:15:57.900202 kubelet[1511]: I1213 02:15:57.898569 1511 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/370f0b5f-3ee0-43cb-a377-5793d1ec2c18-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "370f0b5f-3ee0-43cb-a377-5793d1ec2c18" (UID: "370f0b5f-3ee0-43cb-a377-5793d1ec2c18"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:15:57.900202 kubelet[1511]: I1213 02:15:57.898627 1511 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/370f0b5f-3ee0-43cb-a377-5793d1ec2c18-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "370f0b5f-3ee0-43cb-a377-5793d1ec2c18" (UID: "370f0b5f-3ee0-43cb-a377-5793d1ec2c18"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:15:57.900202 kubelet[1511]: I1213 02:15:57.898657 1511 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/370f0b5f-3ee0-43cb-a377-5793d1ec2c18-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "370f0b5f-3ee0-43cb-a377-5793d1ec2c18" (UID: "370f0b5f-3ee0-43cb-a377-5793d1ec2c18"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:15:57.900202 kubelet[1511]: I1213 02:15:57.898683 1511 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/370f0b5f-3ee0-43cb-a377-5793d1ec2c18-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "370f0b5f-3ee0-43cb-a377-5793d1ec2c18" (UID: "370f0b5f-3ee0-43cb-a377-5793d1ec2c18"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:15:57.900547 kubelet[1511]: I1213 02:15:57.898718 1511 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/370f0b5f-3ee0-43cb-a377-5793d1ec2c18-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "370f0b5f-3ee0-43cb-a377-5793d1ec2c18" (UID: "370f0b5f-3ee0-43cb-a377-5793d1ec2c18"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:15:57.904613 systemd[1]: var-lib-kubelet-pods-370f0b5f\x2d3ee0\x2d43cb\x2da377\x2d5793d1ec2c18-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Dec 13 02:15:57.906112 kubelet[1511]: I1213 02:15:57.906027 1511 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/370f0b5f-3ee0-43cb-a377-5793d1ec2c18-cni-path" (OuterVolumeSpecName: "cni-path") pod "370f0b5f-3ee0-43cb-a377-5793d1ec2c18" (UID: "370f0b5f-3ee0-43cb-a377-5793d1ec2c18"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:15:57.907580 kubelet[1511]: I1213 02:15:57.907550 1511 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/370f0b5f-3ee0-43cb-a377-5793d1ec2c18-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "370f0b5f-3ee0-43cb-a377-5793d1ec2c18" (UID: "370f0b5f-3ee0-43cb-a377-5793d1ec2c18"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 02:15:57.907965 kubelet[1511]: I1213 02:15:57.907905 1511 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/370f0b5f-3ee0-43cb-a377-5793d1ec2c18-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "370f0b5f-3ee0-43cb-a377-5793d1ec2c18" (UID: "370f0b5f-3ee0-43cb-a377-5793d1ec2c18"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:15:57.908787 kubelet[1511]: I1213 02:15:57.908749 1511 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/370f0b5f-3ee0-43cb-a377-5793d1ec2c18-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "370f0b5f-3ee0-43cb-a377-5793d1ec2c18" (UID: "370f0b5f-3ee0-43cb-a377-5793d1ec2c18"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 02:15:57.914480 systemd[1]: var-lib-kubelet-pods-370f0b5f\x2d3ee0\x2d43cb\x2da377\x2d5793d1ec2c18-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Dec 13 02:15:57.915923 kubelet[1511]: I1213 02:15:57.915891 1511 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/370f0b5f-3ee0-43cb-a377-5793d1ec2c18-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "370f0b5f-3ee0-43cb-a377-5793d1ec2c18" (UID: "370f0b5f-3ee0-43cb-a377-5793d1ec2c18"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 02:15:57.919810 systemd[1]: var-lib-kubelet-pods-370f0b5f\x2d3ee0\x2d43cb\x2da377\x2d5793d1ec2c18-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbv2mq.mount: Deactivated successfully. Dec 13 02:15:57.921310 kubelet[1511]: I1213 02:15:57.921264 1511 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/370f0b5f-3ee0-43cb-a377-5793d1ec2c18-kube-api-access-bv2mq" (OuterVolumeSpecName: "kube-api-access-bv2mq") pod "370f0b5f-3ee0-43cb-a377-5793d1ec2c18" (UID: "370f0b5f-3ee0-43cb-a377-5793d1ec2c18"). InnerVolumeSpecName "kube-api-access-bv2mq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 02:15:57.998116 kubelet[1511]: I1213 02:15:57.998070 1511 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/370f0b5f-3ee0-43cb-a377-5793d1ec2c18-cilium-run\") on node \"10.128.0.35\" DevicePath \"\"" Dec 13 02:15:57.998116 kubelet[1511]: I1213 02:15:57.998110 1511 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/370f0b5f-3ee0-43cb-a377-5793d1ec2c18-hubble-tls\") on node \"10.128.0.35\" DevicePath \"\"" Dec 13 02:15:57.998395 kubelet[1511]: I1213 02:15:57.998126 1511 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/370f0b5f-3ee0-43cb-a377-5793d1ec2c18-host-proc-sys-kernel\") on node \"10.128.0.35\" DevicePath \"\"" Dec 13 02:15:57.998395 kubelet[1511]: I1213 02:15:57.998145 1511 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/370f0b5f-3ee0-43cb-a377-5793d1ec2c18-etc-cni-netd\") on node \"10.128.0.35\" DevicePath \"\"" Dec 13 02:15:57.998395 kubelet[1511]: I1213 02:15:57.998179 1511 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/370f0b5f-3ee0-43cb-a377-5793d1ec2c18-lib-modules\") on node \"10.128.0.35\" DevicePath \"\"" Dec 13 02:15:57.998395 kubelet[1511]: I1213 02:15:57.998193 1511 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/370f0b5f-3ee0-43cb-a377-5793d1ec2c18-clustermesh-secrets\") on node \"10.128.0.35\" DevicePath \"\"" Dec 13 02:15:57.998395 kubelet[1511]: I1213 02:15:57.998205 1511 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/370f0b5f-3ee0-43cb-a377-5793d1ec2c18-host-proc-sys-net\") on node \"10.128.0.35\" DevicePath \"\"" Dec 13 02:15:57.998395 
kubelet[1511]: I1213 02:15:57.998218 1511 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/370f0b5f-3ee0-43cb-a377-5793d1ec2c18-hostproc\") on node \"10.128.0.35\" DevicePath \"\"" Dec 13 02:15:57.998395 kubelet[1511]: I1213 02:15:57.998231 1511 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-bv2mq\" (UniqueName: \"kubernetes.io/projected/370f0b5f-3ee0-43cb-a377-5793d1ec2c18-kube-api-access-bv2mq\") on node \"10.128.0.35\" DevicePath \"\"" Dec 13 02:15:57.998395 kubelet[1511]: I1213 02:15:57.998243 1511 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/370f0b5f-3ee0-43cb-a377-5793d1ec2c18-cni-path\") on node \"10.128.0.35\" DevicePath \"\"" Dec 13 02:15:57.998653 kubelet[1511]: I1213 02:15:57.998253 1511 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/370f0b5f-3ee0-43cb-a377-5793d1ec2c18-bpf-maps\") on node \"10.128.0.35\" DevicePath \"\"" Dec 13 02:15:57.998653 kubelet[1511]: I1213 02:15:57.998269 1511 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/370f0b5f-3ee0-43cb-a377-5793d1ec2c18-xtables-lock\") on node \"10.128.0.35\" DevicePath \"\"" Dec 13 02:15:57.998653 kubelet[1511]: I1213 02:15:57.998285 1511 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/370f0b5f-3ee0-43cb-a377-5793d1ec2c18-cilium-config-path\") on node \"10.128.0.35\" DevicePath \"\"" Dec 13 02:15:57.998653 kubelet[1511]: I1213 02:15:57.998298 1511 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/370f0b5f-3ee0-43cb-a377-5793d1ec2c18-cilium-cgroup\") on node \"10.128.0.35\" DevicePath \"\"" Dec 13 02:15:58.021928 kubelet[1511]: I1213 02:15:58.021897 1511 scope.go:117] "RemoveContainer" 
containerID="3311c5c670dd02ae2494ab911986187e4509817ae8e811324da56775efa78f31" Dec 13 02:15:58.023617 env[1218]: time="2024-12-13T02:15:58.023559458Z" level=info msg="RemoveContainer for \"3311c5c670dd02ae2494ab911986187e4509817ae8e811324da56775efa78f31\"" Dec 13 02:15:58.028331 env[1218]: time="2024-12-13T02:15:58.028274311Z" level=info msg="RemoveContainer for \"3311c5c670dd02ae2494ab911986187e4509817ae8e811324da56775efa78f31\" returns successfully" Dec 13 02:15:58.029129 systemd[1]: Removed slice kubepods-burstable-pod370f0b5f_3ee0_43cb_a377_5793d1ec2c18.slice. Dec 13 02:15:58.029310 systemd[1]: kubepods-burstable-pod370f0b5f_3ee0_43cb_a377_5793d1ec2c18.slice: Consumed 8.817s CPU time. Dec 13 02:15:58.030838 kubelet[1511]: I1213 02:15:58.030813 1511 scope.go:117] "RemoveContainer" containerID="0833b783a8fd94280c27311e548a0e5b002beb4e14ca0b9095712ec2b4e66a4e" Dec 13 02:15:58.032292 env[1218]: time="2024-12-13T02:15:58.032250707Z" level=info msg="RemoveContainer for \"0833b783a8fd94280c27311e548a0e5b002beb4e14ca0b9095712ec2b4e66a4e\"" Dec 13 02:15:58.036759 env[1218]: time="2024-12-13T02:15:58.036717769Z" level=info msg="RemoveContainer for \"0833b783a8fd94280c27311e548a0e5b002beb4e14ca0b9095712ec2b4e66a4e\" returns successfully" Dec 13 02:15:58.037011 kubelet[1511]: I1213 02:15:58.036973 1511 scope.go:117] "RemoveContainer" containerID="d37ada6602beeea5889312eff7e2ac3ce26aaa72e5367928c595ceffcba2035a" Dec 13 02:15:58.038915 env[1218]: time="2024-12-13T02:15:58.038563203Z" level=info msg="RemoveContainer for \"d37ada6602beeea5889312eff7e2ac3ce26aaa72e5367928c595ceffcba2035a\"" Dec 13 02:15:58.042224 env[1218]: time="2024-12-13T02:15:58.042181124Z" level=info msg="RemoveContainer for \"d37ada6602beeea5889312eff7e2ac3ce26aaa72e5367928c595ceffcba2035a\" returns successfully" Dec 13 02:15:58.042524 kubelet[1511]: I1213 02:15:58.042504 1511 scope.go:117] "RemoveContainer" containerID="f71aca1e616f067bde523f48c2e96a25e3c1e3bfa9d23f15b67046a0a36b9808" Dec 13 
02:15:58.044109 env[1218]: time="2024-12-13T02:15:58.044062981Z" level=info msg="RemoveContainer for \"f71aca1e616f067bde523f48c2e96a25e3c1e3bfa9d23f15b67046a0a36b9808\"" Dec 13 02:15:58.047894 env[1218]: time="2024-12-13T02:15:58.047855973Z" level=info msg="RemoveContainer for \"f71aca1e616f067bde523f48c2e96a25e3c1e3bfa9d23f15b67046a0a36b9808\" returns successfully" Dec 13 02:15:58.048131 kubelet[1511]: I1213 02:15:58.048108 1511 scope.go:117] "RemoveContainer" containerID="0f4f17bd9444ca6069d66e474d67915d2c018f6b371bf42cb34e3527e99ae5f7" Dec 13 02:15:58.049445 env[1218]: time="2024-12-13T02:15:58.049408973Z" level=info msg="RemoveContainer for \"0f4f17bd9444ca6069d66e474d67915d2c018f6b371bf42cb34e3527e99ae5f7\"" Dec 13 02:15:58.052824 env[1218]: time="2024-12-13T02:15:58.052766310Z" level=info msg="RemoveContainer for \"0f4f17bd9444ca6069d66e474d67915d2c018f6b371bf42cb34e3527e99ae5f7\" returns successfully" Dec 13 02:15:58.053012 kubelet[1511]: I1213 02:15:58.052990 1511 scope.go:117] "RemoveContainer" containerID="3311c5c670dd02ae2494ab911986187e4509817ae8e811324da56775efa78f31" Dec 13 02:15:58.053527 env[1218]: time="2024-12-13T02:15:58.053404268Z" level=error msg="ContainerStatus for \"3311c5c670dd02ae2494ab911986187e4509817ae8e811324da56775efa78f31\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3311c5c670dd02ae2494ab911986187e4509817ae8e811324da56775efa78f31\": not found" Dec 13 02:15:58.053686 kubelet[1511]: E1213 02:15:58.053654 1511 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3311c5c670dd02ae2494ab911986187e4509817ae8e811324da56775efa78f31\": not found" containerID="3311c5c670dd02ae2494ab911986187e4509817ae8e811324da56775efa78f31" Dec 13 02:15:58.053838 kubelet[1511]: I1213 02:15:58.053719 1511 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"3311c5c670dd02ae2494ab911986187e4509817ae8e811324da56775efa78f31"} err="failed to get container status \"3311c5c670dd02ae2494ab911986187e4509817ae8e811324da56775efa78f31\": rpc error: code = NotFound desc = an error occurred when try to find container \"3311c5c670dd02ae2494ab911986187e4509817ae8e811324da56775efa78f31\": not found" Dec 13 02:15:58.053930 kubelet[1511]: I1213 02:15:58.053843 1511 scope.go:117] "RemoveContainer" containerID="0833b783a8fd94280c27311e548a0e5b002beb4e14ca0b9095712ec2b4e66a4e" Dec 13 02:15:58.054132 env[1218]: time="2024-12-13T02:15:58.054058666Z" level=error msg="ContainerStatus for \"0833b783a8fd94280c27311e548a0e5b002beb4e14ca0b9095712ec2b4e66a4e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0833b783a8fd94280c27311e548a0e5b002beb4e14ca0b9095712ec2b4e66a4e\": not found" Dec 13 02:15:58.054306 kubelet[1511]: E1213 02:15:58.054277 1511 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0833b783a8fd94280c27311e548a0e5b002beb4e14ca0b9095712ec2b4e66a4e\": not found" containerID="0833b783a8fd94280c27311e548a0e5b002beb4e14ca0b9095712ec2b4e66a4e" Dec 13 02:15:58.054407 kubelet[1511]: I1213 02:15:58.054317 1511 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0833b783a8fd94280c27311e548a0e5b002beb4e14ca0b9095712ec2b4e66a4e"} err="failed to get container status \"0833b783a8fd94280c27311e548a0e5b002beb4e14ca0b9095712ec2b4e66a4e\": rpc error: code = NotFound desc = an error occurred when try to find container \"0833b783a8fd94280c27311e548a0e5b002beb4e14ca0b9095712ec2b4e66a4e\": not found" Dec 13 02:15:58.054407 kubelet[1511]: I1213 02:15:58.054343 1511 scope.go:117] "RemoveContainer" containerID="d37ada6602beeea5889312eff7e2ac3ce26aaa72e5367928c595ceffcba2035a" Dec 13 02:15:58.054633 env[1218]: 
time="2024-12-13T02:15:58.054555520Z" level=error msg="ContainerStatus for \"d37ada6602beeea5889312eff7e2ac3ce26aaa72e5367928c595ceffcba2035a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d37ada6602beeea5889312eff7e2ac3ce26aaa72e5367928c595ceffcba2035a\": not found" Dec 13 02:15:58.054778 kubelet[1511]: E1213 02:15:58.054748 1511 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d37ada6602beeea5889312eff7e2ac3ce26aaa72e5367928c595ceffcba2035a\": not found" containerID="d37ada6602beeea5889312eff7e2ac3ce26aaa72e5367928c595ceffcba2035a" Dec 13 02:15:58.054886 kubelet[1511]: I1213 02:15:58.054783 1511 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d37ada6602beeea5889312eff7e2ac3ce26aaa72e5367928c595ceffcba2035a"} err="failed to get container status \"d37ada6602beeea5889312eff7e2ac3ce26aaa72e5367928c595ceffcba2035a\": rpc error: code = NotFound desc = an error occurred when try to find container \"d37ada6602beeea5889312eff7e2ac3ce26aaa72e5367928c595ceffcba2035a\": not found" Dec 13 02:15:58.054886 kubelet[1511]: I1213 02:15:58.054813 1511 scope.go:117] "RemoveContainer" containerID="f71aca1e616f067bde523f48c2e96a25e3c1e3bfa9d23f15b67046a0a36b9808" Dec 13 02:15:58.055099 env[1218]: time="2024-12-13T02:15:58.055017842Z" level=error msg="ContainerStatus for \"f71aca1e616f067bde523f48c2e96a25e3c1e3bfa9d23f15b67046a0a36b9808\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f71aca1e616f067bde523f48c2e96a25e3c1e3bfa9d23f15b67046a0a36b9808\": not found" Dec 13 02:15:58.055249 kubelet[1511]: E1213 02:15:58.055217 1511 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"f71aca1e616f067bde523f48c2e96a25e3c1e3bfa9d23f15b67046a0a36b9808\": not found" containerID="f71aca1e616f067bde523f48c2e96a25e3c1e3bfa9d23f15b67046a0a36b9808" Dec 13 02:15:58.055353 kubelet[1511]: I1213 02:15:58.055254 1511 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f71aca1e616f067bde523f48c2e96a25e3c1e3bfa9d23f15b67046a0a36b9808"} err="failed to get container status \"f71aca1e616f067bde523f48c2e96a25e3c1e3bfa9d23f15b67046a0a36b9808\": rpc error: code = NotFound desc = an error occurred when try to find container \"f71aca1e616f067bde523f48c2e96a25e3c1e3bfa9d23f15b67046a0a36b9808\": not found" Dec 13 02:15:58.055353 kubelet[1511]: I1213 02:15:58.055276 1511 scope.go:117] "RemoveContainer" containerID="0f4f17bd9444ca6069d66e474d67915d2c018f6b371bf42cb34e3527e99ae5f7" Dec 13 02:15:58.055541 env[1218]: time="2024-12-13T02:15:58.055475545Z" level=error msg="ContainerStatus for \"0f4f17bd9444ca6069d66e474d67915d2c018f6b371bf42cb34e3527e99ae5f7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0f4f17bd9444ca6069d66e474d67915d2c018f6b371bf42cb34e3527e99ae5f7\": not found" Dec 13 02:15:58.055706 kubelet[1511]: E1213 02:15:58.055675 1511 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0f4f17bd9444ca6069d66e474d67915d2c018f6b371bf42cb34e3527e99ae5f7\": not found" containerID="0f4f17bd9444ca6069d66e474d67915d2c018f6b371bf42cb34e3527e99ae5f7" Dec 13 02:15:58.055802 kubelet[1511]: I1213 02:15:58.055710 1511 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0f4f17bd9444ca6069d66e474d67915d2c018f6b371bf42cb34e3527e99ae5f7"} err="failed to get container status \"0f4f17bd9444ca6069d66e474d67915d2c018f6b371bf42cb34e3527e99ae5f7\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"0f4f17bd9444ca6069d66e474d67915d2c018f6b371bf42cb34e3527e99ae5f7\": not found" Dec 13 02:15:58.655854 kubelet[1511]: E1213 02:15:58.655782 1511 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:15:58.821057 kubelet[1511]: I1213 02:15:58.820967 1511 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="370f0b5f-3ee0-43cb-a377-5793d1ec2c18" path="/var/lib/kubelet/pods/370f0b5f-3ee0-43cb-a377-5793d1ec2c18/volumes" Dec 13 02:15:59.500575 kubelet[1511]: I1213 02:15:59.500500 1511 topology_manager.go:215] "Topology Admit Handler" podUID="1f63ec01-84d9-4741-b6bb-af0c69e3ab3e" podNamespace="kube-system" podName="cilium-operator-599987898-75rvm" Dec 13 02:15:59.501068 kubelet[1511]: E1213 02:15:59.501035 1511 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="370f0b5f-3ee0-43cb-a377-5793d1ec2c18" containerName="apply-sysctl-overwrites" Dec 13 02:15:59.501068 kubelet[1511]: E1213 02:15:59.501068 1511 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="370f0b5f-3ee0-43cb-a377-5793d1ec2c18" containerName="clean-cilium-state" Dec 13 02:15:59.501068 kubelet[1511]: E1213 02:15:59.501080 1511 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="370f0b5f-3ee0-43cb-a377-5793d1ec2c18" containerName="mount-cgroup" Dec 13 02:15:59.501349 kubelet[1511]: E1213 02:15:59.501091 1511 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="370f0b5f-3ee0-43cb-a377-5793d1ec2c18" containerName="mount-bpf-fs" Dec 13 02:15:59.501349 kubelet[1511]: E1213 02:15:59.501103 1511 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="370f0b5f-3ee0-43cb-a377-5793d1ec2c18" containerName="cilium-agent" Dec 13 02:15:59.501349 kubelet[1511]: I1213 02:15:59.501136 1511 memory_manager.go:354] "RemoveStaleState removing state" podUID="370f0b5f-3ee0-43cb-a377-5793d1ec2c18" containerName="cilium-agent" Dec 13 02:15:59.509917 systemd[1]: Created slice 
kubepods-besteffort-pod1f63ec01_84d9_4741_b6bb_af0c69e3ab3e.slice. Dec 13 02:15:59.584823 kubelet[1511]: I1213 02:15:59.584754 1511 topology_manager.go:215] "Topology Admit Handler" podUID="e89ec42a-6ff2-472d-b7ae-9c54a293cf91" podNamespace="kube-system" podName="cilium-zfd7d" Dec 13 02:15:59.593285 systemd[1]: Created slice kubepods-burstable-pode89ec42a_6ff2_472d_b7ae_9c54a293cf91.slice. Dec 13 02:15:59.608408 kubelet[1511]: I1213 02:15:59.608361 1511 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e89ec42a-6ff2-472d-b7ae-9c54a293cf91-etc-cni-netd\") pod \"cilium-zfd7d\" (UID: \"e89ec42a-6ff2-472d-b7ae-9c54a293cf91\") " pod="kube-system/cilium-zfd7d" Dec 13 02:15:59.608408 kubelet[1511]: I1213 02:15:59.608413 1511 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e89ec42a-6ff2-472d-b7ae-9c54a293cf91-lib-modules\") pod \"cilium-zfd7d\" (UID: \"e89ec42a-6ff2-472d-b7ae-9c54a293cf91\") " pod="kube-system/cilium-zfd7d" Dec 13 02:15:59.608709 kubelet[1511]: I1213 02:15:59.608444 1511 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e89ec42a-6ff2-472d-b7ae-9c54a293cf91-xtables-lock\") pod \"cilium-zfd7d\" (UID: \"e89ec42a-6ff2-472d-b7ae-9c54a293cf91\") " pod="kube-system/cilium-zfd7d" Dec 13 02:15:59.608709 kubelet[1511]: I1213 02:15:59.608476 1511 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e89ec42a-6ff2-472d-b7ae-9c54a293cf91-cilium-config-path\") pod \"cilium-zfd7d\" (UID: \"e89ec42a-6ff2-472d-b7ae-9c54a293cf91\") " pod="kube-system/cilium-zfd7d" Dec 13 02:15:59.608709 kubelet[1511]: I1213 02:15:59.608504 1511 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e89ec42a-6ff2-472d-b7ae-9c54a293cf91-cilium-ipsec-secrets\") pod \"cilium-zfd7d\" (UID: \"e89ec42a-6ff2-472d-b7ae-9c54a293cf91\") " pod="kube-system/cilium-zfd7d" Dec 13 02:15:59.608709 kubelet[1511]: I1213 02:15:59.608532 1511 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e89ec42a-6ff2-472d-b7ae-9c54a293cf91-cilium-cgroup\") pod \"cilium-zfd7d\" (UID: \"e89ec42a-6ff2-472d-b7ae-9c54a293cf91\") " pod="kube-system/cilium-zfd7d" Dec 13 02:15:59.608709 kubelet[1511]: I1213 02:15:59.608562 1511 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5vhp\" (UniqueName: \"kubernetes.io/projected/e89ec42a-6ff2-472d-b7ae-9c54a293cf91-kube-api-access-j5vhp\") pod \"cilium-zfd7d\" (UID: \"e89ec42a-6ff2-472d-b7ae-9c54a293cf91\") " pod="kube-system/cilium-zfd7d" Dec 13 02:15:59.608709 kubelet[1511]: I1213 02:15:59.608590 1511 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e89ec42a-6ff2-472d-b7ae-9c54a293cf91-hostproc\") pod \"cilium-zfd7d\" (UID: \"e89ec42a-6ff2-472d-b7ae-9c54a293cf91\") " pod="kube-system/cilium-zfd7d" Dec 13 02:15:59.609040 kubelet[1511]: I1213 02:15:59.608614 1511 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e89ec42a-6ff2-472d-b7ae-9c54a293cf91-host-proc-sys-net\") pod \"cilium-zfd7d\" (UID: \"e89ec42a-6ff2-472d-b7ae-9c54a293cf91\") " pod="kube-system/cilium-zfd7d" Dec 13 02:15:59.609040 kubelet[1511]: I1213 02:15:59.608645 1511 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/e89ec42a-6ff2-472d-b7ae-9c54a293cf91-hubble-tls\") pod \"cilium-zfd7d\" (UID: \"e89ec42a-6ff2-472d-b7ae-9c54a293cf91\") " pod="kube-system/cilium-zfd7d" Dec 13 02:15:59.609040 kubelet[1511]: I1213 02:15:59.608674 1511 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e89ec42a-6ff2-472d-b7ae-9c54a293cf91-bpf-maps\") pod \"cilium-zfd7d\" (UID: \"e89ec42a-6ff2-472d-b7ae-9c54a293cf91\") " pod="kube-system/cilium-zfd7d" Dec 13 02:15:59.609040 kubelet[1511]: I1213 02:15:59.608705 1511 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfw8c\" (UniqueName: \"kubernetes.io/projected/1f63ec01-84d9-4741-b6bb-af0c69e3ab3e-kube-api-access-sfw8c\") pod \"cilium-operator-599987898-75rvm\" (UID: \"1f63ec01-84d9-4741-b6bb-af0c69e3ab3e\") " pod="kube-system/cilium-operator-599987898-75rvm" Dec 13 02:15:59.609040 kubelet[1511]: I1213 02:15:59.608739 1511 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e89ec42a-6ff2-472d-b7ae-9c54a293cf91-clustermesh-secrets\") pod \"cilium-zfd7d\" (UID: \"e89ec42a-6ff2-472d-b7ae-9c54a293cf91\") " pod="kube-system/cilium-zfd7d" Dec 13 02:15:59.609317 kubelet[1511]: I1213 02:15:59.608776 1511 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e89ec42a-6ff2-472d-b7ae-9c54a293cf91-host-proc-sys-kernel\") pod \"cilium-zfd7d\" (UID: \"e89ec42a-6ff2-472d-b7ae-9c54a293cf91\") " pod="kube-system/cilium-zfd7d" Dec 13 02:15:59.609317 kubelet[1511]: I1213 02:15:59.608808 1511 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/1f63ec01-84d9-4741-b6bb-af0c69e3ab3e-cilium-config-path\") pod \"cilium-operator-599987898-75rvm\" (UID: \"1f63ec01-84d9-4741-b6bb-af0c69e3ab3e\") " pod="kube-system/cilium-operator-599987898-75rvm" Dec 13 02:15:59.609317 kubelet[1511]: I1213 02:15:59.608836 1511 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e89ec42a-6ff2-472d-b7ae-9c54a293cf91-cni-path\") pod \"cilium-zfd7d\" (UID: \"e89ec42a-6ff2-472d-b7ae-9c54a293cf91\") " pod="kube-system/cilium-zfd7d" Dec 13 02:15:59.609317 kubelet[1511]: I1213 02:15:59.608887 1511 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e89ec42a-6ff2-472d-b7ae-9c54a293cf91-cilium-run\") pod \"cilium-zfd7d\" (UID: \"e89ec42a-6ff2-472d-b7ae-9c54a293cf91\") " pod="kube-system/cilium-zfd7d" Dec 13 02:15:59.656198 kubelet[1511]: E1213 02:15:59.656132 1511 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:15:59.815245 env[1218]: time="2024-12-13T02:15:59.815155141Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-75rvm,Uid:1f63ec01-84d9-4741-b6bb-af0c69e3ab3e,Namespace:kube-system,Attempt:0,}" Dec 13 02:15:59.834916 env[1218]: time="2024-12-13T02:15:59.834813112Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:15:59.834916 env[1218]: time="2024-12-13T02:15:59.834868868Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:15:59.834916 env[1218]: time="2024-12-13T02:15:59.834887979Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:15:59.835547 env[1218]: time="2024-12-13T02:15:59.835487212Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a3173573748e6993885231b2cae094ba5b71cde27bfe965856ac220ad653847d pid=3036 runtime=io.containerd.runc.v2 Dec 13 02:15:59.853375 systemd[1]: Started cri-containerd-a3173573748e6993885231b2cae094ba5b71cde27bfe965856ac220ad653847d.scope. Dec 13 02:15:59.904883 env[1218]: time="2024-12-13T02:15:59.904812137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zfd7d,Uid:e89ec42a-6ff2-472d-b7ae-9c54a293cf91,Namespace:kube-system,Attempt:0,}" Dec 13 02:15:59.923712 env[1218]: time="2024-12-13T02:15:59.923650921Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-75rvm,Uid:1f63ec01-84d9-4741-b6bb-af0c69e3ab3e,Namespace:kube-system,Attempt:0,} returns sandbox id \"a3173573748e6993885231b2cae094ba5b71cde27bfe965856ac220ad653847d\"" Dec 13 02:15:59.925926 env[1218]: time="2024-12-13T02:15:59.925878756Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 02:15:59.935758 env[1218]: time="2024-12-13T02:15:59.935675988Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:15:59.935932 env[1218]: time="2024-12-13T02:15:59.935793994Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:15:59.935932 env[1218]: time="2024-12-13T02:15:59.935837020Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:15:59.936094 env[1218]: time="2024-12-13T02:15:59.936047300Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/535a9e75c5fcf4403426908dafeb6ba5d838a6505ef3d9529ed3e791c2235261 pid=3076 runtime=io.containerd.runc.v2 Dec 13 02:15:59.954854 systemd[1]: Started cri-containerd-535a9e75c5fcf4403426908dafeb6ba5d838a6505ef3d9529ed3e791c2235261.scope. Dec 13 02:15:59.993106 env[1218]: time="2024-12-13T02:15:59.993050489Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zfd7d,Uid:e89ec42a-6ff2-472d-b7ae-9c54a293cf91,Namespace:kube-system,Attempt:0,} returns sandbox id \"535a9e75c5fcf4403426908dafeb6ba5d838a6505ef3d9529ed3e791c2235261\"" Dec 13 02:15:59.997012 env[1218]: time="2024-12-13T02:15:59.996977519Z" level=info msg="CreateContainer within sandbox \"535a9e75c5fcf4403426908dafeb6ba5d838a6505ef3d9529ed3e791c2235261\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 02:16:00.012671 env[1218]: time="2024-12-13T02:16:00.012624891Z" level=info msg="CreateContainer within sandbox \"535a9e75c5fcf4403426908dafeb6ba5d838a6505ef3d9529ed3e791c2235261\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c7147b884de7ae603f62775b9ba4355d697d7e02958c76c3789c5dde5f345293\"" Dec 13 02:16:00.013863 env[1218]: time="2024-12-13T02:16:00.013823212Z" level=info msg="StartContainer for \"c7147b884de7ae603f62775b9ba4355d697d7e02958c76c3789c5dde5f345293\"" Dec 13 02:16:00.040707 systemd[1]: Started cri-containerd-c7147b884de7ae603f62775b9ba4355d697d7e02958c76c3789c5dde5f345293.scope. Dec 13 02:16:00.059940 systemd[1]: cri-containerd-c7147b884de7ae603f62775b9ba4355d697d7e02958c76c3789c5dde5f345293.scope: Deactivated successfully. 
Dec 13 02:16:00.075806 env[1218]: time="2024-12-13T02:16:00.074631419Z" level=info msg="shim disconnected" id=c7147b884de7ae603f62775b9ba4355d697d7e02958c76c3789c5dde5f345293 Dec 13 02:16:00.075806 env[1218]: time="2024-12-13T02:16:00.074705002Z" level=warning msg="cleaning up after shim disconnected" id=c7147b884de7ae603f62775b9ba4355d697d7e02958c76c3789c5dde5f345293 namespace=k8s.io Dec 13 02:16:00.075806 env[1218]: time="2024-12-13T02:16:00.074719262Z" level=info msg="cleaning up dead shim" Dec 13 02:16:00.086929 env[1218]: time="2024-12-13T02:16:00.086840959Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:16:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3137 runtime=io.containerd.runc.v2\ntime=\"2024-12-13T02:16:00Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/c7147b884de7ae603f62775b9ba4355d697d7e02958c76c3789c5dde5f345293/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Dec 13 02:16:00.087364 env[1218]: time="2024-12-13T02:16:00.087228087Z" level=error msg="copy shim log" error="read /proc/self/fd/66: file already closed" Dec 13 02:16:00.088302 env[1218]: time="2024-12-13T02:16:00.088251976Z" level=error msg="Failed to pipe stdout of container \"c7147b884de7ae603f62775b9ba4355d697d7e02958c76c3789c5dde5f345293\"" error="reading from a closed fifo" Dec 13 02:16:00.088522 env[1218]: time="2024-12-13T02:16:00.088472257Z" level=error msg="Failed to pipe stderr of container \"c7147b884de7ae603f62775b9ba4355d697d7e02958c76c3789c5dde5f345293\"" error="reading from a closed fifo" Dec 13 02:16:00.090662 env[1218]: time="2024-12-13T02:16:00.090591050Z" level=error msg="StartContainer for \"c7147b884de7ae603f62775b9ba4355d697d7e02958c76c3789c5dde5f345293\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: 
write /proc/self/attr/keycreate: invalid argument: unknown" Dec 13 02:16:00.090954 kubelet[1511]: E1213 02:16:00.090899 1511 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="c7147b884de7ae603f62775b9ba4355d697d7e02958c76c3789c5dde5f345293" Dec 13 02:16:00.091150 kubelet[1511]: E1213 02:16:00.091124 1511 kuberuntime_manager.go:1256] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Dec 13 02:16:00.091150 kubelet[1511]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Dec 13 02:16:00.091150 kubelet[1511]: rm /hostbin/cilium-mount Dec 13 02:16:00.091346 kubelet[1511]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j5vhp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-zfd7d_kube-system(e89ec42a-6ff2-472d-b7ae-9c54a293cf91): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Dec 13 02:16:00.091346 kubelet[1511]: E1213 02:16:00.091193 1511 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-zfd7d" podUID="e89ec42a-6ff2-472d-b7ae-9c54a293cf91" Dec 13 02:16:00.656883 kubelet[1511]: E1213 02:16:00.656809 1511 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:16:01.040677 env[1218]: time="2024-12-13T02:16:01.040620216Z" level=info msg="CreateContainer within sandbox \"535a9e75c5fcf4403426908dafeb6ba5d838a6505ef3d9529ed3e791c2235261\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Dec 13 02:16:01.057894 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount97139072.mount: Deactivated successfully. Dec 13 02:16:01.067021 env[1218]: time="2024-12-13T02:16:01.066962479Z" level=info msg="CreateContainer within sandbox \"535a9e75c5fcf4403426908dafeb6ba5d838a6505ef3d9529ed3e791c2235261\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"dde139101b15a12634028009433b65f087a1ea05212e5c66fd9da2c4b330a9ad\"" Dec 13 02:16:01.067994 env[1218]: time="2024-12-13T02:16:01.067955327Z" level=info msg="StartContainer for \"dde139101b15a12634028009433b65f087a1ea05212e5c66fd9da2c4b330a9ad\"" Dec 13 02:16:01.092016 systemd[1]: Started cri-containerd-dde139101b15a12634028009433b65f087a1ea05212e5c66fd9da2c4b330a9ad.scope. Dec 13 02:16:01.107849 systemd[1]: cri-containerd-dde139101b15a12634028009433b65f087a1ea05212e5c66fd9da2c4b330a9ad.scope: Deactivated successfully. 
Dec 13 02:16:01.118375 env[1218]: time="2024-12-13T02:16:01.118306637Z" level=info msg="shim disconnected" id=dde139101b15a12634028009433b65f087a1ea05212e5c66fd9da2c4b330a9ad Dec 13 02:16:01.118623 env[1218]: time="2024-12-13T02:16:01.118377668Z" level=warning msg="cleaning up after shim disconnected" id=dde139101b15a12634028009433b65f087a1ea05212e5c66fd9da2c4b330a9ad namespace=k8s.io Dec 13 02:16:01.118623 env[1218]: time="2024-12-13T02:16:01.118391694Z" level=info msg="cleaning up dead shim" Dec 13 02:16:01.132560 env[1218]: time="2024-12-13T02:16:01.132506538Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:16:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3174 runtime=io.containerd.runc.v2\ntime=\"2024-12-13T02:16:01Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/dde139101b15a12634028009433b65f087a1ea05212e5c66fd9da2c4b330a9ad/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Dec 13 02:16:01.133127 env[1218]: time="2024-12-13T02:16:01.133052006Z" level=error msg="copy shim log" error="read /proc/self/fd/89: file already closed" Dec 13 02:16:01.134300 env[1218]: time="2024-12-13T02:16:01.134247210Z" level=error msg="Failed to pipe stdout of container \"dde139101b15a12634028009433b65f087a1ea05212e5c66fd9da2c4b330a9ad\"" error="reading from a closed fifo" Dec 13 02:16:01.134445 env[1218]: time="2024-12-13T02:16:01.134266252Z" level=error msg="Failed to pipe stderr of container \"dde139101b15a12634028009433b65f087a1ea05212e5c66fd9da2c4b330a9ad\"" error="reading from a closed fifo" Dec 13 02:16:01.136539 env[1218]: time="2024-12-13T02:16:01.136476916Z" level=error msg="StartContainer for \"dde139101b15a12634028009433b65f087a1ea05212e5c66fd9da2c4b330a9ad\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: 
write /proc/self/attr/keycreate: invalid argument: unknown" Dec 13 02:16:01.136782 kubelet[1511]: E1213 02:16:01.136712 1511 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="dde139101b15a12634028009433b65f087a1ea05212e5c66fd9da2c4b330a9ad" Dec 13 02:16:01.136933 kubelet[1511]: E1213 02:16:01.136861 1511 kuberuntime_manager.go:1256] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Dec 13 02:16:01.136933 kubelet[1511]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Dec 13 02:16:01.136933 kubelet[1511]: rm /hostbin/cilium-mount Dec 13 02:16:01.136933 kubelet[1511]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j5vhp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-zfd7d_kube-system(e89ec42a-6ff2-472d-b7ae-9c54a293cf91): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Dec 13 02:16:01.136933 kubelet[1511]: E1213 02:16:01.136904 1511 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-zfd7d" podUID="e89ec42a-6ff2-472d-b7ae-9c54a293cf91" Dec 13 02:16:01.657086 kubelet[1511]: E1213 02:16:01.657027 1511 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:16:01.720266 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dde139101b15a12634028009433b65f087a1ea05212e5c66fd9da2c4b330a9ad-rootfs.mount: Deactivated successfully. Dec 13 02:16:01.739485 kubelet[1511]: E1213 02:16:01.739442 1511 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 02:16:02.041327 kubelet[1511]: I1213 02:16:02.041103 1511 scope.go:117] "RemoveContainer" containerID="c7147b884de7ae603f62775b9ba4355d697d7e02958c76c3789c5dde5f345293" Dec 13 02:16:02.041682 env[1218]: time="2024-12-13T02:16:02.041565895Z" level=info msg="StopPodSandbox for \"535a9e75c5fcf4403426908dafeb6ba5d838a6505ef3d9529ed3e791c2235261\"" Dec 13 02:16:02.041682 env[1218]: time="2024-12-13T02:16:02.041647358Z" level=info msg="Container to stop \"c7147b884de7ae603f62775b9ba4355d697d7e02958c76c3789c5dde5f345293\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 02:16:02.046867 env[1218]: time="2024-12-13T02:16:02.041681756Z" level=info msg="Container to stop \"dde139101b15a12634028009433b65f087a1ea05212e5c66fd9da2c4b330a9ad\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 02:16:02.045120 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-535a9e75c5fcf4403426908dafeb6ba5d838a6505ef3d9529ed3e791c2235261-shm.mount: Deactivated 
successfully. Dec 13 02:16:02.049680 env[1218]: time="2024-12-13T02:16:02.049639785Z" level=info msg="RemoveContainer for \"c7147b884de7ae603f62775b9ba4355d697d7e02958c76c3789c5dde5f345293\"" Dec 13 02:16:02.055808 env[1218]: time="2024-12-13T02:16:02.054591551Z" level=info msg="RemoveContainer for \"c7147b884de7ae603f62775b9ba4355d697d7e02958c76c3789c5dde5f345293\" returns successfully" Dec 13 02:16:02.054808 systemd[1]: cri-containerd-535a9e75c5fcf4403426908dafeb6ba5d838a6505ef3d9529ed3e791c2235261.scope: Deactivated successfully. Dec 13 02:16:02.086194 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-535a9e75c5fcf4403426908dafeb6ba5d838a6505ef3d9529ed3e791c2235261-rootfs.mount: Deactivated successfully. Dec 13 02:16:02.091548 env[1218]: time="2024-12-13T02:16:02.091479786Z" level=info msg="shim disconnected" id=535a9e75c5fcf4403426908dafeb6ba5d838a6505ef3d9529ed3e791c2235261 Dec 13 02:16:02.091548 env[1218]: time="2024-12-13T02:16:02.091541651Z" level=warning msg="cleaning up after shim disconnected" id=535a9e75c5fcf4403426908dafeb6ba5d838a6505ef3d9529ed3e791c2235261 namespace=k8s.io Dec 13 02:16:02.091548 env[1218]: time="2024-12-13T02:16:02.091557317Z" level=info msg="cleaning up dead shim" Dec 13 02:16:02.102805 env[1218]: time="2024-12-13T02:16:02.102759183Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:16:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3206 runtime=io.containerd.runc.v2\n" Dec 13 02:16:02.103220 env[1218]: time="2024-12-13T02:16:02.103178538Z" level=info msg="TearDown network for sandbox \"535a9e75c5fcf4403426908dafeb6ba5d838a6505ef3d9529ed3e791c2235261\" successfully" Dec 13 02:16:02.103349 env[1218]: time="2024-12-13T02:16:02.103219196Z" level=info msg="StopPodSandbox for \"535a9e75c5fcf4403426908dafeb6ba5d838a6505ef3d9529ed3e791c2235261\" returns successfully" Dec 13 02:16:02.232396 kubelet[1511]: I1213 02:16:02.232347 1511 reconciler_common.go:161] "operationExecutor.UnmountVolume started 
for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e89ec42a-6ff2-472d-b7ae-9c54a293cf91-etc-cni-netd\") pod \"e89ec42a-6ff2-472d-b7ae-9c54a293cf91\" (UID: \"e89ec42a-6ff2-472d-b7ae-9c54a293cf91\") " Dec 13 02:16:02.232810 kubelet[1511]: I1213 02:16:02.232782 1511 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e89ec42a-6ff2-472d-b7ae-9c54a293cf91-lib-modules\") pod \"e89ec42a-6ff2-472d-b7ae-9c54a293cf91\" (UID: \"e89ec42a-6ff2-472d-b7ae-9c54a293cf91\") " Dec 13 02:16:02.233035 kubelet[1511]: I1213 02:16:02.232826 1511 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e89ec42a-6ff2-472d-b7ae-9c54a293cf91-cilium-cgroup\") pod \"e89ec42a-6ff2-472d-b7ae-9c54a293cf91\" (UID: \"e89ec42a-6ff2-472d-b7ae-9c54a293cf91\") " Dec 13 02:16:02.233035 kubelet[1511]: I1213 02:16:02.232861 1511 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e89ec42a-6ff2-472d-b7ae-9c54a293cf91-clustermesh-secrets\") pod \"e89ec42a-6ff2-472d-b7ae-9c54a293cf91\" (UID: \"e89ec42a-6ff2-472d-b7ae-9c54a293cf91\") " Dec 13 02:16:02.233035 kubelet[1511]: I1213 02:16:02.232888 1511 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e89ec42a-6ff2-472d-b7ae-9c54a293cf91-cni-path\") pod \"e89ec42a-6ff2-472d-b7ae-9c54a293cf91\" (UID: \"e89ec42a-6ff2-472d-b7ae-9c54a293cf91\") " Dec 13 02:16:02.233035 kubelet[1511]: I1213 02:16:02.232920 1511 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j5vhp\" (UniqueName: \"kubernetes.io/projected/e89ec42a-6ff2-472d-b7ae-9c54a293cf91-kube-api-access-j5vhp\") pod \"e89ec42a-6ff2-472d-b7ae-9c54a293cf91\" (UID: \"e89ec42a-6ff2-472d-b7ae-9c54a293cf91\") " Dec 13 02:16:02.233035 
kubelet[1511]: I1213 02:16:02.232947 1511 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e89ec42a-6ff2-472d-b7ae-9c54a293cf91-cilium-config-path\") pod \"e89ec42a-6ff2-472d-b7ae-9c54a293cf91\" (UID: \"e89ec42a-6ff2-472d-b7ae-9c54a293cf91\") " Dec 13 02:16:02.233035 kubelet[1511]: I1213 02:16:02.232978 1511 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e89ec42a-6ff2-472d-b7ae-9c54a293cf91-cilium-ipsec-secrets\") pod \"e89ec42a-6ff2-472d-b7ae-9c54a293cf91\" (UID: \"e89ec42a-6ff2-472d-b7ae-9c54a293cf91\") " Dec 13 02:16:02.233035 kubelet[1511]: I1213 02:16:02.233001 1511 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e89ec42a-6ff2-472d-b7ae-9c54a293cf91-hostproc\") pod \"e89ec42a-6ff2-472d-b7ae-9c54a293cf91\" (UID: \"e89ec42a-6ff2-472d-b7ae-9c54a293cf91\") " Dec 13 02:16:02.233035 kubelet[1511]: I1213 02:16:02.233024 1511 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e89ec42a-6ff2-472d-b7ae-9c54a293cf91-host-proc-sys-net\") pod \"e89ec42a-6ff2-472d-b7ae-9c54a293cf91\" (UID: \"e89ec42a-6ff2-472d-b7ae-9c54a293cf91\") " Dec 13 02:16:02.233528 kubelet[1511]: I1213 02:16:02.233053 1511 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e89ec42a-6ff2-472d-b7ae-9c54a293cf91-cilium-run\") pod \"e89ec42a-6ff2-472d-b7ae-9c54a293cf91\" (UID: \"e89ec42a-6ff2-472d-b7ae-9c54a293cf91\") " Dec 13 02:16:02.233528 kubelet[1511]: I1213 02:16:02.233078 1511 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e89ec42a-6ff2-472d-b7ae-9c54a293cf91-xtables-lock\") pod 
\"e89ec42a-6ff2-472d-b7ae-9c54a293cf91\" (UID: \"e89ec42a-6ff2-472d-b7ae-9c54a293cf91\") " Dec 13 02:16:02.233528 kubelet[1511]: I1213 02:16:02.233108 1511 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e89ec42a-6ff2-472d-b7ae-9c54a293cf91-hubble-tls\") pod \"e89ec42a-6ff2-472d-b7ae-9c54a293cf91\" (UID: \"e89ec42a-6ff2-472d-b7ae-9c54a293cf91\") " Dec 13 02:16:02.233528 kubelet[1511]: I1213 02:16:02.233135 1511 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e89ec42a-6ff2-472d-b7ae-9c54a293cf91-bpf-maps\") pod \"e89ec42a-6ff2-472d-b7ae-9c54a293cf91\" (UID: \"e89ec42a-6ff2-472d-b7ae-9c54a293cf91\") " Dec 13 02:16:02.233528 kubelet[1511]: I1213 02:16:02.233188 1511 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e89ec42a-6ff2-472d-b7ae-9c54a293cf91-host-proc-sys-kernel\") pod \"e89ec42a-6ff2-472d-b7ae-9c54a293cf91\" (UID: \"e89ec42a-6ff2-472d-b7ae-9c54a293cf91\") " Dec 13 02:16:02.233528 kubelet[1511]: I1213 02:16:02.232720 1511 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e89ec42a-6ff2-472d-b7ae-9c54a293cf91-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "e89ec42a-6ff2-472d-b7ae-9c54a293cf91" (UID: "e89ec42a-6ff2-472d-b7ae-9c54a293cf91"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:16:02.233528 kubelet[1511]: I1213 02:16:02.233276 1511 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e89ec42a-6ff2-472d-b7ae-9c54a293cf91-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "e89ec42a-6ff2-472d-b7ae-9c54a293cf91" (UID: "e89ec42a-6ff2-472d-b7ae-9c54a293cf91"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:16:02.233528 kubelet[1511]: I1213 02:16:02.233325 1511 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e89ec42a-6ff2-472d-b7ae-9c54a293cf91-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e89ec42a-6ff2-472d-b7ae-9c54a293cf91" (UID: "e89ec42a-6ff2-472d-b7ae-9c54a293cf91"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:16:02.233528 kubelet[1511]: I1213 02:16:02.233350 1511 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e89ec42a-6ff2-472d-b7ae-9c54a293cf91-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "e89ec42a-6ff2-472d-b7ae-9c54a293cf91" (UID: "e89ec42a-6ff2-472d-b7ae-9c54a293cf91"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:16:02.234016 kubelet[1511]: I1213 02:16:02.233823 1511 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e89ec42a-6ff2-472d-b7ae-9c54a293cf91-hostproc" (OuterVolumeSpecName: "hostproc") pod "e89ec42a-6ff2-472d-b7ae-9c54a293cf91" (UID: "e89ec42a-6ff2-472d-b7ae-9c54a293cf91"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:16:02.234016 kubelet[1511]: I1213 02:16:02.233864 1511 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e89ec42a-6ff2-472d-b7ae-9c54a293cf91-cni-path" (OuterVolumeSpecName: "cni-path") pod "e89ec42a-6ff2-472d-b7ae-9c54a293cf91" (UID: "e89ec42a-6ff2-472d-b7ae-9c54a293cf91"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:16:02.234588 kubelet[1511]: I1213 02:16:02.234555 1511 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e89ec42a-6ff2-472d-b7ae-9c54a293cf91-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "e89ec42a-6ff2-472d-b7ae-9c54a293cf91" (UID: "e89ec42a-6ff2-472d-b7ae-9c54a293cf91"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:16:02.241565 systemd[1]: var-lib-kubelet-pods-e89ec42a\x2d6ff2\x2d472d\x2db7ae\x2d9c54a293cf91-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dj5vhp.mount: Deactivated successfully.
Dec 13 02:16:02.242898 kubelet[1511]: I1213 02:16:02.234731 1511 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e89ec42a-6ff2-472d-b7ae-9c54a293cf91-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "e89ec42a-6ff2-472d-b7ae-9c54a293cf91" (UID: "e89ec42a-6ff2-472d-b7ae-9c54a293cf91"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:16:02.242898 kubelet[1511]: I1213 02:16:02.234779 1511 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e89ec42a-6ff2-472d-b7ae-9c54a293cf91-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "e89ec42a-6ff2-472d-b7ae-9c54a293cf91" (UID: "e89ec42a-6ff2-472d-b7ae-9c54a293cf91"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:16:02.246327 kubelet[1511]: I1213 02:16:02.246292 1511 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e89ec42a-6ff2-472d-b7ae-9c54a293cf91-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e89ec42a-6ff2-472d-b7ae-9c54a293cf91" (UID: "e89ec42a-6ff2-472d-b7ae-9c54a293cf91"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 02:16:02.246683 kubelet[1511]: I1213 02:16:02.246651 1511 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e89ec42a-6ff2-472d-b7ae-9c54a293cf91-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "e89ec42a-6ff2-472d-b7ae-9c54a293cf91" (UID: "e89ec42a-6ff2-472d-b7ae-9c54a293cf91"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:16:02.249767 systemd[1]: var-lib-kubelet-pods-e89ec42a\x2d6ff2\x2d472d\x2db7ae\x2d9c54a293cf91-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Dec 13 02:16:02.251912 kubelet[1511]: I1213 02:16:02.251874 1511 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e89ec42a-6ff2-472d-b7ae-9c54a293cf91-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "e89ec42a-6ff2-472d-b7ae-9c54a293cf91" (UID: "e89ec42a-6ff2-472d-b7ae-9c54a293cf91"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 02:16:02.252594 kubelet[1511]: I1213 02:16:02.252554 1511 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e89ec42a-6ff2-472d-b7ae-9c54a293cf91-kube-api-access-j5vhp" (OuterVolumeSpecName: "kube-api-access-j5vhp") pod "e89ec42a-6ff2-472d-b7ae-9c54a293cf91" (UID: "e89ec42a-6ff2-472d-b7ae-9c54a293cf91"). InnerVolumeSpecName "kube-api-access-j5vhp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 02:16:02.254919 kubelet[1511]: I1213 02:16:02.254874 1511 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e89ec42a-6ff2-472d-b7ae-9c54a293cf91-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "e89ec42a-6ff2-472d-b7ae-9c54a293cf91" (UID: "e89ec42a-6ff2-472d-b7ae-9c54a293cf91"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 02:16:02.255734 kubelet[1511]: I1213 02:16:02.255677 1511 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e89ec42a-6ff2-472d-b7ae-9c54a293cf91-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "e89ec42a-6ff2-472d-b7ae-9c54a293cf91" (UID: "e89ec42a-6ff2-472d-b7ae-9c54a293cf91"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 02:16:02.334430 kubelet[1511]: I1213 02:16:02.334110 1511 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e89ec42a-6ff2-472d-b7ae-9c54a293cf91-cilium-config-path\") on node \"10.128.0.35\" DevicePath \"\""
Dec 13 02:16:02.334430 kubelet[1511]: I1213 02:16:02.334196 1511 reconciler_common.go:289] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e89ec42a-6ff2-472d-b7ae-9c54a293cf91-cilium-ipsec-secrets\") on node \"10.128.0.35\" DevicePath \"\""
Dec 13 02:16:02.334430 kubelet[1511]: I1213 02:16:02.334213 1511 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e89ec42a-6ff2-472d-b7ae-9c54a293cf91-hostproc\") on node \"10.128.0.35\" DevicePath \"\""
Dec 13 02:16:02.334430 kubelet[1511]: I1213 02:16:02.334227 1511 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e89ec42a-6ff2-472d-b7ae-9c54a293cf91-host-proc-sys-net\") on node \"10.128.0.35\" DevicePath \"\""
Dec 13 02:16:02.334430 kubelet[1511]: I1213 02:16:02.334243 1511 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e89ec42a-6ff2-472d-b7ae-9c54a293cf91-cilium-run\") on node \"10.128.0.35\" DevicePath \"\""
Dec 13 02:16:02.334430 kubelet[1511]: I1213 02:16:02.334256 1511 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e89ec42a-6ff2-472d-b7ae-9c54a293cf91-xtables-lock\") on node \"10.128.0.35\" DevicePath \"\""
Dec 13 02:16:02.334430 kubelet[1511]: I1213 02:16:02.334269 1511 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e89ec42a-6ff2-472d-b7ae-9c54a293cf91-host-proc-sys-kernel\") on node \"10.128.0.35\" DevicePath \"\""
Dec 13 02:16:02.334430 kubelet[1511]: I1213 02:16:02.334284 1511 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e89ec42a-6ff2-472d-b7ae-9c54a293cf91-hubble-tls\") on node \"10.128.0.35\" DevicePath \"\""
Dec 13 02:16:02.334430 kubelet[1511]: I1213 02:16:02.334296 1511 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e89ec42a-6ff2-472d-b7ae-9c54a293cf91-bpf-maps\") on node \"10.128.0.35\" DevicePath \"\""
Dec 13 02:16:02.334430 kubelet[1511]: I1213 02:16:02.334309 1511 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e89ec42a-6ff2-472d-b7ae-9c54a293cf91-cilium-cgroup\") on node \"10.128.0.35\" DevicePath \"\""
Dec 13 02:16:02.334430 kubelet[1511]: I1213 02:16:02.334322 1511 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e89ec42a-6ff2-472d-b7ae-9c54a293cf91-etc-cni-netd\") on node \"10.128.0.35\" DevicePath \"\""
Dec 13 02:16:02.334430 kubelet[1511]: I1213 02:16:02.334335 1511 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e89ec42a-6ff2-472d-b7ae-9c54a293cf91-lib-modules\") on node \"10.128.0.35\" DevicePath \"\""
Dec 13 02:16:02.334430 kubelet[1511]: I1213 02:16:02.334348 1511 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e89ec42a-6ff2-472d-b7ae-9c54a293cf91-cni-path\") on node \"10.128.0.35\" DevicePath \"\""
Dec 13 02:16:02.334430 kubelet[1511]: I1213 02:16:02.334361 1511 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e89ec42a-6ff2-472d-b7ae-9c54a293cf91-clustermesh-secrets\") on node \"10.128.0.35\" DevicePath \"\""
Dec 13 02:16:02.334430 kubelet[1511]: I1213 02:16:02.334378 1511 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-j5vhp\" (UniqueName: \"kubernetes.io/projected/e89ec42a-6ff2-472d-b7ae-9c54a293cf91-kube-api-access-j5vhp\") on node \"10.128.0.35\" DevicePath \"\""
Dec 13 02:16:02.658352 kubelet[1511]: E1213 02:16:02.658197 1511 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:16:02.720204 systemd[1]: var-lib-kubelet-pods-e89ec42a\x2d6ff2\x2d472d\x2db7ae\x2d9c54a293cf91-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Dec 13 02:16:02.720354 systemd[1]: var-lib-kubelet-pods-e89ec42a\x2d6ff2\x2d472d\x2db7ae\x2d9c54a293cf91-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Dec 13 02:16:02.822942 systemd[1]: Removed slice kubepods-burstable-pode89ec42a_6ff2_472d_b7ae_9c54a293cf91.slice.
Dec 13 02:16:03.045361 kubelet[1511]: I1213 02:16:03.045313 1511 scope.go:117] "RemoveContainer" containerID="dde139101b15a12634028009433b65f087a1ea05212e5c66fd9da2c4b330a9ad"
Dec 13 02:16:03.047802 env[1218]: time="2024-12-13T02:16:03.047749652Z" level=info msg="RemoveContainer for \"dde139101b15a12634028009433b65f087a1ea05212e5c66fd9da2c4b330a9ad\""
Dec 13 02:16:03.052334 env[1218]: time="2024-12-13T02:16:03.052278108Z" level=info msg="RemoveContainer for \"dde139101b15a12634028009433b65f087a1ea05212e5c66fd9da2c4b330a9ad\" returns successfully"
Dec 13 02:16:03.100693 kubelet[1511]: I1213 02:16:03.100612 1511 topology_manager.go:215] "Topology Admit Handler" podUID="29a0fe7d-cfc7-4d40-9661-31acb5e170c9" podNamespace="kube-system" podName="cilium-p2wzd"
Dec 13 02:16:03.100693 kubelet[1511]: E1213 02:16:03.100692 1511 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e89ec42a-6ff2-472d-b7ae-9c54a293cf91" containerName="mount-cgroup"
Dec 13 02:16:03.100693 kubelet[1511]: E1213 02:16:03.100707 1511 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e89ec42a-6ff2-472d-b7ae-9c54a293cf91" containerName="mount-cgroup"
Dec 13 02:16:03.101025 kubelet[1511]: I1213 02:16:03.100737 1511 memory_manager.go:354] "RemoveStaleState removing state" podUID="e89ec42a-6ff2-472d-b7ae-9c54a293cf91" containerName="mount-cgroup"
Dec 13 02:16:03.101025 kubelet[1511]: I1213 02:16:03.100748 1511 memory_manager.go:354] "RemoveStaleState removing state" podUID="e89ec42a-6ff2-472d-b7ae-9c54a293cf91" containerName="mount-cgroup"
Dec 13 02:16:03.108900 systemd[1]: Created slice kubepods-burstable-pod29a0fe7d_cfc7_4d40_9661_31acb5e170c9.slice.
Dec 13 02:16:03.139643 kubelet[1511]: I1213 02:16:03.139603 1511 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/29a0fe7d-cfc7-4d40-9661-31acb5e170c9-bpf-maps\") pod \"cilium-p2wzd\" (UID: \"29a0fe7d-cfc7-4d40-9661-31acb5e170c9\") " pod="kube-system/cilium-p2wzd"
Dec 13 02:16:03.139957 kubelet[1511]: I1213 02:16:03.139912 1511 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/29a0fe7d-cfc7-4d40-9661-31acb5e170c9-etc-cni-netd\") pod \"cilium-p2wzd\" (UID: \"29a0fe7d-cfc7-4d40-9661-31acb5e170c9\") " pod="kube-system/cilium-p2wzd"
Dec 13 02:16:03.140076 kubelet[1511]: I1213 02:16:03.139958 1511 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7pp8\" (UniqueName: \"kubernetes.io/projected/29a0fe7d-cfc7-4d40-9661-31acb5e170c9-kube-api-access-v7pp8\") pod \"cilium-p2wzd\" (UID: \"29a0fe7d-cfc7-4d40-9661-31acb5e170c9\") " pod="kube-system/cilium-p2wzd"
Dec 13 02:16:03.140076 kubelet[1511]: I1213 02:16:03.139994 1511 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/29a0fe7d-cfc7-4d40-9661-31acb5e170c9-hostproc\") pod \"cilium-p2wzd\" (UID: \"29a0fe7d-cfc7-4d40-9661-31acb5e170c9\") " pod="kube-system/cilium-p2wzd"
Dec 13 02:16:03.140076 kubelet[1511]: I1213 02:16:03.140018 1511 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/29a0fe7d-cfc7-4d40-9661-31acb5e170c9-cilium-config-path\") pod \"cilium-p2wzd\" (UID: \"29a0fe7d-cfc7-4d40-9661-31acb5e170c9\") " pod="kube-system/cilium-p2wzd"
Dec 13 02:16:03.140076 kubelet[1511]: I1213 02:16:03.140065 1511 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/29a0fe7d-cfc7-4d40-9661-31acb5e170c9-hubble-tls\") pod \"cilium-p2wzd\" (UID: \"29a0fe7d-cfc7-4d40-9661-31acb5e170c9\") " pod="kube-system/cilium-p2wzd"
Dec 13 02:16:03.140346 kubelet[1511]: I1213 02:16:03.140092 1511 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/29a0fe7d-cfc7-4d40-9661-31acb5e170c9-xtables-lock\") pod \"cilium-p2wzd\" (UID: \"29a0fe7d-cfc7-4d40-9661-31acb5e170c9\") " pod="kube-system/cilium-p2wzd"
Dec 13 02:16:03.140346 kubelet[1511]: I1213 02:16:03.140120 1511 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/29a0fe7d-cfc7-4d40-9661-31acb5e170c9-clustermesh-secrets\") pod \"cilium-p2wzd\" (UID: \"29a0fe7d-cfc7-4d40-9661-31acb5e170c9\") " pod="kube-system/cilium-p2wzd"
Dec 13 02:16:03.140346 kubelet[1511]: I1213 02:16:03.140149 1511 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/29a0fe7d-cfc7-4d40-9661-31acb5e170c9-cni-path\") pod \"cilium-p2wzd\" (UID: \"29a0fe7d-cfc7-4d40-9661-31acb5e170c9\") " pod="kube-system/cilium-p2wzd"
Dec 13 02:16:03.140346 kubelet[1511]: I1213 02:16:03.140228 1511 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/29a0fe7d-cfc7-4d40-9661-31acb5e170c9-host-proc-sys-kernel\") pod \"cilium-p2wzd\" (UID: \"29a0fe7d-cfc7-4d40-9661-31acb5e170c9\") " pod="kube-system/cilium-p2wzd"
Dec 13 02:16:03.140346 kubelet[1511]: I1213 02:16:03.140258 1511 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/29a0fe7d-cfc7-4d40-9661-31acb5e170c9-host-proc-sys-net\") pod \"cilium-p2wzd\" (UID: \"29a0fe7d-cfc7-4d40-9661-31acb5e170c9\") " pod="kube-system/cilium-p2wzd"
Dec 13 02:16:03.140346 kubelet[1511]: I1213 02:16:03.140286 1511 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/29a0fe7d-cfc7-4d40-9661-31acb5e170c9-cilium-run\") pod \"cilium-p2wzd\" (UID: \"29a0fe7d-cfc7-4d40-9661-31acb5e170c9\") " pod="kube-system/cilium-p2wzd"
Dec 13 02:16:03.140346 kubelet[1511]: I1213 02:16:03.140314 1511 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/29a0fe7d-cfc7-4d40-9661-31acb5e170c9-cilium-cgroup\") pod \"cilium-p2wzd\" (UID: \"29a0fe7d-cfc7-4d40-9661-31acb5e170c9\") " pod="kube-system/cilium-p2wzd"
Dec 13 02:16:03.140346 kubelet[1511]: I1213 02:16:03.140341 1511 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/29a0fe7d-cfc7-4d40-9661-31acb5e170c9-lib-modules\") pod \"cilium-p2wzd\" (UID: \"29a0fe7d-cfc7-4d40-9661-31acb5e170c9\") " pod="kube-system/cilium-p2wzd"
Dec 13 02:16:03.140780 kubelet[1511]: I1213 02:16:03.140371 1511 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/29a0fe7d-cfc7-4d40-9661-31acb5e170c9-cilium-ipsec-secrets\") pod \"cilium-p2wzd\" (UID: \"29a0fe7d-cfc7-4d40-9661-31acb5e170c9\") " pod="kube-system/cilium-p2wzd"
Dec 13 02:16:03.180512 kubelet[1511]: W1213 02:16:03.180410 1511 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode89ec42a_6ff2_472d_b7ae_9c54a293cf91.slice/cri-containerd-c7147b884de7ae603f62775b9ba4355d697d7e02958c76c3789c5dde5f345293.scope WatchSource:0}: container "c7147b884de7ae603f62775b9ba4355d697d7e02958c76c3789c5dde5f345293" in namespace "k8s.io": not found
Dec 13 02:16:03.418181 env[1218]: time="2024-12-13T02:16:03.416844589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-p2wzd,Uid:29a0fe7d-cfc7-4d40-9661-31acb5e170c9,Namespace:kube-system,Attempt:0,}"
Dec 13 02:16:03.441241 env[1218]: time="2024-12-13T02:16:03.441105132Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 02:16:03.441241 env[1218]: time="2024-12-13T02:16:03.441183855Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 02:16:03.441241 env[1218]: time="2024-12-13T02:16:03.441204066Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 02:16:03.441846 env[1218]: time="2024-12-13T02:16:03.441766659Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f02a4cbb16a4e3f75ca01fcd2461089a3e189a8f1a9f27a818dad75dbcd68785 pid=3234 runtime=io.containerd.runc.v2
Dec 13 02:16:03.459058 systemd[1]: Started cri-containerd-f02a4cbb16a4e3f75ca01fcd2461089a3e189a8f1a9f27a818dad75dbcd68785.scope.
Dec 13 02:16:03.495899 env[1218]: time="2024-12-13T02:16:03.495833479Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-p2wzd,Uid:29a0fe7d-cfc7-4d40-9661-31acb5e170c9,Namespace:kube-system,Attempt:0,} returns sandbox id \"f02a4cbb16a4e3f75ca01fcd2461089a3e189a8f1a9f27a818dad75dbcd68785\""
Dec 13 02:16:03.500054 env[1218]: time="2024-12-13T02:16:03.499999475Z" level=info msg="CreateContainer within sandbox \"f02a4cbb16a4e3f75ca01fcd2461089a3e189a8f1a9f27a818dad75dbcd68785\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 02:16:03.514882 env[1218]: time="2024-12-13T02:16:03.514369099Z" level=info msg="CreateContainer within sandbox \"f02a4cbb16a4e3f75ca01fcd2461089a3e189a8f1a9f27a818dad75dbcd68785\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"944ad2b63be786e1ed07a96be5193b7c327e2a7ace005036285985181e8e89e7\""
Dec 13 02:16:03.519140 env[1218]: time="2024-12-13T02:16:03.515758450Z" level=info msg="StartContainer for \"944ad2b63be786e1ed07a96be5193b7c327e2a7ace005036285985181e8e89e7\""
Dec 13 02:16:03.543817 systemd[1]: Started cri-containerd-944ad2b63be786e1ed07a96be5193b7c327e2a7ace005036285985181e8e89e7.scope.
Dec 13 02:16:03.585205 env[1218]: time="2024-12-13T02:16:03.583609009Z" level=info msg="StartContainer for \"944ad2b63be786e1ed07a96be5193b7c327e2a7ace005036285985181e8e89e7\" returns successfully"
Dec 13 02:16:03.594575 systemd[1]: cri-containerd-944ad2b63be786e1ed07a96be5193b7c327e2a7ace005036285985181e8e89e7.scope: Deactivated successfully.
Dec 13 02:16:03.626761 env[1218]: time="2024-12-13T02:16:03.626696190Z" level=info msg="shim disconnected" id=944ad2b63be786e1ed07a96be5193b7c327e2a7ace005036285985181e8e89e7
Dec 13 02:16:03.626761 env[1218]: time="2024-12-13T02:16:03.626762632Z" level=warning msg="cleaning up after shim disconnected" id=944ad2b63be786e1ed07a96be5193b7c327e2a7ace005036285985181e8e89e7 namespace=k8s.io
Dec 13 02:16:03.627111 env[1218]: time="2024-12-13T02:16:03.626778024Z" level=info msg="cleaning up dead shim"
Dec 13 02:16:03.637960 env[1218]: time="2024-12-13T02:16:03.637902948Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:16:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3315 runtime=io.containerd.runc.v2\n"
Dec 13 02:16:03.659383 kubelet[1511]: E1213 02:16:03.659329 1511 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:16:04.052272 env[1218]: time="2024-12-13T02:16:04.052204331Z" level=info msg="CreateContainer within sandbox \"f02a4cbb16a4e3f75ca01fcd2461089a3e189a8f1a9f27a818dad75dbcd68785\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 13 02:16:04.070612 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4027397068.mount: Deactivated successfully.
Dec 13 02:16:04.079540 env[1218]: time="2024-12-13T02:16:04.079486979Z" level=info msg="CreateContainer within sandbox \"f02a4cbb16a4e3f75ca01fcd2461089a3e189a8f1a9f27a818dad75dbcd68785\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a68f8c69eb4ce9d50ed1c17292a55fdf0c3598eed0f38900f995bfdecf1d7850\""
Dec 13 02:16:04.080383 env[1218]: time="2024-12-13T02:16:04.080314109Z" level=info msg="StartContainer for \"a68f8c69eb4ce9d50ed1c17292a55fdf0c3598eed0f38900f995bfdecf1d7850\""
Dec 13 02:16:04.106627 systemd[1]: Started cri-containerd-a68f8c69eb4ce9d50ed1c17292a55fdf0c3598eed0f38900f995bfdecf1d7850.scope.
Dec 13 02:16:04.147204 env[1218]: time="2024-12-13T02:16:04.147134118Z" level=info msg="StartContainer for \"a68f8c69eb4ce9d50ed1c17292a55fdf0c3598eed0f38900f995bfdecf1d7850\" returns successfully"
Dec 13 02:16:04.155510 systemd[1]: cri-containerd-a68f8c69eb4ce9d50ed1c17292a55fdf0c3598eed0f38900f995bfdecf1d7850.scope: Deactivated successfully.
Dec 13 02:16:04.185271 env[1218]: time="2024-12-13T02:16:04.185210297Z" level=info msg="shim disconnected" id=a68f8c69eb4ce9d50ed1c17292a55fdf0c3598eed0f38900f995bfdecf1d7850
Dec 13 02:16:04.185271 env[1218]: time="2024-12-13T02:16:04.185271801Z" level=warning msg="cleaning up after shim disconnected" id=a68f8c69eb4ce9d50ed1c17292a55fdf0c3598eed0f38900f995bfdecf1d7850 namespace=k8s.io
Dec 13 02:16:04.185632 env[1218]: time="2024-12-13T02:16:04.185294516Z" level=info msg="cleaning up dead shim"
Dec 13 02:16:04.196529 env[1218]: time="2024-12-13T02:16:04.196468315Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:16:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3381 runtime=io.containerd.runc.v2\n"
Dec 13 02:16:04.659774 kubelet[1511]: E1213 02:16:04.659704 1511 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:16:04.720582 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a68f8c69eb4ce9d50ed1c17292a55fdf0c3598eed0f38900f995bfdecf1d7850-rootfs.mount: Deactivated successfully.
Dec 13 02:16:04.819097 kubelet[1511]: I1213 02:16:04.819033 1511 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e89ec42a-6ff2-472d-b7ae-9c54a293cf91" path="/var/lib/kubelet/pods/e89ec42a-6ff2-472d-b7ae-9c54a293cf91/volumes"
Dec 13 02:16:05.026810 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1365270940.mount: Deactivated successfully.
Dec 13 02:16:05.070561 env[1218]: time="2024-12-13T02:16:05.070509338Z" level=info msg="CreateContainer within sandbox \"f02a4cbb16a4e3f75ca01fcd2461089a3e189a8f1a9f27a818dad75dbcd68785\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 13 02:16:05.101040 env[1218]: time="2024-12-13T02:16:05.100970832Z" level=info msg="CreateContainer within sandbox \"f02a4cbb16a4e3f75ca01fcd2461089a3e189a8f1a9f27a818dad75dbcd68785\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6a9b5ab84accfc53898f76b2260d8fd7043775d36cb351d559ddaa4f21e3b6b3\""
Dec 13 02:16:05.102490 env[1218]: time="2024-12-13T02:16:05.102456112Z" level=info msg="StartContainer for \"6a9b5ab84accfc53898f76b2260d8fd7043775d36cb351d559ddaa4f21e3b6b3\""
Dec 13 02:16:05.129887 systemd[1]: Started cri-containerd-6a9b5ab84accfc53898f76b2260d8fd7043775d36cb351d559ddaa4f21e3b6b3.scope.
Dec 13 02:16:05.194170 systemd[1]: cri-containerd-6a9b5ab84accfc53898f76b2260d8fd7043775d36cb351d559ddaa4f21e3b6b3.scope: Deactivated successfully.
Dec 13 02:16:05.198484 env[1218]: time="2024-12-13T02:16:05.198296194Z" level=info msg="StartContainer for \"6a9b5ab84accfc53898f76b2260d8fd7043775d36cb351d559ddaa4f21e3b6b3\" returns successfully"
Dec 13 02:16:05.282219 env[1218]: time="2024-12-13T02:16:05.281822398Z" level=info msg="shim disconnected" id=6a9b5ab84accfc53898f76b2260d8fd7043775d36cb351d559ddaa4f21e3b6b3
Dec 13 02:16:05.282219 env[1218]: time="2024-12-13T02:16:05.281919016Z" level=warning msg="cleaning up after shim disconnected" id=6a9b5ab84accfc53898f76b2260d8fd7043775d36cb351d559ddaa4f21e3b6b3 namespace=k8s.io
Dec 13 02:16:05.282219 env[1218]: time="2024-12-13T02:16:05.281952659Z" level=info msg="cleaning up dead shim"
Dec 13 02:16:05.294529 env[1218]: time="2024-12-13T02:16:05.294482821Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:16:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3440 runtime=io.containerd.runc.v2\n"
Dec 13 02:16:05.660414 kubelet[1511]: E1213 02:16:05.660272 1511 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:16:05.721040 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1122318443.mount: Deactivated successfully.
Dec 13 02:16:06.053752 env[1218]: time="2024-12-13T02:16:06.053677497Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:16:06.056031 env[1218]: time="2024-12-13T02:16:06.055985777Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:16:06.058275 env[1218]: time="2024-12-13T02:16:06.058226574Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:16:06.058983 env[1218]: time="2024-12-13T02:16:06.058938341Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Dec 13 02:16:06.063015 env[1218]: time="2024-12-13T02:16:06.062964487Z" level=info msg="CreateContainer within sandbox \"a3173573748e6993885231b2cae094ba5b71cde27bfe965856ac220ad653847d\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Dec 13 02:16:06.085322 env[1218]: time="2024-12-13T02:16:06.085269927Z" level=info msg="CreateContainer within sandbox \"a3173573748e6993885231b2cae094ba5b71cde27bfe965856ac220ad653847d\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"75c4f9da2ddd9bf4dcc622075f4b4ce3a460f9b4e389f6028dab290d14912e18\""
Dec 13 02:16:06.086396 env[1218]: time="2024-12-13T02:16:06.086145622Z" level=info msg="StartContainer for \"75c4f9da2ddd9bf4dcc622075f4b4ce3a460f9b4e389f6028dab290d14912e18\""
Dec 13 02:16:06.092149 env[1218]: time="2024-12-13T02:16:06.092102770Z" level=info msg="CreateContainer within sandbox \"f02a4cbb16a4e3f75ca01fcd2461089a3e189a8f1a9f27a818dad75dbcd68785\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 02:16:06.122682 env[1218]: time="2024-12-13T02:16:06.122634900Z" level=info msg="CreateContainer within sandbox \"f02a4cbb16a4e3f75ca01fcd2461089a3e189a8f1a9f27a818dad75dbcd68785\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5d10544ec3b47b40044be00b1deaf7595d2265a53c6337f907cffe8588877c95\""
Dec 13 02:16:06.124269 env[1218]: time="2024-12-13T02:16:06.124227644Z" level=info msg="StartContainer for \"5d10544ec3b47b40044be00b1deaf7595d2265a53c6337f907cffe8588877c95\""
Dec 13 02:16:06.132544 systemd[1]: Started cri-containerd-75c4f9da2ddd9bf4dcc622075f4b4ce3a460f9b4e389f6028dab290d14912e18.scope.
Dec 13 02:16:06.162346 systemd[1]: Started cri-containerd-5d10544ec3b47b40044be00b1deaf7595d2265a53c6337f907cffe8588877c95.scope.
Dec 13 02:16:06.205434 env[1218]: time="2024-12-13T02:16:06.205380518Z" level=info msg="StartContainer for \"75c4f9da2ddd9bf4dcc622075f4b4ce3a460f9b4e389f6028dab290d14912e18\" returns successfully"
Dec 13 02:16:06.224024 systemd[1]: cri-containerd-5d10544ec3b47b40044be00b1deaf7595d2265a53c6337f907cffe8588877c95.scope: Deactivated successfully.
Dec 13 02:16:06.225587 env[1218]: time="2024-12-13T02:16:06.225349023Z" level=info msg="StartContainer for \"5d10544ec3b47b40044be00b1deaf7595d2265a53c6337f907cffe8588877c95\" returns successfully"
Dec 13 02:16:06.416035 env[1218]: time="2024-12-13T02:16:06.415890213Z" level=info msg="shim disconnected" id=5d10544ec3b47b40044be00b1deaf7595d2265a53c6337f907cffe8588877c95
Dec 13 02:16:06.416387 env[1218]: time="2024-12-13T02:16:06.416352308Z" level=warning msg="cleaning up after shim disconnected" id=5d10544ec3b47b40044be00b1deaf7595d2265a53c6337f907cffe8588877c95 namespace=k8s.io
Dec 13 02:16:06.416522 env[1218]: time="2024-12-13T02:16:06.416500163Z" level=info msg="cleaning up dead shim"
Dec 13 02:16:06.440183 env[1218]: time="2024-12-13T02:16:06.440116718Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:16:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3539 runtime=io.containerd.runc.v2\n"
Dec 13 02:16:06.661183 kubelet[1511]: E1213 02:16:06.661095 1511 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:16:06.719936 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1857032569.mount: Deactivated successfully.
Dec 13 02:16:06.740047 kubelet[1511]: E1213 02:16:06.739992 1511 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 13 02:16:07.099398 env[1218]: time="2024-12-13T02:16:07.099344006Z" level=info msg="CreateContainer within sandbox \"f02a4cbb16a4e3f75ca01fcd2461089a3e189a8f1a9f27a818dad75dbcd68785\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 02:16:07.128539 env[1218]: time="2024-12-13T02:16:07.128478598Z" level=info msg="CreateContainer within sandbox \"f02a4cbb16a4e3f75ca01fcd2461089a3e189a8f1a9f27a818dad75dbcd68785\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"31eae6d5fc8b1792bec095fba77e126591e76fc9774b6843d70da63613a82d8f\""
Dec 13 02:16:07.129345 env[1218]: time="2024-12-13T02:16:07.129306159Z" level=info msg="StartContainer for \"31eae6d5fc8b1792bec095fba77e126591e76fc9774b6843d70da63613a82d8f\""
Dec 13 02:16:07.158714 kubelet[1511]: I1213 02:16:07.158629 1511 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-75rvm" podStartSLOduration=2.023338095 podStartE2EDuration="8.158602653s" podCreationTimestamp="2024-12-13 02:15:59 +0000 UTC" firstStartedPulling="2024-12-13 02:15:59.925386352 +0000 UTC m=+64.094153925" lastFinishedPulling="2024-12-13 02:16:06.060650901 +0000 UTC m=+70.229418483" observedRunningTime="2024-12-13 02:16:07.1095258 +0000 UTC m=+71.278293380" watchObservedRunningTime="2024-12-13 02:16:07.158602653 +0000 UTC m=+71.327370246"
Dec 13 02:16:07.164608 systemd[1]: Started cri-containerd-31eae6d5fc8b1792bec095fba77e126591e76fc9774b6843d70da63613a82d8f.scope.
Dec 13 02:16:07.211073 env[1218]: time="2024-12-13T02:16:07.211020004Z" level=info msg="StartContainer for \"31eae6d5fc8b1792bec095fba77e126591e76fc9774b6843d70da63613a82d8f\" returns successfully"
Dec 13 02:16:07.636224 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Dec 13 02:16:07.665148 kubelet[1511]: E1213 02:16:07.661511 1511 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:16:08.130274 kubelet[1511]: I1213 02:16:08.130188 1511 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-p2wzd" podStartSLOduration=5.130123103 podStartE2EDuration="5.130123103s" podCreationTimestamp="2024-12-13 02:16:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:16:08.128958804 +0000 UTC m=+72.297726395" watchObservedRunningTime="2024-12-13 02:16:08.130123103 +0000 UTC m=+72.298890696"
Dec 13 02:16:08.662762 kubelet[1511]: E1213 02:16:08.662689 1511 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:16:09.663603 kubelet[1511]: E1213 02:16:09.663467 1511 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:16:10.663893 kubelet[1511]: E1213 02:16:10.663844 1511 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:16:10.673524 systemd-networkd[1028]: lxc_health: Link UP
Dec 13 02:16:10.689188 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Dec 13 02:16:10.689594 systemd-networkd[1028]: lxc_health: Gained carrier
Dec 13 02:16:11.665798 kubelet[1511]: E1213 02:16:11.665742 1511 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:16:11.994061 systemd-networkd[1028]: lxc_health: Gained IPv6LL
Dec 13 02:16:12.666415 kubelet[1511]: E1213 02:16:12.666366 1511 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:16:13.667726 kubelet[1511]: E1213 02:16:13.667641 1511 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:16:14.668320 kubelet[1511]: E1213 02:16:14.668262 1511 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:16:15.669380 kubelet[1511]: E1213 02:16:15.669337 1511 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:16:16.608847 kubelet[1511]: E1213 02:16:16.608773 1511 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:16:16.671035 kubelet[1511]: E1213 02:16:16.670969 1511 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:16:17.671590 kubelet[1511]: E1213 02:16:17.671517 1511 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:16:18.671981 kubelet[1511]: E1213 02:16:18.671910 1511 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:16:19.672400 kubelet[1511]: E1213 02:16:19.672311 1511 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"