Dec 13 14:26:49.147552 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Dec 13 12:55:10 -00 2024 Dec 13 14:26:49.147595 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e Dec 13 14:26:49.147613 kernel: BIOS-provided physical RAM map: Dec 13 14:26:49.147627 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved Dec 13 14:26:49.147640 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable Dec 13 14:26:49.147653 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved Dec 13 14:26:49.147673 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable Dec 13 14:26:49.147687 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved Dec 13 14:26:49.147700 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bd276fff] usable Dec 13 14:26:49.147715 kernel: BIOS-e820: [mem 0x00000000bd277000-0x00000000bd280fff] ACPI data Dec 13 14:26:49.147728 kernel: BIOS-e820: [mem 0x00000000bd281000-0x00000000bf8ecfff] usable Dec 13 14:26:49.147741 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bfb6cfff] reserved Dec 13 14:26:49.147755 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data Dec 13 14:26:49.147769 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS Dec 13 14:26:49.147791 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable Dec 13 14:26:49.147806 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved Dec 13 14:26:49.147820 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] 
usable Dec 13 14:26:49.147835 kernel: NX (Execute Disable) protection: active Dec 13 14:26:49.147849 kernel: efi: EFI v2.70 by EDK II Dec 13 14:26:49.147865 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000 RNG=0xbfb73018 TPMEventLog=0xbd277018 Dec 13 14:26:49.147879 kernel: random: crng init done Dec 13 14:26:49.147903 kernel: SMBIOS 2.4 present. Dec 13 14:26:49.147923 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024 Dec 13 14:26:49.147938 kernel: Hypervisor detected: KVM Dec 13 14:26:49.147952 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Dec 13 14:26:49.147982 kernel: kvm-clock: cpu 0, msr 1af19a001, primary cpu clock Dec 13 14:26:49.147996 kernel: kvm-clock: using sched offset of 13005446069 cycles Dec 13 14:26:49.148012 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Dec 13 14:26:49.148027 kernel: tsc: Detected 2299.998 MHz processor Dec 13 14:26:49.148042 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Dec 13 14:26:49.148058 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Dec 13 14:26:49.148073 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000 Dec 13 14:26:49.148093 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Dec 13 14:26:49.148108 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000 Dec 13 14:26:49.148123 kernel: Using GB pages for direct mapping Dec 13 14:26:49.148137 kernel: Secure boot disabled Dec 13 14:26:49.148152 kernel: ACPI: Early table checksum verification disabled Dec 13 14:26:49.148167 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google) Dec 13 14:26:49.148182 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013) Dec 13 14:26:49.148198 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001) Dec 13 14:26:49.148224 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google 
GOOGDSDT 00000001 GOOG 00000001) Dec 13 14:26:49.148240 kernel: ACPI: FACS 0x00000000BFBF2000 000040 Dec 13 14:26:49.148256 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20240322) Dec 13 14:26:49.148272 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001) Dec 13 14:26:49.148289 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001) Dec 13 14:26:49.148306 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001) Dec 13 14:26:49.148326 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001) Dec 13 14:26:49.148342 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001) Dec 13 14:26:49.148359 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3] Dec 13 14:26:49.148374 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63] Dec 13 14:26:49.148391 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f] Dec 13 14:26:49.148407 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315] Dec 13 14:26:49.148423 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033] Dec 13 14:26:49.148439 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7] Dec 13 14:26:49.148456 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075] Dec 13 14:26:49.148476 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f] Dec 13 14:26:49.148492 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027] Dec 13 14:26:49.148509 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Dec 13 14:26:49.148525 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Dec 13 14:26:49.148540 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Dec 13 14:26:49.148556 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff] Dec 13 14:26:49.148572 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 
0x100000000-0x21fffffff] Dec 13 14:26:49.148588 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff] Dec 13 14:26:49.148604 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff] Dec 13 14:26:49.148626 kernel: NODE_DATA(0) allocated [mem 0x21fffa000-0x21fffffff] Dec 13 14:26:49.148643 kernel: Zone ranges: Dec 13 14:26:49.148659 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Dec 13 14:26:49.148675 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Dec 13 14:26:49.148691 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff] Dec 13 14:26:49.148706 kernel: Movable zone start for each node Dec 13 14:26:49.148723 kernel: Early memory node ranges Dec 13 14:26:49.148739 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff] Dec 13 14:26:49.148755 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff] Dec 13 14:26:49.148777 kernel: node 0: [mem 0x0000000000100000-0x00000000bd276fff] Dec 13 14:26:49.148793 kernel: node 0: [mem 0x00000000bd281000-0x00000000bf8ecfff] Dec 13 14:26:49.148809 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff] Dec 13 14:26:49.148825 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff] Dec 13 14:26:49.148841 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff] Dec 13 14:26:49.148857 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 13 14:26:49.148874 kernel: On node 0, zone DMA: 11 pages in unavailable ranges Dec 13 14:26:49.148936 kernel: On node 0, zone DMA: 104 pages in unavailable ranges Dec 13 14:26:49.148953 kernel: On node 0, zone DMA32: 10 pages in unavailable ranges Dec 13 14:26:49.148988 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Dec 13 14:26:49.149005 kernel: On node 0, zone Normal: 32 pages in unavailable ranges Dec 13 14:26:49.149020 kernel: ACPI: PM-Timer IO Port: 0xb008 Dec 13 14:26:49.149036 kernel: ACPI: LAPIC_NMI 
(acpi_id[0xff] dfl dfl lint[0x1]) Dec 13 14:26:49.149052 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Dec 13 14:26:49.149068 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Dec 13 14:26:49.149084 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Dec 13 14:26:49.149101 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Dec 13 14:26:49.149117 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Dec 13 14:26:49.149139 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Dec 13 14:26:49.149154 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Dec 13 14:26:49.149170 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Dec 13 14:26:49.149186 kernel: Booting paravirtualized kernel on KVM Dec 13 14:26:49.149202 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Dec 13 14:26:49.149219 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1 Dec 13 14:26:49.149236 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576 Dec 13 14:26:49.149252 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152 Dec 13 14:26:49.149267 kernel: pcpu-alloc: [0] 0 1 Dec 13 14:26:49.149286 kernel: kvm-guest: PV spinlocks enabled Dec 13 14:26:49.149301 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Dec 13 14:26:49.149317 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 1932270 Dec 13 14:26:49.149331 kernel: Policy zone: Normal Dec 13 14:26:49.149349 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e Dec 13 14:26:49.149365 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Dec 13 14:26:49.149381 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Dec 13 14:26:49.149397 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 13 14:26:49.149413 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 13 14:26:49.149434 kernel: Memory: 7515408K/7860544K available (12294K kernel code, 2275K rwdata, 13716K rodata, 47472K init, 4112K bss, 344876K reserved, 0K cma-reserved) Dec 13 14:26:49.149450 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Dec 13 14:26:49.149466 kernel: Kernel/User page tables isolation: enabled Dec 13 14:26:49.149482 kernel: ftrace: allocating 34549 entries in 135 pages Dec 13 14:26:49.149499 kernel: ftrace: allocated 135 pages with 4 groups Dec 13 14:26:49.149515 kernel: rcu: Hierarchical RCU implementation. Dec 13 14:26:49.149533 kernel: rcu: RCU event tracing is enabled. Dec 13 14:26:49.149551 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Dec 13 14:26:49.149574 kernel: Rude variant of Tasks RCU enabled. Dec 13 14:26:49.149610 kernel: Tracing variant of Tasks RCU enabled. Dec 13 14:26:49.149628 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Dec 13 14:26:49.149652 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Dec 13 14:26:49.149670 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Dec 13 14:26:49.149688 kernel: Console: colour dummy device 80x25 Dec 13 14:26:49.149706 kernel: printk: console [ttyS0] enabled Dec 13 14:26:49.149724 kernel: ACPI: Core revision 20210730 Dec 13 14:26:49.149742 kernel: APIC: Switch to symmetric I/O mode setup Dec 13 14:26:49.149761 kernel: x2apic enabled Dec 13 14:26:49.149785 kernel: Switched APIC routing to physical x2apic. Dec 13 14:26:49.149803 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1 Dec 13 14:26:49.149821 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Dec 13 14:26:49.149840 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998) Dec 13 14:26:49.149858 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024 Dec 13 14:26:49.149876 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4 Dec 13 14:26:49.149905 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Dec 13 14:26:49.149927 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Dec 13 14:26:49.149943 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Dec 13 14:26:49.149958 kernel: Spectre V2 : Mitigation: IBRS Dec 13 14:26:49.152494 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Dec 13 14:26:49.152518 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Dec 13 14:26:49.152537 kernel: RETBleed: Mitigation: IBRS Dec 13 14:26:49.152554 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Dec 13 14:26:49.152718 kernel: Spectre V2 : User space: Mitigation: STIBP via seccomp and prctl Dec 13 14:26:49.152737 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled 
via prctl and seccomp Dec 13 14:26:49.152762 kernel: MDS: Mitigation: Clear CPU buffers Dec 13 14:26:49.152925 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Dec 13 14:26:49.152946 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Dec 13 14:26:49.152986 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Dec 13 14:26:49.153142 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Dec 13 14:26:49.153160 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Dec 13 14:26:49.153179 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Dec 13 14:26:49.153196 kernel: Freeing SMP alternatives memory: 32K Dec 13 14:26:49.153307 kernel: pid_max: default: 32768 minimum: 301 Dec 13 14:26:49.153334 kernel: LSM: Security Framework initializing Dec 13 14:26:49.153351 kernel: SELinux: Initializing. Dec 13 14:26:49.153369 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Dec 13 14:26:49.153387 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Dec 13 14:26:49.153405 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0) Dec 13 14:26:49.153423 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. Dec 13 14:26:49.153441 kernel: signal: max sigframe size: 1776 Dec 13 14:26:49.153459 kernel: rcu: Hierarchical SRCU implementation. Dec 13 14:26:49.153476 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Dec 13 14:26:49.153498 kernel: smp: Bringing up secondary CPUs ... Dec 13 14:26:49.153515 kernel: x86: Booting SMP configuration: Dec 13 14:26:49.153533 kernel: .... node #0, CPUs: #1 Dec 13 14:26:49.153551 kernel: kvm-clock: cpu 1, msr 1af19a041, secondary cpu clock Dec 13 14:26:49.153570 kernel: MDS CPU bug present and SMT on, data leak possible. 
See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Dec 13 14:26:49.153589 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Dec 13 14:26:49.153606 kernel: smp: Brought up 1 node, 2 CPUs Dec 13 14:26:49.153624 kernel: smpboot: Max logical packages: 1 Dec 13 14:26:49.153646 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Dec 13 14:26:49.153664 kernel: devtmpfs: initialized Dec 13 14:26:49.153681 kernel: x86/mm: Memory block size: 128MB Dec 13 14:26:49.153699 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes) Dec 13 14:26:49.153717 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 13 14:26:49.153734 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Dec 13 14:26:49.153752 kernel: pinctrl core: initialized pinctrl subsystem Dec 13 14:26:49.153770 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 13 14:26:49.153786 kernel: audit: initializing netlink subsys (disabled) Dec 13 14:26:49.153806 kernel: audit: type=2000 audit(1734100007.942:1): state=initialized audit_enabled=0 res=1 Dec 13 14:26:49.153822 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 13 14:26:49.153837 kernel: thermal_sys: Registered thermal governor 'user_space' Dec 13 14:26:49.153854 kernel: cpuidle: using governor menu Dec 13 14:26:49.153871 kernel: ACPI: bus type PCI registered Dec 13 14:26:49.153887 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 13 14:26:49.153912 kernel: dca service started, version 1.12.1 Dec 13 14:26:49.153930 kernel: PCI: Using configuration type 1 for base access Dec 13 14:26:49.153948 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Dec 13 14:26:49.154012 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Dec 13 14:26:49.154031 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Dec 13 14:26:49.154049 kernel: ACPI: Added _OSI(Module Device) Dec 13 14:26:49.154065 kernel: ACPI: Added _OSI(Processor Device) Dec 13 14:26:49.154083 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Dec 13 14:26:49.154100 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 13 14:26:49.154118 kernel: ACPI: Added _OSI(Linux-Dell-Video) Dec 13 14:26:49.154136 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Dec 13 14:26:49.154153 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Dec 13 14:26:49.154175 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Dec 13 14:26:49.154192 kernel: ACPI: Interpreter enabled Dec 13 14:26:49.154210 kernel: ACPI: PM: (supports S0 S3 S5) Dec 13 14:26:49.154228 kernel: ACPI: Using IOAPIC for interrupt routing Dec 13 14:26:49.154246 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 13 14:26:49.154263 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Dec 13 14:26:49.154281 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 13 14:26:49.154509 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Dec 13 14:26:49.154675 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge. 
Dec 13 14:26:49.154698 kernel: PCI host bridge to bus 0000:00 Dec 13 14:26:49.154851 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Dec 13 14:26:49.155037 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Dec 13 14:26:49.155192 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Dec 13 14:26:49.155339 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window] Dec 13 14:26:49.155485 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 13 14:26:49.155676 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Dec 13 14:26:49.155866 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 Dec 13 14:26:49.160577 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Dec 13 14:26:49.161151 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Dec 13 14:26:49.161341 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 Dec 13 14:26:49.161521 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f] Dec 13 14:26:49.161695 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f] Dec 13 14:26:49.161878 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Dec 13 14:26:49.168139 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f] Dec 13 14:26:49.168699 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f] Dec 13 14:26:49.168895 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 Dec 13 14:26:49.169091 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f] Dec 13 14:26:49.169263 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f] Dec 13 14:26:49.169294 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Dec 13 14:26:49.169313 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Dec 13 14:26:49.169331 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Dec 13 14:26:49.169349 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Dec 13 14:26:49.169368 kernel: ACPI: PCI: 
Interrupt link LNKS configured for IRQ 9 Dec 13 14:26:49.169386 kernel: iommu: Default domain type: Translated Dec 13 14:26:49.169404 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Dec 13 14:26:49.169423 kernel: vgaarb: loaded Dec 13 14:26:49.169441 kernel: pps_core: LinuxPPS API ver. 1 registered Dec 13 14:26:49.169465 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Dec 13 14:26:49.169483 kernel: PTP clock support registered Dec 13 14:26:49.169502 kernel: Registered efivars operations Dec 13 14:26:49.169520 kernel: PCI: Using ACPI for IRQ routing Dec 13 14:26:49.169538 kernel: PCI: pci_cache_line_size set to 64 bytes Dec 13 14:26:49.169554 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff] Dec 13 14:26:49.169572 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff] Dec 13 14:26:49.169588 kernel: e820: reserve RAM buffer [mem 0xbd277000-0xbfffffff] Dec 13 14:26:49.169606 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff] Dec 13 14:26:49.169629 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff] Dec 13 14:26:49.169646 kernel: clocksource: Switched to clocksource kvm-clock Dec 13 14:26:49.169664 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 14:26:49.169682 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 13 14:26:49.169701 kernel: pnp: PnP ACPI init Dec 13 14:26:49.169718 kernel: pnp: PnP ACPI: found 7 devices Dec 13 14:26:49.169737 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Dec 13 14:26:49.169755 kernel: NET: Registered PF_INET protocol family Dec 13 14:26:49.169772 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Dec 13 14:26:49.169795 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Dec 13 14:26:49.169813 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 14:26:49.169831 kernel: TCP established hash table 
entries: 65536 (order: 7, 524288 bytes, linear) Dec 13 14:26:49.169849 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear) Dec 13 14:26:49.169867 kernel: TCP: Hash tables configured (established 65536 bind 65536) Dec 13 14:26:49.169885 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Dec 13 14:26:49.169911 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Dec 13 14:26:49.169929 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 14:26:49.169952 kernel: NET: Registered PF_XDP protocol family Dec 13 14:26:49.178110 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Dec 13 14:26:49.178313 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Dec 13 14:26:49.178462 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Dec 13 14:26:49.178605 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window] Dec 13 14:26:49.178773 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Dec 13 14:26:49.178796 kernel: PCI: CLS 0 bytes, default 64 Dec 13 14:26:49.178821 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Dec 13 14:26:49.178839 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB) Dec 13 14:26:49.178857 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Dec 13 14:26:49.178874 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Dec 13 14:26:49.178903 kernel: clocksource: Switched to clocksource tsc Dec 13 14:26:49.178920 kernel: Initialise system trusted keyrings Dec 13 14:26:49.178937 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Dec 13 14:26:49.178954 kernel: Key type asymmetric registered Dec 13 14:26:49.187182 kernel: Asymmetric key parser 'x509' registered Dec 13 14:26:49.187214 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Dec 13 14:26:49.187233 
kernel: io scheduler mq-deadline registered Dec 13 14:26:49.187250 kernel: io scheduler kyber registered Dec 13 14:26:49.187265 kernel: io scheduler bfq registered Dec 13 14:26:49.187282 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 13 14:26:49.187301 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Dec 13 14:26:49.187511 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver Dec 13 14:26:49.187537 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Dec 13 14:26:49.187704 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver Dec 13 14:26:49.187734 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Dec 13 14:26:49.187910 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver Dec 13 14:26:49.187933 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 14:26:49.187952 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 13 14:26:49.193556 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Dec 13 14:26:49.193583 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A Dec 13 14:26:49.193603 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A Dec 13 14:26:49.193823 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0) Dec 13 14:26:49.193856 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 13 14:26:49.193874 kernel: i8042: Warning: Keylock active Dec 13 14:26:49.193901 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 13 14:26:49.193918 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 13 14:26:49.194118 kernel: rtc_cmos 00:00: RTC can wake from S4 Dec 13 14:26:49.194283 kernel: rtc_cmos 00:00: registered as rtc0 Dec 13 14:26:49.194440 kernel: rtc_cmos 00:00: setting system clock to 2024-12-13T14:26:48 UTC (1734100008) Dec 13 14:26:49.194597 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Dec 13 14:26:49.194625 kernel: intel_pstate: CPU model 
not supported Dec 13 14:26:49.194644 kernel: pstore: Registered efi as persistent store backend Dec 13 14:26:49.194661 kernel: NET: Registered PF_INET6 protocol family Dec 13 14:26:49.194678 kernel: Segment Routing with IPv6 Dec 13 14:26:49.194696 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 14:26:49.194714 kernel: NET: Registered PF_PACKET protocol family Dec 13 14:26:49.194731 kernel: Key type dns_resolver registered Dec 13 14:26:49.194748 kernel: IPI shorthand broadcast: enabled Dec 13 14:26:49.194766 kernel: sched_clock: Marking stable (796586376, 165043957)->(1002461511, -40831178) Dec 13 14:26:49.194789 kernel: registered taskstats version 1 Dec 13 14:26:49.194806 kernel: Loading compiled-in X.509 certificates Dec 13 14:26:49.194824 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 13 14:26:49.194842 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: e1d88c9e01f5bb2adeb5b99325e46e5ca8dff115' Dec 13 14:26:49.194860 kernel: Key type .fscrypt registered Dec 13 14:26:49.194876 kernel: Key type fscrypt-provisioning registered Dec 13 14:26:49.194904 kernel: pstore: Using crash dump compression: deflate Dec 13 14:26:49.194921 kernel: ima: Allocated hash algorithm: sha1 Dec 13 14:26:49.194938 kernel: ima: No architecture policies found Dec 13 14:26:49.194960 kernel: clk: Disabling unused clocks Dec 13 14:26:49.194992 kernel: Freeing unused kernel image (initmem) memory: 47472K Dec 13 14:26:49.195009 kernel: Write protecting the kernel read-only data: 28672k Dec 13 14:26:49.195026 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Dec 13 14:26:49.195044 kernel: Freeing unused kernel image (rodata/data gap) memory: 620K Dec 13 14:26:49.195062 kernel: Run /init as init process Dec 13 14:26:49.195079 kernel: with arguments: Dec 13 14:26:49.195097 kernel: /init Dec 13 14:26:49.195113 kernel: with environment: Dec 13 14:26:49.195135 kernel: HOME=/ Dec 13 14:26:49.195152 kernel: 
TERM=linux Dec 13 14:26:49.195169 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 14:26:49.195192 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 14:26:49.195214 systemd[1]: Detected virtualization kvm. Dec 13 14:26:49.195233 systemd[1]: Detected architecture x86-64. Dec 13 14:26:49.195250 systemd[1]: Running in initrd. Dec 13 14:26:49.195272 systemd[1]: No hostname configured, using default hostname. Dec 13 14:26:49.195289 systemd[1]: Hostname set to . Dec 13 14:26:49.195308 systemd[1]: Initializing machine ID from VM UUID. Dec 13 14:26:49.195325 systemd[1]: Queued start job for default target initrd.target. Dec 13 14:26:49.195344 systemd[1]: Started systemd-ask-password-console.path. Dec 13 14:26:49.195361 systemd[1]: Reached target cryptsetup.target. Dec 13 14:26:49.195379 systemd[1]: Reached target paths.target. Dec 13 14:26:49.195396 systemd[1]: Reached target slices.target. Dec 13 14:26:49.195419 systemd[1]: Reached target swap.target. Dec 13 14:26:49.195437 systemd[1]: Reached target timers.target. Dec 13 14:26:49.195456 systemd[1]: Listening on iscsid.socket. Dec 13 14:26:49.195474 systemd[1]: Listening on iscsiuio.socket. Dec 13 14:26:49.195492 systemd[1]: Listening on systemd-journald-audit.socket. Dec 13 14:26:49.195510 systemd[1]: Listening on systemd-journald-dev-log.socket. Dec 13 14:26:49.195528 systemd[1]: Listening on systemd-journald.socket. Dec 13 14:26:49.195546 systemd[1]: Listening on systemd-networkd.socket. Dec 13 14:26:49.195569 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 14:26:49.195587 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 14:26:49.195630 systemd[1]: Reached target sockets.target. 
Dec 13 14:26:49.195653 systemd[1]: Starting kmod-static-nodes.service... Dec 13 14:26:49.195672 systemd[1]: Finished network-cleanup.service. Dec 13 14:26:49.195691 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 14:26:49.195715 systemd[1]: Starting systemd-journald.service... Dec 13 14:26:49.195734 systemd[1]: Starting systemd-modules-load.service... Dec 13 14:26:49.195752 systemd[1]: Starting systemd-resolved.service... Dec 13 14:26:49.195772 systemd[1]: Starting systemd-vconsole-setup.service... Dec 13 14:26:49.195790 systemd[1]: Finished kmod-static-nodes.service. Dec 13 14:26:49.195810 kernel: audit: type=1130 audit(1734100009.154:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:49.195829 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 14:26:49.195848 kernel: audit: type=1130 audit(1734100009.160:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:49.195867 systemd[1]: Finished systemd-vconsole-setup.service. Dec 13 14:26:49.195901 kernel: audit: type=1130 audit(1734100009.168:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:49.195920 systemd[1]: Starting dracut-cmdline-ask.service... Dec 13 14:26:49.195939 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Dec 13 14:26:49.195978 systemd-journald[189]: Journal started Dec 13 14:26:49.196075 systemd-journald[189]: Runtime Journal (/run/log/journal/05b0adc9468f1c930e461bcaf189cfba) is 8.0M, max 148.8M, 140.8M free. 
Dec 13 14:26:49.154000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:49.160000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:49.168000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:49.186445 systemd-modules-load[190]: Inserted module 'overlay'
Dec 13 14:26:49.203045 systemd[1]: Started systemd-journald.service.
Dec 13 14:26:49.203087 kernel: audit: type=1130 audit(1734100009.199:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:49.199000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:49.212000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:49.214704 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Dec 13 14:26:49.220255 kernel: audit: type=1130 audit(1734100009.212:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:49.228382 systemd[1]: Finished dracut-cmdline-ask.service.
Dec 13 14:26:49.243280 kernel: audit: type=1130 audit(1734100009.230:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:49.230000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:49.233583 systemd[1]: Starting dracut-cmdline.service...
Dec 13 14:26:49.252420 systemd-resolved[191]: Positive Trust Anchors:
Dec 13 14:26:49.252442 systemd-resolved[191]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 14:26:49.252498 systemd-resolved[191]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Dec 13 14:26:49.258205 systemd-resolved[191]: Defaulting to hostname 'linux'.
Dec 13 14:26:49.273069 dracut-cmdline[205]: dracut-dracut-053
Dec 13 14:26:49.273069 dracut-cmdline[205]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e
Dec 13 14:26:49.259785 systemd[1]: Started systemd-resolved.service.
Dec 13 14:26:49.291991 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 14:26:49.290000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:49.292321 systemd[1]: Reached target nss-lookup.target.
Dec 13 14:26:49.308093 kernel: audit: type=1130 audit(1734100009.290:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:49.308135 kernel: Bridge firewalling registered
Dec 13 14:26:49.305175 systemd-modules-load[190]: Inserted module 'br_netfilter'
Dec 13 14:26:49.336996 kernel: SCSI subsystem initialized
Dec 13 14:26:49.357670 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 14:26:49.357761 kernel: device-mapper: uevent: version 1.0.3
Dec 13 14:26:49.359977 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Dec 13 14:26:49.365015 systemd-modules-load[190]: Inserted module 'dm_multipath'
Dec 13 14:26:49.367051 systemd[1]: Finished systemd-modules-load.service.
Dec 13 14:26:49.374994 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 14:26:49.383843 kernel: audit: type=1130 audit(1734100009.374:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:49.374000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:49.377415 systemd[1]: Starting systemd-sysctl.service...
Dec 13 14:26:49.396412 systemd[1]: Finished systemd-sysctl.service.
Dec 13 14:26:49.407127 kernel: audit: type=1130 audit(1734100009.398:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:49.407180 kernel: iscsi: registered transport (tcp)
Dec 13 14:26:49.398000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:49.433317 kernel: iscsi: registered transport (qla4xxx)
Dec 13 14:26:49.433401 kernel: QLogic iSCSI HBA Driver
Dec 13 14:26:49.479139 systemd[1]: Finished dracut-cmdline.service.
Dec 13 14:26:49.477000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:49.481038 systemd[1]: Starting dracut-pre-udev.service...
Dec 13 14:26:49.538054 kernel: raid6: avx2x4 gen() 22770 MB/s
Dec 13 14:26:49.555009 kernel: raid6: avx2x4 xor() 6439 MB/s
Dec 13 14:26:49.572006 kernel: raid6: avx2x2 gen() 24200 MB/s
Dec 13 14:26:49.589006 kernel: raid6: avx2x2 xor() 18620 MB/s
Dec 13 14:26:49.607010 kernel: raid6: avx2x1 gen() 21577 MB/s
Dec 13 14:26:49.624007 kernel: raid6: avx2x1 xor() 16190 MB/s
Dec 13 14:26:49.641007 kernel: raid6: sse2x4 gen() 10325 MB/s
Dec 13 14:26:49.658010 kernel: raid6: sse2x4 xor() 6385 MB/s
Dec 13 14:26:49.675006 kernel: raid6: sse2x2 gen() 10989 MB/s
Dec 13 14:26:49.692012 kernel: raid6: sse2x2 xor() 7388 MB/s
Dec 13 14:26:49.709052 kernel: raid6: sse2x1 gen() 9774 MB/s
Dec 13 14:26:49.727177 kernel: raid6: sse2x1 xor() 5132 MB/s
Dec 13 14:26:49.727234 kernel: raid6: using algorithm avx2x2 gen() 24200 MB/s
Dec 13 14:26:49.727257 kernel: raid6: .... xor() 18620 MB/s, rmw enabled
Dec 13 14:26:49.728326 kernel: raid6: using avx2x2 recovery algorithm
Dec 13 14:26:49.744004 kernel: xor: automatically using best checksumming function avx
Dec 13 14:26:49.856023 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
Dec 13 14:26:49.868088 systemd[1]: Finished dracut-pre-udev.service.
Dec 13 14:26:49.866000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:49.867000 audit: BPF prog-id=7 op=LOAD
Dec 13 14:26:49.867000 audit: BPF prog-id=8 op=LOAD
Dec 13 14:26:49.869926 systemd[1]: Starting systemd-udevd.service...
Dec 13 14:26:49.887756 systemd-udevd[388]: Using default interface naming scheme 'v252'.
Dec 13 14:26:49.895284 systemd[1]: Started systemd-udevd.service.
Dec 13 14:26:49.894000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:49.897889 systemd[1]: Starting dracut-pre-trigger.service...
Dec 13 14:26:49.921242 dracut-pre-trigger[393]: rd.md=0: removing MD RAID activation
Dec 13 14:26:49.963557 systemd[1]: Finished dracut-pre-trigger.service.
Dec 13 14:26:49.967000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:49.970176 systemd[1]: Starting systemd-udev-trigger.service...
Dec 13 14:26:50.039169 systemd[1]: Finished systemd-udev-trigger.service.
Dec 13 14:26:50.046000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:50.140015 kernel: cryptd: max_cpu_qlen set to 1000
Dec 13 14:26:50.205008 kernel: scsi host0: Virtio SCSI HBA
Dec 13 14:26:50.235996 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6
Dec 13 14:26:50.247316 kernel: AVX2 version of gcm_enc/dec engaged.
Dec 13 14:26:50.247404 kernel: AES CTR mode by8 optimization enabled
Dec 13 14:26:50.308037 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB)
Dec 13 14:26:50.364084 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks
Dec 13 14:26:50.364295 kernel: sd 0:0:1:0: [sda] Write Protect is off
Dec 13 14:26:50.364436 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08
Dec 13 14:26:50.364573 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Dec 13 14:26:50.364709 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 13 14:26:50.364736 kernel: GPT:17805311 != 25165823
Dec 13 14:26:50.364749 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 13 14:26:50.364763 kernel: GPT:17805311 != 25165823
Dec 13 14:26:50.364776 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 14:26:50.364789 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 14:26:50.364805 kernel: sd 0:0:1:0: [sda] Attached SCSI disk
Dec 13 14:26:50.418992 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (443)
Dec 13 14:26:50.421791 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Dec 13 14:26:50.444237 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Dec 13 14:26:50.454100 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Dec 13 14:26:50.481445 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Dec 13 14:26:50.490524 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Dec 13 14:26:50.511517 systemd[1]: Starting disk-uuid.service...
Dec 13 14:26:50.529294 disk-uuid[519]: Primary Header is updated.
Dec 13 14:26:50.529294 disk-uuid[519]: Secondary Entries is updated.
Dec 13 14:26:50.529294 disk-uuid[519]: Secondary Header is updated.
Dec 13 14:26:50.555077 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 14:26:50.566002 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 14:26:50.593011 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 14:26:51.581999 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 14:26:51.582097 disk-uuid[520]: The operation has completed successfully.
Dec 13 14:26:51.644765 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 14:26:51.650000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:51.650000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:51.644917 systemd[1]: Finished disk-uuid.service.
Dec 13 14:26:51.668449 systemd[1]: Starting verity-setup.service...
Dec 13 14:26:51.697191 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Dec 13 14:26:51.770550 systemd[1]: Found device dev-mapper-usr.device.
Dec 13 14:26:51.773139 systemd[1]: Mounting sysusr-usr.mount...
Dec 13 14:26:51.800623 systemd[1]: Finished verity-setup.service.
Dec 13 14:26:51.814000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:51.875007 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Dec 13 14:26:51.875426 systemd[1]: Mounted sysusr-usr.mount.
Dec 13 14:26:51.875823 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Dec 13 14:26:51.931288 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 14:26:51.931328 kernel: BTRFS info (device sda6): using free space tree
Dec 13 14:26:51.931351 kernel: BTRFS info (device sda6): has skinny extents
Dec 13 14:26:51.931374 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 13 14:26:51.876793 systemd[1]: Starting ignition-setup.service...
Dec 13 14:26:51.890271 systemd[1]: Starting parse-ip-for-networkd.service...
Dec 13 14:26:51.949811 systemd[1]: mnt-oem.mount: Deactivated successfully.
Dec 13 14:26:51.964863 systemd[1]: Finished ignition-setup.service.
Dec 13 14:26:51.976000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:51.979323 systemd[1]: Starting ignition-fetch-offline.service...
Dec 13 14:26:52.018915 systemd[1]: Finished parse-ip-for-networkd.service.
Dec 13 14:26:52.017000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:52.018000 audit: BPF prog-id=9 op=LOAD
Dec 13 14:26:52.020955 systemd[1]: Starting systemd-networkd.service...
Dec 13 14:26:52.055363 systemd-networkd[694]: lo: Link UP
Dec 13 14:26:52.055377 systemd-networkd[694]: lo: Gained carrier
Dec 13 14:26:52.056260 systemd-networkd[694]: Enumeration completed
Dec 13 14:26:52.056659 systemd-networkd[694]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 14:26:52.084000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:52.056870 systemd[1]: Started systemd-networkd.service.
Dec 13 14:26:52.059104 systemd-networkd[694]: eth0: Link UP
Dec 13 14:26:52.059125 systemd-networkd[694]: eth0: Gained carrier
Dec 13 14:26:52.133000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:52.071146 systemd-networkd[694]: eth0: DHCPv4 address 10.128.0.21/32, gateway 10.128.0.1 acquired from 169.254.169.254
Dec 13 14:26:52.164340 iscsid[703]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Dec 13 14:26:52.164340 iscsid[703]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log
Dec 13 14:26:52.164340 iscsid[703]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier].
Dec 13 14:26:52.164340 iscsid[703]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Dec 13 14:26:52.164340 iscsid[703]: If using hardware iscsi like qla4xxx this message can be ignored.
Dec 13 14:26:52.164340 iscsid[703]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Dec 13 14:26:52.164340 iscsid[703]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Dec 13 14:26:52.242000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:52.086266 systemd[1]: Reached target network.target.
Dec 13 14:26:52.284000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:52.243187 ignition[656]: Ignition 2.14.0
Dec 13 14:26:52.300000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:52.106355 systemd[1]: Starting iscsiuio.service...
Dec 13 14:26:52.243201 ignition[656]: Stage: fetch-offline
Dec 13 14:26:52.120359 systemd[1]: Started iscsiuio.service.
Dec 13 14:26:52.243283 ignition[656]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 14:26:52.137061 systemd[1]: Starting iscsid.service...
Dec 13 14:26:52.383000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:52.243327 ignition[656]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6
Dec 13 14:26:52.144654 systemd[1]: Started iscsid.service.
Dec 13 14:26:52.270379 ignition[656]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Dec 13 14:26:52.405000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:52.245440 systemd[1]: Starting dracut-initqueue.service...
Dec 13 14:26:52.270705 ignition[656]: parsed url from cmdline: ""
Dec 13 14:26:52.266409 systemd[1]: Finished dracut-initqueue.service.
Dec 13 14:26:52.270712 ignition[656]: no config URL provided
Dec 13 14:26:52.445000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:52.286599 systemd[1]: Finished ignition-fetch-offline.service.
Dec 13 14:26:52.270719 ignition[656]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 14:26:52.302418 systemd[1]: Reached target remote-fs-pre.target.
Dec 13 14:26:52.270730 ignition[656]: no config at "/usr/lib/ignition/user.ign"
Dec 13 14:26:52.493000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:52.316222 systemd[1]: Reached target remote-cryptsetup.target.
Dec 13 14:26:52.270739 ignition[656]: failed to fetch config: resource requires networking
Dec 13 14:26:52.331211 systemd[1]: Reached target remote-fs.target.
Dec 13 14:26:52.271103 ignition[656]: Ignition finished successfully
Dec 13 14:26:52.339384 systemd[1]: Starting dracut-pre-mount.service...
Dec 13 14:26:52.371230 ignition[718]: Ignition 2.14.0
Dec 13 14:26:52.359184 systemd[1]: Starting ignition-fetch.service...
Dec 13 14:26:52.371241 ignition[718]: Stage: fetch
Dec 13 14:26:52.376569 systemd[1]: Finished dracut-pre-mount.service.
Dec 13 14:26:52.371377 ignition[718]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 14:26:52.388040 unknown[718]: fetched base config from "system"
Dec 13 14:26:52.371408 ignition[718]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6
Dec 13 14:26:52.388054 unknown[718]: fetched base config from "system"
Dec 13 14:26:52.378481 ignition[718]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Dec 13 14:26:52.388064 unknown[718]: fetched user config from "gcp"
Dec 13 14:26:52.378659 ignition[718]: parsed url from cmdline: ""
Dec 13 14:26:52.390145 systemd[1]: Finished ignition-fetch.service.
Dec 13 14:26:52.378665 ignition[718]: no config URL provided
Dec 13 14:26:52.408281 systemd[1]: Starting ignition-kargs.service...
Dec 13 14:26:52.378672 ignition[718]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 14:26:52.432794 systemd[1]: Finished ignition-kargs.service.
Dec 13 14:26:52.378683 ignition[718]: no config at "/usr/lib/ignition/user.ign"
Dec 13 14:26:52.448342 systemd[1]: Starting ignition-disks.service...
Dec 13 14:26:52.378721 ignition[718]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1
Dec 13 14:26:52.479438 systemd[1]: Finished ignition-disks.service.
Dec 13 14:26:52.384816 ignition[718]: GET result: OK
Dec 13 14:26:52.495290 systemd[1]: Reached target initrd-root-device.target.
Dec 13 14:26:52.384874 ignition[718]: parsing config with SHA512: b9c53464b4e57a1723b9bc8a2cd51d87e6e8462a6278e28c061f52d2ebecd64b0ee0b53c7c34fdb2b16168bf53bcc95ed88c4208a7a6e2bee8f25b39281d413a
Dec 13 14:26:52.511105 systemd[1]: Reached target local-fs-pre.target.
Dec 13 14:26:52.388519 ignition[718]: fetch: fetch complete
Dec 13 14:26:52.511208 systemd[1]: Reached target local-fs.target.
Dec 13 14:26:52.388525 ignition[718]: fetch: fetch passed
Dec 13 14:26:52.533142 systemd[1]: Reached target sysinit.target.
Dec 13 14:26:52.388570 ignition[718]: Ignition finished successfully
Dec 13 14:26:52.533268 systemd[1]: Reached target basic.target.
Dec 13 14:26:52.421561 ignition[724]: Ignition 2.14.0
Dec 13 14:26:52.556344 systemd[1]: Starting systemd-fsck-root.service...
Dec 13 14:26:52.421573 ignition[724]: Stage: kargs
Dec 13 14:26:52.421708 ignition[724]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 14:26:52.421737 ignition[724]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6
Dec 13 14:26:52.430559 ignition[724]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Dec 13 14:26:52.431627 ignition[724]: kargs: kargs passed
Dec 13 14:26:52.431668 ignition[724]: Ignition finished successfully
Dec 13 14:26:52.459426 ignition[730]: Ignition 2.14.0
Dec 13 14:26:52.459436 ignition[730]: Stage: disks
Dec 13 14:26:52.459572 ignition[730]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 14:26:52.459602 ignition[730]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6
Dec 13 14:26:52.467839 ignition[730]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Dec 13 14:26:52.469015 ignition[730]: disks: disks passed
Dec 13 14:26:52.469061 ignition[730]: Ignition finished successfully
Dec 13 14:26:52.608523 systemd-fsck[738]: ROOT: clean, 621/1628000 files, 124058/1617920 blocks
Dec 13 14:26:52.819287 systemd[1]: Finished systemd-fsck-root.service.
Dec 13 14:26:52.826000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:52.829859 systemd[1]: Mounting sysroot.mount...
Dec 13 14:26:52.861285 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Dec 13 14:26:52.858466 systemd[1]: Mounted sysroot.mount.
Dec 13 14:26:52.868355 systemd[1]: Reached target initrd-root-fs.target.
Dec 13 14:26:52.888506 systemd[1]: Mounting sysroot-usr.mount...
Dec 13 14:26:52.892845 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Dec 13 14:26:52.892899 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 14:26:52.892931 systemd[1]: Reached target ignition-diskful.target.
Dec 13 14:26:52.904737 systemd[1]: Mounted sysroot-usr.mount.
Dec 13 14:26:52.999171 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (744)
Dec 13 14:26:52.999220 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 14:26:52.999245 kernel: BTRFS info (device sda6): using free space tree
Dec 13 14:26:52.999268 kernel: BTRFS info (device sda6): has skinny extents
Dec 13 14:26:52.999291 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 13 14:26:52.934598 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Dec 13 14:26:52.987659 systemd[1]: Starting initrd-setup-root.service...
Dec 13 14:26:53.018259 initrd-setup-root[767]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 14:26:53.048103 initrd-setup-root[775]: cut: /sysroot/etc/group: No such file or directory
Dec 13 14:26:53.028748 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Dec 13 14:26:53.066164 initrd-setup-root[783]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 14:26:53.076089 initrd-setup-root[791]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 14:26:53.086288 systemd-networkd[694]: eth0: Gained IPv6LL
Dec 13 14:26:53.090527 systemd[1]: Finished initrd-setup-root.service.
Dec 13 14:26:53.118286 kernel: kauditd_printk_skb: 23 callbacks suppressed
Dec 13 14:26:53.118366 kernel: audit: type=1130 audit(1734100013.106:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:53.106000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:53.109723 systemd[1]: Starting ignition-mount.service...
Dec 13 14:26:53.135218 systemd[1]: Starting sysroot-boot.service...
Dec 13 14:26:53.154471 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
Dec 13 14:26:53.154633 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
Dec 13 14:26:53.182260 ignition[810]: INFO : Ignition 2.14.0
Dec 13 14:26:53.182260 ignition[810]: INFO : Stage: mount
Dec 13 14:26:53.182260 ignition[810]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 14:26:53.182260 ignition[810]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6
Dec 13 14:26:53.182260 ignition[810]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Dec 13 14:26:53.343133 kernel: audit: type=1130 audit(1734100013.188:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:53.343188 kernel: audit: type=1130 audit(1734100013.215:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:53.343214 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (819)
Dec 13 14:26:53.343237 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 14:26:53.343261 kernel: BTRFS info (device sda6): using free space tree
Dec 13 14:26:53.343286 kernel: BTRFS info (device sda6): has skinny extents
Dec 13 14:26:53.343307 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 13 14:26:53.188000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:53.215000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:53.172196 systemd[1]: Finished sysroot-boot.service.
Dec 13 14:26:53.352186 ignition[810]: INFO : mount: mount passed
Dec 13 14:26:53.352186 ignition[810]: INFO : Ignition finished successfully
Dec 13 14:26:53.190515 systemd[1]: Finished ignition-mount.service.
Dec 13 14:26:53.219243 systemd[1]: Starting ignition-files.service...
Dec 13 14:26:53.253798 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Dec 13 14:26:53.399105 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (841)
Dec 13 14:26:53.399185 ignition[838]: INFO : Ignition 2.14.0
Dec 13 14:26:53.399185 ignition[838]: INFO : Stage: files
Dec 13 14:26:53.399185 ignition[838]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 14:26:53.399185 ignition[838]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6
Dec 13 14:26:53.399185 ignition[838]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Dec 13 14:26:53.399185 ignition[838]: DEBUG : files: compiled without relabeling support, skipping
Dec 13 14:26:53.399185 ignition[838]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 13 14:26:53.399185 ignition[838]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 14:26:53.399185 ignition[838]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 14:26:53.399185 ignition[838]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 13 14:26:53.399185 ignition[838]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 14:26:53.399185 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/hosts"
Dec 13 14:26:53.399185 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(3): oem config not found in "/usr/share/oem", looking on oem partition
Dec 13 14:26:53.399185 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2096736076"
Dec 13 14:26:53.399185 ignition[838]: CRITICAL : files: createFilesystemsFiles: createFiles: op(3): op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2096736076": device or resource busy
Dec 13 14:26:53.399185 ignition[838]: ERROR : files: createFilesystemsFiles: createFiles: op(3): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2096736076", trying btrfs: device or resource busy
Dec 13 14:26:53.399185 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2096736076"
Dec 13 14:26:53.399185 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2096736076"
Dec 13 14:26:53.399185 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(6): [started] unmounting "/mnt/oem2096736076"
Dec 13 14:26:53.315314 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Dec 13 14:26:53.668159 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(6): [finished] unmounting "/mnt/oem2096736076"
Dec 13 14:26:53.668159 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/hosts"
Dec 13 14:26:53.668159 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/etc/profile.d/google-cloud-sdk.sh"
Dec 13 14:26:53.668159 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(7): oem config not found in "/usr/share/oem", looking on oem partition
Dec 13 14:26:53.668159 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(7): op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem236423051"
Dec 13 14:26:53.668159 ignition[838]: CRITICAL : files: createFilesystemsFiles: createFiles: op(7): op(8): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem236423051": device or resource busy
Dec 13 14:26:53.668159 ignition[838]: ERROR : files: createFilesystemsFiles: createFiles: op(7): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem236423051", trying btrfs: device or resource busy
Dec 13 14:26:53.668159 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(7): op(9): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem236423051"
Dec 13 14:26:53.668159 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(7): op(9): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem236423051"
Dec 13 14:26:53.668159 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(7): op(a): [started] unmounting "/mnt/oem236423051"
Dec 13 14:26:53.668159 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(7): op(a): [finished] unmounting "/mnt/oem236423051"
Dec 13 14:26:53.668159 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/etc/profile.d/google-cloud-sdk.sh"
Dec 13 14:26:53.668159 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/install.sh"
Dec 13 14:26:53.668159 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 14:26:53.668159 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 14:26:53.370693 unknown[838]: wrote ssh authorized keys file for user: core
Dec 13 14:26:53.918171 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 14:26:53.918171 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 14:26:53.918171 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 14:26:53.918171 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/etc/systemd/system/oem-gce.service"
Dec 13 14:26:53.918171 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(e): oem config not found in "/usr/share/oem", looking on oem partition
Dec 13 14:26:53.918171 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(e): op(f): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem739117529"
Dec 13 14:26:53.918171 ignition[838]: CRITICAL : files: createFilesystemsFiles: createFiles: op(e): op(f): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem739117529": device or resource busy
Dec 13 14:26:53.918171 ignition[838]: ERROR : files: createFilesystemsFiles: createFiles: op(e): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem739117529", trying btrfs: device or resource busy
Dec 13 14:26:53.918171 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(e): op(10): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem739117529"
Dec 13 14:26:53.918171 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(e): op(10): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem739117529"
Dec 13 14:26:53.918171 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(e): op(11): [started] unmounting "/mnt/oem739117529"
Dec 13 14:26:53.918171 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(e): op(11): [finished] unmounting "/mnt/oem739117529"
Dec 13 14:26:53.918171 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/etc/systemd/system/oem-gce.service"
Dec 13 14:26:53.918171 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(12): [started] writing file "/sysroot/etc/systemd/system/oem-gce-enable-oslogin.service"
Dec 13 14:26:54.167201 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(12): oem config not found in "/usr/share/oem", looking on oem
partition Dec 13 14:26:54.167201 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(12): op(13): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1277389549" Dec 13 14:26:54.167201 ignition[838]: CRITICAL : files: createFilesystemsFiles: createFiles: op(12): op(13): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1277389549": device or resource busy Dec 13 14:26:54.167201 ignition[838]: ERROR : files: createFilesystemsFiles: createFiles: op(12): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1277389549", trying btrfs: device or resource busy Dec 13 14:26:54.167201 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(12): op(14): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1277389549" Dec 13 14:26:54.167201 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(12): op(14): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1277389549" Dec 13 14:26:54.167201 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(12): op(15): [started] unmounting "/mnt/oem1277389549" Dec 13 14:26:54.167201 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(12): op(15): [finished] unmounting "/mnt/oem1277389549" Dec 13 14:26:54.167201 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(12): [finished] writing file "/sysroot/etc/systemd/system/oem-gce-enable-oslogin.service" Dec 13 14:26:54.167201 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(16): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 14:26:54.167201 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(16): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Dec 13 14:26:54.167201 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(16): GET result: OK Dec 13 14:26:54.448134 kernel: audit: type=1130 
audit(1734100014.308:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:54.448195 kernel: audit: type=1130 audit(1734100014.413:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:54.308000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:54.413000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:54.448319 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(16): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 14:26:54.448319 ignition[838]: INFO : files: op(17): [started] processing unit "coreos-metadata-sshkeys@.service" Dec 13 14:26:54.448319 ignition[838]: INFO : files: op(17): [finished] processing unit "coreos-metadata-sshkeys@.service" Dec 13 14:26:54.448319 ignition[838]: INFO : files: op(18): [started] processing unit "oem-gce.service" Dec 13 14:26:54.448319 ignition[838]: INFO : files: op(18): [finished] processing unit "oem-gce.service" Dec 13 14:26:54.448319 ignition[838]: INFO : files: op(19): [started] processing unit "oem-gce-enable-oslogin.service" Dec 13 14:26:54.448319 ignition[838]: INFO : files: op(19): [finished] processing unit "oem-gce-enable-oslogin.service" Dec 13 14:26:54.448319 ignition[838]: INFO : files: op(1a): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Dec 13 14:26:54.448319 ignition[838]: INFO : files: op(1a): [finished] setting 
preset to enabled for "coreos-metadata-sshkeys@.service " Dec 13 14:26:54.448319 ignition[838]: INFO : files: op(1b): [started] setting preset to enabled for "oem-gce.service" Dec 13 14:26:54.448319 ignition[838]: INFO : files: op(1b): [finished] setting preset to enabled for "oem-gce.service" Dec 13 14:26:54.448319 ignition[838]: INFO : files: op(1c): [started] setting preset to enabled for "oem-gce-enable-oslogin.service" Dec 13 14:26:54.448319 ignition[838]: INFO : files: op(1c): [finished] setting preset to enabled for "oem-gce-enable-oslogin.service" Dec 13 14:26:54.448319 ignition[838]: INFO : files: createResultFile: createFiles: op(1d): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 14:26:54.448319 ignition[838]: INFO : files: createResultFile: createFiles: op(1d): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 14:26:54.448319 ignition[838]: INFO : files: files passed Dec 13 14:26:54.448319 ignition[838]: INFO : Ignition finished successfully Dec 13 14:26:54.841148 kernel: audit: type=1130 audit(1734100014.464:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:54.841208 kernel: audit: type=1131 audit(1734100014.464:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:54.841233 kernel: audit: type=1130 audit(1734100014.570:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:54.841256 kernel: audit: type=1131 audit(1734100014.570:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:26:54.841272 kernel: audit: type=1130 audit(1734100014.706:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:54.464000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:54.464000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:54.570000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:54.570000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:54.706000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:54.825000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:54.292122 systemd[1]: Finished ignition-files.service. Dec 13 14:26:54.321359 systemd[1]: Starting initrd-setup-root-after-ignition.service... 
Dec 13 14:26:54.872132 initrd-setup-root-after-ignition[861]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 14:26:54.354383 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Dec 13 14:26:54.355871 systemd[1]: Starting ignition-quench.service...
Dec 13 14:26:54.386543 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Dec 13 14:26:54.415644 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 14:26:54.415821 systemd[1]: Finished ignition-quench.service.
Dec 13 14:26:54.466591 systemd[1]: Reached target ignition-complete.target.
Dec 13 14:26:54.525741 systemd[1]: Starting initrd-parse-etc.service...
Dec 13 14:26:54.565210 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 14:26:54.565338 systemd[1]: Finished initrd-parse-etc.service.
Dec 13 14:26:55.025000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:54.572537 systemd[1]: Reached target initrd-fs.target.
Dec 13 14:26:54.638221 systemd[1]: Reached target initrd.target.
Dec 13 14:26:55.054000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:54.651422 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Dec 13 14:26:55.077000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:54.652950 systemd[1]: Starting dracut-pre-pivot.service...
Dec 13 14:26:55.096000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:54.670588 systemd[1]: Finished dracut-pre-pivot.service.
Dec 13 14:26:55.122161 ignition[876]: INFO : Ignition 2.14.0
Dec 13 14:26:55.122161 ignition[876]: INFO : Stage: umount
Dec 13 14:26:55.122161 ignition[876]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 14:26:55.122161 ignition[876]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6
Dec 13 14:26:55.139000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:55.182372 iscsid[703]: iscsid shutting down.
Dec 13 14:26:55.188000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:54.710183 systemd[1]: Starting initrd-cleanup.service...
Dec 13 14:26:55.203000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:55.213297 ignition[876]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Dec 13 14:26:55.213297 ignition[876]: INFO : umount: umount passed
Dec 13 14:26:55.213297 ignition[876]: INFO : Ignition finished successfully
Dec 13 14:26:55.218000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:55.229000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:55.243000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:54.750633 systemd[1]: Stopped target nss-lookup.target.
Dec 13 14:26:55.266000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:54.778439 systemd[1]: Stopped target remote-cryptsetup.target.
Dec 13 14:26:55.281000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:54.793518 systemd[1]: Stopped target timers.target.
Dec 13 14:26:55.297000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:54.814468 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 14:26:55.312000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:54.814679 systemd[1]: Stopped dracut-pre-pivot.service.
Dec 13 14:26:54.827690 systemd[1]: Stopped target initrd.target.
Dec 13 14:26:54.848443 systemd[1]: Stopped target basic.target.
Dec 13 14:26:54.862423 systemd[1]: Stopped target ignition-complete.target.
Dec 13 14:26:54.880393 systemd[1]: Stopped target ignition-diskful.target.
Dec 13 14:26:54.902456 systemd[1]: Stopped target initrd-root-device.target.
Dec 13 14:26:55.397000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:54.927463 systemd[1]: Stopped target remote-fs.target.
Dec 13 14:26:55.413000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:54.942442 systemd[1]: Stopped target remote-fs-pre.target.
Dec 13 14:26:54.958579 systemd[1]: Stopped target sysinit.target.
Dec 13 14:26:54.974441 systemd[1]: Stopped target local-fs.target.
Dec 13 14:26:55.448000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:54.989407 systemd[1]: Stopped target local-fs-pre.target.
Dec 13 14:26:55.464000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:55.464000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:55.005463 systemd[1]: Stopped target swap.target.
Dec 13 14:26:55.013416 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 14:26:55.013616 systemd[1]: Stopped dracut-pre-mount.service.
Dec 13 14:26:55.027586 systemd[1]: Stopped target cryptsetup.target.
Dec 13 14:26:55.049331 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 14:26:55.538000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:55.049547 systemd[1]: Stopped dracut-initqueue.service.
Dec 13 14:26:55.554000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:55.554000 audit: BPF prog-id=6 op=UNLOAD
Dec 13 14:26:55.056584 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 14:26:55.056777 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Dec 13 14:26:55.079536 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 14:26:55.601000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:55.079747 systemd[1]: Stopped ignition-files.service.
Dec 13 14:26:55.618000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:55.100552 systemd[1]: Stopping ignition-mount.service...
Dec 13 14:26:55.632000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:55.114471 systemd[1]: Stopping iscsid.service...
Dec 13 14:26:55.130282 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 14:26:55.651000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:55.130545 systemd[1]: Stopped kmod-static-nodes.service.
Dec 13 14:26:55.675000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:55.143348 systemd[1]: Stopping sysroot-boot.service...
Dec 13 14:26:55.158393 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 14:26:55.158658 systemd[1]: Stopped systemd-udev-trigger.service.
Dec 13 14:26:55.716000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:55.190444 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 14:26:55.738000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:55.190660 systemd[1]: Stopped dracut-pre-trigger.service.
Dec 13 14:26:55.756000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:55.209429 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 14:26:55.210680 systemd[1]: iscsid.service: Deactivated successfully.
Dec 13 14:26:55.781000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:55.210806 systemd[1]: Stopped iscsid.service.
Dec 13 14:26:55.803000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:55.803000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:55.220951 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 14:26:55.221100 systemd[1]: Stopped ignition-mount.service.
Dec 13 14:26:55.231847 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 14:26:55.231960 systemd[1]: Stopped sysroot-boot.service.
Dec 13 14:26:55.246054 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 14:26:55.872119 systemd-journald[189]: Received SIGTERM from PID 1 (systemd).
Dec 13 14:26:55.246216 systemd[1]: Stopped ignition-disks.service.
Dec 13 14:26:55.268248 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 14:26:55.268345 systemd[1]: Stopped ignition-kargs.service.
Dec 13 14:26:55.283231 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 13 14:26:55.283305 systemd[1]: Stopped ignition-fetch.service.
Dec 13 14:26:55.299261 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 14:26:55.299354 systemd[1]: Stopped ignition-fetch-offline.service.
Dec 13 14:26:55.314207 systemd[1]: Stopped target paths.target.
Dec 13 14:26:55.329081 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 14:26:55.333057 systemd[1]: Stopped systemd-ask-password-console.path.
Dec 13 14:26:55.344088 systemd[1]: Stopped target slices.target.
Dec 13 14:26:55.356103 systemd[1]: Stopped target sockets.target.
Dec 13 14:26:55.368184 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 14:26:55.368269 systemd[1]: Closed iscsid.socket.
Dec 13 14:26:55.383167 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 14:26:55.383267 systemd[1]: Stopped ignition-setup.service.
Dec 13 14:26:55.399255 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 14:26:55.399345 systemd[1]: Stopped initrd-setup-root.service.
Dec 13 14:26:55.415390 systemd[1]: Stopping iscsiuio.service...
Dec 13 14:26:55.429762 systemd[1]: iscsiuio.service: Deactivated successfully.
Dec 13 14:26:55.429900 systemd[1]: Stopped iscsiuio.service.
Dec 13 14:26:55.450682 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 14:26:55.450808 systemd[1]: Finished initrd-cleanup.service.
Dec 13 14:26:55.467365 systemd[1]: Stopped target network.target.
Dec 13 14:26:55.483143 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 14:26:55.483215 systemd[1]: Closed iscsiuio.socket.
Dec 13 14:26:55.497382 systemd[1]: Stopping systemd-networkd.service...
Dec 13 14:26:55.501041 systemd-networkd[694]: eth0: DHCPv6 lease lost
Dec 13 14:26:55.505505 systemd[1]: Stopping systemd-resolved.service...
Dec 13 14:26:55.525502 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 14:26:55.525636 systemd[1]: Stopped systemd-resolved.service.
Dec 13 14:26:55.540866 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 14:26:55.541024 systemd[1]: Stopped systemd-networkd.service.
Dec 13 14:26:55.556856 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 14:26:55.556908 systemd[1]: Closed systemd-networkd.socket.
Dec 13 14:26:55.572409 systemd[1]: Stopping network-cleanup.service...
Dec 13 14:26:55.581262 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 14:26:55.581346 systemd[1]: Stopped parse-ip-for-networkd.service.
Dec 13 14:26:55.603244 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 14:26:55.603332 systemd[1]: Stopped systemd-sysctl.service.
Dec 13 14:26:55.619396 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 14:26:55.880000 audit: BPF prog-id=9 op=UNLOAD
Dec 13 14:26:55.619462 systemd[1]: Stopped systemd-modules-load.service.
Dec 13 14:26:55.634447 systemd[1]: Stopping systemd-udevd.service...
Dec 13 14:26:55.643159 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 13 14:26:55.646676 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 14:26:55.647084 systemd[1]: Stopped systemd-udevd.service.
Dec 13 14:26:55.654006 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 14:26:55.654128 systemd[1]: Stopped network-cleanup.service.
Dec 13 14:26:55.677482 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 14:26:55.677535 systemd[1]: Closed systemd-udevd-control.socket.
Dec 13 14:26:55.693247 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 14:26:55.693303 systemd[1]: Closed systemd-udevd-kernel.socket.
Dec 13 14:26:55.708206 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 14:26:55.708275 systemd[1]: Stopped dracut-pre-udev.service.
Dec 13 14:26:55.718356 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 14:26:55.718432 systemd[1]: Stopped dracut-cmdline.service.
Dec 13 14:26:55.740278 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 14:26:55.740346 systemd[1]: Stopped dracut-cmdline-ask.service.
Dec 13 14:26:55.759650 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Dec 13 14:26:55.783067 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 14:26:55.783168 systemd[1]: Stopped systemd-vconsole-setup.service.
Dec 13 14:26:55.783780 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 14:26:55.783894 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Dec 13 14:26:55.805298 systemd[1]: Reached target initrd-switch-root.target.
Dec 13 14:26:55.821216 systemd[1]: Starting initrd-switch-root.service...
Dec 13 14:26:55.836956 systemd[1]: Switching root.
Dec 13 14:26:55.883213 systemd-journald[189]: Journal stopped
Dec 13 14:27:00.538879 kernel: SELinux: Class mctp_socket not defined in policy.
Dec 13 14:27:00.539055 kernel: SELinux: Class anon_inode not defined in policy.
Dec 13 14:27:00.539088 kernel: SELinux: the above unknown classes and permissions will be allowed
Dec 13 14:27:00.539113 kernel: SELinux: policy capability network_peer_controls=1
Dec 13 14:27:00.539138 kernel: SELinux: policy capability open_perms=1
Dec 13 14:27:00.539162 kernel: SELinux: policy capability extended_socket_class=1
Dec 13 14:27:00.539187 kernel: SELinux: policy capability always_check_network=0
Dec 13 14:27:00.539210 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 13 14:27:00.539234 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 13 14:27:00.539257 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 13 14:27:00.539286 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 13 14:27:00.539312 systemd[1]: Successfully loaded SELinux policy in 111.261ms.
Dec 13 14:27:00.539361 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.088ms.
Dec 13 14:27:00.539389 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 14:27:00.539416 systemd[1]: Detected virtualization kvm.
Dec 13 14:27:00.539441 systemd[1]: Detected architecture x86-64.
Dec 13 14:27:00.539466 systemd[1]: Detected first boot.
Dec 13 14:27:00.539496 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 14:27:00.539522 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Dec 13 14:27:00.539546 systemd[1]: Populated /etc with preset unit settings.
Dec 13 14:27:00.539573 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 14:27:00.539608 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 14:27:00.539636 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 14:27:00.539669 kernel: kauditd_printk_skb: 51 callbacks suppressed
Dec 13 14:27:00.539693 kernel: audit: type=1334 audit(1734100019.647:88): prog-id=12 op=LOAD
Dec 13 14:27:00.539722 kernel: audit: type=1334 audit(1734100019.647:89): prog-id=3 op=UNLOAD
Dec 13 14:27:00.539746 kernel: audit: type=1334 audit(1734100019.659:90): prog-id=13 op=LOAD
Dec 13 14:27:00.539932 kernel: audit: type=1334 audit(1734100019.673:91): prog-id=14 op=LOAD
Dec 13 14:27:00.539957 kernel: audit: type=1334 audit(1734100019.673:92): prog-id=4 op=UNLOAD
Dec 13 14:27:00.539998 kernel: audit: type=1334 audit(1734100019.673:93): prog-id=5 op=UNLOAD
Dec 13 14:27:00.540022 kernel: audit: type=1334 audit(1734100019.694:94): prog-id=15 op=LOAD
Dec 13 14:27:00.540044 kernel: audit: type=1334 audit(1734100019.694:95): prog-id=12 op=UNLOAD
Dec 13 14:27:00.540067 kernel: audit: type=1334 audit(1734100019.708:96): prog-id=16 op=LOAD
Dec 13 14:27:00.540090 kernel: audit: type=1334 audit(1734100019.715:97): prog-id=17 op=LOAD
Dec 13 14:27:00.540118 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 13 14:27:00.540143 systemd[1]: Stopped initrd-switch-root.service.
Dec 13 14:27:00.540166 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 13 14:27:00.540191 systemd[1]: Created slice system-addon\x2dconfig.slice.
Dec 13 14:27:00.540214 systemd[1]: Created slice system-addon\x2drun.slice.
Dec 13 14:27:00.540237 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
Dec 13 14:27:00.540261 systemd[1]: Created slice system-getty.slice.
Dec 13 14:27:00.540289 systemd[1]: Created slice system-modprobe.slice.
Dec 13 14:27:00.540313 systemd[1]: Created slice system-serial\x2dgetty.slice.
Dec 13 14:27:00.540337 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Dec 13 14:27:00.540361 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Dec 13 14:27:00.540390 systemd[1]: Created slice user.slice.
Dec 13 14:27:00.540415 systemd[1]: Started systemd-ask-password-console.path.
Dec 13 14:27:00.540441 systemd[1]: Started systemd-ask-password-wall.path.
Dec 13 14:27:00.540466 systemd[1]: Set up automount boot.automount.
Dec 13 14:27:00.540493 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Dec 13 14:27:00.540527 systemd[1]: Stopped target initrd-switch-root.target.
Dec 13 14:27:00.540553 systemd[1]: Stopped target initrd-fs.target.
Dec 13 14:27:00.540579 systemd[1]: Stopped target initrd-root-fs.target.
Dec 13 14:27:00.540613 systemd[1]: Reached target integritysetup.target.
Dec 13 14:27:00.540641 systemd[1]: Reached target remote-cryptsetup.target.
Dec 13 14:27:00.540667 systemd[1]: Reached target remote-fs.target.
Dec 13 14:27:00.540694 systemd[1]: Reached target slices.target.
Dec 13 14:27:00.540720 systemd[1]: Reached target swap.target.
Dec 13 14:27:00.540745 systemd[1]: Reached target torcx.target.
Dec 13 14:27:00.540777 systemd[1]: Reached target veritysetup.target.
Dec 13 14:27:00.540810 systemd[1]: Listening on systemd-coredump.socket.
Dec 13 14:27:00.540834 systemd[1]: Listening on systemd-initctl.socket.
Dec 13 14:27:00.540858 systemd[1]: Listening on systemd-networkd.socket.
Dec 13 14:27:00.540894 systemd[1]: Listening on systemd-udevd-control.socket.
Dec 13 14:27:00.540915 systemd[1]: Listening on systemd-udevd-kernel.socket.
Dec 13 14:27:00.540937 systemd[1]: Listening on systemd-userdbd.socket.
Dec 13 14:27:00.540961 systemd[1]: Mounting dev-hugepages.mount...
Dec 13 14:27:00.540999 systemd[1]: Mounting dev-mqueue.mount...
Dec 13 14:27:00.541032 systemd[1]: Mounting media.mount...
Dec 13 14:27:00.541062 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 14:27:00.541085 systemd[1]: Mounting sys-kernel-debug.mount...
Dec 13 14:27:00.541108 systemd[1]: Mounting sys-kernel-tracing.mount...
Dec 13 14:27:00.541131 systemd[1]: Mounting tmp.mount...
Dec 13 14:27:00.541252 systemd[1]: Starting flatcar-tmpfiles.service...
Dec 13 14:27:00.541279 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 14:27:00.541303 systemd[1]: Starting kmod-static-nodes.service...
Dec 13 14:27:00.541327 systemd[1]: Starting modprobe@configfs.service...
Dec 13 14:27:00.541350 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 14:27:00.541378 systemd[1]: Starting modprobe@drm.service...
Dec 13 14:27:00.541401 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 14:27:00.541424 systemd[1]: Starting modprobe@fuse.service...
Dec 13 14:27:00.541448 systemd[1]: Starting modprobe@loop.service...
Dec 13 14:27:00.541471 kernel: fuse: init (API version 7.34)
Dec 13 14:27:00.541494 kernel: loop: module loaded
Dec 13 14:27:00.541517 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 13 14:27:00.541539 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 13 14:27:00.541562 systemd[1]: Stopped systemd-fsck-root.service.
Dec 13 14:27:00.541592 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 13 14:27:00.541615 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 13 14:27:00.541638 systemd[1]: Stopped systemd-journald.service.
Dec 13 14:27:00.541661 systemd[1]: Starting systemd-journald.service...
Dec 13 14:27:00.541685 systemd[1]: Starting systemd-modules-load.service...
Dec 13 14:27:00.541709 systemd[1]: Starting systemd-network-generator.service...
Dec 13 14:27:00.541741 systemd-journald[1000]: Journal started
Dec 13 14:27:00.541835 systemd-journald[1000]: Runtime Journal (/run/log/journal/05b0adc9468f1c930e461bcaf189cfba) is 8.0M, max 148.8M, 140.8M free.
Dec 13 14:26:56.177000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 13 14:26:56.324000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Dec 13 14:26:56.324000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Dec 13 14:26:56.324000 audit: BPF prog-id=10 op=LOAD
Dec 13 14:26:56.324000 audit: BPF prog-id=10 op=UNLOAD
Dec 13 14:26:56.324000 audit: BPF prog-id=11 op=LOAD
Dec 13 14:26:56.324000 audit: BPF prog-id=11 op=UNLOAD
Dec 13 14:26:56.471000 audit[909]: AVC avc: denied { associate } for pid=909 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Dec 13 14:26:56.471000 audit[909]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c00014d8b2 a1=c0000cede0 a2=c0000d70c0 a3=32 items=0 ppid=892 pid=909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:26:56.471000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Dec 13 14:26:56.482000 audit[909]: AVC avc: denied { associate } for pid=909 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Dec 13 14:26:56.482000 audit[909]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c00014d989 a2=1ed a3=0 items=2 ppid=892 pid=909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:26:56.482000 audit: CWD cwd="/"
Dec 13 14:26:56.482000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:26:56.482000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:26:56.482000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Dec 13 14:26:59.647000 audit: BPF prog-id=12 op=LOAD
Dec 13 14:26:59.647000 audit: BPF prog-id=3 op=UNLOAD
Dec 13 14:26:59.659000 audit: BPF prog-id=13 op=LOAD
Dec 13 14:26:59.673000 audit: BPF prog-id=14 op=LOAD
Dec 13 14:26:59.673000 audit: BPF prog-id=4 op=UNLOAD
Dec 13 14:26:59.673000 audit: BPF prog-id=5 op=UNLOAD
Dec 13 14:26:59.694000 audit: BPF prog-id=15 op=LOAD
Dec 13 14:26:59.694000 audit: BPF prog-id=12 op=UNLOAD
Dec 13 14:26:59.708000 audit: BPF prog-id=16 op=LOAD
Dec 13 14:26:59.715000 audit: BPF prog-id=17 op=LOAD
Dec 13 14:26:59.715000 audit: BPF prog-id=13 op=UNLOAD
Dec 13 14:26:59.715000 audit: BPF prog-id=14 op=UNLOAD
Dec 13 14:26:59.722000 audit: BPF prog-id=18 op=LOAD
Dec 13 14:26:59.722000 audit: BPF prog-id=15 op=UNLOAD
Dec 13 14:26:59.722000 audit: BPF prog-id=19 op=LOAD
Dec 13 14:26:59.723000 audit: BPF prog-id=20 op=LOAD
Dec 13 14:26:59.723000 audit: BPF prog-id=16 op=UNLOAD
Dec 13 14:26:59.723000 audit: BPF prog-id=17 op=UNLOAD
Dec 13 14:26:59.724000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:59.738000 audit: BPF prog-id=18 op=UNLOAD
Dec 13 14:26:59.747000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:59.747000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:00.455000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:00.477000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:00.490000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:00.490000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:00.491000 audit: BPF prog-id=21 op=LOAD
Dec 13 14:27:00.491000 audit: BPF prog-id=22 op=LOAD
Dec 13 14:27:00.491000 audit: BPF prog-id=23 op=LOAD
Dec 13 14:27:00.491000 audit: BPF prog-id=19 op=UNLOAD
Dec 13 14:27:00.491000 audit: BPF prog-id=20 op=UNLOAD
Dec 13 14:27:00.534000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Dec 13 14:27:00.534000 audit[1000]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffe772de860 a2=4000 a3=7ffe772de8fc items=0 ppid=1 pid=1000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:27:00.534000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Dec 13 14:26:56.469152 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2024-12-13T14:26:56Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 14:26:59.646843 systemd[1]: Queued start job for default target multi-user.target.
Dec 13 14:26:56.470363 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2024-12-13T14:26:56Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Dec 13 14:26:59.725700 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 13 14:26:56.470401 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2024-12-13T14:26:56Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Dec 13 14:26:56.470459 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2024-12-13T14:26:56Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Dec 13 14:26:56.470488 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2024-12-13T14:26:56Z" level=debug msg="skipped missing lower profile" missing profile=oem
Dec 13 14:26:56.470546 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2024-12-13T14:26:56Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Dec 13 14:26:56.470572 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2024-12-13T14:26:56Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Dec 13 14:26:56.470918 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2024-12-13T14:26:56Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Dec 13 14:26:56.471028 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2024-12-13T14:26:56Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Dec 13 14:26:56.471056 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2024-12-13T14:26:56Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Dec 13 14:26:56.472260 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2024-12-13T14:26:56Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Dec 13 14:26:56.472328 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2024-12-13T14:26:56Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Dec 13 14:26:56.472366 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2024-12-13T14:26:56Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.6: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.6
Dec 13 14:26:56.472397 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2024-12-13T14:26:56Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Dec 13 14:26:56.472429 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2024-12-13T14:26:56Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.6: no such file or directory" path=/var/lib/torcx/store/3510.3.6
Dec 13 14:26:56.472455 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2024-12-13T14:26:56Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Dec 13 14:26:59.041303 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2024-12-13T14:26:59Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 14:26:59.041650 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2024-12-13T14:26:59Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 14:26:59.041798 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2024-12-13T14:26:59Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 14:26:59.042121 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2024-12-13T14:26:59Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 14:26:59.042196 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2024-12-13T14:26:59Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Dec 13 14:26:59.042279 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2024-12-13T14:26:59Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Dec 13 14:27:00.553025 systemd[1]: Starting systemd-remount-fs.service...
Dec 13 14:27:00.568009 systemd[1]: Starting systemd-udev-trigger.service...
Dec 13 14:27:00.581993 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 13 14:27:00.588004 systemd[1]: Stopped verity-setup.service.
Dec 13 14:27:00.592000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:00.607131 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 14:27:00.617017 systemd[1]: Started systemd-journald.service.
Dec 13 14:27:00.623000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:00.626544 systemd[1]: Mounted dev-hugepages.mount.
Dec 13 14:27:00.635379 systemd[1]: Mounted dev-mqueue.mount.
Dec 13 14:27:00.643336 systemd[1]: Mounted media.mount.
Dec 13 14:27:00.650316 systemd[1]: Mounted sys-kernel-debug.mount.
Dec 13 14:27:00.660334 systemd[1]: Mounted sys-kernel-tracing.mount.
Dec 13 14:27:00.669305 systemd[1]: Mounted tmp.mount.
Dec 13 14:27:00.676466 systemd[1]: Finished flatcar-tmpfiles.service.
Dec 13 14:27:00.683000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:00.685552 systemd[1]: Finished kmod-static-nodes.service.
Dec 13 14:27:00.693000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:00.695643 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 14:27:00.695877 systemd[1]: Finished modprobe@configfs.service.
Dec 13 14:27:00.702000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:00.702000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:00.704597 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 14:27:00.704832 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 14:27:00.712000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:00.712000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:00.714570 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 14:27:00.714800 systemd[1]: Finished modprobe@drm.service.
Dec 13 14:27:00.721000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:00.721000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:00.723590 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 14:27:00.723817 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 14:27:00.730000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:00.730000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:00.732691 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 13 14:27:00.732961 systemd[1]: Finished modprobe@fuse.service.
Dec 13 14:27:00.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:00.739000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:00.741539 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 14:27:00.741760 systemd[1]: Finished modprobe@loop.service.
Dec 13 14:27:00.748000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:00.748000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:00.750575 systemd[1]: Finished systemd-modules-load.service.
Dec 13 14:27:00.757000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:00.759539 systemd[1]: Finished systemd-network-generator.service.
Dec 13 14:27:00.766000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:00.768558 systemd[1]: Finished systemd-remount-fs.service.
Dec 13 14:27:00.775000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:00.777563 systemd[1]: Finished systemd-udev-trigger.service.
Dec 13 14:27:00.784000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:00.787012 systemd[1]: Reached target network-pre.target.
Dec 13 14:27:00.797043 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Dec 13 14:27:00.807820 systemd[1]: Mounting sys-kernel-config.mount...
Dec 13 14:27:00.815147 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 13 14:27:00.831163 systemd[1]: Starting systemd-hwdb-update.service...
Dec 13 14:27:00.841194 systemd[1]: Starting systemd-journal-flush.service...
Dec 13 14:27:00.849175 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 14:27:00.851112 systemd[1]: Starting systemd-random-seed.service...
Dec 13 14:27:00.858171 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 14:27:00.860053 systemd[1]: Starting systemd-sysctl.service...
Dec 13 14:27:00.864173 systemd-journald[1000]: Time spent on flushing to /var/log/journal/05b0adc9468f1c930e461bcaf189cfba is 52.878ms for 1146 entries.
Dec 13 14:27:00.864173 systemd-journald[1000]: System Journal (/var/log/journal/05b0adc9468f1c930e461bcaf189cfba) is 8.0M, max 584.8M, 576.8M free.
Dec 13 14:27:00.962160 systemd-journald[1000]: Received client request to flush runtime journal.
Dec 13 14:27:00.923000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:00.932000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:00.959000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:00.877128 systemd[1]: Starting systemd-sysusers.service...
Dec 13 14:27:00.885825 systemd[1]: Starting systemd-udev-settle.service...
Dec 13 14:27:00.898041 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Dec 13 14:27:00.964021 udevadm[1015]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Dec 13 14:27:00.906298 systemd[1]: Mounted sys-kernel-config.mount.
Dec 13 14:27:00.915519 systemd[1]: Finished systemd-random-seed.service.
Dec 13 14:27:00.925617 systemd[1]: Finished systemd-sysctl.service.
Dec 13 14:27:00.937934 systemd[1]: Reached target first-boot-complete.target.
Dec 13 14:27:00.952543 systemd[1]: Finished systemd-sysusers.service.
Dec 13 14:27:00.963507 systemd[1]: Finished systemd-journal-flush.service.
Dec 13 14:27:00.970000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:01.584867 systemd[1]: Finished systemd-hwdb-update.service.
Dec 13 14:27:01.591000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:01.592000 audit: BPF prog-id=24 op=LOAD
Dec 13 14:27:01.592000 audit: BPF prog-id=25 op=LOAD
Dec 13 14:27:01.592000 audit: BPF prog-id=7 op=UNLOAD
Dec 13 14:27:01.592000 audit: BPF prog-id=8 op=UNLOAD
Dec 13 14:27:01.595518 systemd[1]: Starting systemd-udevd.service...
Dec 13 14:27:01.619831 systemd-udevd[1018]: Using default interface naming scheme 'v252'.
Dec 13 14:27:01.669566 systemd[1]: Started systemd-udevd.service.
Dec 13 14:27:01.676000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:01.679000 audit: BPF prog-id=26 op=LOAD
Dec 13 14:27:01.683359 systemd[1]: Starting systemd-networkd.service...
Dec 13 14:27:01.696000 audit: BPF prog-id=27 op=LOAD
Dec 13 14:27:01.696000 audit: BPF prog-id=28 op=LOAD
Dec 13 14:27:01.697000 audit: BPF prog-id=29 op=LOAD
Dec 13 14:27:01.700369 systemd[1]: Starting systemd-userdbd.service...
Dec 13 14:27:01.765606 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped.
Dec 13 14:27:01.769558 systemd[1]: Started systemd-userdbd.service.
Dec 13 14:27:01.777000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:01.881025 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Dec 13 14:27:01.920076 kernel: ACPI: button: Power Button [PWRF]
Dec 13 14:27:01.958587 systemd-networkd[1031]: lo: Link UP
Dec 13 14:27:01.958611 systemd-networkd[1031]: lo: Gained carrier
Dec 13 14:27:01.959603 systemd-networkd[1031]: Enumeration completed
Dec 13 14:27:01.959787 systemd[1]: Started systemd-networkd.service.
Dec 13 14:27:01.960232 systemd-networkd[1031]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 14:27:01.963031 systemd-networkd[1031]: eth0: Link UP
Dec 13 14:27:01.963050 systemd-networkd[1031]: eth0: Gained carrier
Dec 13 14:27:01.966000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:01.974219 systemd-networkd[1031]: eth0: DHCPv4 address 10.128.0.21/32, gateway 10.128.0.1 acquired from 169.254.169.254
Dec 13 14:27:01.995824 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3
Dec 13 14:27:02.010007 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1045)
Dec 13 14:27:02.032019 kernel: ACPI: button: Sleep Button [SLPF]
Dec 13 14:27:02.044079 kernel: EDAC MC: Ver: 3.0.0
Dec 13 14:27:01.997000 audit[1042]: AVC avc: denied { confidentiality } for pid=1042 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Dec 13 14:27:01.997000 audit[1042]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55d133c3ec10 a1=337fc a2=7f9547f61bc5 a3=5 items=110 ppid=1018 pid=1042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:27:01.997000 audit: CWD cwd="/"
Dec 13 14:27:01.997000 audit: PATH item=0 name=(null) inode=1042 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:01.997000 audit: PATH item=1 name=(null) inode=14641 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:01.997000 audit: PATH item=2 name=(null) inode=14641 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:01.997000 audit: PATH item=3 name=(null) inode=14642 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:01.997000 audit: PATH item=4 name=(null) inode=14641 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:01.997000 audit: PATH item=5 name=(null) inode=14643 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:01.997000 audit: PATH item=6 name=(null) inode=14641 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:01.997000 audit: PATH item=7 name=(null) inode=14644 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:01.997000 audit: PATH item=8 name=(null) inode=14644 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:01.997000 audit: PATH item=9 name=(null) inode=14645 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:01.997000 audit: PATH item=10 name=(null) inode=14644 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:01.997000 audit: PATH item=11 name=(null) inode=14646 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:01.997000 audit: PATH item=12 name=(null) inode=14644 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:01.997000 audit: PATH item=13 name=(null) inode=14647 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:01.997000 audit: PATH item=14 name=(null) inode=14644 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:01.997000 audit: PATH item=15 name=(null) inode=14648 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:01.997000 audit: PATH item=16 name=(null) inode=14644 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:01.997000 audit: PATH item=17 name=(null) inode=14649 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:01.997000 audit: PATH item=18 
name=(null) inode=14641 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:01.997000 audit: PATH item=19 name=(null) inode=14650 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:01.997000 audit: PATH item=20 name=(null) inode=14650 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:01.997000 audit: PATH item=21 name=(null) inode=14651 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:01.997000 audit: PATH item=22 name=(null) inode=14650 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:01.997000 audit: PATH item=23 name=(null) inode=14652 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:01.997000 audit: PATH item=24 name=(null) inode=14650 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:01.997000 audit: PATH item=25 name=(null) inode=14653 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:01.997000 audit: PATH item=26 name=(null) inode=14650 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:01.997000 audit: PATH item=27 name=(null) inode=14654 dev=00:0b 
mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:01.997000 audit: PATH item=28 name=(null) inode=14650 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:01.997000 audit: PATH item=29 name=(null) inode=14655 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:01.997000 audit: PATH item=30 name=(null) inode=14641 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:01.997000 audit: PATH item=31 name=(null) inode=14656 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:01.997000 audit: PATH item=32 name=(null) inode=14656 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:01.997000 audit: PATH item=33 name=(null) inode=14657 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:01.997000 audit: PATH item=34 name=(null) inode=14656 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:01.997000 audit: PATH item=35 name=(null) inode=14658 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:01.997000 audit: PATH item=36 name=(null) inode=14656 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:01.997000 audit: PATH item=37 name=(null) inode=14659 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:01.997000 audit: PATH item=38 name=(null) inode=14656 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:01.997000 audit: PATH item=39 name=(null) inode=14660 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:01.997000 audit: PATH item=40 name=(null) inode=14656 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:01.997000 audit: PATH item=41 name=(null) inode=14661 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:01.997000 audit: PATH item=42 name=(null) inode=14641 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:01.997000 audit: PATH item=43 name=(null) inode=14662 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:01.997000 audit: PATH item=44 name=(null) inode=14662 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:01.997000 audit: PATH item=45 name=(null) inode=14663 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:01.997000 audit: PATH item=46 name=(null) inode=14662 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:01.997000 audit: PATH item=47 name=(null) inode=14664 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:01.997000 audit: PATH item=48 name=(null) inode=14662 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:01.997000 audit: PATH item=49 name=(null) inode=14665 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:01.997000 audit: PATH item=50 name=(null) inode=14662 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:01.997000 audit: PATH item=51 name=(null) inode=14666 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:01.997000 audit: PATH item=52 name=(null) inode=14662 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:01.997000 audit: PATH item=53 name=(null) inode=14667 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:01.997000 audit: PATH item=54 name=(null) inode=1042 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Dec 13 14:27:01.997000 audit: PATH item=55 name=(null) inode=14668 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:01.997000 audit: PATH item=56 name=(null) inode=14668 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:01.997000 audit: PATH item=57 name=(null) inode=14669 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:01.997000 audit: PATH item=58 name=(null) inode=14668 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:01.997000 audit: PATH item=59 name=(null) inode=14670 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:01.997000 audit: PATH item=60 name=(null) inode=14668 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:01.997000 audit: PATH item=61 name=(null) inode=14671 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:01.997000 audit: PATH item=62 name=(null) inode=14671 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:01.997000 audit: PATH item=63 name=(null) inode=14672 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 
14:27:01.997000 audit: PATH item=64 name=(null) inode=14671 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:01.997000 audit: PATH item=65 name=(null) inode=14673 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:01.997000 audit: PATH item=66 name=(null) inode=14671 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:01.997000 audit: PATH item=67 name=(null) inode=14674 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:01.997000 audit: PATH item=68 name=(null) inode=14671 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:01.997000 audit: PATH item=69 name=(null) inode=14675 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:01.997000 audit: PATH item=70 name=(null) inode=14671 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:01.997000 audit: PATH item=71 name=(null) inode=14676 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:01.997000 audit: PATH item=72 name=(null) inode=14668 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:01.997000 audit: PATH item=73 
name=(null) inode=14677 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:01.997000 audit: PATH item=74 name=(null) inode=14677 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:01.997000 audit: PATH item=75 name=(null) inode=14678 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:01.997000 audit: PATH item=76 name=(null) inode=14677 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:01.997000 audit: PATH item=77 name=(null) inode=14679 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:01.997000 audit: PATH item=78 name=(null) inode=14677 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:01.997000 audit: PATH item=79 name=(null) inode=14680 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:01.997000 audit: PATH item=80 name=(null) inode=14677 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:01.997000 audit: PATH item=81 name=(null) inode=14681 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:01.997000 audit: PATH item=82 name=(null) inode=14677 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:01.997000 audit: PATH item=83 name=(null) inode=14682 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:01.997000 audit: PATH item=84 name=(null) inode=14668 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:01.997000 audit: PATH item=85 name=(null) inode=14683 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:01.997000 audit: PATH item=86 name=(null) inode=14683 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:01.997000 audit: PATH item=87 name=(null) inode=14684 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:01.997000 audit: PATH item=88 name=(null) inode=14683 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:01.997000 audit: PATH item=89 name=(null) inode=14685 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:01.997000 audit: PATH item=90 name=(null) inode=14683 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:01.997000 audit: PATH item=91 name=(null) inode=14686 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:01.997000 audit: PATH item=92 name=(null) inode=14683 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:01.997000 audit: PATH item=93 name=(null) inode=14687 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:02.072055 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Dec 13 14:27:02.103311 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Dec 13 14:27:01.997000 audit: PATH item=94 name=(null) inode=14683 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:01.997000 audit: PATH item=95 name=(null) inode=14688 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:01.997000 audit: PATH item=96 name=(null) inode=14668 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:01.997000 audit: PATH item=97 name=(null) inode=14689 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:01.997000 audit: PATH item=98 name=(null) inode=14689 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:01.997000 audit: PATH item=99 name=(null) inode=14690 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:01.997000 audit: PATH item=100 name=(null) inode=14689 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:01.997000 audit: PATH item=101 name=(null) inode=14691 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:01.997000 audit: PATH item=102 name=(null) inode=14689 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:01.997000 audit: PATH item=103 name=(null) inode=14692 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:01.997000 audit: PATH item=104 name=(null) inode=14689 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:01.997000 audit: PATH item=105 name=(null) inode=14693 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:01.997000 audit: PATH item=106 name=(null) inode=14689 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:01.997000 audit: PATH item=107 name=(null) inode=14694 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:01.997000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:01.997000 audit: PATH item=109 name=(null) inode=14695 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:01.997000 audit: PROCTITLE proctitle="(udev-worker)"
Dec 13 14:27:02.117754 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Dec 13 14:27:02.132021 kernel: mousedev: PS/2 mouse device common for all mice
Dec 13 14:27:02.152595 systemd[1]: Finished systemd-udev-settle.service.
Dec 13 14:27:02.159000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:02.163011 systemd[1]: Starting lvm2-activation-early.service...
Dec 13 14:27:02.190889 lvm[1055]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 14:27:02.220734 systemd[1]: Finished lvm2-activation-early.service.
Dec 13 14:27:02.227000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:02.229379 systemd[1]: Reached target cryptsetup.target.
Dec 13 14:27:02.239987 systemd[1]: Starting lvm2-activation.service...
Dec 13 14:27:02.246816 lvm[1056]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 14:27:02.275788 systemd[1]: Finished lvm2-activation.service.
Dec 13 14:27:02.282000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:02.284350 systemd[1]: Reached target local-fs-pre.target. 
Dec 13 14:27:02.293142 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 13 14:27:02.293212 systemd[1]: Reached target local-fs.target.
Dec 13 14:27:02.301147 systemd[1]: Reached target machines.target.
Dec 13 14:27:02.311020 systemd[1]: Starting ldconfig.service...
Dec 13 14:27:02.319187 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 14:27:02.319289 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:27:02.321120 systemd[1]: Starting systemd-boot-update.service...
Dec 13 14:27:02.330538 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Dec 13 14:27:02.342891 systemd[1]: Starting systemd-machine-id-commit.service...
Dec 13 14:27:02.344923 systemd[1]: Starting systemd-sysext.service...
Dec 13 14:27:02.346625 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1058 (bootctl)
Dec 13 14:27:02.348824 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Dec 13 14:27:02.371333 systemd[1]: Unmounting usr-share-oem.mount...
Dec 13 14:27:02.374956 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Dec 13 14:27:02.373000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:02.384262 systemd[1]: usr-share-oem.mount: Deactivated successfully.
Dec 13 14:27:02.384504 systemd[1]: Unmounted usr-share-oem.mount. 
Dec 13 14:27:02.411303 kernel: loop0: detected capacity change from 0 to 211296
Dec 13 14:27:02.506466 systemd-fsck[1069]: fsck.fat 4.2 (2021-01-31)
Dec 13 14:27:02.506466 systemd-fsck[1069]: /dev/sda1: 789 files, 119291/258078 clusters
Dec 13 14:27:02.509032 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Dec 13 14:27:02.518000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:02.523513 systemd[1]: Mounting boot.mount...
Dec 13 14:27:02.550558 systemd[1]: Mounted boot.mount.
Dec 13 14:27:02.559545 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 13 14:27:02.560551 systemd[1]: Finished systemd-machine-id-commit.service.
Dec 13 14:27:02.567000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:02.577949 systemd[1]: Finished systemd-boot-update.service.
Dec 13 14:27:02.584000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:02.602153 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 13 14:27:02.629005 kernel: loop1: detected capacity change from 0 to 211296
Dec 13 14:27:02.649487 (sd-sysext)[1073]: Using extensions 'kubernetes'.
Dec 13 14:27:02.650242 (sd-sysext)[1073]: Merged extensions into '/usr'.
Dec 13 14:27:02.676371 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 14:27:02.678790 systemd[1]: Mounting usr-share-oem.mount... 
Dec 13 14:27:02.686378 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 14:27:02.688295 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 14:27:02.696539 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 14:27:02.704896 systemd[1]: Starting modprobe@loop.service...
Dec 13 14:27:02.712182 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 14:27:02.712400 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:27:02.712609 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 14:27:02.716987 systemd[1]: Mounted usr-share-oem.mount.
Dec 13 14:27:02.724657 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 14:27:02.724903 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 14:27:02.731000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:02.731000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:02.733800 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 14:27:02.734085 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 14:27:02.740000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:02.740000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:02.742795 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 14:27:02.743033 systemd[1]: Finished modprobe@loop.service.
Dec 13 14:27:02.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:02.749000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:02.751851 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 14:27:02.752343 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 14:27:02.755465 systemd[1]: Finished systemd-sysext.service.
Dec 13 14:27:02.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:02.766828 systemd[1]: Starting ensure-sysext.service...
Dec 13 14:27:02.775951 systemd[1]: Starting systemd-tmpfiles-setup.service...
Dec 13 14:27:02.788012 systemd[1]: Reloading.
Dec 13 14:27:02.814910 systemd-tmpfiles[1080]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Dec 13 14:27:02.823714 systemd-tmpfiles[1080]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
Dec 13 14:27:02.845227 systemd-tmpfiles[1080]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 13 14:27:02.949486 /usr/lib/systemd/system-generators/torcx-generator[1102]: time="2024-12-13T14:27:02Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 14:27:02.949559 /usr/lib/systemd/system-generators/torcx-generator[1102]: time="2024-12-13T14:27:02Z" level=info msg="torcx already run"
Dec 13 14:27:02.981743 ldconfig[1057]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 13 14:27:03.098562 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 14:27:03.099028 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 14:27:03.139089 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Dec 13 14:27:03.222000 audit: BPF prog-id=30 op=LOAD Dec 13 14:27:03.223000 audit: BPF prog-id=26 op=UNLOAD Dec 13 14:27:03.226000 audit: BPF prog-id=31 op=LOAD Dec 13 14:27:03.226000 audit: BPF prog-id=21 op=UNLOAD Dec 13 14:27:03.226000 audit: BPF prog-id=32 op=LOAD Dec 13 14:27:03.226000 audit: BPF prog-id=33 op=LOAD Dec 13 14:27:03.226000 audit: BPF prog-id=22 op=UNLOAD Dec 13 14:27:03.226000 audit: BPF prog-id=23 op=UNLOAD Dec 13 14:27:03.226000 audit: BPF prog-id=34 op=LOAD Dec 13 14:27:03.226000 audit: BPF prog-id=35 op=LOAD Dec 13 14:27:03.226000 audit: BPF prog-id=24 op=UNLOAD Dec 13 14:27:03.226000 audit: BPF prog-id=25 op=UNLOAD Dec 13 14:27:03.227000 audit: BPF prog-id=36 op=LOAD Dec 13 14:27:03.227000 audit: BPF prog-id=27 op=UNLOAD Dec 13 14:27:03.227000 audit: BPF prog-id=37 op=LOAD Dec 13 14:27:03.227000 audit: BPF prog-id=38 op=LOAD Dec 13 14:27:03.228000 audit: BPF prog-id=28 op=UNLOAD Dec 13 14:27:03.228000 audit: BPF prog-id=29 op=UNLOAD Dec 13 14:27:03.234516 systemd[1]: Finished ldconfig.service. Dec 13 14:27:03.239000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:03.243116 systemd[1]: Finished systemd-tmpfiles-setup.service. Dec 13 14:27:03.250000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:03.258696 systemd[1]: Starting audit-rules.service... Dec 13 14:27:03.268235 systemd[1]: Starting clean-ca-certificates.service... Dec 13 14:27:03.279444 systemd[1]: Starting oem-gce-enable-oslogin.service... Dec 13 14:27:03.290178 systemd[1]: Starting systemd-journal-catalog-update.service... 
Dec 13 14:27:03.298000 audit: BPF prog-id=39 op=LOAD Dec 13 14:27:03.302164 systemd[1]: Starting systemd-resolved.service... Dec 13 14:27:03.308000 audit: BPF prog-id=40 op=LOAD Dec 13 14:27:03.312357 systemd[1]: Starting systemd-timesyncd.service... Dec 13 14:27:03.321747 systemd[1]: Starting systemd-update-utmp.service... Dec 13 14:27:03.331305 systemd[1]: Finished clean-ca-certificates.service. Dec 13 14:27:03.333000 audit[1170]: SYSTEM_BOOT pid=1170 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 13 14:27:03.337000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 13 14:27:03.337000 audit[1174]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff489d8920 a2=420 a3=0 items=0 ppid=1145 pid=1174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:27:03.337000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 13 14:27:03.340233 augenrules[1174]: No rules Dec 13 14:27:03.341016 systemd[1]: Finished audit-rules.service. Dec 13 14:27:03.348754 systemd[1]: oem-gce-enable-oslogin.service: Deactivated successfully. Dec 13 14:27:03.349050 systemd[1]: Finished oem-gce-enable-oslogin.service. Dec 13 14:27:03.357770 systemd[1]: Finished systemd-journal-catalog-update.service. Dec 13 14:27:03.374275 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:27:03.380495 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:27:03.389364 systemd[1]: Starting modprobe@efi_pstore.service... 
Dec 13 14:27:03.390158 systemd-networkd[1031]: eth0: Gained IPv6LL Dec 13 14:27:03.398326 systemd[1]: Starting modprobe@loop.service... Dec 13 14:27:03.407285 systemd[1]: Starting oem-gce-enable-oslogin.service... Dec 13 14:27:03.414422 enable-oslogin[1183]: /etc/pam.d/sshd already exists. Not enabling OS Login Dec 13 14:27:03.416166 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:27:03.416540 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:27:03.419341 systemd[1]: Starting systemd-update-done.service... Dec 13 14:27:03.426108 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 14:27:03.433073 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:27:03.435154 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:27:03.444166 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:27:03.444424 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:27:03.454090 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:27:03.454311 systemd[1]: Finished modprobe@loop.service. Dec 13 14:27:03.464040 systemd[1]: oem-gce-enable-oslogin.service: Deactivated successfully. Dec 13 14:27:03.464299 systemd[1]: Finished oem-gce-enable-oslogin.service. Dec 13 14:27:03.474002 systemd[1]: Finished systemd-update-done.service. Dec 13 14:27:03.483569 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:27:03.483806 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Dec 13 14:27:03.483994 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:27:03.484105 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:27:03.485017 systemd[1]: Finished systemd-update-utmp.service. Dec 13 14:27:03.497454 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:27:03.497905 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:27:03.501253 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:27:03.506898 systemd-timesyncd[1166]: Contacted time server 169.254.169.254:123 (169.254.169.254). Dec 13 14:27:03.507006 systemd-timesyncd[1166]: Initial clock synchronization to Fri 2024-12-13 14:27:03.592339 UTC. Dec 13 14:27:03.511422 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:27:03.513661 systemd-resolved[1161]: Positive Trust Anchors: Dec 13 14:27:03.513681 systemd-resolved[1161]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 14:27:03.513735 systemd-resolved[1161]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 14:27:03.521137 systemd[1]: Starting modprobe@loop.service... Dec 13 14:27:03.530114 systemd[1]: Starting oem-gce-enable-oslogin.service... Dec 13 14:27:03.536402 enable-oslogin[1189]: /etc/pam.d/sshd already exists. 
Not enabling OS Login Dec 13 14:27:03.536222 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:27:03.536475 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:27:03.536673 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 14:27:03.536807 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:27:03.538366 systemd[1]: Started systemd-timesyncd.service. Dec 13 14:27:03.548290 systemd-resolved[1161]: Defaulting to hostname 'linux'. Dec 13 14:27:03.549720 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:27:03.550003 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:27:03.558502 systemd[1]: Started systemd-resolved.service. Dec 13 14:27:03.567635 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:27:03.567865 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:27:03.576702 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:27:03.576925 systemd[1]: Finished modprobe@loop.service. Dec 13 14:27:03.585728 systemd[1]: oem-gce-enable-oslogin.service: Deactivated successfully. Dec 13 14:27:03.585999 systemd[1]: Finished oem-gce-enable-oslogin.service. Dec 13 14:27:03.594795 systemd[1]: Reached target network.target. Dec 13 14:27:03.603247 systemd[1]: Reached target nss-lookup.target. Dec 13 14:27:03.613241 systemd[1]: Reached target time-set.target. Dec 13 14:27:03.622200 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:27:03.622461 systemd[1]: Reached target sysinit.target. 
Dec 13 14:27:03.631408 systemd[1]: Started motdgen.path. Dec 13 14:27:03.638335 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Dec 13 14:27:03.648524 systemd[1]: Started logrotate.timer. Dec 13 14:27:03.655432 systemd[1]: Started mdadm.timer. Dec 13 14:27:03.662294 systemd[1]: Started systemd-tmpfiles-clean.timer. Dec 13 14:27:03.671229 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 14:27:03.671435 systemd[1]: Reached target paths.target. Dec 13 14:27:03.678230 systemd[1]: Reached target timers.target. Dec 13 14:27:03.685818 systemd[1]: Listening on dbus.socket. Dec 13 14:27:03.695234 systemd[1]: Starting docker.socket... Dec 13 14:27:03.706738 systemd[1]: Listening on sshd.socket. Dec 13 14:27:03.714406 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:27:03.714734 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:27:03.717938 systemd[1]: Listening on docker.socket. Dec 13 14:27:03.728100 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 14:27:03.728399 systemd[1]: Reached target sockets.target. Dec 13 14:27:03.737258 systemd[1]: Reached target basic.target. Dec 13 14:27:03.744243 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 14:27:03.744528 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 14:27:03.746958 systemd[1]: Starting containerd.service... Dec 13 14:27:03.756250 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Dec 13 14:27:03.769336 systemd[1]: Starting dbus.service... 
Dec 13 14:27:03.778109 systemd[1]: Starting enable-oem-cloudinit.service... Dec 13 14:27:03.787176 systemd[1]: Starting extend-filesystems.service... Dec 13 14:27:03.794110 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Dec 13 14:27:03.796491 systemd[1]: Starting modprobe@drm.service... Dec 13 14:27:03.798296 jq[1195]: false Dec 13 14:27:03.805714 systemd[1]: Starting motdgen.service... Dec 13 14:27:03.817091 systemd[1]: Starting ssh-key-proc-cmdline.service... Dec 13 14:27:03.826269 systemd[1]: Starting sshd-keygen.service... Dec 13 14:27:03.835219 systemd[1]: Starting systemd-networkd-wait-online.service... Dec 13 14:27:03.844111 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:27:03.844373 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionSecurity=!tpm2). Dec 13 14:27:03.845457 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 14:27:03.847010 systemd[1]: Starting update-engine.service... Dec 13 14:27:03.856051 systemd[1]: Starting update-ssh-keys-after-ignition.service... Dec 13 14:27:03.869829 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 14:27:03.870134 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Dec 13 14:27:03.871017 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 14:27:03.871236 systemd[1]: Finished modprobe@drm.service. Dec 13 14:27:03.876583 jq[1216]: true Dec 13 14:27:03.881847 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 14:27:03.882118 systemd[1]: Finished motdgen.service. 
Dec 13 14:27:03.888052 extend-filesystems[1197]: Found loop1 Dec 13 14:27:03.888052 extend-filesystems[1197]: Found sda Dec 13 14:27:03.888052 extend-filesystems[1197]: Found sda1 Dec 13 14:27:03.888052 extend-filesystems[1197]: Found sda2 Dec 13 14:27:03.888052 extend-filesystems[1197]: Found sda3 Dec 13 14:27:03.888052 extend-filesystems[1197]: Found usr Dec 13 14:27:03.888052 extend-filesystems[1197]: Found sda4 Dec 13 14:27:03.888052 extend-filesystems[1197]: Found sda6 Dec 13 14:27:03.888052 extend-filesystems[1197]: Found sda7 Dec 13 14:27:03.888052 extend-filesystems[1197]: Found sda9 Dec 13 14:27:03.888052 extend-filesystems[1197]: Checking size of /dev/sda9 Dec 13 14:27:04.125666 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks Dec 13 14:27:04.125757 kernel: EXT4-fs (sda9): resized filesystem to 2538491 Dec 13 14:27:04.125846 update_engine[1214]: I1213 14:27:04.022897 1214 main.cc:92] Flatcar Update Engine starting Dec 13 14:27:04.125846 update_engine[1214]: I1213 14:27:04.030592 1214 update_check_scheduler.cc:74] Next update check in 8m3s Dec 13 14:27:03.997290 dbus-daemon[1194]: [system] SELinux support is enabled Dec 13 14:27:04.126651 extend-filesystems[1197]: Resized partition /dev/sda9 Dec 13 14:27:03.889945 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
Dec 13 14:27:04.000057 dbus-daemon[1194]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1031 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Dec 13 14:27:04.130656 extend-filesystems[1224]: resize2fs 1.46.5 (30-Dec-2021) Dec 13 14:27:04.130656 extend-filesystems[1224]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Dec 13 14:27:04.130656 extend-filesystems[1224]: old_desc_blocks = 1, new_desc_blocks = 2 Dec 13 14:27:04.130656 extend-filesystems[1224]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long. Dec 13 14:27:03.890222 systemd[1]: Finished ssh-key-proc-cmdline.service. Dec 13 14:27:04.089813 dbus-daemon[1194]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 13 14:27:04.169669 jq[1225]: true Dec 13 14:27:04.170116 extend-filesystems[1197]: Resized filesystem in /dev/sda9 Dec 13 14:27:03.917221 systemd[1]: Finished systemd-networkd-wait-online.service. Dec 13 14:27:03.960562 systemd[1]: Finished ensure-sysext.service. Dec 13 14:27:03.989030 systemd[1]: Created slice system-sshd.slice. Dec 13 14:27:03.997093 systemd[1]: Reached target network-online.target. Dec 13 14:27:04.009178 systemd[1]: Starting kubelet.service... Dec 13 14:27:04.025117 systemd[1]: Starting oem-gce.service... Dec 13 14:27:04.039938 systemd[1]: Starting systemd-logind.service... 
Dec 13 14:27:04.204160 mkfs.ext4[1245]: mke2fs 1.46.5 (30-Dec-2021) Dec 13 14:27:04.204160 mkfs.ext4[1245]: Discarding device blocks: done Dec 13 14:27:04.204160 mkfs.ext4[1245]: Creating filesystem with 262144 4k blocks and 65536 inodes Dec 13 14:27:04.204160 mkfs.ext4[1245]: Filesystem UUID: d8afa76f-08b2-4e09-a428-00c72c4f1a3b Dec 13 14:27:04.204160 mkfs.ext4[1245]: Superblock backups stored on blocks: Dec 13 14:27:04.204160 mkfs.ext4[1245]: 32768, 98304, 163840, 229376 Dec 13 14:27:04.204160 mkfs.ext4[1245]: Allocating group tables: done Dec 13 14:27:04.204160 mkfs.ext4[1245]: Writing inode tables: done Dec 13 14:27:04.204160 mkfs.ext4[1245]: Creating journal (8192 blocks): done Dec 13 14:27:04.204160 mkfs.ext4[1245]: Writing superblocks and filesystem accounting information: done Dec 13 14:27:04.059550 systemd[1]: Started dbus.service. Dec 13 14:27:04.070667 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 14:27:04.070940 systemd[1]: Finished extend-filesystems.service. Dec 13 14:27:04.093275 systemd[1]: Started update-engine.service. Dec 13 14:27:04.126636 systemd[1]: Started locksmithd.service. Dec 13 14:27:04.141516 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 14:27:04.141583 systemd[1]: Reached target system-config.target. Dec 13 14:27:04.162139 systemd[1]: Starting systemd-hostnamed.service... Dec 13 14:27:04.193945 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
Dec 13 14:27:04.194092 systemd[1]: Reached target user-config.target. Dec 13 14:27:04.240198 env[1226]: time="2024-12-13T14:27:04.238452152Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Dec 13 14:27:04.251806 bash[1262]: Updated "/home/core/.ssh/authorized_keys" Dec 13 14:27:04.252277 umount[1261]: umount: /var/lib/flatcar-oem-gce.img: not mounted. Dec 13 14:27:04.253016 systemd[1]: Finished update-ssh-keys-after-ignition.service. Dec 13 14:27:04.277240 coreos-metadata[1193]: Dec 13 14:27:04.277 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Dec 13 14:27:04.281009 coreos-metadata[1193]: Dec 13 14:27:04.280 INFO Fetch failed with 404: resource not found Dec 13 14:27:04.281009 coreos-metadata[1193]: Dec 13 14:27:04.280 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Dec 13 14:27:04.281477 coreos-metadata[1193]: Dec 13 14:27:04.281 INFO Fetch successful Dec 13 14:27:04.281477 coreos-metadata[1193]: Dec 13 14:27:04.281 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Dec 13 14:27:04.281903 coreos-metadata[1193]: Dec 13 14:27:04.281 INFO Fetch failed with 404: resource not found Dec 13 14:27:04.281903 coreos-metadata[1193]: Dec 13 14:27:04.281 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Dec 13 14:27:04.282302 coreos-metadata[1193]: Dec 13 14:27:04.282 INFO Fetch failed with 404: resource not found Dec 13 14:27:04.282302 coreos-metadata[1193]: Dec 13 14:27:04.282 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Dec 13 14:27:04.283093 coreos-metadata[1193]: Dec 13 14:27:04.283 INFO Fetch successful Dec 13 14:27:04.285859 unknown[1193]: wrote ssh authorized keys file for user: core Dec 13 14:27:04.306019 kernel: loop2: detected capacity change from 0 to 2097152 
Dec 13 14:27:04.318251 update-ssh-keys[1265]: Updated "/home/core/.ssh/authorized_keys" Dec 13 14:27:04.319573 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Dec 13 14:27:04.366911 systemd-logind[1237]: Watching system buttons on /dev/input/event1 (Power Button) Dec 13 14:27:04.367492 systemd-logind[1237]: Watching system buttons on /dev/input/event2 (Sleep Button) Dec 13 14:27:04.367695 systemd-logind[1237]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 14:27:04.368119 systemd-logind[1237]: New seat seat0. Dec 13 14:27:04.378614 systemd[1]: Started systemd-logind.service. Dec 13 14:27:04.389999 kernel: EXT4-fs (loop2): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Dec 13 14:27:04.431427 env[1226]: time="2024-12-13T14:27:04.431358457Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 14:27:04.431866 env[1226]: time="2024-12-13T14:27:04.431836666Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:27:04.436488 env[1226]: time="2024-12-13T14:27:04.435840551Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 14:27:04.436488 env[1226]: time="2024-12-13T14:27:04.435892078Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:27:04.438027 env[1226]: time="2024-12-13T14:27:04.437658571Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 14:27:04.438027 env[1226]: time="2024-12-13T14:27:04.437701007Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 14:27:04.438027 env[1226]: time="2024-12-13T14:27:04.437728058Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Dec 13 14:27:04.438027 env[1226]: time="2024-12-13T14:27:04.437747120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 14:27:04.440028 env[1226]: time="2024-12-13T14:27:04.438958365Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:27:04.440028 env[1226]: time="2024-12-13T14:27:04.439421984Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:27:04.440028 env[1226]: time="2024-12-13T14:27:04.439684034Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 14:27:04.440028 env[1226]: time="2024-12-13T14:27:04.439713209Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Dec 13 14:27:04.440028 env[1226]: time="2024-12-13T14:27:04.439788506Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Dec 13 14:27:04.440028 env[1226]: time="2024-12-13T14:27:04.439809292Z" level=info msg="metadata content store policy set" policy=shared Dec 13 14:27:04.451853 env[1226]: time="2024-12-13T14:27:04.451054550Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 14:27:04.451853 env[1226]: time="2024-12-13T14:27:04.451126787Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 14:27:04.451853 env[1226]: time="2024-12-13T14:27:04.451151045Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 14:27:04.451853 env[1226]: time="2024-12-13T14:27:04.451218919Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 14:27:04.451853 env[1226]: time="2024-12-13T14:27:04.451246473Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 14:27:04.451853 env[1226]: time="2024-12-13T14:27:04.451346618Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 14:27:04.451853 env[1226]: time="2024-12-13T14:27:04.451384996Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 14:27:04.451853 env[1226]: time="2024-12-13T14:27:04.451411535Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 14:27:04.451853 env[1226]: time="2024-12-13T14:27:04.451437824Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." 
type=io.containerd.service.v1 Dec 13 14:27:04.451853 env[1226]: time="2024-12-13T14:27:04.451462371Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 14:27:04.451853 env[1226]: time="2024-12-13T14:27:04.451486432Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 14:27:04.451853 env[1226]: time="2024-12-13T14:27:04.451509783Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 14:27:04.451853 env[1226]: time="2024-12-13T14:27:04.451668525Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 14:27:04.451853 env[1226]: time="2024-12-13T14:27:04.451785167Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 14:27:04.453212 env[1226]: time="2024-12-13T14:27:04.453182526Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 14:27:04.453376 env[1226]: time="2024-12-13T14:27:04.453352770Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 14:27:04.453519 env[1226]: time="2024-12-13T14:27:04.453498366Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 14:27:04.453840 env[1226]: time="2024-12-13T14:27:04.453816063Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 14:27:04.453994 env[1226]: time="2024-12-13T14:27:04.453946264Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 14:27:04.454100 env[1226]: time="2024-12-13T14:27:04.454079611Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 Dec 13 14:27:04.454219 env[1226]: time="2024-12-13T14:27:04.454199296Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 14:27:04.454321 env[1226]: time="2024-12-13T14:27:04.454302194Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 14:27:04.455046 env[1226]: time="2024-12-13T14:27:04.455021694Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 14:27:04.455153 env[1226]: time="2024-12-13T14:27:04.455133882Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 14:27:04.455259 env[1226]: time="2024-12-13T14:27:04.455241120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 14:27:04.455400 env[1226]: time="2024-12-13T14:27:04.455381507Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 14:27:04.455728 env[1226]: time="2024-12-13T14:27:04.455705103Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 14:27:04.455854 env[1226]: time="2024-12-13T14:27:04.455834729Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 14:27:04.456007 env[1226]: time="2024-12-13T14:27:04.455957069Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 14:27:04.456132 env[1226]: time="2024-12-13T14:27:04.456111030Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 14:27:04.456258 env[1226]: time="2024-12-13T14:27:04.456232890Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Dec 13 14:27:04.457050 env[1226]: time="2024-12-13T14:27:04.457022410Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 14:27:04.457199 env[1226]: time="2024-12-13T14:27:04.457174951Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Dec 13 14:27:04.457365 env[1226]: time="2024-12-13T14:27:04.457343365Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Dec 13 14:27:04.458007 env[1226]: time="2024-12-13T14:27:04.457839525Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 
SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 14:27:04.461937 env[1226]: time="2024-12-13T14:27:04.459055724Z" level=info msg="Connect containerd service" Dec 13 14:27:04.461937 env[1226]: time="2024-12-13T14:27:04.459136823Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 14:27:04.463926 env[1226]: time="2024-12-13T14:27:04.463890885Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 14:27:04.466418 env[1226]: time="2024-12-13T14:27:04.466367256Z" level=info msg="Start subscribing containerd event" Dec 13 14:27:04.466608 env[1226]: time="2024-12-13T14:27:04.466583369Z" level=info msg="Start recovering state" Dec 13 14:27:04.466810 env[1226]: time="2024-12-13T14:27:04.466790265Z" level=info msg="Start event monitor" Dec 13 14:27:04.466908 env[1226]: time="2024-12-13T14:27:04.466890186Z" level=info msg="Start snapshots syncer" Dec 13 14:27:04.467039 env[1226]: time="2024-12-13T14:27:04.467019248Z" level=info msg="Start cni network conf syncer for default" Dec 13 14:27:04.467132 env[1226]: 
time="2024-12-13T14:27:04.467113990Z" level=info msg="Start streaming server" Dec 13 14:27:04.467699 env[1226]: time="2024-12-13T14:27:04.467675181Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 14:27:04.469109 env[1226]: time="2024-12-13T14:27:04.469081785Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 14:27:04.469384 systemd[1]: Started containerd.service. Dec 13 14:27:04.469725 env[1226]: time="2024-12-13T14:27:04.469700547Z" level=info msg="containerd successfully booted in 0.233353s" Dec 13 14:27:04.619134 dbus-daemon[1194]: [system] Successfully activated service 'org.freedesktop.hostname1' Dec 13 14:27:04.619495 systemd[1]: Started systemd-hostnamed.service. Dec 13 14:27:04.620663 dbus-daemon[1194]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1252 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Dec 13 14:27:04.633570 systemd[1]: Starting polkit.service... Dec 13 14:27:04.705611 polkitd[1273]: Started polkitd version 121 Dec 13 14:27:04.730680 polkitd[1273]: Loading rules from directory /etc/polkit-1/rules.d Dec 13 14:27:04.731174 polkitd[1273]: Loading rules from directory /usr/share/polkit-1/rules.d Dec 13 14:27:04.735387 polkitd[1273]: Finished loading, compiling and executing 2 rules Dec 13 14:27:04.736253 dbus-daemon[1194]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Dec 13 14:27:04.736511 systemd[1]: Started polkit.service. Dec 13 14:27:04.737768 polkitd[1273]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Dec 13 14:27:04.759284 systemd-hostnamed[1252]: Hostname set to (transient) Dec 13 14:27:04.763591 systemd-resolved[1161]: System hostname changed to 'ci-3510-3-6-ba5111ee7d0eaee3802e.c.flatcar-212911.internal'. Dec 13 14:27:05.964386 systemd[1]: Started kubelet.service. 
Dec 13 14:27:06.481997 sshd_keygen[1217]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 14:27:06.528829 systemd[1]: Finished sshd-keygen.service. Dec 13 14:27:06.539213 systemd[1]: Starting issuegen.service... Dec 13 14:27:06.549406 systemd[1]: Started sshd@0-10.128.0.21:22-139.178.68.195:45852.service. Dec 13 14:27:06.562052 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 14:27:06.562318 systemd[1]: Finished issuegen.service. Dec 13 14:27:06.573351 systemd[1]: Starting systemd-user-sessions.service... Dec 13 14:27:06.601597 systemd[1]: Finished systemd-user-sessions.service. Dec 13 14:27:06.613272 systemd[1]: Started getty@tty1.service. Dec 13 14:27:06.622881 systemd[1]: Started serial-getty@ttyS0.service. Dec 13 14:27:06.624421 locksmithd[1246]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 14:27:06.631943 systemd[1]: Reached target getty.target. Dec 13 14:27:06.915337 sshd[1299]: Accepted publickey for core from 139.178.68.195 port 45852 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk Dec 13 14:27:06.920752 sshd[1299]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:27:06.942384 systemd[1]: Created slice user-500.slice. Dec 13 14:27:06.951559 systemd[1]: Starting user-runtime-dir@500.service... Dec 13 14:27:06.964093 systemd-logind[1237]: New session 1 of user core. Dec 13 14:27:06.973124 systemd[1]: Finished user-runtime-dir@500.service. Dec 13 14:27:06.983898 systemd[1]: Starting user@500.service... Dec 13 14:27:07.012361 (systemd)[1308]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:27:07.203943 systemd[1308]: Queued start job for default target default.target. Dec 13 14:27:07.204920 systemd[1308]: Reached target paths.target. Dec 13 14:27:07.204950 systemd[1308]: Reached target sockets.target. Dec 13 14:27:07.204999 systemd[1308]: Reached target timers.target. 
Dec 13 14:27:07.205021 systemd[1308]: Reached target basic.target. Dec 13 14:27:07.205105 systemd[1308]: Reached target default.target. Dec 13 14:27:07.205167 systemd[1308]: Startup finished in 172ms. Dec 13 14:27:07.205340 systemd[1]: Started user@500.service. Dec 13 14:27:07.213667 systemd[1]: Started session-1.scope. Dec 13 14:27:07.456855 systemd[1]: Started sshd@1-10.128.0.21:22-139.178.68.195:44538.service. Dec 13 14:27:07.473456 kubelet[1285]: E1213 14:27:07.473366 1285 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:27:07.493371 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:27:07.493610 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:27:07.494086 systemd[1]: kubelet.service: Consumed 1.521s CPU time. Dec 13 14:27:07.765058 sshd[1318]: Accepted publickey for core from 139.178.68.195 port 44538 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk Dec 13 14:27:07.767534 sshd[1318]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:27:07.776075 systemd-logind[1237]: New session 2 of user core. Dec 13 14:27:07.777135 systemd[1]: Started session-2.scope. Dec 13 14:27:07.985812 sshd[1318]: pam_unix(sshd:session): session closed for user core Dec 13 14:27:07.993078 systemd[1]: sshd@1-10.128.0.21:22-139.178.68.195:44538.service: Deactivated successfully. Dec 13 14:27:07.994503 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 14:27:07.997508 systemd-logind[1237]: Session 2 logged out. Waiting for processes to exit. Dec 13 14:27:07.999116 systemd-logind[1237]: Removed session 2. Dec 13 14:27:08.032714 systemd[1]: Started sshd@2-10.128.0.21:22-139.178.68.195:44552.service. 
Dec 13 14:27:08.336498 sshd[1324]: Accepted publickey for core from 139.178.68.195 port 44552 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk Dec 13 14:27:08.337586 sshd[1324]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:27:08.346231 systemd[1]: Started session-3.scope. Dec 13 14:27:08.347693 systemd-logind[1237]: New session 3 of user core. Dec 13 14:27:08.556302 sshd[1324]: pam_unix(sshd:session): session closed for user core Dec 13 14:27:08.561955 systemd[1]: sshd@2-10.128.0.21:22-139.178.68.195:44552.service: Deactivated successfully. Dec 13 14:27:08.563282 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 14:27:08.566416 systemd-logind[1237]: Session 3 logged out. Waiting for processes to exit. Dec 13 14:27:08.568235 systemd-logind[1237]: Removed session 3. Dec 13 14:27:10.030157 systemd[1]: var-lib-flatcar\x2doem\x2dgce.mount: Deactivated successfully. Dec 13 14:27:12.250035 kernel: loop2: detected capacity change from 0 to 2097152 Dec 13 14:27:12.265525 systemd-nspawn[1330]: Spawning container oem-gce on /var/lib/flatcar-oem-gce.img. Dec 13 14:27:12.265525 systemd-nspawn[1330]: Press ^] three times within 1s to kill container. Dec 13 14:27:12.281014 kernel: EXT4-fs (loop2): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Dec 13 14:27:12.301467 systemd[1]: tmp-unifiedshvZjN.mount: Deactivated successfully. Dec 13 14:27:12.366174 systemd[1]: Started oem-gce.service. Dec 13 14:27:12.366665 systemd[1]: Reached target multi-user.target. Dec 13 14:27:12.369037 systemd[1]: Starting systemd-update-utmp-runlevel.service... Dec 13 14:27:12.380595 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Dec 13 14:27:12.380842 systemd[1]: Finished systemd-update-utmp-runlevel.service. Dec 13 14:27:12.381154 systemd[1]: Startup finished in 1.125s (kernel) + 7.204s (initrd) + 16.329s (userspace) = 24.659s. 
Dec 13 14:27:12.418319 systemd-nspawn[1330]: + '[' -e /etc/default/instance_configs.cfg.template ']' Dec 13 14:27:12.418319 systemd-nspawn[1330]: + echo -e '[InstanceSetup]\nset_host_keys = false' Dec 13 14:27:12.418596 systemd-nspawn[1330]: + /usr/bin/google_instance_setup Dec 13 14:27:12.998929 instance-setup[1336]: INFO Running google_set_multiqueue. Dec 13 14:27:13.015027 instance-setup[1336]: INFO Set channels for eth0 to 2. Dec 13 14:27:13.018882 instance-setup[1336]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1. Dec 13 14:27:13.020274 instance-setup[1336]: INFO /proc/irq/31/smp_affinity_list: real affinity 0 Dec 13 14:27:13.020775 instance-setup[1336]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1. Dec 13 14:27:13.022253 instance-setup[1336]: INFO /proc/irq/32/smp_affinity_list: real affinity 0 Dec 13 14:27:13.022610 instance-setup[1336]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1. Dec 13 14:27:13.023996 instance-setup[1336]: INFO /proc/irq/33/smp_affinity_list: real affinity 1 Dec 13 14:27:13.024452 instance-setup[1336]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1. Dec 13 14:27:13.025882 instance-setup[1336]: INFO /proc/irq/34/smp_affinity_list: real affinity 1 Dec 13 14:27:13.037333 instance-setup[1336]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Dec 13 14:27:13.037700 instance-setup[1336]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Dec 13 14:27:13.076160 systemd-nspawn[1330]: + /usr/bin/google_metadata_script_runner --script-type startup Dec 13 14:27:13.423268 startup-script[1367]: INFO Starting startup scripts. Dec 13 14:27:13.437406 startup-script[1367]: INFO No startup scripts found in metadata. Dec 13 14:27:13.437589 startup-script[1367]: INFO Finished running startup scripts. 
Dec 13 14:27:13.472855 systemd-nspawn[1330]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Dec 13 14:27:13.473660 systemd-nspawn[1330]: + daemon_pids=() Dec 13 14:27:13.473660 systemd-nspawn[1330]: + for d in accounts clock_skew network Dec 13 14:27:13.473660 systemd-nspawn[1330]: + daemon_pids+=($!) Dec 13 14:27:13.473660 systemd-nspawn[1330]: + for d in accounts clock_skew network Dec 13 14:27:13.474011 systemd-nspawn[1330]: + daemon_pids+=($!) Dec 13 14:27:13.474127 systemd-nspawn[1330]: + for d in accounts clock_skew network Dec 13 14:27:13.474485 systemd-nspawn[1330]: + daemon_pids+=($!) Dec 13 14:27:13.474643 systemd-nspawn[1330]: + NOTIFY_SOCKET=/run/systemd/notify Dec 13 14:27:13.474724 systemd-nspawn[1330]: + /usr/bin/systemd-notify --ready Dec 13 14:27:13.474956 systemd-nspawn[1330]: + /usr/bin/google_network_daemon Dec 13 14:27:13.475765 systemd-nspawn[1330]: + /usr/bin/google_clock_skew_daemon Dec 13 14:27:13.476286 systemd-nspawn[1330]: + /usr/bin/google_accounts_daemon Dec 13 14:27:13.532997 systemd-nspawn[1330]: + wait -n 36 37 38 Dec 13 14:27:14.052553 google-networking[1372]: INFO Starting Google Networking daemon. Dec 13 14:27:14.146242 google-clock-skew[1371]: INFO Starting Google Clock Skew daemon. Dec 13 14:27:14.161023 google-clock-skew[1371]: INFO Clock drift token has changed: 0. Dec 13 14:27:14.167695 systemd-nspawn[1330]: hwclock: Cannot access the Hardware Clock via any known method. Dec 13 14:27:14.167695 systemd-nspawn[1330]: hwclock: Use the --verbose option to see the details of our search for an access method. Dec 13 14:27:14.168936 google-clock-skew[1371]: WARNING Failed to sync system time with hardware clock. 
Dec 13 14:27:14.259551 groupadd[1382]: group added to /etc/group: name=google-sudoers, GID=1000 Dec 13 14:27:14.263177 groupadd[1382]: group added to /etc/gshadow: name=google-sudoers Dec 13 14:27:14.267217 groupadd[1382]: new group: name=google-sudoers, GID=1000 Dec 13 14:27:14.282629 google-accounts[1370]: INFO Starting Google Accounts daemon. Dec 13 14:27:14.308909 google-accounts[1370]: WARNING OS Login not installed. Dec 13 14:27:14.310339 google-accounts[1370]: INFO Creating a new user account for 0. Dec 13 14:27:14.316312 systemd-nspawn[1330]: useradd: invalid user name '0': use --badname to ignore Dec 13 14:27:14.317047 google-accounts[1370]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. Dec 13 14:27:17.745045 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 14:27:17.745392 systemd[1]: Stopped kubelet.service. Dec 13 14:27:17.745469 systemd[1]: kubelet.service: Consumed 1.521s CPU time. Dec 13 14:27:17.747984 systemd[1]: Starting kubelet.service... Dec 13 14:27:18.008242 systemd[1]: Started kubelet.service. Dec 13 14:27:18.084232 kubelet[1396]: E1213 14:27:18.084153 1396 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:27:18.089029 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:27:18.089256 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:27:18.637030 systemd[1]: Started sshd@3-10.128.0.21:22-139.178.68.195:47548.service. 
Dec 13 14:27:18.931796 sshd[1404]: Accepted publickey for core from 139.178.68.195 port 47548 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk Dec 13 14:27:18.933870 sshd[1404]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:27:18.942535 systemd[1]: Started session-4.scope. Dec 13 14:27:18.943676 systemd-logind[1237]: New session 4 of user core. Dec 13 14:27:19.152703 sshd[1404]: pam_unix(sshd:session): session closed for user core Dec 13 14:27:19.157499 systemd[1]: sshd@3-10.128.0.21:22-139.178.68.195:47548.service: Deactivated successfully. Dec 13 14:27:19.158694 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 14:27:19.159640 systemd-logind[1237]: Session 4 logged out. Waiting for processes to exit. Dec 13 14:27:19.161282 systemd-logind[1237]: Removed session 4. Dec 13 14:27:19.199403 systemd[1]: Started sshd@4-10.128.0.21:22-139.178.68.195:47558.service. Dec 13 14:27:19.490345 sshd[1410]: Accepted publickey for core from 139.178.68.195 port 47558 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk Dec 13 14:27:19.493160 sshd[1410]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:27:19.500203 systemd-logind[1237]: New session 5 of user core. Dec 13 14:27:19.501087 systemd[1]: Started session-5.scope. Dec 13 14:27:19.701878 sshd[1410]: pam_unix(sshd:session): session closed for user core Dec 13 14:27:19.706763 systemd[1]: sshd@4-10.128.0.21:22-139.178.68.195:47558.service: Deactivated successfully. Dec 13 14:27:19.707962 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 14:27:19.709010 systemd-logind[1237]: Session 5 logged out. Waiting for processes to exit. Dec 13 14:27:19.710416 systemd-logind[1237]: Removed session 5. Dec 13 14:27:19.748314 systemd[1]: Started sshd@5-10.128.0.21:22-139.178.68.195:47564.service. 
Dec 13 14:27:20.037695 sshd[1416]: Accepted publickey for core from 139.178.68.195 port 47564 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk Dec 13 14:27:20.039695 sshd[1416]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:27:20.045839 systemd-logind[1237]: New session 6 of user core. Dec 13 14:27:20.047561 systemd[1]: Started session-6.scope. Dec 13 14:27:20.254171 sshd[1416]: pam_unix(sshd:session): session closed for user core Dec 13 14:27:20.259149 systemd[1]: sshd@5-10.128.0.21:22-139.178.68.195:47564.service: Deactivated successfully. Dec 13 14:27:20.260337 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 14:27:20.261270 systemd-logind[1237]: Session 6 logged out. Waiting for processes to exit. Dec 13 14:27:20.262609 systemd-logind[1237]: Removed session 6. Dec 13 14:27:20.301303 systemd[1]: Started sshd@6-10.128.0.21:22-139.178.68.195:47580.service. Dec 13 14:27:20.592260 sshd[1422]: Accepted publickey for core from 139.178.68.195 port 47580 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk Dec 13 14:27:20.594208 sshd[1422]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:27:20.601492 systemd[1]: Started session-7.scope. Dec 13 14:27:20.602376 systemd-logind[1237]: New session 7 of user core. Dec 13 14:27:20.791669 sudo[1425]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 14:27:20.792138 sudo[1425]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 14:27:20.813124 systemd[1]: Starting coreos-metadata.service... 
Dec 13 14:27:20.863841 coreos-metadata[1429]: Dec 13 14:27:20.863 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1 Dec 13 14:27:20.864600 coreos-metadata[1429]: Dec 13 14:27:20.864 INFO Fetch successful Dec 13 14:27:20.864718 coreos-metadata[1429]: Dec 13 14:27:20.864 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1 Dec 13 14:27:20.865421 coreos-metadata[1429]: Dec 13 14:27:20.865 INFO Fetch successful Dec 13 14:27:20.865421 coreos-metadata[1429]: Dec 13 14:27:20.865 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1 Dec 13 14:27:20.865725 coreos-metadata[1429]: Dec 13 14:27:20.865 INFO Fetch successful Dec 13 14:27:20.865831 coreos-metadata[1429]: Dec 13 14:27:20.865 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1 Dec 13 14:27:20.866143 coreos-metadata[1429]: Dec 13 14:27:20.866 INFO Fetch successful Dec 13 14:27:20.877465 systemd[1]: Finished coreos-metadata.service. Dec 13 14:27:21.862635 systemd[1]: Stopped kubelet.service. Dec 13 14:27:21.867224 systemd[1]: Starting kubelet.service... Dec 13 14:27:21.898112 systemd[1]: Reloading. Dec 13 14:27:22.029273 /usr/lib/systemd/system-generators/torcx-generator[1487]: time="2024-12-13T14:27:22Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:27:22.029932 /usr/lib/systemd/system-generators/torcx-generator[1487]: time="2024-12-13T14:27:22Z" level=info msg="torcx already run" Dec 13 14:27:22.182061 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Dec 13 14:27:22.182095 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:27:22.207282 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:27:22.374280 systemd[1]: Started kubelet.service. Dec 13 14:27:22.382409 systemd[1]: Stopping kubelet.service... Dec 13 14:27:22.383241 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 14:27:22.383522 systemd[1]: Stopped kubelet.service. Dec 13 14:27:22.385911 systemd[1]: Starting kubelet.service... Dec 13 14:27:22.595367 systemd[1]: Started kubelet.service. Dec 13 14:27:22.658707 kubelet[1538]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 14:27:22.658707 kubelet[1538]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 14:27:22.658707 kubelet[1538]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 13 14:27:22.659424 kubelet[1538]: I1213 14:27:22.658786 1538 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 14:27:23.106234 kubelet[1538]: I1213 14:27:23.106190 1538 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 14:27:23.106234 kubelet[1538]: I1213 14:27:23.106234 1538 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 14:27:23.106662 kubelet[1538]: I1213 14:27:23.106620 1538 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 14:27:23.164779 kubelet[1538]: I1213 14:27:23.164716 1538 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 14:27:23.193348 kubelet[1538]: I1213 14:27:23.193286 1538 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 14:27:23.194254 kubelet[1538]: I1213 14:27:23.194218 1538 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 14:27:23.195164 kubelet[1538]: I1213 14:27:23.195103 1538 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" 
nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 14:27:23.195535 kubelet[1538]: I1213 14:27:23.195483 1538 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 14:27:23.195535 kubelet[1538]: I1213 14:27:23.195519 1538 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 14:27:23.195758 kubelet[1538]: I1213 14:27:23.195721 1538 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:27:23.195927 kubelet[1538]: I1213 14:27:23.195895 1538 kubelet.go:396] "Attempting to sync node with API server" Dec 13 14:27:23.196046 kubelet[1538]: I1213 14:27:23.195931 1538 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 
13 14:27:23.196121 kubelet[1538]: I1213 14:27:23.196048 1538 kubelet.go:312] "Adding apiserver pod source" Dec 13 14:27:23.196121 kubelet[1538]: I1213 14:27:23.196087 1538 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 14:27:23.204274 kubelet[1538]: E1213 14:27:23.204200 1538 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:27:23.204854 kubelet[1538]: E1213 14:27:23.204828 1538 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:27:23.205779 kubelet[1538]: I1213 14:27:23.205750 1538 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 14:27:23.211595 kubelet[1538]: I1213 14:27:23.211559 1538 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 14:27:23.213860 kubelet[1538]: W1213 14:27:23.213827 1538 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 14:27:23.218890 kubelet[1538]: I1213 14:27:23.218177 1538 server.go:1256] "Started kubelet" Dec 13 14:27:23.236829 kubelet[1538]: I1213 14:27:23.236756 1538 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 14:27:23.238080 kubelet[1538]: I1213 14:27:23.238050 1538 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 14:27:23.238498 kubelet[1538]: I1213 14:27:23.238470 1538 server.go:461] "Adding debug handlers to kubelet server" Dec 13 14:27:23.239009 kubelet[1538]: I1213 14:27:23.238962 1538 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 14:27:23.251190 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Dec 13 14:27:23.251471 kubelet[1538]: I1213 14:27:23.251438 1538 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 14:27:23.253727 kubelet[1538]: I1213 14:27:23.253697 1538 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 14:27:23.254218 kubelet[1538]: I1213 14:27:23.254181 1538 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 14:27:23.254322 kubelet[1538]: I1213 14:27:23.254312 1538 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 14:27:23.259031 kubelet[1538]: E1213 14:27:23.258980 1538 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 14:27:23.262072 kubelet[1538]: E1213 14:27:23.261552 1538 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.128.0.21\" not found" node="10.128.0.21" Dec 13 14:27:23.277542 kubelet[1538]: I1213 14:27:23.277508 1538 factory.go:221] Registration of the containerd container factory successfully Dec 13 14:27:23.277696 kubelet[1538]: I1213 14:27:23.277565 1538 factory.go:221] Registration of the systemd container factory successfully Dec 13 14:27:23.277796 kubelet[1538]: I1213 14:27:23.277758 1538 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 14:27:23.300949 kubelet[1538]: I1213 14:27:23.299870 1538 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 14:27:23.300949 kubelet[1538]: I1213 14:27:23.299896 1538 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 14:27:23.300949 kubelet[1538]: I1213 14:27:23.299924 1538 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:27:23.305004 kubelet[1538]: I1213 14:27:23.303095 1538 policy_none.go:49] "None policy: Start" Dec 13 14:27:23.305004 kubelet[1538]: I1213 
14:27:23.303985 1538 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 14:27:23.305004 kubelet[1538]: I1213 14:27:23.304026 1538 state_mem.go:35] "Initializing new in-memory state store" Dec 13 14:27:23.319519 systemd[1]: Created slice kubepods.slice. Dec 13 14:27:23.328402 systemd[1]: Created slice kubepods-burstable.slice. Dec 13 14:27:23.333179 systemd[1]: Created slice kubepods-besteffort.slice. Dec 13 14:27:23.346434 kubelet[1538]: I1213 14:27:23.346400 1538 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 14:27:23.347183 kubelet[1538]: I1213 14:27:23.347156 1538 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 14:27:23.350920 kubelet[1538]: E1213 14:27:23.350894 1538 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.128.0.21\" not found" Dec 13 14:27:23.355722 kubelet[1538]: I1213 14:27:23.355701 1538 kubelet_node_status.go:73] "Attempting to register node" node="10.128.0.21" Dec 13 14:27:23.368947 kubelet[1538]: I1213 14:27:23.362470 1538 kubelet_node_status.go:76] "Successfully registered node" node="10.128.0.21" Dec 13 14:27:23.384461 kubelet[1538]: I1213 14:27:23.381325 1538 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Dec 13 14:27:23.384461 kubelet[1538]: I1213 14:27:23.382080 1538 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Dec 13 14:27:23.384743 env[1226]: time="2024-12-13T14:27:23.381811994Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 14:27:23.436647 kubelet[1538]: I1213 14:27:23.436595 1538 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 14:27:23.438902 kubelet[1538]: I1213 14:27:23.438858 1538 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 14:27:23.439138 kubelet[1538]: I1213 14:27:23.438922 1538 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 14:27:23.439138 kubelet[1538]: I1213 14:27:23.438958 1538 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 14:27:23.439138 kubelet[1538]: E1213 14:27:23.439086 1538 kubelet.go:2353] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Dec 13 14:27:24.115453 kubelet[1538]: I1213 14:27:24.115388 1538 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Dec 13 14:27:24.116300 kubelet[1538]: W1213 14:27:24.115642 1538 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.CSIDriver ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received Dec 13 14:27:24.116300 kubelet[1538]: W1213 14:27:24.116095 1538 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.Service ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received Dec 13 14:27:24.116300 kubelet[1538]: W1213 14:27:24.116155 1538 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.RuntimeClass ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received Dec 13 14:27:24.125325 sudo[1425]: pam_unix(sudo:session): session closed for user root Dec 13 14:27:24.170420 sshd[1422]: pam_unix(sshd:session): session closed for user core Dec 13 14:27:24.176186 systemd[1]: sshd@6-10.128.0.21:22-139.178.68.195:47580.service: Deactivated successfully. Dec 13 14:27:24.177590 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 14:27:24.178690 systemd-logind[1237]: Session 7 logged out. 
Waiting for processes to exit. Dec 13 14:27:24.180541 systemd-logind[1237]: Removed session 7. Dec 13 14:27:24.204120 kubelet[1538]: I1213 14:27:24.204058 1538 apiserver.go:52] "Watching apiserver" Dec 13 14:27:24.205490 kubelet[1538]: E1213 14:27:24.205454 1538 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:27:24.212372 kubelet[1538]: I1213 14:27:24.212318 1538 topology_manager.go:215] "Topology Admit Handler" podUID="c1c1f159-b48e-4c8a-947c-060c7f529d50" podNamespace="kube-system" podName="cilium-x6fql" Dec 13 14:27:24.212654 kubelet[1538]: I1213 14:27:24.212631 1538 topology_manager.go:215] "Topology Admit Handler" podUID="738741c4-3c22-4ac5-b877-f1e2f6eeb27c" podNamespace="kube-system" podName="kube-proxy-pgldr" Dec 13 14:27:24.221768 systemd[1]: Created slice kubepods-besteffort-pod738741c4_3c22_4ac5_b877_f1e2f6eeb27c.slice. Dec 13 14:27:24.235713 systemd[1]: Created slice kubepods-burstable-podc1c1f159_b48e_4c8a_947c_060c7f529d50.slice. 
Dec 13 14:27:24.255604 kubelet[1538]: I1213 14:27:24.255566 1538 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 14:27:24.261440 kubelet[1538]: I1213 14:27:24.261386 1538 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c1c1f159-b48e-4c8a-947c-060c7f529d50-clustermesh-secrets\") pod \"cilium-x6fql\" (UID: \"c1c1f159-b48e-4c8a-947c-060c7f529d50\") " pod="kube-system/cilium-x6fql" Dec 13 14:27:24.261633 kubelet[1538]: I1213 14:27:24.261479 1538 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c1c1f159-b48e-4c8a-947c-060c7f529d50-host-proc-sys-kernel\") pod \"cilium-x6fql\" (UID: \"c1c1f159-b48e-4c8a-947c-060c7f529d50\") " pod="kube-system/cilium-x6fql" Dec 13 14:27:24.261633 kubelet[1538]: I1213 14:27:24.261521 1538 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c1c1f159-b48e-4c8a-947c-060c7f529d50-hubble-tls\") pod \"cilium-x6fql\" (UID: \"c1c1f159-b48e-4c8a-947c-060c7f529d50\") " pod="kube-system/cilium-x6fql" Dec 13 14:27:24.261633 kubelet[1538]: I1213 14:27:24.261557 1538 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c1c1f159-b48e-4c8a-947c-060c7f529d50-cilium-run\") pod \"cilium-x6fql\" (UID: \"c1c1f159-b48e-4c8a-947c-060c7f529d50\") " pod="kube-system/cilium-x6fql" Dec 13 14:27:24.261633 kubelet[1538]: I1213 14:27:24.261621 1538 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c1c1f159-b48e-4c8a-947c-060c7f529d50-lib-modules\") pod \"cilium-x6fql\" (UID: \"c1c1f159-b48e-4c8a-947c-060c7f529d50\") " 
pod="kube-system/cilium-x6fql" Dec 13 14:27:24.261877 kubelet[1538]: I1213 14:27:24.261660 1538 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/738741c4-3c22-4ac5-b877-f1e2f6eeb27c-xtables-lock\") pod \"kube-proxy-pgldr\" (UID: \"738741c4-3c22-4ac5-b877-f1e2f6eeb27c\") " pod="kube-system/kube-proxy-pgldr" Dec 13 14:27:24.261877 kubelet[1538]: I1213 14:27:24.261699 1538 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/738741c4-3c22-4ac5-b877-f1e2f6eeb27c-lib-modules\") pod \"kube-proxy-pgldr\" (UID: \"738741c4-3c22-4ac5-b877-f1e2f6eeb27c\") " pod="kube-system/kube-proxy-pgldr" Dec 13 14:27:24.261877 kubelet[1538]: I1213 14:27:24.261734 1538 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c1c1f159-b48e-4c8a-947c-060c7f529d50-cilium-cgroup\") pod \"cilium-x6fql\" (UID: \"c1c1f159-b48e-4c8a-947c-060c7f529d50\") " pod="kube-system/cilium-x6fql" Dec 13 14:27:24.261877 kubelet[1538]: I1213 14:27:24.261777 1538 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c1c1f159-b48e-4c8a-947c-060c7f529d50-cni-path\") pod \"cilium-x6fql\" (UID: \"c1c1f159-b48e-4c8a-947c-060c7f529d50\") " pod="kube-system/cilium-x6fql" Dec 13 14:27:24.261877 kubelet[1538]: I1213 14:27:24.261815 1538 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c1c1f159-b48e-4c8a-947c-060c7f529d50-etc-cni-netd\") pod \"cilium-x6fql\" (UID: \"c1c1f159-b48e-4c8a-947c-060c7f529d50\") " pod="kube-system/cilium-x6fql" Dec 13 14:27:24.261877 kubelet[1538]: I1213 14:27:24.261851 1538 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c1c1f159-b48e-4c8a-947c-060c7f529d50-host-proc-sys-net\") pod \"cilium-x6fql\" (UID: \"c1c1f159-b48e-4c8a-947c-060c7f529d50\") " pod="kube-system/cilium-x6fql" Dec 13 14:27:24.262231 kubelet[1538]: I1213 14:27:24.261911 1538 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/738741c4-3c22-4ac5-b877-f1e2f6eeb27c-kube-proxy\") pod \"kube-proxy-pgldr\" (UID: \"738741c4-3c22-4ac5-b877-f1e2f6eeb27c\") " pod="kube-system/kube-proxy-pgldr" Dec 13 14:27:24.262231 kubelet[1538]: I1213 14:27:24.261984 1538 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dn9b\" (UniqueName: \"kubernetes.io/projected/738741c4-3c22-4ac5-b877-f1e2f6eeb27c-kube-api-access-4dn9b\") pod \"kube-proxy-pgldr\" (UID: \"738741c4-3c22-4ac5-b877-f1e2f6eeb27c\") " pod="kube-system/kube-proxy-pgldr" Dec 13 14:27:24.262231 kubelet[1538]: I1213 14:27:24.262035 1538 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c1c1f159-b48e-4c8a-947c-060c7f529d50-bpf-maps\") pod \"cilium-x6fql\" (UID: \"c1c1f159-b48e-4c8a-947c-060c7f529d50\") " pod="kube-system/cilium-x6fql" Dec 13 14:27:24.262231 kubelet[1538]: I1213 14:27:24.262079 1538 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c1c1f159-b48e-4c8a-947c-060c7f529d50-hostproc\") pod \"cilium-x6fql\" (UID: \"c1c1f159-b48e-4c8a-947c-060c7f529d50\") " pod="kube-system/cilium-x6fql" Dec 13 14:27:24.262231 kubelet[1538]: I1213 14:27:24.262133 1538 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/c1c1f159-b48e-4c8a-947c-060c7f529d50-xtables-lock\") pod \"cilium-x6fql\" (UID: \"c1c1f159-b48e-4c8a-947c-060c7f529d50\") " pod="kube-system/cilium-x6fql" Dec 13 14:27:24.262231 kubelet[1538]: I1213 14:27:24.262188 1538 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c1c1f159-b48e-4c8a-947c-060c7f529d50-cilium-config-path\") pod \"cilium-x6fql\" (UID: \"c1c1f159-b48e-4c8a-947c-060c7f529d50\") " pod="kube-system/cilium-x6fql" Dec 13 14:27:24.262536 kubelet[1538]: I1213 14:27:24.262242 1538 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtxvg\" (UniqueName: \"kubernetes.io/projected/c1c1f159-b48e-4c8a-947c-060c7f529d50-kube-api-access-xtxvg\") pod \"cilium-x6fql\" (UID: \"c1c1f159-b48e-4c8a-947c-060c7f529d50\") " pod="kube-system/cilium-x6fql" Dec 13 14:27:24.535438 env[1226]: time="2024-12-13T14:27:24.533288539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pgldr,Uid:738741c4-3c22-4ac5-b877-f1e2f6eeb27c,Namespace:kube-system,Attempt:0,}" Dec 13 14:27:24.545174 env[1226]: time="2024-12-13T14:27:24.545111408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-x6fql,Uid:c1c1f159-b48e-4c8a-947c-060c7f529d50,Namespace:kube-system,Attempt:0,}" Dec 13 14:27:25.140667 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4187558352.mount: Deactivated successfully. 
Dec 13 14:27:25.150116 env[1226]: time="2024-12-13T14:27:25.150055631Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:27:25.151432 env[1226]: time="2024-12-13T14:27:25.151375108Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:27:25.155917 env[1226]: time="2024-12-13T14:27:25.155852846Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:27:25.157142 env[1226]: time="2024-12-13T14:27:25.157092208Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:27:25.158057 env[1226]: time="2024-12-13T14:27:25.158017984Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:27:25.160617 env[1226]: time="2024-12-13T14:27:25.160561778Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:27:25.161535 env[1226]: time="2024-12-13T14:27:25.161497037Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:27:25.165774 env[1226]: time="2024-12-13T14:27:25.165716059Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:27:25.197954 env[1226]: time="2024-12-13T14:27:25.197822409Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:27:25.198252 env[1226]: time="2024-12-13T14:27:25.198003673Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:27:25.198252 env[1226]: time="2024-12-13T14:27:25.198091700Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:27:25.198409 env[1226]: time="2024-12-13T14:27:25.198345949Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/caa3e9351a8ddfba54497ca0b33c89775fca3e5b2119066e82a68e371ce12713 pid=1591 runtime=io.containerd.runc.v2 Dec 13 14:27:25.201206 env[1226]: time="2024-12-13T14:27:25.201115045Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:27:25.201206 env[1226]: time="2024-12-13T14:27:25.201164290Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:27:25.201206 env[1226]: time="2024-12-13T14:27:25.201182212Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:27:25.201794 env[1226]: time="2024-12-13T14:27:25.201720104Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ac7f060ae4457dfbdfabb4a14e825d87bc290fbcb5758058f70d124a839c36ca pid=1603 runtime=io.containerd.runc.v2 Dec 13 14:27:25.206660 kubelet[1538]: E1213 14:27:25.206600 1538 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:27:25.228448 systemd[1]: Started cri-containerd-caa3e9351a8ddfba54497ca0b33c89775fca3e5b2119066e82a68e371ce12713.scope. Dec 13 14:27:25.249022 systemd[1]: Started cri-containerd-ac7f060ae4457dfbdfabb4a14e825d87bc290fbcb5758058f70d124a839c36ca.scope. Dec 13 14:27:25.298132 env[1226]: time="2024-12-13T14:27:25.296911363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pgldr,Uid:738741c4-3c22-4ac5-b877-f1e2f6eeb27c,Namespace:kube-system,Attempt:0,} returns sandbox id \"caa3e9351a8ddfba54497ca0b33c89775fca3e5b2119066e82a68e371ce12713\"" Dec 13 14:27:25.303474 env[1226]: time="2024-12-13T14:27:25.303413176Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Dec 13 14:27:25.311014 env[1226]: time="2024-12-13T14:27:25.310037265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-x6fql,Uid:c1c1f159-b48e-4c8a-947c-060c7f529d50,Namespace:kube-system,Attempt:0,} returns sandbox id \"ac7f060ae4457dfbdfabb4a14e825d87bc290fbcb5758058f70d124a839c36ca\"" Dec 13 14:27:26.207627 kubelet[1538]: E1213 14:27:26.207559 1538 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:27:26.550894 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2455149640.mount: Deactivated successfully. 
Dec 13 14:27:27.208513 kubelet[1538]: E1213 14:27:27.208419 1538 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:27:27.251074 env[1226]: time="2024-12-13T14:27:27.250954914Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:27:27.253995 env[1226]: time="2024-12-13T14:27:27.253914380Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:27:27.256373 env[1226]: time="2024-12-13T14:27:27.256303220Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:27:27.258531 env[1226]: time="2024-12-13T14:27:27.258473364Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:27:27.259251 env[1226]: time="2024-12-13T14:27:27.259192139Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\"" Dec 13 14:27:27.261874 env[1226]: time="2024-12-13T14:27:27.261818889Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 13 14:27:27.264193 env[1226]: time="2024-12-13T14:27:27.264151957Z" level=info msg="CreateContainer within sandbox \"caa3e9351a8ddfba54497ca0b33c89775fca3e5b2119066e82a68e371ce12713\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 14:27:27.282956 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount309181508.mount: Deactivated successfully. Dec 13 14:27:27.293288 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2845736299.mount: Deactivated successfully. Dec 13 14:27:27.298726 env[1226]: time="2024-12-13T14:27:27.298652784Z" level=info msg="CreateContainer within sandbox \"caa3e9351a8ddfba54497ca0b33c89775fca3e5b2119066e82a68e371ce12713\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1ffab18c7d80b02fb58b31b630aeb26975a70c4972fcc84f987c164f7aeb0ce6\"" Dec 13 14:27:27.299775 env[1226]: time="2024-12-13T14:27:27.299732661Z" level=info msg="StartContainer for \"1ffab18c7d80b02fb58b31b630aeb26975a70c4972fcc84f987c164f7aeb0ce6\"" Dec 13 14:27:27.327021 systemd[1]: Started cri-containerd-1ffab18c7d80b02fb58b31b630aeb26975a70c4972fcc84f987c164f7aeb0ce6.scope. Dec 13 14:27:27.381571 env[1226]: time="2024-12-13T14:27:27.381468758Z" level=info msg="StartContainer for \"1ffab18c7d80b02fb58b31b630aeb26975a70c4972fcc84f987c164f7aeb0ce6\" returns successfully" Dec 13 14:27:27.481654 kubelet[1538]: I1213 14:27:27.481503 1538 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-pgldr" podStartSLOduration=2.5231758429999998 podStartE2EDuration="4.481428699s" podCreationTimestamp="2024-12-13 14:27:23 +0000 UTC" firstStartedPulling="2024-12-13 14:27:25.301891143 +0000 UTC m=+2.699432214" lastFinishedPulling="2024-12-13 14:27:27.26014426 +0000 UTC m=+4.657685070" observedRunningTime="2024-12-13 14:27:27.479569524 +0000 UTC m=+4.877110331" watchObservedRunningTime="2024-12-13 14:27:27.481428699 +0000 UTC m=+4.878969507" Dec 13 14:27:27.862121 systemd[1]: Started sshd@7-10.128.0.21:22-218.92.0.190:49673.service. 
Dec 13 14:27:28.209529 kubelet[1538]: E1213 14:27:28.209333 1538 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:27:29.210020 kubelet[1538]: E1213 14:27:29.209939 1538 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:27:30.211119 kubelet[1538]: E1213 14:27:30.211021 1538 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:27:30.888822 sshd[1833]: Failed password for root from 218.92.0.190 port 49673 ssh2 Dec 13 14:27:31.211980 kubelet[1538]: E1213 14:27:31.211800 1538 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:27:31.881502 sshd[1833]: Failed password for root from 218.92.0.190 port 49673 ssh2 Dec 13 14:27:32.212487 kubelet[1538]: E1213 14:27:32.212267 1538 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:27:32.921193 sshd[1833]: Failed password for root from 218.92.0.190 port 49673 ssh2 Dec 13 14:27:33.128112 sshd[1833]: Received disconnect from 218.92.0.190 port 49673:11: [preauth] Dec 13 14:27:33.128112 sshd[1833]: Disconnected from authenticating user root 218.92.0.190 port 49673 [preauth] Dec 13 14:27:33.130548 systemd[1]: sshd@7-10.128.0.21:22-218.92.0.190:49673.service: Deactivated successfully. Dec 13 14:27:33.213339 kubelet[1538]: E1213 14:27:33.213025 1538 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:27:34.214286 kubelet[1538]: E1213 14:27:34.214214 1538 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:27:34.792396 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Dec 13 14:27:35.216183 kubelet[1538]: E1213 14:27:35.215997 1538 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:27:36.216470 kubelet[1538]: E1213 14:27:36.216408 1538 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:27:37.217250 kubelet[1538]: E1213 14:27:37.217196 1538 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:27:38.218400 kubelet[1538]: E1213 14:27:38.218293 1538 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:27:39.218856 kubelet[1538]: E1213 14:27:39.218791 1538 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:27:40.220094 kubelet[1538]: E1213 14:27:40.219953 1538 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:27:40.413550 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount969966249.mount: Deactivated successfully. 
Dec 13 14:27:41.220263 kubelet[1538]: E1213 14:27:41.220194 1538 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:27:42.220557 kubelet[1538]: E1213 14:27:42.220419 1538 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:27:43.196351 kubelet[1538]: E1213 14:27:43.196280 1538 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:27:43.221200 kubelet[1538]: E1213 14:27:43.221060 1538 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:27:43.770993 env[1226]: time="2024-12-13T14:27:43.770899372Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:27:43.773637 env[1226]: time="2024-12-13T14:27:43.773587411Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:27:43.776025 env[1226]: time="2024-12-13T14:27:43.775986392Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:27:43.776881 env[1226]: time="2024-12-13T14:27:43.776828514Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Dec 13 14:27:43.780382 env[1226]: time="2024-12-13T14:27:43.780327882Z" level=info 
msg="CreateContainer within sandbox \"ac7f060ae4457dfbdfabb4a14e825d87bc290fbcb5758058f70d124a839c36ca\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 14:27:43.795881 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1632858133.mount: Deactivated successfully. Dec 13 14:27:43.808774 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2403025633.mount: Deactivated successfully. Dec 13 14:27:43.813096 env[1226]: time="2024-12-13T14:27:43.813023431Z" level=info msg="CreateContainer within sandbox \"ac7f060ae4457dfbdfabb4a14e825d87bc290fbcb5758058f70d124a839c36ca\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1cfa4d4615630629f8b0951b6b805ecbffb912fb0843488a57f1984a17d47de7\"" Dec 13 14:27:43.814035 env[1226]: time="2024-12-13T14:27:43.813921325Z" level=info msg="StartContainer for \"1cfa4d4615630629f8b0951b6b805ecbffb912fb0843488a57f1984a17d47de7\"" Dec 13 14:27:43.845920 systemd[1]: Started cri-containerd-1cfa4d4615630629f8b0951b6b805ecbffb912fb0843488a57f1984a17d47de7.scope. Dec 13 14:27:43.901696 env[1226]: time="2024-12-13T14:27:43.901615817Z" level=info msg="StartContainer for \"1cfa4d4615630629f8b0951b6b805ecbffb912fb0843488a57f1984a17d47de7\" returns successfully" Dec 13 14:27:43.911613 systemd[1]: cri-containerd-1cfa4d4615630629f8b0951b6b805ecbffb912fb0843488a57f1984a17d47de7.scope: Deactivated successfully. Dec 13 14:27:44.222370 kubelet[1538]: E1213 14:27:44.222174 1538 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:27:44.791599 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1cfa4d4615630629f8b0951b6b805ecbffb912fb0843488a57f1984a17d47de7-rootfs.mount: Deactivated successfully. 
Dec 13 14:27:45.223274 kubelet[1538]: E1213 14:27:45.223094 1538 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:27:46.093901 env[1226]: time="2024-12-13T14:27:46.093780067Z" level=info msg="shim disconnected" id=1cfa4d4615630629f8b0951b6b805ecbffb912fb0843488a57f1984a17d47de7 Dec 13 14:27:46.094660 env[1226]: time="2024-12-13T14:27:46.093932138Z" level=warning msg="cleaning up after shim disconnected" id=1cfa4d4615630629f8b0951b6b805ecbffb912fb0843488a57f1984a17d47de7 namespace=k8s.io Dec 13 14:27:46.094660 env[1226]: time="2024-12-13T14:27:46.093960765Z" level=info msg="cleaning up dead shim" Dec 13 14:27:46.109081 env[1226]: time="2024-12-13T14:27:46.108997269Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:27:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1890 runtime=io.containerd.runc.v2\n" Dec 13 14:27:46.223774 kubelet[1538]: E1213 14:27:46.223630 1538 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:27:46.520785 env[1226]: time="2024-12-13T14:27:46.520263982Z" level=info msg="CreateContainer within sandbox \"ac7f060ae4457dfbdfabb4a14e825d87bc290fbcb5758058f70d124a839c36ca\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 14:27:46.544137 env[1226]: time="2024-12-13T14:27:46.544065278Z" level=info msg="CreateContainer within sandbox \"ac7f060ae4457dfbdfabb4a14e825d87bc290fbcb5758058f70d124a839c36ca\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"6b76043d2f89ac51356bf369568e74a871ab08745f40fd5e1e8258f54d8eba15\"" Dec 13 14:27:46.544914 env[1226]: time="2024-12-13T14:27:46.544860374Z" level=info msg="StartContainer for \"6b76043d2f89ac51356bf369568e74a871ab08745f40fd5e1e8258f54d8eba15\"" Dec 13 14:27:46.586185 systemd[1]: Started 
cri-containerd-6b76043d2f89ac51356bf369568e74a871ab08745f40fd5e1e8258f54d8eba15.scope. Dec 13 14:27:46.623089 env[1226]: time="2024-12-13T14:27:46.622917623Z" level=info msg="StartContainer for \"6b76043d2f89ac51356bf369568e74a871ab08745f40fd5e1e8258f54d8eba15\" returns successfully" Dec 13 14:27:46.644168 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 14:27:46.645380 systemd[1]: Stopped systemd-sysctl.service. Dec 13 14:27:46.646561 systemd[1]: Stopping systemd-sysctl.service... Dec 13 14:27:46.650329 systemd[1]: Starting systemd-sysctl.service... Dec 13 14:27:46.656438 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 13 14:27:46.662475 systemd[1]: cri-containerd-6b76043d2f89ac51356bf369568e74a871ab08745f40fd5e1e8258f54d8eba15.scope: Deactivated successfully. Dec 13 14:27:46.677607 systemd[1]: Finished systemd-sysctl.service. Dec 13 14:27:46.689885 env[1226]: time="2024-12-13T14:27:46.689815684Z" level=info msg="shim disconnected" id=6b76043d2f89ac51356bf369568e74a871ab08745f40fd5e1e8258f54d8eba15 Dec 13 14:27:46.690157 env[1226]: time="2024-12-13T14:27:46.689896023Z" level=warning msg="cleaning up after shim disconnected" id=6b76043d2f89ac51356bf369568e74a871ab08745f40fd5e1e8258f54d8eba15 namespace=k8s.io Dec 13 14:27:46.690157 env[1226]: time="2024-12-13T14:27:46.689913678Z" level=info msg="cleaning up dead shim" Dec 13 14:27:46.703996 env[1226]: time="2024-12-13T14:27:46.703924318Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:27:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1956 runtime=io.containerd.runc.v2\n" Dec 13 14:27:47.224614 kubelet[1538]: E1213 14:27:47.224541 1538 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:27:47.525050 env[1226]: time="2024-12-13T14:27:47.524533165Z" level=info msg="CreateContainer within sandbox 
\"ac7f060ae4457dfbdfabb4a14e825d87bc290fbcb5758058f70d124a839c36ca\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 14:27:47.535924 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6b76043d2f89ac51356bf369568e74a871ab08745f40fd5e1e8258f54d8eba15-rootfs.mount: Deactivated successfully. Dec 13 14:27:47.552951 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4035075786.mount: Deactivated successfully. Dec 13 14:27:47.562868 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1097713681.mount: Deactivated successfully. Dec 13 14:27:47.572611 env[1226]: time="2024-12-13T14:27:47.572533757Z" level=info msg="CreateContainer within sandbox \"ac7f060ae4457dfbdfabb4a14e825d87bc290fbcb5758058f70d124a839c36ca\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4b34ea205e39aa477f5a265177e392ff9499c731c8779b4e1fe2fa28697b8f06\"" Dec 13 14:27:47.573522 env[1226]: time="2024-12-13T14:27:47.573474502Z" level=info msg="StartContainer for \"4b34ea205e39aa477f5a265177e392ff9499c731c8779b4e1fe2fa28697b8f06\"" Dec 13 14:27:47.600643 systemd[1]: Started cri-containerd-4b34ea205e39aa477f5a265177e392ff9499c731c8779b4e1fe2fa28697b8f06.scope. Dec 13 14:27:47.655304 systemd[1]: cri-containerd-4b34ea205e39aa477f5a265177e392ff9499c731c8779b4e1fe2fa28697b8f06.scope: Deactivated successfully. 
Dec 13 14:27:47.656065 env[1226]: time="2024-12-13T14:27:47.656009492Z" level=info msg="StartContainer for \"4b34ea205e39aa477f5a265177e392ff9499c731c8779b4e1fe2fa28697b8f06\" returns successfully" Dec 13 14:27:47.690865 env[1226]: time="2024-12-13T14:27:47.690765006Z" level=info msg="shim disconnected" id=4b34ea205e39aa477f5a265177e392ff9499c731c8779b4e1fe2fa28697b8f06 Dec 13 14:27:47.690865 env[1226]: time="2024-12-13T14:27:47.690845908Z" level=warning msg="cleaning up after shim disconnected" id=4b34ea205e39aa477f5a265177e392ff9499c731c8779b4e1fe2fa28697b8f06 namespace=k8s.io Dec 13 14:27:47.690865 env[1226]: time="2024-12-13T14:27:47.690865286Z" level=info msg="cleaning up dead shim" Dec 13 14:27:47.704282 env[1226]: time="2024-12-13T14:27:47.704219579Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:27:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2013 runtime=io.containerd.runc.v2\n" Dec 13 14:27:48.226024 kubelet[1538]: E1213 14:27:48.225948 1538 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:27:48.530290 env[1226]: time="2024-12-13T14:27:48.530079488Z" level=info msg="CreateContainer within sandbox \"ac7f060ae4457dfbdfabb4a14e825d87bc290fbcb5758058f70d124a839c36ca\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 14:27:48.556086 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount615119637.mount: Deactivated successfully. 
Dec 13 14:27:48.561685 env[1226]: time="2024-12-13T14:27:48.561578898Z" level=info msg="CreateContainer within sandbox \"ac7f060ae4457dfbdfabb4a14e825d87bc290fbcb5758058f70d124a839c36ca\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"6337587c17c0029fff89e64b162ac7f6ace0d2110b4e3a2d202400ec6f4c60be\"" Dec 13 14:27:48.562511 env[1226]: time="2024-12-13T14:27:48.562454124Z" level=info msg="StartContainer for \"6337587c17c0029fff89e64b162ac7f6ace0d2110b4e3a2d202400ec6f4c60be\"" Dec 13 14:27:48.599213 systemd[1]: Started cri-containerd-6337587c17c0029fff89e64b162ac7f6ace0d2110b4e3a2d202400ec6f4c60be.scope. Dec 13 14:27:48.644259 systemd[1]: cri-containerd-6337587c17c0029fff89e64b162ac7f6ace0d2110b4e3a2d202400ec6f4c60be.scope: Deactivated successfully. Dec 13 14:27:48.649440 env[1226]: time="2024-12-13T14:27:48.649374234Z" level=info msg="StartContainer for \"6337587c17c0029fff89e64b162ac7f6ace0d2110b4e3a2d202400ec6f4c60be\" returns successfully" Dec 13 14:27:48.678819 env[1226]: time="2024-12-13T14:27:48.678733249Z" level=info msg="shim disconnected" id=6337587c17c0029fff89e64b162ac7f6ace0d2110b4e3a2d202400ec6f4c60be Dec 13 14:27:48.678819 env[1226]: time="2024-12-13T14:27:48.678809174Z" level=warning msg="cleaning up after shim disconnected" id=6337587c17c0029fff89e64b162ac7f6ace0d2110b4e3a2d202400ec6f4c60be namespace=k8s.io Dec 13 14:27:48.678819 env[1226]: time="2024-12-13T14:27:48.678825144Z" level=info msg="cleaning up dead shim" Dec 13 14:27:48.692356 env[1226]: time="2024-12-13T14:27:48.692277259Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:27:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2071 runtime=io.containerd.runc.v2\n" Dec 13 14:27:49.226668 kubelet[1538]: E1213 14:27:49.226595 1538 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:27:49.535200 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-6337587c17c0029fff89e64b162ac7f6ace0d2110b4e3a2d202400ec6f4c60be-rootfs.mount: Deactivated successfully. Dec 13 14:27:49.542264 env[1226]: time="2024-12-13T14:27:49.541791417Z" level=info msg="CreateContainer within sandbox \"ac7f060ae4457dfbdfabb4a14e825d87bc290fbcb5758058f70d124a839c36ca\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 14:27:49.567862 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount737835575.mount: Deactivated successfully. Dec 13 14:27:49.577697 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2349016726.mount: Deactivated successfully. Dec 13 14:27:49.583922 env[1226]: time="2024-12-13T14:27:49.583852822Z" level=info msg="CreateContainer within sandbox \"ac7f060ae4457dfbdfabb4a14e825d87bc290fbcb5758058f70d124a839c36ca\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"156494a2b63d902ec58f1acd03e2ae4723a18f4f5080c5be6b251b5a8e3007ef\"" Dec 13 14:27:49.584920 env[1226]: time="2024-12-13T14:27:49.584761997Z" level=info msg="StartContainer for \"156494a2b63d902ec58f1acd03e2ae4723a18f4f5080c5be6b251b5a8e3007ef\"" Dec 13 14:27:49.612090 systemd[1]: Started cri-containerd-156494a2b63d902ec58f1acd03e2ae4723a18f4f5080c5be6b251b5a8e3007ef.scope. Dec 13 14:27:49.672600 env[1226]: time="2024-12-13T14:27:49.672509914Z" level=info msg="StartContainer for \"156494a2b63d902ec58f1acd03e2ae4723a18f4f5080c5be6b251b5a8e3007ef\" returns successfully" Dec 13 14:27:49.719750 systemd[1]: Started sshd@8-10.128.0.21:22-190.0.126.91:61330.service. Dec 13 14:27:49.723980 update_engine[1214]: I1213 14:27:49.723237 1214 update_attempter.cc:509] Updating boot flags... 
Dec 13 14:27:49.966526 kubelet[1538]: I1213 14:27:49.966360 1538 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 14:27:50.055248 sshd[2129]: kex_exchange_identification: Connection closed by remote host Dec 13 14:27:50.055248 sshd[2129]: Connection closed by 190.0.126.91 port 61330 Dec 13 14:27:50.055769 systemd[1]: sshd@8-10.128.0.21:22-190.0.126.91:61330.service: Deactivated successfully. Dec 13 14:27:50.227891 kubelet[1538]: E1213 14:27:50.227737 1538 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:27:50.350054 kernel: Initializing XFRM netlink socket Dec 13 14:27:50.565689 kubelet[1538]: I1213 14:27:50.565533 1538 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-x6fql" podStartSLOduration=9.100669234 podStartE2EDuration="27.565458203s" podCreationTimestamp="2024-12-13 14:27:23 +0000 UTC" firstStartedPulling="2024-12-13 14:27:25.312569135 +0000 UTC m=+2.710109936" lastFinishedPulling="2024-12-13 14:27:43.777358108 +0000 UTC m=+21.174898905" observedRunningTime="2024-12-13 14:27:50.565433847 +0000 UTC m=+27.962974650" watchObservedRunningTime="2024-12-13 14:27:50.565458203 +0000 UTC m=+27.962999008" Dec 13 14:27:51.228436 kubelet[1538]: E1213 14:27:51.228355 1538 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:27:52.018290 systemd-networkd[1031]: cilium_host: Link UP Dec 13 14:27:52.020142 systemd-networkd[1031]: cilium_net: Link UP Dec 13 14:27:52.030006 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Dec 13 14:27:52.035630 systemd-networkd[1031]: cilium_net: Gained carrier Dec 13 14:27:52.043577 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Dec 13 14:27:52.045230 systemd-networkd[1031]: cilium_host: Gained carrier Dec 13 14:27:52.045509 systemd-networkd[1031]: cilium_net: Gained 
IPv6LL Dec 13 14:27:52.045945 systemd-networkd[1031]: cilium_host: Gained IPv6LL Dec 13 14:27:52.179454 systemd-networkd[1031]: cilium_vxlan: Link UP Dec 13 14:27:52.179473 systemd-networkd[1031]: cilium_vxlan: Gained carrier Dec 13 14:27:52.229883 kubelet[1538]: E1213 14:27:52.229738 1538 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:27:52.456010 kernel: NET: Registered PF_ALG protocol family Dec 13 14:27:53.230673 kubelet[1538]: E1213 14:27:53.230589 1538 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:27:53.308665 systemd-networkd[1031]: lxc_health: Link UP Dec 13 14:27:53.328007 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Dec 13 14:27:53.332364 systemd-networkd[1031]: lxc_health: Gained carrier Dec 13 14:27:53.502788 systemd-networkd[1031]: cilium_vxlan: Gained IPv6LL Dec 13 14:27:53.623071 kubelet[1538]: I1213 14:27:53.623005 1538 topology_manager.go:215] "Topology Admit Handler" podUID="e6461744-6f79-40e3-b5b7-803a6eecc440" podNamespace="default" podName="nginx-deployment-6d5f899847-s9ph7" Dec 13 14:27:53.634948 systemd[1]: Created slice kubepods-besteffort-pode6461744_6f79_40e3_b5b7_803a6eecc440.slice. 
Dec 13 14:27:53.804086 kubelet[1538]: I1213 14:27:53.804025 1538 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fv6zb\" (UniqueName: \"kubernetes.io/projected/e6461744-6f79-40e3-b5b7-803a6eecc440-kube-api-access-fv6zb\") pod \"nginx-deployment-6d5f899847-s9ph7\" (UID: \"e6461744-6f79-40e3-b5b7-803a6eecc440\") " pod="default/nginx-deployment-6d5f899847-s9ph7" Dec 13 14:27:53.943778 env[1226]: time="2024-12-13T14:27:53.943091780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-s9ph7,Uid:e6461744-6f79-40e3-b5b7-803a6eecc440,Namespace:default,Attempt:0,}" Dec 13 14:27:54.026169 kernel: eth0: renamed from tmp99b43 Dec 13 14:27:54.024847 systemd-networkd[1031]: lxc26b461b4d0ba: Link UP Dec 13 14:27:54.041314 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc26b461b4d0ba: link becomes ready Dec 13 14:27:54.044835 systemd-networkd[1031]: lxc26b461b4d0ba: Gained carrier Dec 13 14:27:54.231893 kubelet[1538]: E1213 14:27:54.231715 1538 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:27:54.846345 systemd-networkd[1031]: lxc_health: Gained IPv6LL Dec 13 14:27:55.233708 kubelet[1538]: E1213 14:27:55.233101 1538 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:27:55.742359 systemd-networkd[1031]: lxc26b461b4d0ba: Gained IPv6LL Dec 13 14:27:56.233743 kubelet[1538]: E1213 14:27:56.233669 1538 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:27:57.235021 kubelet[1538]: E1213 14:27:57.234928 1538 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:27:58.236171 kubelet[1538]: E1213 14:27:58.236099 1538 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Dec 13 14:27:58.659951 env[1226]: time="2024-12-13T14:27:58.659847722Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:27:58.660714 env[1226]: time="2024-12-13T14:27:58.660663891Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:27:58.660928 env[1226]: time="2024-12-13T14:27:58.660888928Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:27:58.661348 env[1226]: time="2024-12-13T14:27:58.661299833Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/99b434ca26ad34069d77b7b1c25e10e6b7abeb11e54d28d533a53bfb08c2f047 pid=2599 runtime=io.containerd.runc.v2 Dec 13 14:27:58.705747 systemd[1]: run-containerd-runc-k8s.io-99b434ca26ad34069d77b7b1c25e10e6b7abeb11e54d28d533a53bfb08c2f047-runc.hcaAKn.mount: Deactivated successfully. Dec 13 14:27:58.714440 systemd[1]: Started cri-containerd-99b434ca26ad34069d77b7b1c25e10e6b7abeb11e54d28d533a53bfb08c2f047.scope. 
Dec 13 14:27:58.778019 env[1226]: time="2024-12-13T14:27:58.777535328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-s9ph7,Uid:e6461744-6f79-40e3-b5b7-803a6eecc440,Namespace:default,Attempt:0,} returns sandbox id \"99b434ca26ad34069d77b7b1c25e10e6b7abeb11e54d28d533a53bfb08c2f047\"" Dec 13 14:27:58.781671 env[1226]: time="2024-12-13T14:27:58.781626353Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Dec 13 14:27:59.236417 kubelet[1538]: E1213 14:27:59.236345 1538 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:28:00.237165 kubelet[1538]: E1213 14:28:00.237105 1538 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:28:01.237713 kubelet[1538]: E1213 14:28:01.237646 1538 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:28:01.530581 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1678233032.mount: Deactivated successfully. 
Dec 13 14:28:02.238476 kubelet[1538]: E1213 14:28:02.238358 1538 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:28:03.196426 kubelet[1538]: E1213 14:28:03.196312 1538 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:28:03.238740 kubelet[1538]: E1213 14:28:03.238677 1538 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:28:03.279813 env[1226]: time="2024-12-13T14:28:03.279725442Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:28:03.282494 env[1226]: time="2024-12-13T14:28:03.282440487Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:28:03.284916 env[1226]: time="2024-12-13T14:28:03.284866384Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:28:03.287112 env[1226]: time="2024-12-13T14:28:03.287067994Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:28:03.288181 env[1226]: time="2024-12-13T14:28:03.288126514Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\"" Dec 13 14:28:03.291740 env[1226]: time="2024-12-13T14:28:03.291680803Z" level=info msg="CreateContainer within sandbox 
\"99b434ca26ad34069d77b7b1c25e10e6b7abeb11e54d28d533a53bfb08c2f047\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Dec 13 14:28:03.308339 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount288613903.mount: Deactivated successfully. Dec 13 14:28:03.318575 env[1226]: time="2024-12-13T14:28:03.318506977Z" level=info msg="CreateContainer within sandbox \"99b434ca26ad34069d77b7b1c25e10e6b7abeb11e54d28d533a53bfb08c2f047\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"6602fea7233a2fb9dafcd484cecd8b317a2bc2b73127af4bcba93d82b9815cc0\"" Dec 13 14:28:03.319480 env[1226]: time="2024-12-13T14:28:03.319412106Z" level=info msg="StartContainer for \"6602fea7233a2fb9dafcd484cecd8b317a2bc2b73127af4bcba93d82b9815cc0\"" Dec 13 14:28:03.349611 systemd[1]: Started cri-containerd-6602fea7233a2fb9dafcd484cecd8b317a2bc2b73127af4bcba93d82b9815cc0.scope. Dec 13 14:28:03.418597 env[1226]: time="2024-12-13T14:28:03.418522936Z" level=info msg="StartContainer for \"6602fea7233a2fb9dafcd484cecd8b317a2bc2b73127af4bcba93d82b9815cc0\" returns successfully" Dec 13 14:28:03.592115 kubelet[1538]: I1213 14:28:03.592068 1538 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-6d5f899847-s9ph7" podStartSLOduration=6.083916974 podStartE2EDuration="10.592002818s" podCreationTimestamp="2024-12-13 14:27:53 +0000 UTC" firstStartedPulling="2024-12-13 14:27:58.780553458 +0000 UTC m=+36.178094256" lastFinishedPulling="2024-12-13 14:28:03.288639303 +0000 UTC m=+40.686180100" observedRunningTime="2024-12-13 14:28:03.591560336 +0000 UTC m=+40.989101144" watchObservedRunningTime="2024-12-13 14:28:03.592002818 +0000 UTC m=+40.989543626" Dec 13 14:28:04.239167 kubelet[1538]: E1213 14:28:04.239092 1538 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:28:05.239595 kubelet[1538]: E1213 14:28:05.239524 1538 file_linux.go:61] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:28:06.240343 kubelet[1538]: E1213 14:28:06.240274 1538 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:28:07.241369 kubelet[1538]: E1213 14:28:07.241297 1538 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:28:08.241844 kubelet[1538]: E1213 14:28:08.241770 1538 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:28:08.392288 kubelet[1538]: I1213 14:28:08.392223 1538 topology_manager.go:215] "Topology Admit Handler" podUID="9ee95002-4fac-4df9-af67-835c96747e9f" podNamespace="default" podName="nfs-server-provisioner-0" Dec 13 14:28:08.400072 systemd[1]: Created slice kubepods-besteffort-pod9ee95002_4fac_4df9_af67_835c96747e9f.slice. Dec 13 14:28:08.504598 kubelet[1538]: I1213 14:28:08.504369 1538 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/9ee95002-4fac-4df9-af67-835c96747e9f-data\") pod \"nfs-server-provisioner-0\" (UID: \"9ee95002-4fac-4df9-af67-835c96747e9f\") " pod="default/nfs-server-provisioner-0" Dec 13 14:28:08.505305 kubelet[1538]: I1213 14:28:08.505058 1538 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kt7pq\" (UniqueName: \"kubernetes.io/projected/9ee95002-4fac-4df9-af67-835c96747e9f-kube-api-access-kt7pq\") pod \"nfs-server-provisioner-0\" (UID: \"9ee95002-4fac-4df9-af67-835c96747e9f\") " pod="default/nfs-server-provisioner-0" Dec 13 14:28:08.705581 env[1226]: time="2024-12-13T14:28:08.705474307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:9ee95002-4fac-4df9-af67-835c96747e9f,Namespace:default,Attempt:0,}" Dec 13 14:28:08.753315 
systemd-networkd[1031]: lxc1424ee956ddd: Link UP Dec 13 14:28:08.766025 kernel: eth0: renamed from tmp520a7 Dec 13 14:28:08.777031 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 14:28:08.791092 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc1424ee956ddd: link becomes ready Dec 13 14:28:08.792904 systemd-networkd[1031]: lxc1424ee956ddd: Gained carrier Dec 13 14:28:08.986180 env[1226]: time="2024-12-13T14:28:08.986009065Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:28:08.986701 env[1226]: time="2024-12-13T14:28:08.986620797Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:28:08.986889 env[1226]: time="2024-12-13T14:28:08.986680757Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:28:08.987289 env[1226]: time="2024-12-13T14:28:08.987216814Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/520a7e617b2315dea152fde9512eede5766641d49d66f136021d95ad9c32b120 pid=2723 runtime=io.containerd.runc.v2 Dec 13 14:28:09.028088 systemd[1]: Started cri-containerd-520a7e617b2315dea152fde9512eede5766641d49d66f136021d95ad9c32b120.scope. 
Dec 13 14:28:09.093416 env[1226]: time="2024-12-13T14:28:09.092732141Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:9ee95002-4fac-4df9-af67-835c96747e9f,Namespace:default,Attempt:0,} returns sandbox id \"520a7e617b2315dea152fde9512eede5766641d49d66f136021d95ad9c32b120\"" Dec 13 14:28:09.096007 env[1226]: time="2024-12-13T14:28:09.095895928Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Dec 13 14:28:09.242226 kubelet[1538]: E1213 14:28:09.242169 1538 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:28:10.242992 kubelet[1538]: E1213 14:28:10.242893 1538 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:28:10.335186 systemd-networkd[1031]: lxc1424ee956ddd: Gained IPv6LL Dec 13 14:28:11.244182 kubelet[1538]: E1213 14:28:11.244102 1538 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:28:11.885867 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1417637181.mount: Deactivated successfully. 
Dec 13 14:28:12.245670 kubelet[1538]: E1213 14:28:12.245071 1538 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:28:13.246102 kubelet[1538]: E1213 14:28:13.246028 1538 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:28:14.246641 kubelet[1538]: E1213 14:28:14.246578 1538 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:28:14.427647 env[1226]: time="2024-12-13T14:28:14.427561897Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:28:14.430585 env[1226]: time="2024-12-13T14:28:14.430528933Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:28:14.433051 env[1226]: time="2024-12-13T14:28:14.433004448Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:28:14.435540 env[1226]: time="2024-12-13T14:28:14.435486615Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:28:14.436568 env[1226]: time="2024-12-13T14:28:14.436499014Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Dec 13 14:28:14.440991 env[1226]: time="2024-12-13T14:28:14.440919692Z" level=info 
msg="CreateContainer within sandbox \"520a7e617b2315dea152fde9512eede5766641d49d66f136021d95ad9c32b120\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Dec 13 14:28:14.459848 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3243846823.mount: Deactivated successfully. Dec 13 14:28:14.466292 env[1226]: time="2024-12-13T14:28:14.466204655Z" level=info msg="CreateContainer within sandbox \"520a7e617b2315dea152fde9512eede5766641d49d66f136021d95ad9c32b120\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"6b94829f64dc3a0119ae981222956e17f002f1d8139bd1bd2d856ae92d28ec85\"" Dec 13 14:28:14.467220 env[1226]: time="2024-12-13T14:28:14.467180610Z" level=info msg="StartContainer for \"6b94829f64dc3a0119ae981222956e17f002f1d8139bd1bd2d856ae92d28ec85\"" Dec 13 14:28:14.512838 systemd[1]: Started cri-containerd-6b94829f64dc3a0119ae981222956e17f002f1d8139bd1bd2d856ae92d28ec85.scope. Dec 13 14:28:14.559927 env[1226]: time="2024-12-13T14:28:14.559858188Z" level=info msg="StartContainer for \"6b94829f64dc3a0119ae981222956e17f002f1d8139bd1bd2d856ae92d28ec85\" returns successfully" Dec 13 14:28:14.624081 kubelet[1538]: I1213 14:28:14.624022 1538 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.2817581869999999 podStartE2EDuration="6.623939527s" podCreationTimestamp="2024-12-13 14:28:08 +0000 UTC" firstStartedPulling="2024-12-13 14:28:09.09489087 +0000 UTC m=+46.492431671" lastFinishedPulling="2024-12-13 14:28:14.437072214 +0000 UTC m=+51.834613011" observedRunningTime="2024-12-13 14:28:14.623596541 +0000 UTC m=+52.021137350" watchObservedRunningTime="2024-12-13 14:28:14.623939527 +0000 UTC m=+52.021480335" Dec 13 14:28:15.247564 kubelet[1538]: E1213 14:28:15.247497 1538 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:28:16.248257 kubelet[1538]: E1213 
14:28:16.248181 1538 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:28:17.249229 kubelet[1538]: E1213 14:28:17.249166 1538 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:28:18.249737 kubelet[1538]: E1213 14:28:18.249669 1538 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:28:19.250769 kubelet[1538]: E1213 14:28:19.250703 1538 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:28:20.251661 kubelet[1538]: E1213 14:28:20.251574 1538 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:28:21.252344 kubelet[1538]: E1213 14:28:21.252275 1538 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:28:22.253410 kubelet[1538]: E1213 14:28:22.253336 1538 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:28:23.196961 kubelet[1538]: E1213 14:28:23.196820 1538 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:28:23.254952 kubelet[1538]: E1213 14:28:23.254096 1538 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:28:23.982775 kubelet[1538]: I1213 14:28:23.982709 1538 topology_manager.go:215] "Topology Admit Handler" podUID="a7084c36-b97b-45ab-88d5-a1758a2b994d" podNamespace="default" podName="test-pod-1" Dec 13 14:28:23.990646 systemd[1]: Created slice kubepods-besteffort-poda7084c36_b97b_45ab_88d5_a1758a2b994d.slice. 
Dec 13 14:28:24.099836 kubelet[1538]: I1213 14:28:24.099767 1538 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79jsl\" (UniqueName: \"kubernetes.io/projected/a7084c36-b97b-45ab-88d5-a1758a2b994d-kube-api-access-79jsl\") pod \"test-pod-1\" (UID: \"a7084c36-b97b-45ab-88d5-a1758a2b994d\") " pod="default/test-pod-1" Dec 13 14:28:24.100336 kubelet[1538]: I1213 14:28:24.100290 1538 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-77ed14de-69a8-461d-8ae5-832806fc9105\" (UniqueName: \"kubernetes.io/nfs/a7084c36-b97b-45ab-88d5-a1758a2b994d-pvc-77ed14de-69a8-461d-8ae5-832806fc9105\") pod \"test-pod-1\" (UID: \"a7084c36-b97b-45ab-88d5-a1758a2b994d\") " pod="default/test-pod-1" Dec 13 14:28:24.249041 kernel: FS-Cache: Loaded Dec 13 14:28:24.254340 kubelet[1538]: E1213 14:28:24.254296 1538 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:28:24.311989 kernel: RPC: Registered named UNIX socket transport module. Dec 13 14:28:24.312222 kernel: RPC: Registered udp transport module. Dec 13 14:28:24.312272 kernel: RPC: Registered tcp transport module. Dec 13 14:28:24.316807 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
Dec 13 14:28:24.406012 kernel: FS-Cache: Netfs 'nfs' registered for caching Dec 13 14:28:24.640335 kernel: NFS: Registering the id_resolver key type Dec 13 14:28:24.640578 kernel: Key type id_resolver registered Dec 13 14:28:24.640627 kernel: Key type id_legacy registered Dec 13 14:28:24.696660 nfsidmap[2844]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'c.flatcar-212911.internal' Dec 13 14:28:24.707773 nfsidmap[2845]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'c.flatcar-212911.internal' Dec 13 14:28:24.895363 env[1226]: time="2024-12-13T14:28:24.895160562Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:a7084c36-b97b-45ab-88d5-a1758a2b994d,Namespace:default,Attempt:0,}" Dec 13 14:28:24.943925 systemd-networkd[1031]: lxc9d5af9b6681b: Link UP Dec 13 14:28:24.957022 kernel: eth0: renamed from tmp20f1d Dec 13 14:28:24.982002 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 14:28:24.982200 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc9d5af9b6681b: link becomes ready Dec 13 14:28:24.989887 systemd-networkd[1031]: lxc9d5af9b6681b: Gained carrier Dec 13 14:28:25.174704 env[1226]: time="2024-12-13T14:28:25.174481524Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:28:25.174704 env[1226]: time="2024-12-13T14:28:25.174548784Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:28:25.174704 env[1226]: time="2024-12-13T14:28:25.174567657Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:28:25.175725 env[1226]: time="2024-12-13T14:28:25.175643554Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/20f1da80614b51f2409237cfdda8af0fa5932e690e5d0a5267e61a2a9d98f453 pid=2874 runtime=io.containerd.runc.v2 Dec 13 14:28:25.197460 systemd[1]: Started cri-containerd-20f1da80614b51f2409237cfdda8af0fa5932e690e5d0a5267e61a2a9d98f453.scope. Dec 13 14:28:25.255481 kubelet[1538]: E1213 14:28:25.255416 1538 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:28:25.277020 env[1226]: time="2024-12-13T14:28:25.276941672Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:a7084c36-b97b-45ab-88d5-a1758a2b994d,Namespace:default,Attempt:0,} returns sandbox id \"20f1da80614b51f2409237cfdda8af0fa5932e690e5d0a5267e61a2a9d98f453\"" Dec 13 14:28:25.280593 env[1226]: time="2024-12-13T14:28:25.280527923Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Dec 13 14:28:25.489837 env[1226]: time="2024-12-13T14:28:25.489238894Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:28:25.492276 env[1226]: time="2024-12-13T14:28:25.492222474Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:28:25.494857 env[1226]: time="2024-12-13T14:28:25.494811507Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:28:25.497102 env[1226]: time="2024-12-13T14:28:25.497045764Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:28:25.498070 env[1226]: time="2024-12-13T14:28:25.498000000Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\"" Dec 13 14:28:25.501748 env[1226]: time="2024-12-13T14:28:25.501687629Z" level=info msg="CreateContainer within sandbox \"20f1da80614b51f2409237cfdda8af0fa5932e690e5d0a5267e61a2a9d98f453\" for container &ContainerMetadata{Name:test,Attempt:0,}" Dec 13 14:28:25.537558 env[1226]: time="2024-12-13T14:28:25.537481960Z" level=info msg="CreateContainer within sandbox \"20f1da80614b51f2409237cfdda8af0fa5932e690e5d0a5267e61a2a9d98f453\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"db388053da0a11ebddc2d53100fc2e60b476b718e8454b0e3435f108237fa2f9\"" Dec 13 14:28:25.538394 env[1226]: time="2024-12-13T14:28:25.538338730Z" level=info msg="StartContainer for \"db388053da0a11ebddc2d53100fc2e60b476b718e8454b0e3435f108237fa2f9\"" Dec 13 14:28:25.578744 systemd[1]: Started cri-containerd-db388053da0a11ebddc2d53100fc2e60b476b718e8454b0e3435f108237fa2f9.scope. 
Dec 13 14:28:25.629052 env[1226]: time="2024-12-13T14:28:25.628583879Z" level=info msg="StartContainer for \"db388053da0a11ebddc2d53100fc2e60b476b718e8454b0e3435f108237fa2f9\" returns successfully" Dec 13 14:28:25.652410 kubelet[1538]: I1213 14:28:25.652356 1538 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=16.433305058 podStartE2EDuration="16.652286985s" podCreationTimestamp="2024-12-13 14:28:09 +0000 UTC" firstStartedPulling="2024-12-13 14:28:25.279506254 +0000 UTC m=+62.677047040" lastFinishedPulling="2024-12-13 14:28:25.49848817 +0000 UTC m=+62.896028967" observedRunningTime="2024-12-13 14:28:25.651898827 +0000 UTC m=+63.049439635" watchObservedRunningTime="2024-12-13 14:28:25.652286985 +0000 UTC m=+63.049827791" Dec 13 14:28:26.222746 systemd[1]: run-containerd-runc-k8s.io-db388053da0a11ebddc2d53100fc2e60b476b718e8454b0e3435f108237fa2f9-runc.1ngRQI.mount: Deactivated successfully. Dec 13 14:28:26.256347 kubelet[1538]: E1213 14:28:26.256271 1538 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:28:26.974373 systemd-networkd[1031]: lxc9d5af9b6681b: Gained IPv6LL Dec 13 14:28:27.256943 kubelet[1538]: E1213 14:28:27.256768 1538 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:28:28.257727 kubelet[1538]: E1213 14:28:28.257637 1538 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:28:28.480365 systemd[1]: run-containerd-runc-k8s.io-156494a2b63d902ec58f1acd03e2ae4723a18f4f5080c5be6b251b5a8e3007ef-runc.usQpkh.mount: Deactivated successfully. 
Dec 13 14:28:28.508662 env[1226]: time="2024-12-13T14:28:28.508460803Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 14:28:28.518378 env[1226]: time="2024-12-13T14:28:28.518299179Z" level=info msg="StopContainer for \"156494a2b63d902ec58f1acd03e2ae4723a18f4f5080c5be6b251b5a8e3007ef\" with timeout 2 (s)" Dec 13 14:28:28.518758 env[1226]: time="2024-12-13T14:28:28.518706371Z" level=info msg="Stop container \"156494a2b63d902ec58f1acd03e2ae4723a18f4f5080c5be6b251b5a8e3007ef\" with signal terminated" Dec 13 14:28:28.529913 systemd-networkd[1031]: lxc_health: Link DOWN Dec 13 14:28:28.529926 systemd-networkd[1031]: lxc_health: Lost carrier Dec 13 14:28:28.556575 systemd[1]: cri-containerd-156494a2b63d902ec58f1acd03e2ae4723a18f4f5080c5be6b251b5a8e3007ef.scope: Deactivated successfully. Dec 13 14:28:28.557033 systemd[1]: cri-containerd-156494a2b63d902ec58f1acd03e2ae4723a18f4f5080c5be6b251b5a8e3007ef.scope: Consumed 8.926s CPU time. Dec 13 14:28:28.586477 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-156494a2b63d902ec58f1acd03e2ae4723a18f4f5080c5be6b251b5a8e3007ef-rootfs.mount: Deactivated successfully. 
Dec 13 14:28:29.258520 kubelet[1538]: E1213 14:28:29.258444 1538 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:28:30.259438 kubelet[1538]: E1213 14:28:30.259346 1538 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:28:30.374316 env[1226]: time="2024-12-13T14:28:30.374235457Z" level=info msg="shim disconnected" id=156494a2b63d902ec58f1acd03e2ae4723a18f4f5080c5be6b251b5a8e3007ef
Dec 13 14:28:30.375047 env[1226]: time="2024-12-13T14:28:30.374347576Z" level=warning msg="cleaning up after shim disconnected" id=156494a2b63d902ec58f1acd03e2ae4723a18f4f5080c5be6b251b5a8e3007ef namespace=k8s.io
Dec 13 14:28:30.375047 env[1226]: time="2024-12-13T14:28:30.374382196Z" level=info msg="cleaning up dead shim"
Dec 13 14:28:30.387486 env[1226]: time="2024-12-13T14:28:30.387405728Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:28:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3010 runtime=io.containerd.runc.v2\n"
Dec 13 14:28:30.391147 env[1226]: time="2024-12-13T14:28:30.391083851Z" level=info msg="StopContainer for \"156494a2b63d902ec58f1acd03e2ae4723a18f4f5080c5be6b251b5a8e3007ef\" returns successfully"
Dec 13 14:28:30.392077 env[1226]: time="2024-12-13T14:28:30.392027048Z" level=info msg="StopPodSandbox for \"ac7f060ae4457dfbdfabb4a14e825d87bc290fbcb5758058f70d124a839c36ca\""
Dec 13 14:28:30.392226 env[1226]: time="2024-12-13T14:28:30.392116594Z" level=info msg="Container to stop \"6337587c17c0029fff89e64b162ac7f6ace0d2110b4e3a2d202400ec6f4c60be\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:28:30.392226 env[1226]: time="2024-12-13T14:28:30.392141609Z" level=info msg="Container to stop \"1cfa4d4615630629f8b0951b6b805ecbffb912fb0843488a57f1984a17d47de7\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:28:30.392226 env[1226]: time="2024-12-13T14:28:30.392161514Z" level=info msg="Container to stop \"6b76043d2f89ac51356bf369568e74a871ab08745f40fd5e1e8258f54d8eba15\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:28:30.392226 env[1226]: time="2024-12-13T14:28:30.392181373Z" level=info msg="Container to stop \"4b34ea205e39aa477f5a265177e392ff9499c731c8779b4e1fe2fa28697b8f06\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:28:30.392226 env[1226]: time="2024-12-13T14:28:30.392199748Z" level=info msg="Container to stop \"156494a2b63d902ec58f1acd03e2ae4723a18f4f5080c5be6b251b5a8e3007ef\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:28:30.395676 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ac7f060ae4457dfbdfabb4a14e825d87bc290fbcb5758058f70d124a839c36ca-shm.mount: Deactivated successfully.
Dec 13 14:28:30.405941 systemd[1]: cri-containerd-ac7f060ae4457dfbdfabb4a14e825d87bc290fbcb5758058f70d124a839c36ca.scope: Deactivated successfully.
Dec 13 14:28:30.438596 env[1226]: time="2024-12-13T14:28:30.438525878Z" level=info msg="shim disconnected" id=ac7f060ae4457dfbdfabb4a14e825d87bc290fbcb5758058f70d124a839c36ca
Dec 13 14:28:30.439252 env[1226]: time="2024-12-13T14:28:30.439202256Z" level=warning msg="cleaning up after shim disconnected" id=ac7f060ae4457dfbdfabb4a14e825d87bc290fbcb5758058f70d124a839c36ca namespace=k8s.io
Dec 13 14:28:30.439603 env[1226]: time="2024-12-13T14:28:30.439580252Z" level=info msg="cleaning up dead shim"
Dec 13 14:28:30.439725 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ac7f060ae4457dfbdfabb4a14e825d87bc290fbcb5758058f70d124a839c36ca-rootfs.mount: Deactivated successfully.
Dec 13 14:28:30.454937 env[1226]: time="2024-12-13T14:28:30.454878798Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:28:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3042 runtime=io.containerd.runc.v2\n"
Dec 13 14:28:30.455481 env[1226]: time="2024-12-13T14:28:30.455420272Z" level=info msg="TearDown network for sandbox \"ac7f060ae4457dfbdfabb4a14e825d87bc290fbcb5758058f70d124a839c36ca\" successfully"
Dec 13 14:28:30.455481 env[1226]: time="2024-12-13T14:28:30.455462557Z" level=info msg="StopPodSandbox for \"ac7f060ae4457dfbdfabb4a14e825d87bc290fbcb5758058f70d124a839c36ca\" returns successfully"
Dec 13 14:28:30.643683 kubelet[1538]: I1213 14:28:30.643282 1538 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c1c1f159-b48e-4c8a-947c-060c7f529d50-etc-cni-netd\") pod \"c1c1f159-b48e-4c8a-947c-060c7f529d50\" (UID: \"c1c1f159-b48e-4c8a-947c-060c7f529d50\") "
Dec 13 14:28:30.643683 kubelet[1538]: I1213 14:28:30.643350 1538 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c1c1f159-b48e-4c8a-947c-060c7f529d50-xtables-lock\") pod \"c1c1f159-b48e-4c8a-947c-060c7f529d50\" (UID: \"c1c1f159-b48e-4c8a-947c-060c7f529d50\") "
Dec 13 14:28:30.643683 kubelet[1538]: I1213 14:28:30.643399 1538 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xtxvg\" (UniqueName: \"kubernetes.io/projected/c1c1f159-b48e-4c8a-947c-060c7f529d50-kube-api-access-xtxvg\") pod \"c1c1f159-b48e-4c8a-947c-060c7f529d50\" (UID: \"c1c1f159-b48e-4c8a-947c-060c7f529d50\") "
Dec 13 14:28:30.643683 kubelet[1538]: I1213 14:28:30.643434 1538 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c1c1f159-b48e-4c8a-947c-060c7f529d50-clustermesh-secrets\") pod \"c1c1f159-b48e-4c8a-947c-060c7f529d50\" (UID: \"c1c1f159-b48e-4c8a-947c-060c7f529d50\") "
Dec 13 14:28:30.643683 kubelet[1538]: I1213 14:28:30.643463 1538 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c1c1f159-b48e-4c8a-947c-060c7f529d50-cilium-cgroup\") pod \"c1c1f159-b48e-4c8a-947c-060c7f529d50\" (UID: \"c1c1f159-b48e-4c8a-947c-060c7f529d50\") "
Dec 13 14:28:30.643683 kubelet[1538]: I1213 14:28:30.643490 1538 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c1c1f159-b48e-4c8a-947c-060c7f529d50-host-proc-sys-net\") pod \"c1c1f159-b48e-4c8a-947c-060c7f529d50\" (UID: \"c1c1f159-b48e-4c8a-947c-060c7f529d50\") "
Dec 13 14:28:30.644294 kubelet[1538]: I1213 14:28:30.643519 1538 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c1c1f159-b48e-4c8a-947c-060c7f529d50-host-proc-sys-kernel\") pod \"c1c1f159-b48e-4c8a-947c-060c7f529d50\" (UID: \"c1c1f159-b48e-4c8a-947c-060c7f529d50\") "
Dec 13 14:28:30.644294 kubelet[1538]: I1213 14:28:30.643550 1538 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c1c1f159-b48e-4c8a-947c-060c7f529d50-lib-modules\") pod \"c1c1f159-b48e-4c8a-947c-060c7f529d50\" (UID: \"c1c1f159-b48e-4c8a-947c-060c7f529d50\") "
Dec 13 14:28:30.644294 kubelet[1538]: I1213 14:28:30.643585 1538 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c1c1f159-b48e-4c8a-947c-060c7f529d50-cni-path\") pod \"c1c1f159-b48e-4c8a-947c-060c7f529d50\" (UID: \"c1c1f159-b48e-4c8a-947c-060c7f529d50\") "
Dec 13 14:28:30.644294 kubelet[1538]: I1213 14:28:30.643616 1538 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c1c1f159-b48e-4c8a-947c-060c7f529d50-bpf-maps\") pod \"c1c1f159-b48e-4c8a-947c-060c7f529d50\" (UID: \"c1c1f159-b48e-4c8a-947c-060c7f529d50\") "
Dec 13 14:28:30.644294 kubelet[1538]: I1213 14:28:30.643642 1538 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c1c1f159-b48e-4c8a-947c-060c7f529d50-hostproc\") pod \"c1c1f159-b48e-4c8a-947c-060c7f529d50\" (UID: \"c1c1f159-b48e-4c8a-947c-060c7f529d50\") "
Dec 13 14:28:30.644294 kubelet[1538]: I1213 14:28:30.643678 1538 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c1c1f159-b48e-4c8a-947c-060c7f529d50-hubble-tls\") pod \"c1c1f159-b48e-4c8a-947c-060c7f529d50\" (UID: \"c1c1f159-b48e-4c8a-947c-060c7f529d50\") "
Dec 13 14:28:30.644643 kubelet[1538]: I1213 14:28:30.643706 1538 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c1c1f159-b48e-4c8a-947c-060c7f529d50-cilium-run\") pod \"c1c1f159-b48e-4c8a-947c-060c7f529d50\" (UID: \"c1c1f159-b48e-4c8a-947c-060c7f529d50\") "
Dec 13 14:28:30.644643 kubelet[1538]: I1213 14:28:30.643741 1538 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c1c1f159-b48e-4c8a-947c-060c7f529d50-cilium-config-path\") pod \"c1c1f159-b48e-4c8a-947c-060c7f529d50\" (UID: \"c1c1f159-b48e-4c8a-947c-060c7f529d50\") "
Dec 13 14:28:30.644869 kubelet[1538]: I1213 14:28:30.644829 1538 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1c1f159-b48e-4c8a-947c-060c7f529d50-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "c1c1f159-b48e-4c8a-947c-060c7f529d50" (UID: "c1c1f159-b48e-4c8a-947c-060c7f529d50"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:28:30.645089 kubelet[1538]: I1213 14:28:30.645059 1538 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1c1f159-b48e-4c8a-947c-060c7f529d50-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "c1c1f159-b48e-4c8a-947c-060c7f529d50" (UID: "c1c1f159-b48e-4c8a-947c-060c7f529d50"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:28:30.645249 kubelet[1538]: I1213 14:28:30.645228 1538 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1c1f159-b48e-4c8a-947c-060c7f529d50-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "c1c1f159-b48e-4c8a-947c-060c7f529d50" (UID: "c1c1f159-b48e-4c8a-947c-060c7f529d50"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:28:30.647297 kubelet[1538]: I1213 14:28:30.647249 1538 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1c1f159-b48e-4c8a-947c-060c7f529d50-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c1c1f159-b48e-4c8a-947c-060c7f529d50" (UID: "c1c1f159-b48e-4c8a-947c-060c7f529d50"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 14:28:30.647437 kubelet[1538]: I1213 14:28:30.647345 1538 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1c1f159-b48e-4c8a-947c-060c7f529d50-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c1c1f159-b48e-4c8a-947c-060c7f529d50" (UID: "c1c1f159-b48e-4c8a-947c-060c7f529d50"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:28:30.647437 kubelet[1538]: I1213 14:28:30.647389 1538 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1c1f159-b48e-4c8a-947c-060c7f529d50-cni-path" (OuterVolumeSpecName: "cni-path") pod "c1c1f159-b48e-4c8a-947c-060c7f529d50" (UID: "c1c1f159-b48e-4c8a-947c-060c7f529d50"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:28:30.647437 kubelet[1538]: I1213 14:28:30.647420 1538 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1c1f159-b48e-4c8a-947c-060c7f529d50-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "c1c1f159-b48e-4c8a-947c-060c7f529d50" (UID: "c1c1f159-b48e-4c8a-947c-060c7f529d50"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:28:30.647622 kubelet[1538]: I1213 14:28:30.647450 1538 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1c1f159-b48e-4c8a-947c-060c7f529d50-hostproc" (OuterVolumeSpecName: "hostproc") pod "c1c1f159-b48e-4c8a-947c-060c7f529d50" (UID: "c1c1f159-b48e-4c8a-947c-060c7f529d50"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:28:30.650022 kubelet[1538]: I1213 14:28:30.649988 1538 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1c1f159-b48e-4c8a-947c-060c7f529d50-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "c1c1f159-b48e-4c8a-947c-060c7f529d50" (UID: "c1c1f159-b48e-4c8a-947c-060c7f529d50"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:28:30.650259 kubelet[1538]: I1213 14:28:30.650222 1538 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1c1f159-b48e-4c8a-947c-060c7f529d50-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "c1c1f159-b48e-4c8a-947c-060c7f529d50" (UID: "c1c1f159-b48e-4c8a-947c-060c7f529d50"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:28:30.650448 kubelet[1538]: I1213 14:28:30.650410 1538 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1c1f159-b48e-4c8a-947c-060c7f529d50-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "c1c1f159-b48e-4c8a-947c-060c7f529d50" (UID: "c1c1f159-b48e-4c8a-947c-060c7f529d50"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:28:30.655820 systemd[1]: var-lib-kubelet-pods-c1c1f159\x2db48e\x2d4c8a\x2d947c\x2d060c7f529d50-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxtxvg.mount: Deactivated successfully.
Dec 13 14:28:30.658473 kubelet[1538]: I1213 14:28:30.658429 1538 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1c1f159-b48e-4c8a-947c-060c7f529d50-kube-api-access-xtxvg" (OuterVolumeSpecName: "kube-api-access-xtxvg") pod "c1c1f159-b48e-4c8a-947c-060c7f529d50" (UID: "c1c1f159-b48e-4c8a-947c-060c7f529d50"). InnerVolumeSpecName "kube-api-access-xtxvg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 14:28:30.663142 systemd[1]: var-lib-kubelet-pods-c1c1f159\x2db48e\x2d4c8a\x2d947c\x2d060c7f529d50-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Dec 13 14:28:30.665069 kubelet[1538]: I1213 14:28:30.664648 1538 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1c1f159-b48e-4c8a-947c-060c7f529d50-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "c1c1f159-b48e-4c8a-947c-060c7f529d50" (UID: "c1c1f159-b48e-4c8a-947c-060c7f529d50"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 14:28:30.667559 kubelet[1538]: I1213 14:28:30.667533 1538 scope.go:117] "RemoveContainer" containerID="156494a2b63d902ec58f1acd03e2ae4723a18f4f5080c5be6b251b5a8e3007ef"
Dec 13 14:28:30.671021 systemd[1]: var-lib-kubelet-pods-c1c1f159\x2db48e\x2d4c8a\x2d947c\x2d060c7f529d50-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Dec 13 14:28:30.672331 kubelet[1538]: I1213 14:28:30.671949 1538 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1c1f159-b48e-4c8a-947c-060c7f529d50-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "c1c1f159-b48e-4c8a-947c-060c7f529d50" (UID: "c1c1f159-b48e-4c8a-947c-060c7f529d50"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 14:28:30.675161 env[1226]: time="2024-12-13T14:28:30.675099823Z" level=info msg="RemoveContainer for \"156494a2b63d902ec58f1acd03e2ae4723a18f4f5080c5be6b251b5a8e3007ef\""
Dec 13 14:28:30.679577 systemd[1]: Removed slice kubepods-burstable-podc1c1f159_b48e_4c8a_947c_060c7f529d50.slice.
Dec 13 14:28:30.679743 systemd[1]: kubepods-burstable-podc1c1f159_b48e_4c8a_947c_060c7f529d50.slice: Consumed 9.095s CPU time.
Dec 13 14:28:30.683676 env[1226]: time="2024-12-13T14:28:30.683624970Z" level=info msg="RemoveContainer for \"156494a2b63d902ec58f1acd03e2ae4723a18f4f5080c5be6b251b5a8e3007ef\" returns successfully"
Dec 13 14:28:30.683992 kubelet[1538]: I1213 14:28:30.683944 1538 scope.go:117] "RemoveContainer" containerID="6337587c17c0029fff89e64b162ac7f6ace0d2110b4e3a2d202400ec6f4c60be"
Dec 13 14:28:30.685776 env[1226]: time="2024-12-13T14:28:30.685719836Z" level=info msg="RemoveContainer for \"6337587c17c0029fff89e64b162ac7f6ace0d2110b4e3a2d202400ec6f4c60be\""
Dec 13 14:28:30.689928 env[1226]: time="2024-12-13T14:28:30.689888005Z" level=info msg="RemoveContainer for \"6337587c17c0029fff89e64b162ac7f6ace0d2110b4e3a2d202400ec6f4c60be\" returns successfully"
Dec 13 14:28:30.690240 kubelet[1538]: I1213 14:28:30.690218 1538 scope.go:117] "RemoveContainer" containerID="4b34ea205e39aa477f5a265177e392ff9499c731c8779b4e1fe2fa28697b8f06"
Dec 13 14:28:30.691889 env[1226]: time="2024-12-13T14:28:30.691852644Z" level=info msg="RemoveContainer for \"4b34ea205e39aa477f5a265177e392ff9499c731c8779b4e1fe2fa28697b8f06\""
Dec 13 14:28:30.695460 env[1226]: time="2024-12-13T14:28:30.695420592Z" level=info msg="RemoveContainer for \"4b34ea205e39aa477f5a265177e392ff9499c731c8779b4e1fe2fa28697b8f06\" returns successfully"
Dec 13 14:28:30.695742 kubelet[1538]: I1213 14:28:30.695699 1538 scope.go:117] "RemoveContainer" containerID="6b76043d2f89ac51356bf369568e74a871ab08745f40fd5e1e8258f54d8eba15"
Dec 13 14:28:30.698194 env[1226]: time="2024-12-13T14:28:30.698161340Z" level=info msg="RemoveContainer for \"6b76043d2f89ac51356bf369568e74a871ab08745f40fd5e1e8258f54d8eba15\""
Dec 13 14:28:30.702611 env[1226]: time="2024-12-13T14:28:30.702550590Z" level=info msg="RemoveContainer for \"6b76043d2f89ac51356bf369568e74a871ab08745f40fd5e1e8258f54d8eba15\" returns successfully"
Dec 13 14:28:30.702795 kubelet[1538]: I1213 14:28:30.702766 1538 scope.go:117] "RemoveContainer" containerID="1cfa4d4615630629f8b0951b6b805ecbffb912fb0843488a57f1984a17d47de7"
Dec 13 14:28:30.710244 env[1226]: time="2024-12-13T14:28:30.710201039Z" level=info msg="RemoveContainer for \"1cfa4d4615630629f8b0951b6b805ecbffb912fb0843488a57f1984a17d47de7\""
Dec 13 14:28:30.714908 env[1226]: time="2024-12-13T14:28:30.714858087Z" level=info msg="RemoveContainer for \"1cfa4d4615630629f8b0951b6b805ecbffb912fb0843488a57f1984a17d47de7\" returns successfully"
Dec 13 14:28:30.715237 kubelet[1538]: I1213 14:28:30.715195 1538 scope.go:117] "RemoveContainer" containerID="156494a2b63d902ec58f1acd03e2ae4723a18f4f5080c5be6b251b5a8e3007ef"
Dec 13 14:28:30.715671 env[1226]: time="2024-12-13T14:28:30.715563604Z" level=error msg="ContainerStatus for \"156494a2b63d902ec58f1acd03e2ae4723a18f4f5080c5be6b251b5a8e3007ef\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"156494a2b63d902ec58f1acd03e2ae4723a18f4f5080c5be6b251b5a8e3007ef\": not found"
Dec 13 14:28:30.715902 kubelet[1538]: E1213 14:28:30.715857 1538 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"156494a2b63d902ec58f1acd03e2ae4723a18f4f5080c5be6b251b5a8e3007ef\": not found" containerID="156494a2b63d902ec58f1acd03e2ae4723a18f4f5080c5be6b251b5a8e3007ef"
Dec 13 14:28:30.716057 kubelet[1538]: I1213 14:28:30.716035 1538 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"156494a2b63d902ec58f1acd03e2ae4723a18f4f5080c5be6b251b5a8e3007ef"} err="failed to get container status \"156494a2b63d902ec58f1acd03e2ae4723a18f4f5080c5be6b251b5a8e3007ef\": rpc error: code = NotFound desc = an error occurred when try to find container \"156494a2b63d902ec58f1acd03e2ae4723a18f4f5080c5be6b251b5a8e3007ef\": not found"
Dec 13 14:28:30.716161 kubelet[1538]: I1213 14:28:30.716069 1538 scope.go:117] "RemoveContainer" containerID="6337587c17c0029fff89e64b162ac7f6ace0d2110b4e3a2d202400ec6f4c60be"
Dec 13 14:28:30.716388 env[1226]: time="2024-12-13T14:28:30.716310686Z" level=error msg="ContainerStatus for \"6337587c17c0029fff89e64b162ac7f6ace0d2110b4e3a2d202400ec6f4c60be\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6337587c17c0029fff89e64b162ac7f6ace0d2110b4e3a2d202400ec6f4c60be\": not found"
Dec 13 14:28:30.716544 kubelet[1538]: E1213 14:28:30.716519 1538 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6337587c17c0029fff89e64b162ac7f6ace0d2110b4e3a2d202400ec6f4c60be\": not found" containerID="6337587c17c0029fff89e64b162ac7f6ace0d2110b4e3a2d202400ec6f4c60be"
Dec 13 14:28:30.716647 kubelet[1538]: I1213 14:28:30.716570 1538 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6337587c17c0029fff89e64b162ac7f6ace0d2110b4e3a2d202400ec6f4c60be"} err="failed to get container status \"6337587c17c0029fff89e64b162ac7f6ace0d2110b4e3a2d202400ec6f4c60be\": rpc error: code = NotFound desc = an error occurred when try to find container \"6337587c17c0029fff89e64b162ac7f6ace0d2110b4e3a2d202400ec6f4c60be\": not found"
Dec 13 14:28:30.716647 kubelet[1538]: I1213 14:28:30.716592 1538 scope.go:117] "RemoveContainer" containerID="4b34ea205e39aa477f5a265177e392ff9499c731c8779b4e1fe2fa28697b8f06"
Dec 13 14:28:30.716903 env[1226]: time="2024-12-13T14:28:30.716826972Z" level=error msg="ContainerStatus for \"4b34ea205e39aa477f5a265177e392ff9499c731c8779b4e1fe2fa28697b8f06\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4b34ea205e39aa477f5a265177e392ff9499c731c8779b4e1fe2fa28697b8f06\": not found"
Dec 13 14:28:30.717082 kubelet[1538]: E1213 14:28:30.717059 1538 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4b34ea205e39aa477f5a265177e392ff9499c731c8779b4e1fe2fa28697b8f06\": not found" containerID="4b34ea205e39aa477f5a265177e392ff9499c731c8779b4e1fe2fa28697b8f06"
Dec 13 14:28:30.717193 kubelet[1538]: I1213 14:28:30.717103 1538 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4b34ea205e39aa477f5a265177e392ff9499c731c8779b4e1fe2fa28697b8f06"} err="failed to get container status \"4b34ea205e39aa477f5a265177e392ff9499c731c8779b4e1fe2fa28697b8f06\": rpc error: code = NotFound desc = an error occurred when try to find container \"4b34ea205e39aa477f5a265177e392ff9499c731c8779b4e1fe2fa28697b8f06\": not found"
Dec 13 14:28:30.717193 kubelet[1538]: I1213 14:28:30.717127 1538 scope.go:117] "RemoveContainer" containerID="6b76043d2f89ac51356bf369568e74a871ab08745f40fd5e1e8258f54d8eba15"
Dec 13 14:28:30.717426 env[1226]: time="2024-12-13T14:28:30.717343922Z" level=error msg="ContainerStatus for \"6b76043d2f89ac51356bf369568e74a871ab08745f40fd5e1e8258f54d8eba15\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6b76043d2f89ac51356bf369568e74a871ab08745f40fd5e1e8258f54d8eba15\": not found"
Dec 13 14:28:30.717559 kubelet[1538]: E1213 14:28:30.717537 1538 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6b76043d2f89ac51356bf369568e74a871ab08745f40fd5e1e8258f54d8eba15\": not found" containerID="6b76043d2f89ac51356bf369568e74a871ab08745f40fd5e1e8258f54d8eba15"
Dec 13 14:28:30.717687 kubelet[1538]: I1213 14:28:30.717578 1538 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6b76043d2f89ac51356bf369568e74a871ab08745f40fd5e1e8258f54d8eba15"} err="failed to get container status \"6b76043d2f89ac51356bf369568e74a871ab08745f40fd5e1e8258f54d8eba15\": rpc error: code = NotFound desc = an error occurred when try to find container \"6b76043d2f89ac51356bf369568e74a871ab08745f40fd5e1e8258f54d8eba15\": not found"
Dec 13 14:28:30.717687 kubelet[1538]: I1213 14:28:30.717596 1538 scope.go:117] "RemoveContainer" containerID="1cfa4d4615630629f8b0951b6b805ecbffb912fb0843488a57f1984a17d47de7"
Dec 13 14:28:30.717900 env[1226]: time="2024-12-13T14:28:30.717832157Z" level=error msg="ContainerStatus for \"1cfa4d4615630629f8b0951b6b805ecbffb912fb0843488a57f1984a17d47de7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1cfa4d4615630629f8b0951b6b805ecbffb912fb0843488a57f1984a17d47de7\": not found"
Dec 13 14:28:30.718135 kubelet[1538]: E1213 14:28:30.718082 1538 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1cfa4d4615630629f8b0951b6b805ecbffb912fb0843488a57f1984a17d47de7\": not found" containerID="1cfa4d4615630629f8b0951b6b805ecbffb912fb0843488a57f1984a17d47de7"
Dec 13 14:28:30.718135 kubelet[1538]: I1213 14:28:30.718123 1538 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1cfa4d4615630629f8b0951b6b805ecbffb912fb0843488a57f1984a17d47de7"} err="failed to get container status \"1cfa4d4615630629f8b0951b6b805ecbffb912fb0843488a57f1984a17d47de7\": rpc error: code = NotFound desc = an error occurred when try to find container \"1cfa4d4615630629f8b0951b6b805ecbffb912fb0843488a57f1984a17d47de7\": not found"
Dec 13 14:28:30.744524 kubelet[1538]: I1213 14:28:30.744487 1538 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-xtxvg\" (UniqueName: \"kubernetes.io/projected/c1c1f159-b48e-4c8a-947c-060c7f529d50-kube-api-access-xtxvg\") on node \"10.128.0.21\" DevicePath \"\""
Dec 13 14:28:30.744524 kubelet[1538]: I1213 14:28:30.744529 1538 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c1c1f159-b48e-4c8a-947c-060c7f529d50-clustermesh-secrets\") on node \"10.128.0.21\" DevicePath \"\""
Dec 13 14:28:30.744768 kubelet[1538]: I1213 14:28:30.744550 1538 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c1c1f159-b48e-4c8a-947c-060c7f529d50-etc-cni-netd\") on node \"10.128.0.21\" DevicePath \"\""
Dec 13 14:28:30.744768 kubelet[1538]: I1213 14:28:30.744568 1538 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c1c1f159-b48e-4c8a-947c-060c7f529d50-xtables-lock\") on node \"10.128.0.21\" DevicePath \"\""
Dec 13 14:28:30.744768 kubelet[1538]: I1213 14:28:30.744585 1538 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c1c1f159-b48e-4c8a-947c-060c7f529d50-host-proc-sys-kernel\") on node \"10.128.0.21\" DevicePath \"\""
Dec 13 14:28:30.744768 kubelet[1538]: I1213 14:28:30.744601 1538 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c1c1f159-b48e-4c8a-947c-060c7f529d50-lib-modules\") on node \"10.128.0.21\" DevicePath \"\""
Dec 13 14:28:30.744768 kubelet[1538]: I1213 14:28:30.744617 1538 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c1c1f159-b48e-4c8a-947c-060c7f529d50-cilium-cgroup\") on node \"10.128.0.21\" DevicePath \"\""
Dec 13 14:28:30.744768 kubelet[1538]: I1213 14:28:30.744632 1538 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c1c1f159-b48e-4c8a-947c-060c7f529d50-host-proc-sys-net\") on node \"10.128.0.21\" DevicePath \"\""
Dec 13 14:28:30.744768 kubelet[1538]: I1213 14:28:30.744647 1538 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c1c1f159-b48e-4c8a-947c-060c7f529d50-hubble-tls\") on node \"10.128.0.21\" DevicePath \"\""
Dec 13 14:28:30.744768 kubelet[1538]: I1213 14:28:30.744663 1538 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c1c1f159-b48e-4c8a-947c-060c7f529d50-cilium-run\") on node \"10.128.0.21\" DevicePath \"\""
Dec 13 14:28:30.745083 kubelet[1538]: I1213 14:28:30.744680 1538 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c1c1f159-b48e-4c8a-947c-060c7f529d50-cni-path\") on node \"10.128.0.21\" DevicePath \"\""
Dec 13 14:28:30.745083 kubelet[1538]: I1213 14:28:30.744697 1538 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c1c1f159-b48e-4c8a-947c-060c7f529d50-bpf-maps\") on node \"10.128.0.21\" DevicePath \"\""
Dec 13 14:28:30.745083 kubelet[1538]: I1213 14:28:30.744715 1538 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c1c1f159-b48e-4c8a-947c-060c7f529d50-hostproc\") on node \"10.128.0.21\" DevicePath \"\""
Dec 13 14:28:30.745083 kubelet[1538]: I1213 14:28:30.744733 1538 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c1c1f159-b48e-4c8a-947c-060c7f529d50-cilium-config-path\") on node \"10.128.0.21\" DevicePath \"\""
Dec 13 14:28:31.260564 kubelet[1538]: E1213 14:28:31.260472 1538 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:28:31.443430 kubelet[1538]: I1213 14:28:31.443359 1538 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="c1c1f159-b48e-4c8a-947c-060c7f529d50" path="/var/lib/kubelet/pods/c1c1f159-b48e-4c8a-947c-060c7f529d50/volumes"
Dec 13 14:28:31.518472 systemd[1]: Started sshd@9-10.128.0.21:22-218.92.0.190:17243.service.
Dec 13 14:28:31.890222 kubelet[1538]: I1213 14:28:31.890165 1538 topology_manager.go:215] "Topology Admit Handler" podUID="ee72e78d-10b0-49a7-96ec-1c1100303d55" podNamespace="kube-system" podName="cilium-operator-5cc964979-rmvvv"
Dec 13 14:28:31.890222 kubelet[1538]: E1213 14:28:31.890252 1538 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c1c1f159-b48e-4c8a-947c-060c7f529d50" containerName="cilium-agent"
Dec 13 14:28:31.890611 kubelet[1538]: E1213 14:28:31.890271 1538 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c1c1f159-b48e-4c8a-947c-060c7f529d50" containerName="mount-cgroup"
Dec 13 14:28:31.890611 kubelet[1538]: E1213 14:28:31.890283 1538 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c1c1f159-b48e-4c8a-947c-060c7f529d50" containerName="mount-bpf-fs"
Dec 13 14:28:31.890611 kubelet[1538]: E1213 14:28:31.890294 1538 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c1c1f159-b48e-4c8a-947c-060c7f529d50" containerName="clean-cilium-state"
Dec 13 14:28:31.890611 kubelet[1538]: E1213 14:28:31.890307 1538 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c1c1f159-b48e-4c8a-947c-060c7f529d50" containerName="apply-sysctl-overwrites"
Dec 13 14:28:31.890611 kubelet[1538]: I1213 14:28:31.890337 1538 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1c1f159-b48e-4c8a-947c-060c7f529d50" containerName="cilium-agent"
Dec 13 14:28:31.898077 systemd[1]: Created slice kubepods-besteffort-podee72e78d_10b0_49a7_96ec_1c1100303d55.slice.
Dec 13 14:28:31.941601 kubelet[1538]: I1213 14:28:31.941536 1538 topology_manager.go:215] "Topology Admit Handler" podUID="2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68" podNamespace="kube-system" podName="cilium-q8mfh"
Dec 13 14:28:31.950590 systemd[1]: Created slice kubepods-burstable-pod2de0d4c2_cd59_44bb_b7f3_2e2d6d028b68.slice.
Dec 13 14:28:32.053294 kubelet[1538]: I1213 14:28:32.053219 1538 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68-host-proc-sys-kernel\") pod \"cilium-q8mfh\" (UID: \"2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68\") " pod="kube-system/cilium-q8mfh" Dec 13 14:28:32.053743 kubelet[1538]: I1213 14:28:32.053714 1538 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwgbm\" (UniqueName: \"kubernetes.io/projected/2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68-kube-api-access-xwgbm\") pod \"cilium-q8mfh\" (UID: \"2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68\") " pod="kube-system/cilium-q8mfh" Dec 13 14:28:32.053982 kubelet[1538]: I1213 14:28:32.053943 1538 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68-cni-path\") pod \"cilium-q8mfh\" (UID: \"2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68\") " pod="kube-system/cilium-q8mfh" Dec 13 14:28:32.054097 kubelet[1538]: I1213 14:28:32.054023 1538 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68-host-proc-sys-net\") pod \"cilium-q8mfh\" (UID: \"2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68\") " pod="kube-system/cilium-q8mfh" Dec 13 14:28:32.054097 kubelet[1538]: I1213 14:28:32.054064 1538 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68-cilium-config-path\") pod \"cilium-q8mfh\" (UID: \"2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68\") " pod="kube-system/cilium-q8mfh" Dec 13 14:28:32.054231 kubelet[1538]: I1213 14:28:32.054108 1538 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k245k\" (UniqueName: \"kubernetes.io/projected/ee72e78d-10b0-49a7-96ec-1c1100303d55-kube-api-access-k245k\") pod \"cilium-operator-5cc964979-rmvvv\" (UID: \"ee72e78d-10b0-49a7-96ec-1c1100303d55\") " pod="kube-system/cilium-operator-5cc964979-rmvvv" Dec 13 14:28:32.054231 kubelet[1538]: I1213 14:28:32.054144 1538 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68-cilium-ipsec-secrets\") pod \"cilium-q8mfh\" (UID: \"2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68\") " pod="kube-system/cilium-q8mfh" Dec 13 14:28:32.054231 kubelet[1538]: I1213 14:28:32.054181 1538 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68-etc-cni-netd\") pod \"cilium-q8mfh\" (UID: \"2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68\") " pod="kube-system/cilium-q8mfh" Dec 13 14:28:32.054231 kubelet[1538]: I1213 14:28:32.054218 1538 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68-clustermesh-secrets\") pod \"cilium-q8mfh\" (UID: \"2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68\") " pod="kube-system/cilium-q8mfh" Dec 13 14:28:32.054451 kubelet[1538]: I1213 14:28:32.054260 1538 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68-cilium-cgroup\") pod \"cilium-q8mfh\" (UID: \"2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68\") " pod="kube-system/cilium-q8mfh" Dec 13 14:28:32.054451 kubelet[1538]: I1213 14:28:32.054298 1538 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68-hubble-tls\") pod \"cilium-q8mfh\" (UID: \"2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68\") " pod="kube-system/cilium-q8mfh" Dec 13 14:28:32.054451 kubelet[1538]: I1213 14:28:32.054337 1538 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68-hostproc\") pod \"cilium-q8mfh\" (UID: \"2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68\") " pod="kube-system/cilium-q8mfh" Dec 13 14:28:32.054451 kubelet[1538]: I1213 14:28:32.054371 1538 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68-lib-modules\") pod \"cilium-q8mfh\" (UID: \"2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68\") " pod="kube-system/cilium-q8mfh" Dec 13 14:28:32.054451 kubelet[1538]: I1213 14:28:32.054417 1538 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68-cilium-run\") pod \"cilium-q8mfh\" (UID: \"2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68\") " pod="kube-system/cilium-q8mfh" Dec 13 14:28:32.054710 kubelet[1538]: I1213 14:28:32.054453 1538 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68-bpf-maps\") pod \"cilium-q8mfh\" (UID: \"2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68\") " pod="kube-system/cilium-q8mfh" Dec 13 14:28:32.054710 kubelet[1538]: I1213 14:28:32.054553 1538 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ee72e78d-10b0-49a7-96ec-1c1100303d55-cilium-config-path\") pod \"cilium-operator-5cc964979-rmvvv\" 
(UID: \"ee72e78d-10b0-49a7-96ec-1c1100303d55\") " pod="kube-system/cilium-operator-5cc964979-rmvvv" Dec 13 14:28:32.054710 kubelet[1538]: I1213 14:28:32.054591 1538 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68-xtables-lock\") pod \"cilium-q8mfh\" (UID: \"2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68\") " pod="kube-system/cilium-q8mfh" Dec 13 14:28:32.210578 env[1226]: time="2024-12-13T14:28:32.204879636Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-rmvvv,Uid:ee72e78d-10b0-49a7-96ec-1c1100303d55,Namespace:kube-system,Attempt:0,}" Dec 13 14:28:32.236021 env[1226]: time="2024-12-13T14:28:32.235908545Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:28:32.236383 env[1226]: time="2024-12-13T14:28:32.236327275Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:28:32.236583 env[1226]: time="2024-12-13T14:28:32.236545062Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:28:32.237014 env[1226]: time="2024-12-13T14:28:32.236943780Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d9bab825afb67a05c8f5c098b76d22b773d50bf1405ebc87ef7664dba6007adf pid=3075 runtime=io.containerd.runc.v2 Dec 13 14:28:32.257823 systemd[1]: Started cri-containerd-d9bab825afb67a05c8f5c098b76d22b773d50bf1405ebc87ef7664dba6007adf.scope. 
Dec 13 14:28:32.264700 kubelet[1538]: E1213 14:28:32.263660 1538 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:28:32.265980 env[1226]: time="2024-12-13T14:28:32.265802411Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-q8mfh,Uid:2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68,Namespace:kube-system,Attempt:0,}" Dec 13 14:28:32.294154 env[1226]: time="2024-12-13T14:28:32.293904779Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:28:32.294154 env[1226]: time="2024-12-13T14:28:32.294058602Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:28:32.294154 env[1226]: time="2024-12-13T14:28:32.294133829Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:28:32.294513 env[1226]: time="2024-12-13T14:28:32.294411036Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/861cf42f799e3bf379c8dbf07d8c98e4fa1455c507252fa999cdc9287ec33359 pid=3111 runtime=io.containerd.runc.v2 Dec 13 14:28:32.316783 systemd[1]: Started cri-containerd-861cf42f799e3bf379c8dbf07d8c98e4fa1455c507252fa999cdc9287ec33359.scope. 
Dec 13 14:28:32.360198 env[1226]: time="2024-12-13T14:28:32.360137370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-rmvvv,Uid:ee72e78d-10b0-49a7-96ec-1c1100303d55,Namespace:kube-system,Attempt:0,} returns sandbox id \"d9bab825afb67a05c8f5c098b76d22b773d50bf1405ebc87ef7664dba6007adf\"" Dec 13 14:28:32.363458 env[1226]: time="2024-12-13T14:28:32.363329675Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 14:28:32.381942 env[1226]: time="2024-12-13T14:28:32.381887569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-q8mfh,Uid:2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68,Namespace:kube-system,Attempt:0,} returns sandbox id \"861cf42f799e3bf379c8dbf07d8c98e4fa1455c507252fa999cdc9287ec33359\"" Dec 13 14:28:32.387484 env[1226]: time="2024-12-13T14:28:32.387431898Z" level=info msg="CreateContainer within sandbox \"861cf42f799e3bf379c8dbf07d8c98e4fa1455c507252fa999cdc9287ec33359\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 14:28:32.403110 env[1226]: time="2024-12-13T14:28:32.403056171Z" level=info msg="CreateContainer within sandbox \"861cf42f799e3bf379c8dbf07d8c98e4fa1455c507252fa999cdc9287ec33359\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"bae2a7c1b54fab947bbccffd2b4a682e11e77f7865e40a8d72b169f20601e64a\"" Dec 13 14:28:32.403789 env[1226]: time="2024-12-13T14:28:32.403750649Z" level=info msg="StartContainer for \"bae2a7c1b54fab947bbccffd2b4a682e11e77f7865e40a8d72b169f20601e64a\"" Dec 13 14:28:32.429874 systemd[1]: Started cri-containerd-bae2a7c1b54fab947bbccffd2b4a682e11e77f7865e40a8d72b169f20601e64a.scope. Dec 13 14:28:32.451157 systemd[1]: cri-containerd-bae2a7c1b54fab947bbccffd2b4a682e11e77f7865e40a8d72b169f20601e64a.scope: Deactivated successfully. 
Dec 13 14:28:32.466582 env[1226]: time="2024-12-13T14:28:32.465111369Z" level=info msg="shim disconnected" id=bae2a7c1b54fab947bbccffd2b4a682e11e77f7865e40a8d72b169f20601e64a Dec 13 14:28:32.466582 env[1226]: time="2024-12-13T14:28:32.465193429Z" level=warning msg="cleaning up after shim disconnected" id=bae2a7c1b54fab947bbccffd2b4a682e11e77f7865e40a8d72b169f20601e64a namespace=k8s.io Dec 13 14:28:32.466582 env[1226]: time="2024-12-13T14:28:32.465210812Z" level=info msg="cleaning up dead shim" Dec 13 14:28:32.479309 env[1226]: time="2024-12-13T14:28:32.479243829Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:28:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3177 runtime=io.containerd.runc.v2\ntime=\"2024-12-13T14:28:32Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/bae2a7c1b54fab947bbccffd2b4a682e11e77f7865e40a8d72b169f20601e64a/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Dec 13 14:28:32.479790 env[1226]: time="2024-12-13T14:28:32.479626287Z" level=error msg="copy shim log" error="read /proc/self/fd/59: file already closed" Dec 13 14:28:32.480340 env[1226]: time="2024-12-13T14:28:32.480251362Z" level=error msg="Failed to pipe stdout of container \"bae2a7c1b54fab947bbccffd2b4a682e11e77f7865e40a8d72b169f20601e64a\"" error="reading from a closed fifo" Dec 13 14:28:32.483119 env[1226]: time="2024-12-13T14:28:32.483040296Z" level=error msg="Failed to pipe stderr of container \"bae2a7c1b54fab947bbccffd2b4a682e11e77f7865e40a8d72b169f20601e64a\"" error="reading from a closed fifo" Dec 13 14:28:32.485394 env[1226]: time="2024-12-13T14:28:32.485324266Z" level=error msg="StartContainer for \"bae2a7c1b54fab947bbccffd2b4a682e11e77f7865e40a8d72b169f20601e64a\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: 
write /proc/self/attr/keycreate: invalid argument: unknown" Dec 13 14:28:32.485718 kubelet[1538]: E1213 14:28:32.485679 1538 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="bae2a7c1b54fab947bbccffd2b4a682e11e77f7865e40a8d72b169f20601e64a" Dec 13 14:28:32.488267 kubelet[1538]: E1213 14:28:32.488232 1538 kuberuntime_manager.go:1262] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Dec 13 14:28:32.488267 kubelet[1538]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Dec 13 14:28:32.488267 kubelet[1538]: rm /hostbin/cilium-mount Dec 13 14:28:32.488463 kubelet[1538]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-xwgbm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT 
SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-q8mfh_kube-system(2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Dec 13 14:28:32.488463 kubelet[1538]: E1213 14:28:32.488319 1538 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-q8mfh" podUID="2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68" Dec 13 14:28:32.677484 env[1226]: time="2024-12-13T14:28:32.677421053Z" level=info msg="StopPodSandbox for \"861cf42f799e3bf379c8dbf07d8c98e4fa1455c507252fa999cdc9287ec33359\"" Dec 13 14:28:32.677891 env[1226]: time="2024-12-13T14:28:32.677849721Z" level=info msg="Container to stop \"bae2a7c1b54fab947bbccffd2b4a682e11e77f7865e40a8d72b169f20601e64a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:28:32.689358 systemd[1]: cri-containerd-861cf42f799e3bf379c8dbf07d8c98e4fa1455c507252fa999cdc9287ec33359.scope: Deactivated successfully. 
Dec 13 14:28:32.725010 env[1226]: time="2024-12-13T14:28:32.723702630Z" level=info msg="shim disconnected" id=861cf42f799e3bf379c8dbf07d8c98e4fa1455c507252fa999cdc9287ec33359 Dec 13 14:28:32.725371 env[1226]: time="2024-12-13T14:28:32.725322023Z" level=warning msg="cleaning up after shim disconnected" id=861cf42f799e3bf379c8dbf07d8c98e4fa1455c507252fa999cdc9287ec33359 namespace=k8s.io Dec 13 14:28:32.725371 env[1226]: time="2024-12-13T14:28:32.725359770Z" level=info msg="cleaning up dead shim" Dec 13 14:28:32.738732 env[1226]: time="2024-12-13T14:28:32.738654831Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:28:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3209 runtime=io.containerd.runc.v2\n" Dec 13 14:28:32.739205 env[1226]: time="2024-12-13T14:28:32.739153556Z" level=info msg="TearDown network for sandbox \"861cf42f799e3bf379c8dbf07d8c98e4fa1455c507252fa999cdc9287ec33359\" successfully" Dec 13 14:28:32.739205 env[1226]: time="2024-12-13T14:28:32.739197903Z" level=info msg="StopPodSandbox for \"861cf42f799e3bf379c8dbf07d8c98e4fa1455c507252fa999cdc9287ec33359\" returns successfully" Dec 13 14:28:32.862007 kubelet[1538]: I1213 14:28:32.861203 1538 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68-xtables-lock\") pod \"2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68\" (UID: \"2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68\") " Dec 13 14:28:32.862007 kubelet[1538]: I1213 14:28:32.861285 1538 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68-cilium-config-path\") pod \"2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68\" (UID: \"2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68\") " Dec 13 14:28:32.862007 kubelet[1538]: I1213 14:28:32.861324 1538 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" 
(UniqueName: \"kubernetes.io/host-path/2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68-cilium-run\") pod \"2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68\" (UID: \"2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68\") " Dec 13 14:28:32.862007 kubelet[1538]: I1213 14:28:32.861298 1538 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68" (UID: "2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:28:32.862007 kubelet[1538]: I1213 14:28:32.861355 1538 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68-host-proc-sys-net\") pod \"2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68\" (UID: \"2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68\") " Dec 13 14:28:32.862007 kubelet[1538]: I1213 14:28:32.861394 1538 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68-cilium-ipsec-secrets\") pod \"2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68\" (UID: \"2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68\") " Dec 13 14:28:32.862007 kubelet[1538]: I1213 14:28:32.861437 1538 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68-clustermesh-secrets\") pod \"2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68\" (UID: \"2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68\") " Dec 13 14:28:32.862007 kubelet[1538]: I1213 14:28:32.861506 1538 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68-lib-modules\") pod \"2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68\" (UID: 
\"2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68\") " Dec 13 14:28:32.862007 kubelet[1538]: I1213 14:28:32.861539 1538 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68-hostproc\") pod \"2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68\" (UID: \"2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68\") " Dec 13 14:28:32.862007 kubelet[1538]: I1213 14:28:32.861570 1538 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68-bpf-maps\") pod \"2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68\" (UID: \"2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68\") " Dec 13 14:28:32.862007 kubelet[1538]: I1213 14:28:32.861603 1538 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68-host-proc-sys-kernel\") pod \"2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68\" (UID: \"2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68\") " Dec 13 14:28:32.862007 kubelet[1538]: I1213 14:28:32.861638 1538 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwgbm\" (UniqueName: \"kubernetes.io/projected/2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68-kube-api-access-xwgbm\") pod \"2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68\" (UID: \"2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68\") " Dec 13 14:28:32.862007 kubelet[1538]: I1213 14:28:32.861671 1538 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68-cni-path\") pod \"2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68\" (UID: \"2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68\") " Dec 13 14:28:32.862007 kubelet[1538]: I1213 14:28:32.861703 1538 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68-etc-cni-netd\") pod \"2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68\" (UID: \"2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68\") " Dec 13 14:28:32.862007 kubelet[1538]: I1213 14:28:32.861740 1538 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68-cilium-cgroup\") pod \"2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68\" (UID: \"2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68\") " Dec 13 14:28:32.862007 kubelet[1538]: I1213 14:28:32.861776 1538 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68-hubble-tls\") pod \"2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68\" (UID: \"2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68\") " Dec 13 14:28:32.863153 kubelet[1538]: I1213 14:28:32.861840 1538 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68-xtables-lock\") on node \"10.128.0.21\" DevicePath \"\"" Dec 13 14:28:32.863153 kubelet[1538]: I1213 14:28:32.862108 1538 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68" (UID: "2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:28:32.863153 kubelet[1538]: I1213 14:28:32.862166 1538 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68" (UID: "2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:28:32.863153 kubelet[1538]: I1213 14:28:32.862624 1538 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68" (UID: "2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:28:32.867882 kubelet[1538]: I1213 14:28:32.867838 1538 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68" (UID: "2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:28:32.868148 kubelet[1538]: I1213 14:28:32.868041 1538 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68-hostproc" (OuterVolumeSpecName: "hostproc") pod "2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68" (UID: "2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:28:32.868280 kubelet[1538]: I1213 14:28:32.868068 1538 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68" (UID: "2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:28:32.868450 kubelet[1538]: I1213 14:28:32.868418 1538 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68-cni-path" (OuterVolumeSpecName: "cni-path") pod "2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68" (UID: "2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:28:32.868558 kubelet[1538]: I1213 14:28:32.868480 1538 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68" (UID: "2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:28:32.868558 kubelet[1538]: I1213 14:28:32.868508 1538 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68" (UID: "2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:28:32.871787 kubelet[1538]: I1213 14:28:32.871753 1538 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68" (UID: "2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 14:28:32.872117 kubelet[1538]: I1213 14:28:32.872087 1538 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68" (UID: "2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 14:28:32.872358 kubelet[1538]: I1213 14:28:32.872326 1538 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68" (UID: "2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 14:28:32.875620 kubelet[1538]: I1213 14:28:32.875581 1538 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68-kube-api-access-xwgbm" (OuterVolumeSpecName: "kube-api-access-xwgbm") pod "2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68" (UID: "2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68"). InnerVolumeSpecName "kube-api-access-xwgbm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:28:32.876580 kubelet[1538]: I1213 14:28:32.876539 1538 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68" (UID: "2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:28:32.962435 kubelet[1538]: I1213 14:28:32.962363 1538 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68-host-proc-sys-net\") on node \"10.128.0.21\" DevicePath \"\"" Dec 13 14:28:32.962435 kubelet[1538]: I1213 14:28:32.962421 1538 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68-cilium-ipsec-secrets\") on node \"10.128.0.21\" DevicePath \"\"" Dec 13 14:28:32.962435 kubelet[1538]: I1213 14:28:32.962439 1538 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68-clustermesh-secrets\") on node \"10.128.0.21\" DevicePath \"\"" Dec 13 14:28:32.962435 kubelet[1538]: I1213 14:28:32.962456 1538 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68-lib-modules\") on node \"10.128.0.21\" DevicePath \"\"" Dec 13 14:28:32.962435 kubelet[1538]: I1213 14:28:32.962475 1538 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68-bpf-maps\") on node \"10.128.0.21\" DevicePath \"\"" Dec 13 14:28:32.962881 kubelet[1538]: I1213 14:28:32.962494 1538 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68-host-proc-sys-kernel\") on node \"10.128.0.21\" DevicePath \"\"" Dec 13 14:28:32.962881 kubelet[1538]: I1213 14:28:32.962510 1538 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-xwgbm\" (UniqueName: \"kubernetes.io/projected/2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68-kube-api-access-xwgbm\") on node \"10.128.0.21\" DevicePath \"\"" 
Dec 13 14:28:32.962881 kubelet[1538]: I1213 14:28:32.962524 1538 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68-cni-path\") on node \"10.128.0.21\" DevicePath \"\""
Dec 13 14:28:32.962881 kubelet[1538]: I1213 14:28:32.962538 1538 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68-etc-cni-netd\") on node \"10.128.0.21\" DevicePath \"\""
Dec 13 14:28:32.962881 kubelet[1538]: I1213 14:28:32.962552 1538 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68-hostproc\") on node \"10.128.0.21\" DevicePath \"\""
Dec 13 14:28:32.962881 kubelet[1538]: I1213 14:28:32.962567 1538 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68-cilium-cgroup\") on node \"10.128.0.21\" DevicePath \"\""
Dec 13 14:28:32.962881 kubelet[1538]: I1213 14:28:32.962581 1538 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68-hubble-tls\") on node \"10.128.0.21\" DevicePath \"\""
Dec 13 14:28:32.962881 kubelet[1538]: I1213 14:28:32.962601 1538 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68-cilium-config-path\") on node \"10.128.0.21\" DevicePath \"\""
Dec 13 14:28:32.962881 kubelet[1538]: I1213 14:28:32.962616 1538 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68-cilium-run\") on node \"10.128.0.21\" DevicePath \"\""
Dec 13 14:28:33.175348 systemd[1]: var-lib-kubelet-pods-2de0d4c2\x2dcd59\x2d44bb\x2db7f3\x2d2e2d6d028b68-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxwgbm.mount: Deactivated successfully.
Dec 13 14:28:33.175515 systemd[1]: var-lib-kubelet-pods-2de0d4c2\x2dcd59\x2d44bb\x2db7f3\x2d2e2d6d028b68-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Dec 13 14:28:33.175628 systemd[1]: var-lib-kubelet-pods-2de0d4c2\x2dcd59\x2d44bb\x2db7f3\x2d2e2d6d028b68-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Dec 13 14:28:33.175731 systemd[1]: var-lib-kubelet-pods-2de0d4c2\x2dcd59\x2d44bb\x2db7f3\x2d2e2d6d028b68-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Dec 13 14:28:33.264400 kubelet[1538]: E1213 14:28:33.264339 1538 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:28:33.368181 kubelet[1538]: E1213 14:28:33.368121 1538 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 13 14:28:33.453308 systemd[1]: Removed slice kubepods-burstable-pod2de0d4c2_cd59_44bb_b7f3_2e2d6d028b68.slice.
Dec 13 14:28:33.637290 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1960372489.mount: Deactivated successfully.
Dec 13 14:28:33.683853 kubelet[1538]: I1213 14:28:33.683803 1538 scope.go:117] "RemoveContainer" containerID="bae2a7c1b54fab947bbccffd2b4a682e11e77f7865e40a8d72b169f20601e64a"
Dec 13 14:28:33.692567 env[1226]: time="2024-12-13T14:28:33.692501880Z" level=info msg="RemoveContainer for \"bae2a7c1b54fab947bbccffd2b4a682e11e77f7865e40a8d72b169f20601e64a\""
Dec 13 14:28:33.702315 env[1226]: time="2024-12-13T14:28:33.702257885Z" level=info msg="RemoveContainer for \"bae2a7c1b54fab947bbccffd2b4a682e11e77f7865e40a8d72b169f20601e64a\" returns successfully"
Dec 13 14:28:33.726011 kubelet[1538]: I1213 14:28:33.725332 1538 topology_manager.go:215] "Topology Admit Handler" podUID="35a78d7f-01b0-459d-b331-cedf56bdf8ac" podNamespace="kube-system" podName="cilium-cmwb5"
Dec 13 14:28:33.726319 kubelet[1538]: E1213 14:28:33.726299 1538 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68" containerName="mount-cgroup"
Dec 13 14:28:33.726525 kubelet[1538]: I1213 14:28:33.726495 1538 memory_manager.go:354] "RemoveStaleState removing state" podUID="2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68" containerName="mount-cgroup"
Dec 13 14:28:33.737927 systemd[1]: Created slice kubepods-burstable-pod35a78d7f_01b0_459d_b331_cedf56bdf8ac.slice.
Dec 13 14:28:33.868424 kubelet[1538]: I1213 14:28:33.868355 1538 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/35a78d7f-01b0-459d-b331-cedf56bdf8ac-cilium-run\") pod \"cilium-cmwb5\" (UID: \"35a78d7f-01b0-459d-b331-cedf56bdf8ac\") " pod="kube-system/cilium-cmwb5"
Dec 13 14:28:33.868829 kubelet[1538]: I1213 14:28:33.868806 1538 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/35a78d7f-01b0-459d-b331-cedf56bdf8ac-clustermesh-secrets\") pod \"cilium-cmwb5\" (UID: \"35a78d7f-01b0-459d-b331-cedf56bdf8ac\") " pod="kube-system/cilium-cmwb5"
Dec 13 14:28:33.869182 kubelet[1538]: I1213 14:28:33.869151 1538 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/35a78d7f-01b0-459d-b331-cedf56bdf8ac-lib-modules\") pod \"cilium-cmwb5\" (UID: \"35a78d7f-01b0-459d-b331-cedf56bdf8ac\") " pod="kube-system/cilium-cmwb5"
Dec 13 14:28:33.869435 kubelet[1538]: I1213 14:28:33.869419 1538 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/35a78d7f-01b0-459d-b331-cedf56bdf8ac-bpf-maps\") pod \"cilium-cmwb5\" (UID: \"35a78d7f-01b0-459d-b331-cedf56bdf8ac\") " pod="kube-system/cilium-cmwb5"
Dec 13 14:28:33.869645 kubelet[1538]: I1213 14:28:33.869627 1538 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/35a78d7f-01b0-459d-b331-cedf56bdf8ac-cilium-config-path\") pod \"cilium-cmwb5\" (UID: \"35a78d7f-01b0-459d-b331-cedf56bdf8ac\") " pod="kube-system/cilium-cmwb5"
Dec 13 14:28:33.869870 kubelet[1538]: I1213 14:28:33.869855 1538 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/35a78d7f-01b0-459d-b331-cedf56bdf8ac-host-proc-sys-net\") pod \"cilium-cmwb5\" (UID: \"35a78d7f-01b0-459d-b331-cedf56bdf8ac\") " pod="kube-system/cilium-cmwb5"
Dec 13 14:28:33.870096 kubelet[1538]: I1213 14:28:33.870081 1538 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pwvk\" (UniqueName: \"kubernetes.io/projected/35a78d7f-01b0-459d-b331-cedf56bdf8ac-kube-api-access-5pwvk\") pod \"cilium-cmwb5\" (UID: \"35a78d7f-01b0-459d-b331-cedf56bdf8ac\") " pod="kube-system/cilium-cmwb5"
Dec 13 14:28:33.870305 kubelet[1538]: I1213 14:28:33.870287 1538 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/35a78d7f-01b0-459d-b331-cedf56bdf8ac-cilium-cgroup\") pod \"cilium-cmwb5\" (UID: \"35a78d7f-01b0-459d-b331-cedf56bdf8ac\") " pod="kube-system/cilium-cmwb5"
Dec 13 14:28:33.870518 kubelet[1538]: I1213 14:28:33.870500 1538 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/35a78d7f-01b0-459d-b331-cedf56bdf8ac-etc-cni-netd\") pod \"cilium-cmwb5\" (UID: \"35a78d7f-01b0-459d-b331-cedf56bdf8ac\") " pod="kube-system/cilium-cmwb5"
Dec 13 14:28:33.870717 kubelet[1538]: I1213 14:28:33.870701 1538 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/35a78d7f-01b0-459d-b331-cedf56bdf8ac-cni-path\") pod \"cilium-cmwb5\" (UID: \"35a78d7f-01b0-459d-b331-cedf56bdf8ac\") " pod="kube-system/cilium-cmwb5"
Dec 13 14:28:33.870928 kubelet[1538]: I1213 14:28:33.870913 1538 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/35a78d7f-01b0-459d-b331-cedf56bdf8ac-cilium-ipsec-secrets\") pod \"cilium-cmwb5\" (UID: \"35a78d7f-01b0-459d-b331-cedf56bdf8ac\") " pod="kube-system/cilium-cmwb5"
Dec 13 14:28:33.871167 kubelet[1538]: I1213 14:28:33.871147 1538 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/35a78d7f-01b0-459d-b331-cedf56bdf8ac-host-proc-sys-kernel\") pod \"cilium-cmwb5\" (UID: \"35a78d7f-01b0-459d-b331-cedf56bdf8ac\") " pod="kube-system/cilium-cmwb5"
Dec 13 14:28:33.871428 kubelet[1538]: I1213 14:28:33.871412 1538 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/35a78d7f-01b0-459d-b331-cedf56bdf8ac-hubble-tls\") pod \"cilium-cmwb5\" (UID: \"35a78d7f-01b0-459d-b331-cedf56bdf8ac\") " pod="kube-system/cilium-cmwb5"
Dec 13 14:28:33.871598 kubelet[1538]: I1213 14:28:33.871584 1538 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/35a78d7f-01b0-459d-b331-cedf56bdf8ac-xtables-lock\") pod \"cilium-cmwb5\" (UID: \"35a78d7f-01b0-459d-b331-cedf56bdf8ac\") " pod="kube-system/cilium-cmwb5"
Dec 13 14:28:33.871805 kubelet[1538]: I1213 14:28:33.871790 1538 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/35a78d7f-01b0-459d-b331-cedf56bdf8ac-hostproc\") pod \"cilium-cmwb5\" (UID: \"35a78d7f-01b0-459d-b331-cedf56bdf8ac\") " pod="kube-system/cilium-cmwb5"
Dec 13 14:28:34.049417 env[1226]: time="2024-12-13T14:28:34.049354211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cmwb5,Uid:35a78d7f-01b0-459d-b331-cedf56bdf8ac,Namespace:kube-system,Attempt:0,}"
Dec 13 14:28:34.091369 env[1226]: time="2024-12-13T14:28:34.091231586Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:28:34.091671 env[1226]: time="2024-12-13T14:28:34.091319750Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:28:34.091671 env[1226]: time="2024-12-13T14:28:34.091339275Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:28:34.091671 env[1226]: time="2024-12-13T14:28:34.091612563Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4dd397a45b231e56b2c6c013c7794aeeabd46cd2840786a37632c0e59ba9b51a pid=3238 runtime=io.containerd.runc.v2
Dec 13 14:28:34.124281 systemd[1]: Started cri-containerd-4dd397a45b231e56b2c6c013c7794aeeabd46cd2840786a37632c0e59ba9b51a.scope.
Dec 13 14:28:34.186009 env[1226]: time="2024-12-13T14:28:34.184836607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cmwb5,Uid:35a78d7f-01b0-459d-b331-cedf56bdf8ac,Namespace:kube-system,Attempt:0,} returns sandbox id \"4dd397a45b231e56b2c6c013c7794aeeabd46cd2840786a37632c0e59ba9b51a\""
Dec 13 14:28:34.189438 env[1226]: time="2024-12-13T14:28:34.189361445Z" level=info msg="CreateContainer within sandbox \"4dd397a45b231e56b2c6c013c7794aeeabd46cd2840786a37632c0e59ba9b51a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 14:28:34.218134 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2831768679.mount: Deactivated successfully.
Dec 13 14:28:34.238222 env[1226]: time="2024-12-13T14:28:34.238138735Z" level=info msg="CreateContainer within sandbox \"4dd397a45b231e56b2c6c013c7794aeeabd46cd2840786a37632c0e59ba9b51a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2b3873c4bf2c3f08b531a58a9801697322f6cc30ef0c0327b2bd7fc7cac6fb47\""
Dec 13 14:28:34.240024 env[1226]: time="2024-12-13T14:28:34.239952160Z" level=info msg="StartContainer for \"2b3873c4bf2c3f08b531a58a9801697322f6cc30ef0c0327b2bd7fc7cac6fb47\""
Dec 13 14:28:34.265649 kubelet[1538]: E1213 14:28:34.265536 1538 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:28:34.281463 systemd[1]: Started cri-containerd-2b3873c4bf2c3f08b531a58a9801697322f6cc30ef0c0327b2bd7fc7cac6fb47.scope.
Dec 13 14:28:34.344963 env[1226]: time="2024-12-13T14:28:34.344796700Z" level=info msg="StartContainer for \"2b3873c4bf2c3f08b531a58a9801697322f6cc30ef0c0327b2bd7fc7cac6fb47\" returns successfully"
Dec 13 14:28:34.357259 systemd[1]: cri-containerd-2b3873c4bf2c3f08b531a58a9801697322f6cc30ef0c0327b2bd7fc7cac6fb47.scope: Deactivated successfully.
Dec 13 14:28:34.532619 env[1226]: time="2024-12-13T14:28:34.532538107Z" level=info msg="shim disconnected" id=2b3873c4bf2c3f08b531a58a9801697322f6cc30ef0c0327b2bd7fc7cac6fb47
Dec 13 14:28:34.532619 env[1226]: time="2024-12-13T14:28:34.532627265Z" level=warning msg="cleaning up after shim disconnected" id=2b3873c4bf2c3f08b531a58a9801697322f6cc30ef0c0327b2bd7fc7cac6fb47 namespace=k8s.io
Dec 13 14:28:34.533073 env[1226]: time="2024-12-13T14:28:34.532643575Z" level=info msg="cleaning up dead shim"
Dec 13 14:28:34.552184 env[1226]: time="2024-12-13T14:28:34.552124053Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:28:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3321 runtime=io.containerd.runc.v2\n"
Dec 13 14:28:34.669473 env[1226]: time="2024-12-13T14:28:34.669285062Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:28:34.671880 env[1226]: time="2024-12-13T14:28:34.671830274Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:28:34.674302 env[1226]: time="2024-12-13T14:28:34.674258607Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:28:34.675179 env[1226]: time="2024-12-13T14:28:34.675132162Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Dec 13 14:28:34.678998 env[1226]: time="2024-12-13T14:28:34.678921804Z" level=info msg="CreateContainer within sandbox \"d9bab825afb67a05c8f5c098b76d22b773d50bf1405ebc87ef7664dba6007adf\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Dec 13 14:28:34.703355 env[1226]: time="2024-12-13T14:28:34.703294362Z" level=info msg="CreateContainer within sandbox \"d9bab825afb67a05c8f5c098b76d22b773d50bf1405ebc87ef7664dba6007adf\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"53a32e88fcebf5dabaad2a1eb448a2c8a7be3b73fd46f27dd594f0d9cda56ac0\""
Dec 13 14:28:34.704685 env[1226]: time="2024-12-13T14:28:34.704435701Z" level=info msg="StartContainer for \"53a32e88fcebf5dabaad2a1eb448a2c8a7be3b73fd46f27dd594f0d9cda56ac0\""
Dec 13 14:28:34.706541 env[1226]: time="2024-12-13T14:28:34.706471599Z" level=info msg="CreateContainer within sandbox \"4dd397a45b231e56b2c6c013c7794aeeabd46cd2840786a37632c0e59ba9b51a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 13 14:28:34.734315 env[1226]: time="2024-12-13T14:28:34.733475966Z" level=info msg="CreateContainer within sandbox \"4dd397a45b231e56b2c6c013c7794aeeabd46cd2840786a37632c0e59ba9b51a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"81ffbae714e014fa347536eabc3ca9648b6f0300a497e160b2fe0104022c2c63\""
Dec 13 14:28:34.736255 env[1226]: time="2024-12-13T14:28:34.736207235Z" level=info msg="StartContainer for \"81ffbae714e014fa347536eabc3ca9648b6f0300a497e160b2fe0104022c2c63\""
Dec 13 14:28:34.754338 systemd[1]: Started cri-containerd-53a32e88fcebf5dabaad2a1eb448a2c8a7be3b73fd46f27dd594f0d9cda56ac0.scope.
Dec 13 14:28:34.782547 systemd[1]: Started cri-containerd-81ffbae714e014fa347536eabc3ca9648b6f0300a497e160b2fe0104022c2c63.scope.
Dec 13 14:28:34.824172 env[1226]: time="2024-12-13T14:28:34.824101261Z" level=info msg="StartContainer for \"53a32e88fcebf5dabaad2a1eb448a2c8a7be3b73fd46f27dd594f0d9cda56ac0\" returns successfully"
Dec 13 14:28:34.851424 env[1226]: time="2024-12-13T14:28:34.851341890Z" level=info msg="StartContainer for \"81ffbae714e014fa347536eabc3ca9648b6f0300a497e160b2fe0104022c2c63\" returns successfully"
Dec 13 14:28:34.864724 systemd[1]: cri-containerd-81ffbae714e014fa347536eabc3ca9648b6f0300a497e160b2fe0104022c2c63.scope: Deactivated successfully.
Dec 13 14:28:34.888942 kubelet[1538]: I1213 14:28:34.888903 1538 setters.go:568] "Node became not ready" node="10.128.0.21" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T14:28:34Z","lastTransitionTime":"2024-12-13T14:28:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Dec 13 14:28:34.924952 env[1226]: time="2024-12-13T14:28:34.924773832Z" level=info msg="shim disconnected" id=81ffbae714e014fa347536eabc3ca9648b6f0300a497e160b2fe0104022c2c63
Dec 13 14:28:34.924952 env[1226]: time="2024-12-13T14:28:34.924854991Z" level=warning msg="cleaning up after shim disconnected" id=81ffbae714e014fa347536eabc3ca9648b6f0300a497e160b2fe0104022c2c63 namespace=k8s.io
Dec 13 14:28:34.924952 env[1226]: time="2024-12-13T14:28:34.924870737Z" level=info msg="cleaning up dead shim"
Dec 13 14:28:34.941332 env[1226]: time="2024-12-13T14:28:34.941270249Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:28:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3429 runtime=io.containerd.runc.v2\n"
Dec 13 14:28:35.175809 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2b3873c4bf2c3f08b531a58a9801697322f6cc30ef0c0327b2bd7fc7cac6fb47-rootfs.mount: Deactivated successfully.
Dec 13 14:28:35.266864 kubelet[1538]: E1213 14:28:35.266790 1538 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:28:35.443124 kubelet[1538]: I1213 14:28:35.442952 1538 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68" path="/var/lib/kubelet/pods/2de0d4c2-cd59-44bb-b7f3-2e2d6d028b68/volumes"
Dec 13 14:28:35.571796 kubelet[1538]: W1213 14:28:35.571716 1538 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2de0d4c2_cd59_44bb_b7f3_2e2d6d028b68.slice/cri-containerd-bae2a7c1b54fab947bbccffd2b4a682e11e77f7865e40a8d72b169f20601e64a.scope WatchSource:0}: container "bae2a7c1b54fab947bbccffd2b4a682e11e77f7865e40a8d72b169f20601e64a" in namespace "k8s.io": not found
Dec 13 14:28:35.722718 env[1226]: time="2024-12-13T14:28:35.722150402Z" level=info msg="CreateContainer within sandbox \"4dd397a45b231e56b2c6c013c7794aeeabd46cd2840786a37632c0e59ba9b51a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 13 14:28:35.747360 env[1226]: time="2024-12-13T14:28:35.747278799Z" level=info msg="CreateContainer within sandbox \"4dd397a45b231e56b2c6c013c7794aeeabd46cd2840786a37632c0e59ba9b51a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3ad043ba5e3a0d53b93795111214a31c74b0192aa4f2b7e26633b30252f92aa9\""
Dec 13 14:28:35.748207 env[1226]: time="2024-12-13T14:28:35.748010285Z" level=info msg="StartContainer for \"3ad043ba5e3a0d53b93795111214a31c74b0192aa4f2b7e26633b30252f92aa9\""
Dec 13 14:28:35.785516 systemd[1]: Started cri-containerd-3ad043ba5e3a0d53b93795111214a31c74b0192aa4f2b7e26633b30252f92aa9.scope.
Dec 13 14:28:35.801624 kubelet[1538]: I1213 14:28:35.800819 1538 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-rmvvv" podStartSLOduration=2.487615612 podStartE2EDuration="4.800739901s" podCreationTimestamp="2024-12-13 14:28:31 +0000 UTC" firstStartedPulling="2024-12-13 14:28:32.362455626 +0000 UTC m=+69.759996408" lastFinishedPulling="2024-12-13 14:28:34.675579909 +0000 UTC m=+72.073120697" observedRunningTime="2024-12-13 14:28:35.74622413 +0000 UTC m=+73.143764938" watchObservedRunningTime="2024-12-13 14:28:35.800739901 +0000 UTC m=+73.198280742"
Dec 13 14:28:35.842322 systemd[1]: cri-containerd-3ad043ba5e3a0d53b93795111214a31c74b0192aa4f2b7e26633b30252f92aa9.scope: Deactivated successfully.
Dec 13 14:28:35.843337 env[1226]: time="2024-12-13T14:28:35.842736581Z" level=info msg="StartContainer for \"3ad043ba5e3a0d53b93795111214a31c74b0192aa4f2b7e26633b30252f92aa9\" returns successfully"
Dec 13 14:28:35.879015 env[1226]: time="2024-12-13T14:28:35.878922994Z" level=info msg="shim disconnected" id=3ad043ba5e3a0d53b93795111214a31c74b0192aa4f2b7e26633b30252f92aa9
Dec 13 14:28:35.879015 env[1226]: time="2024-12-13T14:28:35.879015958Z" level=warning msg="cleaning up after shim disconnected" id=3ad043ba5e3a0d53b93795111214a31c74b0192aa4f2b7e26633b30252f92aa9 namespace=k8s.io
Dec 13 14:28:35.879425 env[1226]: time="2024-12-13T14:28:35.879032195Z" level=info msg="cleaning up dead shim"
Dec 13 14:28:35.894153 env[1226]: time="2024-12-13T14:28:35.894082394Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:28:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3488 runtime=io.containerd.runc.v2\n"
Dec 13 14:28:35.946620 sshd[3059]: Received disconnect from 218.92.0.190 port 17243:11:  [preauth]
Dec 13 14:28:35.946620 sshd[3059]: Disconnected from 218.92.0.190 port 17243 [preauth]
Dec 13 14:28:35.948095 systemd[1]: sshd@9-10.128.0.21:22-218.92.0.190:17243.service: Deactivated successfully.
Dec 13 14:28:36.174809 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3ad043ba5e3a0d53b93795111214a31c74b0192aa4f2b7e26633b30252f92aa9-rootfs.mount: Deactivated successfully.
Dec 13 14:28:36.267413 kubelet[1538]: E1213 14:28:36.267347 1538 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:28:36.729633 env[1226]: time="2024-12-13T14:28:36.729546256Z" level=info msg="CreateContainer within sandbox \"4dd397a45b231e56b2c6c013c7794aeeabd46cd2840786a37632c0e59ba9b51a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 14:28:36.754141 env[1226]: time="2024-12-13T14:28:36.754078183Z" level=info msg="CreateContainer within sandbox \"4dd397a45b231e56b2c6c013c7794aeeabd46cd2840786a37632c0e59ba9b51a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"dfdd83da95ed5b9ac94ce3c08b4bbfa15d83ff481d56d4c63f4fc29c4d787196\""
Dec 13 14:28:36.754862 env[1226]: time="2024-12-13T14:28:36.754812814Z" level=info msg="StartContainer for \"dfdd83da95ed5b9ac94ce3c08b4bbfa15d83ff481d56d4c63f4fc29c4d787196\""
Dec 13 14:28:36.793606 systemd[1]: Started cri-containerd-dfdd83da95ed5b9ac94ce3c08b4bbfa15d83ff481d56d4c63f4fc29c4d787196.scope.
Dec 13 14:28:36.833372 systemd[1]: cri-containerd-dfdd83da95ed5b9ac94ce3c08b4bbfa15d83ff481d56d4c63f4fc29c4d787196.scope: Deactivated successfully.
Dec 13 14:28:36.835278 env[1226]: time="2024-12-13T14:28:36.835154362Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod35a78d7f_01b0_459d_b331_cedf56bdf8ac.slice/cri-containerd-dfdd83da95ed5b9ac94ce3c08b4bbfa15d83ff481d56d4c63f4fc29c4d787196.scope/memory.events\": no such file or directory"
Dec 13 14:28:36.837945 env[1226]: time="2024-12-13T14:28:36.837876671Z" level=info msg="StartContainer for \"dfdd83da95ed5b9ac94ce3c08b4bbfa15d83ff481d56d4c63f4fc29c4d787196\" returns successfully"
Dec 13 14:28:36.870051 env[1226]: time="2024-12-13T14:28:36.869960755Z" level=info msg="shim disconnected" id=dfdd83da95ed5b9ac94ce3c08b4bbfa15d83ff481d56d4c63f4fc29c4d787196
Dec 13 14:28:36.870385 env[1226]: time="2024-12-13T14:28:36.870054474Z" level=warning msg="cleaning up after shim disconnected" id=dfdd83da95ed5b9ac94ce3c08b4bbfa15d83ff481d56d4c63f4fc29c4d787196 namespace=k8s.io
Dec 13 14:28:36.870385 env[1226]: time="2024-12-13T14:28:36.870070788Z" level=info msg="cleaning up dead shim"
Dec 13 14:28:36.882634 env[1226]: time="2024-12-13T14:28:36.882574362Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:28:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3546 runtime=io.containerd.runc.v2\n"
Dec 13 14:28:37.174872 systemd[1]: run-containerd-runc-k8s.io-dfdd83da95ed5b9ac94ce3c08b4bbfa15d83ff481d56d4c63f4fc29c4d787196-runc.xNANgj.mount: Deactivated successfully.
Dec 13 14:28:37.175060 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dfdd83da95ed5b9ac94ce3c08b4bbfa15d83ff481d56d4c63f4fc29c4d787196-rootfs.mount: Deactivated successfully.
Dec 13 14:28:37.268233 kubelet[1538]: E1213 14:28:37.268159 1538 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:28:37.741561 env[1226]: time="2024-12-13T14:28:37.741452142Z" level=info msg="CreateContainer within sandbox \"4dd397a45b231e56b2c6c013c7794aeeabd46cd2840786a37632c0e59ba9b51a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 14:28:37.766681 env[1226]: time="2024-12-13T14:28:37.760650588Z" level=info msg="CreateContainer within sandbox \"4dd397a45b231e56b2c6c013c7794aeeabd46cd2840786a37632c0e59ba9b51a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"1165c030248e956d858a32daba2d5f790c8fc6a306e309c6178c152b74875486\""
Dec 13 14:28:37.766681 env[1226]: time="2024-12-13T14:28:37.761789959Z" level=info msg="StartContainer for \"1165c030248e956d858a32daba2d5f790c8fc6a306e309c6178c152b74875486\""
Dec 13 14:28:37.803209 systemd[1]: Started cri-containerd-1165c030248e956d858a32daba2d5f790c8fc6a306e309c6178c152b74875486.scope.
Dec 13 14:28:37.856214 env[1226]: time="2024-12-13T14:28:37.856128852Z" level=info msg="StartContainer for \"1165c030248e956d858a32daba2d5f790c8fc6a306e309c6178c152b74875486\" returns successfully"
Dec 13 14:28:38.175737 systemd[1]: run-containerd-runc-k8s.io-1165c030248e956d858a32daba2d5f790c8fc6a306e309c6178c152b74875486-runc.Hzskxi.mount: Deactivated successfully.
Dec 13 14:28:38.268565 kubelet[1538]: E1213 14:28:38.268426 1538 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:28:38.335028 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Dec 13 14:28:38.688667 kubelet[1538]: W1213 14:28:38.688590 1538 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod35a78d7f_01b0_459d_b331_cedf56bdf8ac.slice/cri-containerd-2b3873c4bf2c3f08b531a58a9801697322f6cc30ef0c0327b2bd7fc7cac6fb47.scope WatchSource:0}: task 2b3873c4bf2c3f08b531a58a9801697322f6cc30ef0c0327b2bd7fc7cac6fb47 not found: not found
Dec 13 14:28:38.772098 kubelet[1538]: I1213 14:28:38.772045 1538 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-cmwb5" podStartSLOduration=5.771956884 podStartE2EDuration="5.771956884s" podCreationTimestamp="2024-12-13 14:28:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:28:38.771538195 +0000 UTC m=+76.169079024" watchObservedRunningTime="2024-12-13 14:28:38.771956884 +0000 UTC m=+76.169497692"
Dec 13 14:28:39.269165 kubelet[1538]: E1213 14:28:39.269070 1538 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:28:39.369635 systemd[1]: run-containerd-runc-k8s.io-1165c030248e956d858a32daba2d5f790c8fc6a306e309c6178c152b74875486-runc.n3uUzN.mount: Deactivated successfully.
Dec 13 14:28:40.269854 kubelet[1538]: E1213 14:28:40.269782 1538 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:28:41.271816 kubelet[1538]: E1213 14:28:41.271750 1538 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:28:41.390095 systemd-networkd[1031]: lxc_health: Link UP
Dec 13 14:28:41.401009 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Dec 13 14:28:41.404338 systemd-networkd[1031]: lxc_health: Gained carrier
Dec 13 14:28:41.608491 systemd[1]: run-containerd-runc-k8s.io-1165c030248e956d858a32daba2d5f790c8fc6a306e309c6178c152b74875486-runc.I2i81v.mount: Deactivated successfully.
Dec 13 14:28:41.808076 kubelet[1538]: W1213 14:28:41.807721 1538 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod35a78d7f_01b0_459d_b331_cedf56bdf8ac.slice/cri-containerd-81ffbae714e014fa347536eabc3ca9648b6f0300a497e160b2fe0104022c2c63.scope WatchSource:0}: task 81ffbae714e014fa347536eabc3ca9648b6f0300a497e160b2fe0104022c2c63 not found: not found
Dec 13 14:28:42.273030 kubelet[1538]: E1213 14:28:42.272929 1538 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:28:43.196432 kubelet[1538]: E1213 14:28:43.196352 1538 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:28:43.231076 systemd-networkd[1031]: lxc_health: Gained IPv6LL
Dec 13 14:28:43.273441 kubelet[1538]: E1213 14:28:43.273372 1538 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:28:43.948261 systemd[1]: run-containerd-runc-k8s.io-1165c030248e956d858a32daba2d5f790c8fc6a306e309c6178c152b74875486-runc.tvmPKI.mount: Deactivated successfully.
Dec 13 14:28:44.274669 kubelet[1538]: E1213 14:28:44.274475 1538 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:28:44.920034 kubelet[1538]: W1213 14:28:44.919951 1538 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod35a78d7f_01b0_459d_b331_cedf56bdf8ac.slice/cri-containerd-3ad043ba5e3a0d53b93795111214a31c74b0192aa4f2b7e26633b30252f92aa9.scope WatchSource:0}: task 3ad043ba5e3a0d53b93795111214a31c74b0192aa4f2b7e26633b30252f92aa9 not found: not found
Dec 13 14:28:45.275180 kubelet[1538]: E1213 14:28:45.275017 1538 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:28:46.214272 systemd[1]: run-containerd-runc-k8s.io-1165c030248e956d858a32daba2d5f790c8fc6a306e309c6178c152b74875486-runc.ccePLx.mount: Deactivated successfully.
Dec 13 14:28:46.276660 kubelet[1538]: E1213 14:28:46.276594 1538 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:28:47.277723 kubelet[1538]: E1213 14:28:47.277651 1538 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:28:48.030613 kubelet[1538]: W1213 14:28:48.030514 1538 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod35a78d7f_01b0_459d_b331_cedf56bdf8ac.slice/cri-containerd-dfdd83da95ed5b9ac94ce3c08b4bbfa15d83ff481d56d4c63f4fc29c4d787196.scope WatchSource:0}: task dfdd83da95ed5b9ac94ce3c08b4bbfa15d83ff481d56d4c63f4fc29c4d787196 not found: not found
Dec 13 14:28:48.278726 kubelet[1538]: E1213 14:28:48.278653 1538 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:28:49.279553 kubelet[1538]: E1213 14:28:49.279475 1538 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:28:50.280001 kubelet[1538]: E1213 14:28:50.279919 1538 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"