Dec 13 14:29:38.103043 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Dec 13 12:55:10 -00 2024 Dec 13 14:29:38.103081 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e Dec 13 14:29:38.103099 kernel: BIOS-provided physical RAM map: Dec 13 14:29:38.103112 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved Dec 13 14:29:38.103124 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable Dec 13 14:29:38.103136 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved Dec 13 14:29:38.103155 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable Dec 13 14:29:38.103168 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved Dec 13 14:29:38.103181 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bd276fff] usable Dec 13 14:29:38.103194 kernel: BIOS-e820: [mem 0x00000000bd277000-0x00000000bd280fff] ACPI data Dec 13 14:29:38.103208 kernel: BIOS-e820: [mem 0x00000000bd281000-0x00000000bf8ecfff] usable Dec 13 14:29:38.103221 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bfb6cfff] reserved Dec 13 14:29:38.103234 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data Dec 13 14:29:38.103247 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS Dec 13 14:29:38.103267 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable Dec 13 14:29:38.103282 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved Dec 13 14:29:38.103295 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable Dec 13 14:29:38.103343 kernel: NX (Execute Disable) protection: active Dec 13 14:29:38.103356 kernel: efi: EFI v2.70 by EDK II Dec 13 14:29:38.103370 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000 RNG=0xbfb73018 TPMEventLog=0xbd277018 Dec 13 14:29:38.103383 kernel: random: crng init done Dec 13 14:29:38.103575 kernel: SMBIOS 2.4 present. 
Dec 13 14:29:38.103596 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024 Dec 13 14:29:38.103609 kernel: Hypervisor detected: KVM Dec 13 14:29:38.103762 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Dec 13 14:29:38.103779 kernel: kvm-clock: cpu 0, msr 18c19a001, primary cpu clock Dec 13 14:29:38.103793 kernel: kvm-clock: using sched offset of 13136541629 cycles Dec 13 14:29:38.103819 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Dec 13 14:29:38.103969 kernel: tsc: Detected 2299.998 MHz processor Dec 13 14:29:38.103990 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Dec 13 14:29:38.104006 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Dec 13 14:29:38.104020 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000 Dec 13 14:29:38.104038 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Dec 13 14:29:38.104180 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000 Dec 13 14:29:38.104200 kernel: Using GB pages for direct mapping Dec 13 14:29:38.104216 kernel: Secure boot disabled Dec 13 14:29:38.104230 kernel: ACPI: Early table checksum verification disabled Dec 13 14:29:38.104245 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google) Dec 13 14:29:38.104262 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013) Dec 13 14:29:38.104280 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001) Dec 13 14:29:38.104306 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001) Dec 13 14:29:38.104320 kernel: ACPI: FACS 0x00000000BFBF2000 000040 Dec 13 14:29:38.104335 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20240322) Dec 13 14:29:38.104351 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001) Dec 13 14:29:38.104366 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001) Dec 13 14:29:38.104382 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001) Dec 13 14:29:38.104402 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001) Dec 13 14:29:38.104443 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001) Dec 13 14:29:38.104460 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3] Dec 13 14:29:38.104477 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63] Dec 13 14:29:38.104494 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f] Dec 13 14:29:38.104510 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315] Dec 13 14:29:38.104526 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033] Dec 13 14:29:38.104542 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7] Dec 13 14:29:38.104559 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075] Dec 13 14:29:38.104580 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f] Dec 13 14:29:38.104596 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027] Dec 13 14:29:38.104612 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Dec 13 14:29:38.104629 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Dec 13 14:29:38.104645 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Dec 13 14:29:38.104661 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff] Dec 13 
14:29:38.104675 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff] Dec 13 14:29:38.104691 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff] Dec 13 14:29:38.104707 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff] Dec 13 14:29:38.104726 kernel: NODE_DATA(0) allocated [mem 0x21fffa000-0x21fffffff] Dec 13 14:29:38.104742 kernel: Zone ranges: Dec 13 14:29:38.104758 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Dec 13 14:29:38.104773 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Dec 13 14:29:38.104789 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff] Dec 13 14:29:38.104805 kernel: Movable zone start for each node Dec 13 14:29:38.104831 kernel: Early memory node ranges Dec 13 14:29:38.104847 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff] Dec 13 14:29:38.104862 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff] Dec 13 14:29:38.104881 kernel: node 0: [mem 0x0000000000100000-0x00000000bd276fff] Dec 13 14:29:38.104897 kernel: node 0: [mem 0x00000000bd281000-0x00000000bf8ecfff] Dec 13 14:29:38.104913 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff] Dec 13 14:29:38.104930 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff] Dec 13 14:29:38.104947 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff] Dec 13 14:29:38.104963 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 13 14:29:38.104979 kernel: On node 0, zone DMA: 11 pages in unavailable ranges Dec 13 14:29:38.104996 kernel: On node 0, zone DMA: 104 pages in unavailable ranges Dec 13 14:29:38.105010 kernel: On node 0, zone DMA32: 10 pages in unavailable ranges Dec 13 14:29:38.105031 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Dec 13 14:29:38.105048 kernel: On node 0, zone Normal: 32 pages in unavailable ranges Dec 13 14:29:38.105064 kernel: ACPI: PM-Timer IO Port: 0xb008 Dec 13 14:29:38.105081 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Dec 13 14:29:38.105098 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Dec 13 14:29:38.105114 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Dec 13 14:29:38.105131 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Dec 13 14:29:38.105147 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Dec 13 14:29:38.105163 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Dec 13 14:29:38.105182 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Dec 13 14:29:38.105197 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Dec 13 14:29:38.105212 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Dec 13 14:29:38.105227 kernel: Booting paravirtualized kernel on KVM Dec 13 14:29:38.105242 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Dec 13 14:29:38.105258 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1 Dec 13 14:29:38.105274 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576 Dec 13 14:29:38.105289 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152 Dec 13 14:29:38.105305 kernel: pcpu-alloc: [0] 0 1 Dec 13 14:29:38.105325 kernel: kvm-guest: PV spinlocks enabled Dec 13 14:29:38.105341 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Dec 13 14:29:38.105358 kernel: Built 1 zonelists, mobility 
grouping on. Total pages: 1932270 Dec 13 14:29:38.105373 kernel: Policy zone: Normal Dec 13 14:29:38.105392 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e Dec 13 14:29:38.105409 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Dec 13 14:29:38.105447 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Dec 13 14:29:38.105467 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 13 14:29:38.105483 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 13 14:29:38.105503 kernel: Memory: 7515408K/7860544K available (12294K kernel code, 2275K rwdata, 13716K rodata, 47472K init, 4112K bss, 344876K reserved, 0K cma-reserved) Dec 13 14:29:38.105520 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Dec 13 14:29:38.105536 kernel: Kernel/User page tables isolation: enabled Dec 13 14:29:38.105553 kernel: ftrace: allocating 34549 entries in 135 pages Dec 13 14:29:38.105570 kernel: ftrace: allocated 135 pages with 4 groups Dec 13 14:29:38.105587 kernel: rcu: Hierarchical RCU implementation. Dec 13 14:29:38.105604 kernel: rcu: RCU event tracing is enabled. Dec 13 14:29:38.105622 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Dec 13 14:29:38.105643 kernel: Rude variant of Tasks RCU enabled. Dec 13 14:29:38.105684 kernel: Tracing variant of Tasks RCU enabled. Dec 13 14:29:38.105702 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 13 14:29:38.105723 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Dec 13 14:29:38.105741 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Dec 13 14:29:38.105782 kernel: Console: colour dummy device 80x25 Dec 13 14:29:38.105800 kernel: printk: console [ttyS0] enabled Dec 13 14:29:38.105825 kernel: ACPI: Core revision 20210730 Dec 13 14:29:38.105843 kernel: APIC: Switch to symmetric I/O mode setup Dec 13 14:29:38.105860 kernel: x2apic enabled Dec 13 14:29:38.105886 kernel: Switched APIC routing to physical x2apic. Dec 13 14:29:38.105903 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1 Dec 13 14:29:38.105922 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Dec 13 14:29:38.105940 kernel: Calibrating delay loop (skipped) preset value.. 
4599.99 BogoMIPS (lpj=2299998) Dec 13 14:29:38.105958 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024 Dec 13 14:29:38.105976 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4 Dec 13 14:29:38.105993 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Dec 13 14:29:38.106015 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Dec 13 14:29:38.106032 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Dec 13 14:29:38.106048 kernel: Spectre V2 : Mitigation: IBRS Dec 13 14:29:38.106064 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Dec 13 14:29:38.106082 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Dec 13 14:29:38.106099 kernel: RETBleed: Mitigation: IBRS Dec 13 14:29:38.106116 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Dec 13 14:29:38.106134 kernel: Spectre V2 : User space: Mitigation: STIBP via seccomp and prctl Dec 13 14:29:38.106152 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Dec 13 14:29:38.106174 kernel: MDS: Mitigation: Clear CPU buffers Dec 13 14:29:38.106192 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Dec 13 14:29:38.106209 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Dec 13 14:29:38.106227 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Dec 13 14:29:38.106244 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Dec 13 14:29:38.106262 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Dec 13 14:29:38.106280 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Dec 13 14:29:38.106298 kernel: Freeing SMP alternatives memory: 32K Dec 13 14:29:38.106315 kernel: pid_max: default: 32768 minimum: 301 Dec 13 14:29:38.106336 kernel: LSM: Security Framework initializing Dec 13 14:29:38.106354 kernel: SELinux: Initializing. Dec 13 14:29:38.106371 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Dec 13 14:29:38.106388 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Dec 13 14:29:38.106406 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0) Dec 13 14:29:38.109043 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. Dec 13 14:29:38.109069 kernel: signal: max sigframe size: 1776 Dec 13 14:29:38.109088 kernel: rcu: Hierarchical SRCU implementation. Dec 13 14:29:38.109105 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Dec 13 14:29:38.109266 kernel: smp: Bringing up secondary CPUs ... Dec 13 14:29:38.109284 kernel: x86: Booting SMP configuration: Dec 13 14:29:38.109302 kernel: .... node #0, CPUs: #1 Dec 13 14:29:38.109319 kernel: kvm-clock: cpu 1, msr 18c19a041, secondary cpu clock Dec 13 14:29:38.109477 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Dec 13 14:29:38.109497 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Dec 13 14:29:38.109515 kernel: smp: Brought up 1 node, 2 CPUs Dec 13 14:29:38.109532 kernel: smpboot: Max logical packages: 1 Dec 13 14:29:38.109680 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Dec 13 14:29:38.109698 kernel: devtmpfs: initialized Dec 13 14:29:38.109716 kernel: x86/mm: Memory block size: 128MB Dec 13 14:29:38.109732 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes) Dec 13 14:29:38.109750 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 13 14:29:38.109767 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Dec 13 14:29:38.109844 kernel: pinctrl core: initialized pinctrl subsystem Dec 13 14:29:38.109862 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 13 14:29:38.109879 kernel: audit: initializing netlink subsys (disabled) Dec 13 14:29:38.109901 kernel: audit: type=2000 audit(1734100177.123:1): state=initialized audit_enabled=0 res=1 Dec 13 14:29:38.109919 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 13 14:29:38.109936 kernel: thermal_sys: Registered thermal governor 'user_space' Dec 13 14:29:38.109954 kernel: cpuidle: using governor menu Dec 13 14:29:38.109972 kernel: ACPI: bus type PCI registered Dec 13 14:29:38.109989 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 13 14:29:38.110007 kernel: dca service started, version 1.12.1 Dec 13 14:29:38.110024 kernel: PCI: Using configuration type 1 for base access Dec 13 14:29:38.110043 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Dec 13 14:29:38.110063 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Dec 13 14:29:38.110081 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Dec 13 14:29:38.110098 kernel: ACPI: Added _OSI(Module Device) Dec 13 14:29:38.110115 kernel: ACPI: Added _OSI(Processor Device) Dec 13 14:29:38.110133 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Dec 13 14:29:38.110150 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 13 14:29:38.110168 kernel: ACPI: Added _OSI(Linux-Dell-Video) Dec 13 14:29:38.110186 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Dec 13 14:29:38.110203 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Dec 13 14:29:38.110224 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Dec 13 14:29:38.110242 kernel: ACPI: Interpreter enabled Dec 13 14:29:38.110260 kernel: ACPI: PM: (supports S0 S3 S5) Dec 13 14:29:38.110277 kernel: ACPI: Using IOAPIC for interrupt routing Dec 13 14:29:38.110295 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 13 14:29:38.110313 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Dec 13 14:29:38.110330 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 13 14:29:38.112703 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Dec 13 14:29:38.112906 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge. 
Dec 13 14:29:38.112933 kernel: PCI host bridge to bus 0000:00 Dec 13 14:29:38.113117 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Dec 13 14:29:38.113288 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Dec 13 14:29:38.113464 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Dec 13 14:29:38.113619 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window] Dec 13 14:29:38.113772 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 13 14:29:38.113965 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Dec 13 14:29:38.114171 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 Dec 13 14:29:38.114357 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Dec 13 14:29:38.114551 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Dec 13 14:29:38.114735 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 Dec 13 14:29:38.114910 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f] Dec 13 14:29:38.115124 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f] Dec 13 14:29:38.115317 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Dec 13 14:29:38.115773 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f] Dec 13 14:29:38.116229 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f] Dec 13 14:29:38.116763 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 Dec 13 14:29:38.116953 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f] Dec 13 14:29:38.117142 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f] Dec 13 14:29:38.117174 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Dec 13 14:29:38.117193 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Dec 13 14:29:38.117210 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Dec 13 14:29:38.117227 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Dec 13 14:29:38.117244 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Dec 13 14:29:38.117261 kernel: iommu: Default domain type: Translated Dec 13 14:29:38.117278 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Dec 13 14:29:38.117295 kernel: vgaarb: loaded Dec 13 14:29:38.117312 kernel: pps_core: LinuxPPS API ver. 1 registered Dec 13 14:29:38.117333 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it> Dec 13 14:29:38.117350 kernel: PTP clock support registered Dec 13 14:29:38.117367 kernel: Registered efivars operations Dec 13 14:29:38.117384 kernel: PCI: Using ACPI for IRQ routing Dec 13 14:29:38.117401 kernel: PCI: pci_cache_line_size set to 64 bytes Dec 13 14:29:38.117432 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff] Dec 13 14:29:38.117449 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff] Dec 13 14:29:38.117466 kernel: e820: reserve RAM buffer [mem 0xbd277000-0xbfffffff] Dec 13 14:29:38.117483 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff] Dec 13 14:29:38.117511 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff] Dec 13 14:29:38.117528 kernel: clocksource: Switched to clocksource kvm-clock Dec 13 14:29:38.117545 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 14:29:38.117562 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 13 14:29:38.117580 kernel: pnp: PnP ACPI init Dec 13 14:29:38.117597 kernel: pnp: PnP ACPI: found 7 devices Dec 13 14:29:38.117614 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Dec 13 14:29:38.117632 kernel: NET: Registered PF_INET protocol family Dec 13 14:29:38.117656 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Dec 13 14:29:38.117678 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Dec 13 14:29:38.117695 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 14:29:38.117713 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Dec 13 14:29:38.117730 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear) Dec 13 14:29:38.117747 kernel: TCP: Hash tables configured (established 65536 bind 65536) Dec 13 14:29:38.117764 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Dec 13 14:29:38.117782 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Dec 13 14:29:38.117798 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 14:29:38.117819 kernel: NET: Registered PF_XDP protocol family Dec 13 14:29:38.117991 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Dec 13 14:29:38.118151 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Dec 13 14:29:38.118310 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Dec 13 14:29:38.118476 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window] Dec 13 14:29:38.118647 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Dec 13 14:29:38.118670 kernel: PCI: CLS 0 bytes, default 64 Dec 13 14:29:38.118693 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Dec 13 14:29:38.118710 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB) Dec 13 14:29:38.118727 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Dec 13 14:29:38.118744 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Dec 13 14:29:38.118762 kernel: clocksource: Switched to clocksource tsc Dec 13 14:29:38.118778 kernel: Initialise system trusted keyrings Dec 13 14:29:38.118794 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Dec 13 14:29:38.118812 kernel: Key type asymmetric registered Dec 13 14:29:38.118827 kernel: Asymmetric key parser 'x509' registered Dec 13 14:29:38.118849 
kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Dec 13 14:29:38.118873 kernel: io scheduler mq-deadline registered Dec 13 14:29:38.118891 kernel: io scheduler kyber registered Dec 13 14:29:38.118906 kernel: io scheduler bfq registered Dec 13 14:29:38.118922 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 13 14:29:38.118939 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Dec 13 14:29:38.119126 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver Dec 13 14:29:38.119151 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Dec 13 14:29:38.119333 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver Dec 13 14:29:38.119358 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Dec 13 14:29:38.125018 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver Dec 13 14:29:38.125206 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 14:29:38.125230 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 13 14:29:38.125249 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Dec 13 14:29:38.125267 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A Dec 13 14:29:38.125379 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A Dec 13 14:29:38.125595 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0) Dec 13 14:29:38.125629 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 13 14:29:38.125647 kernel: i8042: Warning: Keylock active Dec 13 14:29:38.125665 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 13 14:29:38.125682 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 13 14:29:38.125875 kernel: rtc_cmos 00:00: RTC can wake from S4 Dec 13 14:29:38.126042 kernel: rtc_cmos 00:00: registered as rtc0 Dec 13 14:29:38.126197 kernel: rtc_cmos 00:00: setting system clock to 2024-12-13T14:29:37 UTC (1734100177) Dec 13 14:29:38.126348 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Dec 13 14:29:38.126376 kernel: intel_pstate: CPU model not supported Dec 13 14:29:38.126395 kernel: pstore: Registered efi as persistent store backend Dec 13 14:29:38.126413 kernel: NET: Registered PF_INET6 protocol family Dec 13 14:29:38.126469 kernel: Segment Routing with IPv6 Dec 13 14:29:38.126485 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 14:29:38.126503 kernel: NET: Registered PF_PACKET protocol family Dec 13 14:29:38.126521 kernel: Key type dns_resolver registered Dec 13 14:29:38.126539 kernel: IPI shorthand broadcast: enabled Dec 13 14:29:38.126557 kernel: sched_clock: Marking stable (745416799, 149120991)->(936005412, -41467622) Dec 13 14:29:38.126579 kernel: registered taskstats version 1 Dec 13 14:29:38.126597 kernel: Loading compiled-in X.509 certificates Dec 13 14:29:38.126615 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 13 14:29:38.126634 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: e1d88c9e01f5bb2adeb5b99325e46e5ca8dff115' Dec 13 14:29:38.126651 kernel: Key type .fscrypt registered Dec 13 14:29:38.126669 kernel: Key type fscrypt-provisioning registered Dec 13 14:29:38.126687 kernel: pstore: Using crash dump compression: deflate Dec 13 14:29:38.126705 kernel: ima: Allocated hash algorithm: sha1 Dec 13 14:29:38.126722 kernel: ima: No architecture policies found Dec 13 14:29:38.126743 kernel: clk: Disabling unused clocks Dec 13 14:29:38.126761 kernel: Freeing unused kernel image (initmem) 
memory: 47472K Dec 13 14:29:38.126779 kernel: Write protecting the kernel read-only data: 28672k Dec 13 14:29:38.126795 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Dec 13 14:29:38.126820 kernel: Freeing unused kernel image (rodata/data gap) memory: 620K Dec 13 14:29:38.126837 kernel: Run /init as init process Dec 13 14:29:38.126856 kernel: with arguments: Dec 13 14:29:38.126873 kernel: /init Dec 13 14:29:38.126891 kernel: with environment: Dec 13 14:29:38.126911 kernel: HOME=/ Dec 13 14:29:38.126929 kernel: TERM=linux Dec 13 14:29:38.126947 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 14:29:38.126969 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 14:29:38.126991 systemd[1]: Detected virtualization kvm. Dec 13 14:29:38.127010 systemd[1]: Detected architecture x86-64. Dec 13 14:29:38.127027 systemd[1]: Running in initrd. Dec 13 14:29:38.127056 systemd[1]: No hostname configured, using default hostname. Dec 13 14:29:38.127074 systemd[1]: Hostname set to <localhost>. Dec 13 14:29:38.127094 systemd[1]: Initializing machine ID from VM UUID. Dec 13 14:29:38.127112 systemd[1]: Queued start job for default target initrd.target. Dec 13 14:29:38.127131 systemd[1]: Started systemd-ask-password-console.path. Dec 13 14:29:38.127150 systemd[1]: Reached target cryptsetup.target. Dec 13 14:29:38.127168 systemd[1]: Reached target paths.target. Dec 13 14:29:38.127186 systemd[1]: Reached target slices.target. Dec 13 14:29:38.127207 systemd[1]: Reached target swap.target. Dec 13 14:29:38.127225 systemd[1]: Reached target timers.target. Dec 13 14:29:38.127245 systemd[1]: Listening on iscsid.socket. Dec 13 14:29:38.127264 systemd[1]: Listening on iscsiuio.socket. Dec 13 14:29:38.127283 systemd[1]: Listening on systemd-journald-audit.socket. Dec 13 14:29:38.127301 systemd[1]: Listening on systemd-journald-dev-log.socket. Dec 13 14:29:38.127319 systemd[1]: Listening on systemd-journald.socket. Dec 13 14:29:38.127337 systemd[1]: Listening on systemd-networkd.socket. Dec 13 14:29:38.127359 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 14:29:38.127378 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 14:29:38.132696 systemd[1]: Reached target sockets.target. Dec 13 14:29:38.132750 systemd[1]: Starting kmod-static-nodes.service... Dec 13 14:29:38.132914 systemd[1]: Finished network-cleanup.service. Dec 13 14:29:38.132934 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 14:29:38.132953 systemd[1]: Starting systemd-journald.service... Dec 13 14:29:38.132976 systemd[1]: Starting systemd-modules-load.service... Dec 13 14:29:38.132995 kernel: audit: type=1334 audit(1734100178.099:2): prog-id=6 op=LOAD Dec 13 14:29:38.133014 systemd[1]: Starting systemd-resolved.service... Dec 13 14:29:38.133180 systemd[1]: Starting systemd-vconsole-setup.service... Dec 13 14:29:38.133203 systemd[1]: Finished kmod-static-nodes.service. Dec 13 14:29:38.133223 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 14:29:38.133240 kernel: audit: type=1130 audit(1734100178.118:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:29:38.133257 systemd[1]: Finished systemd-vconsole-setup.service. Dec 13 14:29:38.133403 kernel: audit: type=1130 audit(1734100178.126:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:38.133508 systemd-journald[189]: Journal started Dec 13 14:29:38.133613 systemd-journald[189]: Runtime Journal (/run/log/journal/7cfbc37b4f251c39fac92ace5eeadaa8) is 8.0M, max 148.8M, 140.8M free. Dec 13 14:29:38.099000 audit: BPF prog-id=6 op=LOAD Dec 13 14:29:38.118000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:38.126000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:38.151159 systemd[1]: Started systemd-journald.service. Dec 13 14:29:38.151231 kernel: audit: type=1130 audit(1734100178.135:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:38.135000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:38.138453 systemd-modules-load[190]: Inserted module 'overlay' Dec 13 14:29:38.143186 systemd[1]: Starting dracut-cmdline-ask.service... Dec 13 14:29:38.148312 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Dec 13 14:29:38.158568 kernel: audit: type=1130 audit(1734100178.140:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:38.140000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:38.170000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:38.171791 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Dec 13 14:29:38.175483 kernel: audit: type=1130 audit(1734100178.170:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:38.192760 systemd[1]: Finished dracut-cmdline-ask.service. Dec 13 14:29:38.211184 kernel: audit: type=1130 audit(1734100178.195:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:38.195000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:38.197833 systemd[1]: Starting dracut-cmdline.service... 
Dec 13 14:29:38.218977 systemd-resolved[191]: Positive Trust Anchors: Dec 13 14:29:38.224555 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 14:29:38.219464 systemd-resolved[191]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 14:29:38.219529 systemd-resolved[191]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 14:29:38.238536 kernel: Bridge firewalling registered Dec 13 14:29:38.238572 dracut-cmdline[206]: dracut-dracut-053 Dec 13 14:29:38.238572 dracut-cmdline[206]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e Dec 13 14:29:38.227732 systemd-resolved[191]: Defaulting to hostname 'linux'. Dec 13 14:29:38.230658 systemd[1]: Started systemd-resolved.service. Dec 13 14:29:38.237760 systemd-modules-load[190]: Inserted module 'br_netfilter' Dec 13 14:29:38.272548 kernel: SCSI subsystem initialized Dec 13 14:29:38.275000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:38.276687 systemd[1]: Reached target nss-lookup.target. Dec 13 14:29:38.287563 kernel: audit: type=1130 audit(1734100178.275:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:38.296933 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 14:29:38.297012 kernel: device-mapper: uevent: version 1.0.3 Dec 13 14:29:38.298859 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Dec 13 14:29:38.303752 systemd-modules-load[190]: Inserted module 'dm_multipath' Dec 13 14:29:38.305527 systemd[1]: Finished systemd-modules-load.service. Dec 13 14:29:38.311000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:38.314049 systemd[1]: Starting systemd-sysctl.service... Dec 13 14:29:38.325564 kernel: audit: type=1130 audit(1734100178.311:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:38.328517 systemd[1]: Finished systemd-sysctl.service. Dec 13 14:29:38.327000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 13 14:29:38.343451 kernel: Loading iSCSI transport class v2.0-870. Dec 13 14:29:38.364463 kernel: iscsi: registered transport (tcp) Dec 13 14:29:38.392076 kernel: iscsi: registered transport (qla4xxx) Dec 13 14:29:38.392184 kernel: QLogic iSCSI HBA Driver Dec 13 14:29:38.438774 systemd[1]: Finished dracut-cmdline.service. Dec 13 14:29:38.437000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:38.440399 systemd[1]: Starting dracut-pre-udev.service... Dec 13 14:29:38.498475 kernel: raid6: avx2x4 gen() 18061 MB/s Dec 13 14:29:38.515464 kernel: raid6: avx2x4 xor() 6707 MB/s Dec 13 14:29:38.532463 kernel: raid6: avx2x2 gen() 18036 MB/s Dec 13 14:29:38.550460 kernel: raid6: avx2x2 xor() 18605 MB/s Dec 13 14:29:38.567469 kernel: raid6: avx2x1 gen() 13886 MB/s Dec 13 14:29:38.584457 kernel: raid6: avx2x1 xor() 16201 MB/s Dec 13 14:29:38.602469 kernel: raid6: sse2x4 gen() 11061 MB/s Dec 13 14:29:38.620497 kernel: raid6: sse2x4 xor() 6538 MB/s Dec 13 14:29:38.637469 kernel: raid6: sse2x2 gen() 11958 MB/s Dec 13 14:29:38.654466 kernel: raid6: sse2x2 xor() 7395 MB/s Dec 13 14:29:38.671461 kernel: raid6: sse2x1 gen() 10479 MB/s Dec 13 14:29:38.689351 kernel: raid6: sse2x1 xor() 5171 MB/s Dec 13 14:29:38.689392 kernel: raid6: using algorithm avx2x4 gen() 18061 MB/s Dec 13 14:29:38.689442 kernel: raid6: .... xor() 6707 MB/s, rmw enabled Dec 13 14:29:38.690336 kernel: raid6: using avx2x2 recovery algorithm Dec 13 14:29:38.706463 kernel: xor: automatically using best checksumming function avx Dec 13 14:29:38.813459 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Dec 13 14:29:38.826004 systemd[1]: Finished dracut-pre-udev.service. Dec 13 14:29:38.824000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:38.825000 audit: BPF prog-id=7 op=LOAD Dec 13 14:29:38.825000 audit: BPF prog-id=8 op=LOAD Dec 13 14:29:38.827732 systemd[1]: Starting systemd-udevd.service... Dec 13 14:29:38.845456 systemd-udevd[388]: Using default interface naming scheme 'v252'. Dec 13 14:29:38.853235 systemd[1]: Started systemd-udevd.service. Dec 13 14:29:38.853000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:38.855840 systemd[1]: Starting dracut-pre-trigger.service... Dec 13 14:29:38.878863 dracut-pre-trigger[395]: rd.md=0: removing MD RAID activation Dec 13 14:29:38.919730 systemd[1]: Finished dracut-pre-trigger.service. Dec 13 14:29:38.918000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:38.920854 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 14:29:38.987414 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 14:29:38.994000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:29:39.086082 kernel: scsi host0: Virtio SCSI HBA Dec 13 14:29:39.086206 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 14:29:39.109279 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Dec 13 14:29:39.207188 kernel: AVX2 version of gcm_enc/dec engaged. Dec 13 14:29:39.207289 kernel: AES CTR mode by8 optimization enabled Dec 13 14:29:39.221433 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB) Dec 13 14:29:39.281479 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Dec 13 14:29:39.281721 kernel: sd 0:0:1:0: [sda] Write Protect is off Dec 13 14:29:39.281945 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Dec 13 14:29:39.282150 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Dec 13 14:29:39.282351 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 14:29:39.282376 kernel: GPT:17805311 != 25165823 Dec 13 14:29:39.282399 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 14:29:39.282435 kernel: GPT:17805311 != 25165823 Dec 13 14:29:39.282457 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 14:29:39.282478 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 14:29:39.282501 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Dec 13 14:29:39.341974 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Dec 13 14:29:39.361717 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (442) Dec 13 14:29:39.361232 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Dec 13 14:29:39.383537 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Dec 13 14:29:39.383807 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Dec 13 14:29:39.419824 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 14:29:39.433667 systemd[1]: Starting disk-uuid.service... Dec 13 14:29:39.456800 disk-uuid[521]: Primary Header is updated. Dec 13 14:29:39.456800 disk-uuid[521]: Secondary Entries is updated. Dec 13 14:29:39.456800 disk-uuid[521]: Secondary Header is updated. Dec 13 14:29:39.488277 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 14:29:39.488324 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 14:29:39.514478 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 14:29:40.506177 disk-uuid[522]: The operation has completed successfully. Dec 13 14:29:40.514594 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 14:29:40.577185 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 14:29:40.577320 systemd[1]: Finished disk-uuid.service. Dec 13 14:29:40.594000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:40.594000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:40.605405 systemd[1]: Starting verity-setup.service... Dec 13 14:29:40.633448 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Dec 13 14:29:40.722003 systemd[1]: Found device dev-mapper-usr.device. Dec 13 14:29:40.736932 systemd[1]: Finished verity-setup.service. 
Dec 13 14:29:40.735000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:40.738297 systemd[1]: Mounting sysusr-usr.mount... Dec 13 14:29:40.839451 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Dec 13 14:29:40.840354 systemd[1]: Mounted sysusr-usr.mount. Dec 13 14:29:40.847825 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Dec 13 14:29:40.894567 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 14:29:40.894598 kernel: BTRFS info (device sda6): using free space tree Dec 13 14:29:40.894620 kernel: BTRFS info (device sda6): has skinny extents Dec 13 14:29:40.848830 systemd[1]: Starting ignition-setup.service... Dec 13 14:29:40.907572 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 13 14:29:40.863820 systemd[1]: Starting parse-ip-for-networkd.service... Dec 13 14:29:40.938785 systemd[1]: Finished ignition-setup.service. Dec 13 14:29:40.937000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:40.940626 systemd[1]: Starting ignition-fetch-offline.service... Dec 13 14:29:40.993560 systemd[1]: Finished parse-ip-for-networkd.service. Dec 13 14:29:40.992000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:40.993000 audit: BPF prog-id=9 op=LOAD Dec 13 14:29:40.995888 systemd[1]: Starting systemd-networkd.service... Dec 13 14:29:41.030332 systemd-networkd[696]: lo: Link UP Dec 13 14:29:41.030345 systemd-networkd[696]: lo: Gained carrier Dec 13 14:29:41.031646 systemd-networkd[696]: Enumeration completed Dec 13 14:29:41.031786 systemd[1]: Started systemd-networkd.service. Dec 13 14:29:41.060000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:41.032116 systemd-networkd[696]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 14:29:41.034765 systemd-networkd[696]: eth0: Link UP Dec 13 14:29:41.034772 systemd-networkd[696]: eth0: Gained carrier Dec 13 14:29:41.044185 systemd-networkd[696]: eth0: DHCPv4 address 10.128.0.81/32, gateway 10.128.0.1 acquired from 169.254.169.254 Dec 13 14:29:41.110000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:41.061895 systemd[1]: Reached target network.target. Dec 13 14:29:41.131602 iscsid[707]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Dec 13 14:29:41.131602 iscsid[707]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Dec 13 14:29:41.131602 iscsid[707]: into or discover targets. 
Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Dec 13 14:29:41.131602 iscsid[707]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Dec 13 14:29:41.131602 iscsid[707]: If using hardware iscsi like qla4xxx this message can be ignored. Dec 13 14:29:41.131602 iscsid[707]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Dec 13 14:29:41.131602 iscsid[707]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Dec 13 14:29:41.142000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:41.196000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:41.081775 systemd[1]: Starting iscsiuio.service... Dec 13 14:29:41.230756 ignition[642]: Ignition 2.14.0 Dec 13 14:29:41.304000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:41.093862 systemd[1]: Started iscsiuio.service. Dec 13 14:29:41.319000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:41.230770 ignition[642]: Stage: fetch-offline Dec 13 14:29:41.112954 systemd[1]: Starting iscsid.service... Dec 13 14:29:41.230852 ignition[642]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:29:41.136954 systemd[1]: Started iscsid.service. Dec 13 14:29:41.230895 ignition[642]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Dec 13 14:29:41.144870 systemd[1]: Starting dracut-initqueue.service... Dec 13 14:29:41.253295 ignition[642]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Dec 13 14:29:41.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:41.163363 systemd[1]: Finished dracut-initqueue.service. Dec 13 14:29:41.253544 ignition[642]: parsed url from cmdline: "" Dec 13 14:29:41.197825 systemd[1]: Reached target remote-fs-pre.target. Dec 13 14:29:41.253553 ignition[642]: no config URL provided Dec 13 14:29:41.233719 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 14:29:41.253562 ignition[642]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 14:29:41.435000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:41.266616 systemd[1]: Reached target remote-fs.target. Dec 13 14:29:41.253574 ignition[642]: no config at "/usr/lib/ignition/user.ign" Dec 13 14:29:41.459000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:29:41.267860 systemd[1]: Starting dracut-pre-mount.service... Dec 13 14:29:41.253584 ignition[642]: failed to fetch config: resource requires networking Dec 13 14:29:41.290974 systemd[1]: Finished ignition-fetch-offline.service. Dec 13 14:29:41.253739 ignition[642]: Ignition finished successfully Dec 13 14:29:41.305896 systemd[1]: Finished dracut-pre-mount.service. Dec 13 14:29:41.334042 ignition[721]: Ignition 2.14.0 Dec 13 14:29:41.321974 systemd[1]: Starting ignition-fetch.service... Dec 13 14:29:41.334051 ignition[721]: Stage: fetch Dec 13 14:29:41.354104 unknown[721]: fetched base config from "system" Dec 13 14:29:41.334184 ignition[721]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:29:41.354117 unknown[721]: fetched base config from "system" Dec 13 14:29:41.334214 ignition[721]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Dec 13 14:29:41.354126 unknown[721]: fetched user config from "gcp" Dec 13 14:29:41.342821 ignition[721]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Dec 13 14:29:41.366106 systemd[1]: Finished ignition-fetch.service. Dec 13 14:29:41.343015 ignition[721]: parsed url from cmdline: "" Dec 13 14:29:41.383001 systemd[1]: Starting ignition-kargs.service... Dec 13 14:29:41.343022 ignition[721]: no config URL provided Dec 13 14:29:41.416076 systemd[1]: Finished ignition-kargs.service. Dec 13 14:29:41.343030 ignition[721]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 14:29:41.438031 systemd[1]: Starting ignition-disks.service... Dec 13 14:29:41.343040 ignition[721]: no config at "/usr/lib/ignition/user.ign" Dec 13 14:29:41.460078 systemd[1]: Finished ignition-disks.service. Dec 13 14:29:41.343080 ignition[721]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Dec 13 14:29:41.460902 systemd[1]: Reached target initrd-root-device.target. Dec 13 14:29:41.351543 ignition[721]: GET result: OK Dec 13 14:29:41.483622 systemd[1]: Reached target local-fs-pre.target. Dec 13 14:29:41.351610 ignition[721]: parsing config with SHA512: 99f5509ec61e4a8bccb08ffc9f4aab4dc543c56e03484742445eb0373ee658fe8431995267cddd9538778509929e5439390d268a002d1ea3a0381149b9e86633 Dec 13 14:29:41.497636 systemd[1]: Reached target local-fs.target. Dec 13 14:29:41.354908 ignition[721]: fetch: fetch complete Dec 13 14:29:41.512623 systemd[1]: Reached target sysinit.target. Dec 13 14:29:41.354915 ignition[721]: fetch: fetch passed Dec 13 14:29:41.526609 systemd[1]: Reached target basic.target. Dec 13 14:29:41.354969 ignition[721]: Ignition finished successfully Dec 13 14:29:41.527995 systemd[1]: Starting systemd-fsck-root.service... 
Dec 13 14:29:41.396953 ignition[727]: Ignition 2.14.0 Dec 13 14:29:41.396963 ignition[727]: Stage: kargs Dec 13 14:29:41.397096 ignition[727]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:29:41.397127 ignition[727]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Dec 13 14:29:41.405254 ignition[727]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Dec 13 14:29:41.406349 ignition[727]: kargs: kargs passed Dec 13 14:29:41.406402 ignition[727]: Ignition finished successfully Dec 13 14:29:41.449924 ignition[733]: Ignition 2.14.0 Dec 13 14:29:41.449933 ignition[733]: Stage: disks Dec 13 14:29:41.450078 ignition[733]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:29:41.450109 ignition[733]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Dec 13 14:29:41.457700 ignition[733]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Dec 13 14:29:41.458933 ignition[733]: disks: disks passed Dec 13 14:29:41.458988 ignition[733]: Ignition finished successfully Dec 13 14:29:41.575234 systemd-fsck[741]: ROOT: clean, 621/1628000 files, 124058/1617920 blocks Dec 13 14:29:41.783489 systemd[1]: Finished systemd-fsck-root.service. Dec 13 14:29:41.790000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:41.792873 systemd[1]: Mounting sysroot.mount... Dec 13 14:29:41.822530 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Dec 13 14:29:41.823990 systemd[1]: Mounted sysroot.mount. Dec 13 14:29:41.824335 systemd[1]: Reached target initrd-root-fs.target. Dec 13 14:29:41.839482 systemd[1]: Mounting sysroot-usr.mount... Dec 13 14:29:41.858188 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Dec 13 14:29:41.858247 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 14:29:41.858282 systemd[1]: Reached target ignition-diskful.target. Dec 13 14:29:41.874969 systemd[1]: Mounted sysroot-usr.mount. Dec 13 14:29:41.956604 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (747) Dec 13 14:29:41.956644 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 14:29:41.956667 kernel: BTRFS info (device sda6): using free space tree Dec 13 14:29:41.956691 kernel: BTRFS info (device sda6): has skinny extents Dec 13 14:29:41.898678 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 14:29:41.971607 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 13 14:29:41.950862 systemd[1]: Starting initrd-setup-root.service... Dec 13 14:29:41.980631 initrd-setup-root[769]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 14:29:41.990579 initrd-setup-root[778]: cut: /sysroot/etc/group: No such file or directory Dec 13 14:29:42.000567 initrd-setup-root[786]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 14:29:42.000941 initrd-setup-root[794]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 14:29:42.019900 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Dec 13 14:29:42.056764 systemd[1]: Finished initrd-setup-root.service. Dec 13 14:29:42.079309 kernel: kauditd_printk_skb: 24 callbacks suppressed Dec 13 14:29:42.079389 kernel: audit: type=1130 audit(1734100182.069:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:42.069000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:42.072042 systemd[1]: Starting ignition-mount.service... Dec 13 14:29:42.097709 systemd-networkd[696]: eth0: Gained IPv6LL Dec 13 14:29:42.109661 systemd[1]: Starting sysroot-boot.service... Dec 13 14:29:42.131878 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Dec 13 14:29:42.132187 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Dec 13 14:29:42.148361 systemd[1]: Finished sysroot-boot.service. Dec 13 14:29:42.159620 ignition[813]: INFO : Ignition 2.14.0 Dec 13 14:29:42.159620 ignition[813]: INFO : Stage: mount Dec 13 14:29:42.159620 ignition[813]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:29:42.159620 ignition[813]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Dec 13 14:29:42.159620 ignition[813]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Dec 13 14:29:42.292780 kernel: audit: type=1130 audit(1734100182.173:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:42.292823 kernel: audit: type=1130 audit(1734100182.208:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:42.292843 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (823) Dec 13 14:29:42.292867 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 14:29:42.292888 kernel: BTRFS info (device sda6): using free space tree Dec 13 14:29:42.292907 kernel: BTRFS info (device sda6): has skinny extents Dec 13 14:29:42.173000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:42.208000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:42.196117 systemd[1]: Finished ignition-mount.service. Dec 13 14:29:42.323745 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 13 14:29:42.323831 ignition[813]: INFO : mount: mount passed Dec 13 14:29:42.323831 ignition[813]: INFO : Ignition finished successfully Dec 13 14:29:42.211062 systemd[1]: Starting ignition-files.service... Dec 13 14:29:42.245038 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 14:29:42.318592 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Dec 13 14:29:42.374464 ignition[842]: INFO : Ignition 2.14.0 Dec 13 14:29:42.374464 ignition[842]: INFO : Stage: files Dec 13 14:29:42.374464 ignition[842]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:29:42.374464 ignition[842]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Dec 13 14:29:42.374464 ignition[842]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Dec 13 14:29:42.374464 ignition[842]: DEBUG : files: compiled without relabeling support, skipping Dec 13 14:29:42.374464 ignition[842]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 14:29:42.374464 ignition[842]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 14:29:42.374464 ignition[842]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 14:29:42.374464 ignition[842]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 14:29:42.374464 ignition[842]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 14:29:42.374464 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/hosts" Dec 13 14:29:42.374464 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(3): oem config not found in "/usr/share/oem", looking on oem partition Dec 13 14:29:42.543572 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (842) Dec 13 14:29:42.370739 unknown[842]: wrote ssh authorized keys file for user: core Dec 13 14:29:42.552584 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2463217373" Dec 13 14:29:42.552584 ignition[842]: CRITICAL : files: createFilesystemsFiles: createFiles: op(3): op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2463217373": device or resource busy Dec 13 14:29:42.552584 ignition[842]: ERROR : files: createFilesystemsFiles: createFiles: op(3): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2463217373", trying btrfs: device or resource busy Dec 13 14:29:42.552584 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2463217373" Dec 13 14:29:42.552584 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2463217373" Dec 13 14:29:42.552584 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(6): [started] unmounting "/mnt/oem2463217373" Dec 13 14:29:42.552584 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(6): [finished] unmounting "/mnt/oem2463217373" Dec 13 14:29:42.552584 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/hosts" Dec 13 14:29:42.552584 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/etc/profile.d/google-cloud-sdk.sh" Dec 13 14:29:42.552584 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(7): oem config not found in "/usr/share/oem", looking on oem partition Dec 13 14:29:42.552584 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(7): op(8): [started] mounting 
"/dev/disk/by-label/OEM" at "/mnt/oem985272525" Dec 13 14:29:42.552584 ignition[842]: CRITICAL : files: createFilesystemsFiles: createFiles: op(7): op(8): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem985272525": device or resource busy Dec 13 14:29:42.552584 ignition[842]: ERROR : files: createFilesystemsFiles: createFiles: op(7): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem985272525", trying btrfs: device or resource busy Dec 13 14:29:42.552584 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(7): op(9): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem985272525" Dec 13 14:29:42.792625 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(7): op(9): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem985272525" Dec 13 14:29:42.792625 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(7): op(a): [started] unmounting "/mnt/oem985272525" Dec 13 14:29:42.792625 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(7): op(a): [finished] unmounting "/mnt/oem985272525" Dec 13 14:29:42.792625 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/etc/profile.d/google-cloud-sdk.sh" Dec 13 14:29:42.792625 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/install.sh" Dec 13 14:29:42.792625 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 14:29:42.792625 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 14:29:42.792625 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 14:29:42.792625 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 14:29:42.792625 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 14:29:42.792625 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/etc/systemd/system/oem-gce.service" Dec 13 14:29:42.792625 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(e): oem config not found in "/usr/share/oem", looking on oem partition Dec 13 14:29:42.792625 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(e): op(f): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem988968211" Dec 13 14:29:42.792625 ignition[842]: CRITICAL : files: createFilesystemsFiles: createFiles: op(e): op(f): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem988968211": device or resource busy Dec 13 14:29:43.028605 ignition[842]: ERROR : files: createFilesystemsFiles: createFiles: op(e): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem988968211", trying btrfs: device or resource busy Dec 13 14:29:43.028605 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(e): op(10): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem988968211" Dec 13 14:29:43.028605 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(e): op(10): [finished] mounting "/dev/disk/by-label/OEM" at 
"/mnt/oem988968211" Dec 13 14:29:43.028605 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(e): op(11): [started] unmounting "/mnt/oem988968211" Dec 13 14:29:43.028605 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(e): op(11): [finished] unmounting "/mnt/oem988968211" Dec 13 14:29:43.028605 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/etc/systemd/system/oem-gce.service" Dec 13 14:29:43.028605 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(12): [started] writing file "/sysroot/etc/systemd/system/oem-gce-enable-oslogin.service" Dec 13 14:29:43.028605 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(12): oem config not found in "/usr/share/oem", looking on oem partition Dec 13 14:29:43.028605 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(12): op(13): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2164844312" Dec 13 14:29:43.028605 ignition[842]: CRITICAL : files: createFilesystemsFiles: createFiles: op(12): op(13): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2164844312": device or resource busy Dec 13 14:29:43.028605 ignition[842]: ERROR : files: createFilesystemsFiles: createFiles: op(12): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2164844312", trying btrfs: device or resource busy Dec 13 14:29:43.028605 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(12): op(14): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2164844312" Dec 13 14:29:43.028605 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(12): op(14): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2164844312" Dec 13 14:29:43.028605 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(12): op(15): [started] unmounting "/mnt/oem2164844312" Dec 13 14:29:43.273617 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(12): op(15): [finished] unmounting "/mnt/oem2164844312" Dec 13 14:29:43.273617 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(12): [finished] writing file "/sysroot/etc/systemd/system/oem-gce-enable-oslogin.service" Dec 13 14:29:43.273617 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(16): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 14:29:43.273617 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(16): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Dec 13 14:29:43.273617 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(16): GET result: OK Dec 13 14:29:43.273617 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(16): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 14:29:43.273617 ignition[842]: INFO : files: op(17): [started] processing unit "coreos-metadata-sshkeys@.service" Dec 13 14:29:43.273617 ignition[842]: INFO : files: op(17): [finished] processing unit "coreos-metadata-sshkeys@.service" Dec 13 14:29:43.273617 ignition[842]: INFO : files: op(18): [started] processing unit "oem-gce.service" Dec 13 14:29:43.273617 ignition[842]: INFO : files: op(18): [finished] processing unit "oem-gce.service" Dec 13 14:29:43.273617 ignition[842]: INFO : files: op(19): [started] processing unit "oem-gce-enable-oslogin.service" Dec 13 14:29:43.273617 ignition[842]: INFO : files: 
op(19): [finished] processing unit "oem-gce-enable-oslogin.service" Dec 13 14:29:43.273617 ignition[842]: INFO : files: op(1a): [started] setting preset to enabled for "oem-gce-enable-oslogin.service" Dec 13 14:29:43.273617 ignition[842]: INFO : files: op(1a): [finished] setting preset to enabled for "oem-gce-enable-oslogin.service" Dec 13 14:29:43.273617 ignition[842]: INFO : files: op(1b): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Dec 13 14:29:43.273617 ignition[842]: INFO : files: op(1b): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Dec 13 14:29:43.273617 ignition[842]: INFO : files: op(1c): [started] setting preset to enabled for "oem-gce.service" Dec 13 14:29:43.273617 ignition[842]: INFO : files: op(1c): [finished] setting preset to enabled for "oem-gce.service" Dec 13 14:29:43.746616 kernel: audit: type=1130 audit(1734100183.288:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:43.746679 kernel: audit: type=1130 audit(1734100183.394:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:43.746706 kernel: audit: type=1130 audit(1734100183.460:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:43.746729 kernel: audit: type=1131 audit(1734100183.460:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:43.746749 kernel: audit: type=1130 audit(1734100183.557:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:43.746764 kernel: audit: type=1131 audit(1734100183.557:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:43.746778 kernel: audit: type=1130 audit(1734100183.707:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:43.288000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:43.394000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:43.460000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:43.460000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:29:43.557000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:43.557000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:43.707000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:43.747037 ignition[842]: INFO : files: createResultFile: createFiles: op(1d): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 14:29:43.747037 ignition[842]: INFO : files: createResultFile: createFiles: op(1d): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 14:29:43.747037 ignition[842]: INFO : files: files passed Dec 13 14:29:43.747037 ignition[842]: INFO : Ignition finished successfully Dec 13 14:29:43.279043 systemd[1]: Finished ignition-files.service. Dec 13 14:29:43.299494 systemd[1]: Starting initrd-setup-root-after-ignition.service... Dec 13 14:29:43.833000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:43.334798 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Dec 13 14:29:43.868644 initrd-setup-root-after-ignition[865]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 14:29:43.335925 systemd[1]: Starting ignition-quench.service... Dec 13 14:29:43.385099 systemd[1]: Finished initrd-setup-root-after-ignition.service. Dec 13 14:29:43.396259 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 14:29:43.396408 systemd[1]: Finished ignition-quench.service. Dec 13 14:29:43.461885 systemd[1]: Reached target ignition-complete.target. Dec 13 14:29:43.515951 systemd[1]: Starting initrd-parse-etc.service... Dec 13 14:29:43.546511 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 14:29:43.546714 systemd[1]: Finished initrd-parse-etc.service. Dec 13 14:29:43.558955 systemd[1]: Reached target initrd-fs.target. Dec 13 14:29:43.629757 systemd[1]: Reached target initrd.target. Dec 13 14:29:44.037000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:43.663852 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Dec 13 14:29:43.665264 systemd[1]: Starting dracut-pre-pivot.service... Dec 13 14:29:44.071000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:43.692109 systemd[1]: Finished dracut-pre-pivot.service. Dec 13 14:29:44.089000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:43.710477 systemd[1]: Starting initrd-cleanup.service... 
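[Editor's note] The files stage logged above writes files such as /etc/flatcar/update.conf and enables units like oem-gce.service, all driven by the user config fetched earlier. The sketch below builds the kind of Ignition config fragment that would produce such operations; the spec version string, field set, and file contents are assumptions for illustration (spec 2.x style, roughly matching this Ignition 2.14.0 branch), not reconstructed from this particular instance's config.

# Illustrative Ignition-style config fragment: one file write plus one unit enable,
# similar in shape to the files-stage operations in the log. Field names/values are
# assumptions, not taken from the actual user config parsed above.
import json

config = {
    "ignition": {"version": "2.3.0"},          # assumed spec version
    "storage": {
        "files": [
            {
                "filesystem": "root",
                "path": "/etc/flatcar/update.conf",
                "mode": 0o644,                   # serialized as 420 in JSON
                "contents": {"source": "data:,GROUP%3Dstable%0A"},  # "GROUP=stable\n"
            }
        ]
    },
    "systemd": {
        "units": [
            {"name": "oem-gce.service", "enabled": True},
        ]
    },
}

print(json.dumps(config, indent=2))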
Dec 13 14:29:44.104000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:43.762624 systemd[1]: Stopped target nss-lookup.target. Dec 13 14:29:44.119608 ignition[880]: INFO : Ignition 2.14.0 Dec 13 14:29:44.119608 ignition[880]: INFO : Stage: umount Dec 13 14:29:44.119608 ignition[880]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:29:44.119608 ignition[880]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Dec 13 14:29:43.768900 systemd[1]: Stopped target remote-cryptsetup.target. Dec 13 14:29:44.176000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:44.191708 iscsid[707]: iscsid shutting down. Dec 13 14:29:44.206723 ignition[880]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Dec 13 14:29:44.206723 ignition[880]: INFO : umount: umount passed Dec 13 14:29:44.206723 ignition[880]: INFO : Ignition finished successfully Dec 13 14:29:44.213000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:44.232000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:44.247000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:43.803985 systemd[1]: Stopped target timers.target. Dec 13 14:29:44.263000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:43.818900 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 14:29:44.277000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:43.819096 systemd[1]: Stopped dracut-pre-pivot.service. Dec 13 14:29:44.294000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:43.835120 systemd[1]: Stopped target initrd.target. Dec 13 14:29:44.309000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:43.851886 systemd[1]: Stopped target basic.target. Dec 13 14:29:44.324000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:43.875945 systemd[1]: Stopped target ignition-complete.target. 
Dec 13 14:29:44.339000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:43.897778 systemd[1]: Stopped target ignition-diskful.target. Dec 13 14:29:43.912790 systemd[1]: Stopped target initrd-root-device.target. Dec 13 14:29:43.929796 systemd[1]: Stopped target remote-fs.target. Dec 13 14:29:43.945801 systemd[1]: Stopped target remote-fs-pre.target. Dec 13 14:29:43.960808 systemd[1]: Stopped target sysinit.target. Dec 13 14:29:43.976821 systemd[1]: Stopped target local-fs.target. Dec 13 14:29:43.992797 systemd[1]: Stopped target local-fs-pre.target. Dec 13 14:29:44.432000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:44.009814 systemd[1]: Stopped target swap.target. Dec 13 14:29:44.447000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:44.023726 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 14:29:44.023932 systemd[1]: Stopped dracut-pre-mount.service. Dec 13 14:29:44.469000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:44.038934 systemd[1]: Stopped target cryptsetup.target. Dec 13 14:29:44.490000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:44.490000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:44.053805 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 14:29:44.053999 systemd[1]: Stopped dracut-initqueue.service. Dec 13 14:29:44.072952 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 14:29:44.073136 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Dec 13 14:29:44.090923 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 14:29:44.562000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:44.091103 systemd[1]: Stopped ignition-files.service. Dec 13 14:29:44.585000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:44.585000 audit: BPF prog-id=6 op=UNLOAD Dec 13 14:29:44.107234 systemd[1]: Stopping ignition-mount.service... Dec 13 14:29:44.141993 systemd[1]: Stopping iscsid.service... Dec 13 14:29:44.153750 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 14:29:44.627000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:29:44.153965 systemd[1]: Stopped kmod-static-nodes.service. Dec 13 14:29:44.645000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:44.179660 systemd[1]: Stopping sysroot-boot.service... Dec 13 14:29:44.660000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:44.198576 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 14:29:44.198858 systemd[1]: Stopped systemd-udev-trigger.service. Dec 13 14:29:44.685000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:44.214934 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 14:29:44.215113 systemd[1]: Stopped dracut-pre-trigger.service. Dec 13 14:29:44.237780 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 14:29:44.740000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:44.238972 systemd[1]: iscsid.service: Deactivated successfully. Dec 13 14:29:44.756000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:44.239093 systemd[1]: Stopped iscsid.service. Dec 13 14:29:44.773000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:44.249357 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 14:29:44.249497 systemd[1]: Stopped ignition-mount.service. Dec 13 14:29:44.265328 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 14:29:44.813000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:44.265473 systemd[1]: Stopped sysroot-boot.service. Dec 13 14:29:44.829000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:44.279457 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 14:29:44.844000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:44.844000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:44.279606 systemd[1]: Stopped ignition-disks.service. Dec 13 14:29:44.295678 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 14:29:44.295770 systemd[1]: Stopped ignition-kargs.service. 
Dec 13 14:29:44.310706 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 13 14:29:44.310786 systemd[1]: Stopped ignition-fetch.service. Dec 13 14:29:44.325735 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 14:29:44.325819 systemd[1]: Stopped ignition-fetch-offline.service. Dec 13 14:29:44.929491 systemd-journald[189]: Received SIGTERM from PID 1 (n/a). Dec 13 14:29:44.340707 systemd[1]: Stopped target paths.target. Dec 13 14:29:44.354599 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 14:29:44.358558 systemd[1]: Stopped systemd-ask-password-console.path. Dec 13 14:29:44.370575 systemd[1]: Stopped target slices.target. Dec 13 14:29:44.384603 systemd[1]: Stopped target sockets.target. Dec 13 14:29:44.397671 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 14:29:44.397746 systemd[1]: Closed iscsid.socket. Dec 13 14:29:44.418809 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 14:29:44.418911 systemd[1]: Stopped ignition-setup.service. Dec 13 14:29:44.433784 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 14:29:44.433856 systemd[1]: Stopped initrd-setup-root.service. Dec 13 14:29:44.448894 systemd[1]: Stopping iscsiuio.service... Dec 13 14:29:44.463027 systemd[1]: iscsiuio.service: Deactivated successfully. Dec 13 14:29:44.463165 systemd[1]: Stopped iscsiuio.service. Dec 13 14:29:44.471174 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 14:29:44.471284 systemd[1]: Finished initrd-cleanup.service. Dec 13 14:29:44.492649 systemd[1]: Stopped target network.target. Dec 13 14:29:44.507657 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 14:29:44.507739 systemd[1]: Closed iscsiuio.socket. Dec 13 14:29:44.522875 systemd[1]: Stopping systemd-networkd.service... Dec 13 14:29:44.527509 systemd-networkd[696]: eth0: DHCPv6 lease lost Dec 13 14:29:44.536807 systemd[1]: Stopping systemd-resolved.service... Dec 13 14:29:44.555992 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 14:29:44.556119 systemd[1]: Stopped systemd-resolved.service. Dec 13 14:29:44.564458 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 14:29:44.564600 systemd[1]: Stopped systemd-networkd.service. Dec 13 14:29:44.587335 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 14:29:44.587378 systemd[1]: Closed systemd-networkd.socket. Dec 13 14:29:44.602796 systemd[1]: Stopping network-cleanup.service... Dec 13 14:29:44.615574 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 14:29:44.615690 systemd[1]: Stopped parse-ip-for-networkd.service. Dec 13 14:29:44.628726 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 14:29:44.628800 systemd[1]: Stopped systemd-sysctl.service. Dec 13 14:29:44.646810 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 14:29:44.646872 systemd[1]: Stopped systemd-modules-load.service. Dec 13 14:29:44.661845 systemd[1]: Stopping systemd-udevd.service... Dec 13 14:29:44.678457 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 13 14:29:44.679103 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 14:29:44.679262 systemd[1]: Stopped systemd-udevd.service. Dec 13 14:29:44.688287 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 14:29:44.688379 systemd[1]: Closed systemd-udevd-control.socket. 
Dec 13 14:29:44.708698 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 14:29:44.708764 systemd[1]: Closed systemd-udevd-kernel.socket. Dec 13 14:29:44.726741 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 14:29:44.726822 systemd[1]: Stopped dracut-pre-udev.service. Dec 13 14:29:44.741856 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 14:29:44.741928 systemd[1]: Stopped dracut-cmdline.service. Dec 13 14:29:44.757779 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 14:29:44.757848 systemd[1]: Stopped dracut-cmdline-ask.service. Dec 13 14:29:44.775910 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Dec 13 14:29:44.799601 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 14:29:44.799728 systemd[1]: Stopped systemd-vconsole-setup.service. Dec 13 14:29:44.815398 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 14:29:44.815567 systemd[1]: Stopped network-cleanup.service. Dec 13 14:29:44.831166 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 14:29:44.831289 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Dec 13 14:29:44.845995 systemd[1]: Reached target initrd-switch-root.target. Dec 13 14:29:44.862840 systemd[1]: Starting initrd-switch-root.service... Dec 13 14:29:44.899489 systemd[1]: Switching root. Dec 13 14:29:44.949275 systemd-journald[189]: Journal stopped Dec 13 14:29:49.580893 kernel: SELinux: Class mctp_socket not defined in policy. Dec 13 14:29:49.581027 kernel: SELinux: Class anon_inode not defined in policy. Dec 13 14:29:49.581062 kernel: SELinux: the above unknown classes and permissions will be allowed Dec 13 14:29:49.581084 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 14:29:49.581106 kernel: SELinux: policy capability open_perms=1 Dec 13 14:29:49.581129 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 14:29:49.581152 kernel: SELinux: policy capability always_check_network=0 Dec 13 14:29:49.581178 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 14:29:49.581200 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 14:29:49.581229 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 14:29:49.581260 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 14:29:49.581284 systemd[1]: Successfully loaded SELinux policy in 108.950ms. Dec 13 14:29:49.581323 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.691ms. Dec 13 14:29:49.581349 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 14:29:49.581374 systemd[1]: Detected virtualization kvm. Dec 13 14:29:49.581397 systemd[1]: Detected architecture x86-64. Dec 13 14:29:49.581459 systemd[1]: Detected first boot. Dec 13 14:29:49.581485 systemd[1]: Initializing machine ID from VM UUID. Dec 13 14:29:49.581514 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Dec 13 14:29:49.581538 systemd[1]: Populated /etc with preset unit settings. Dec 13 14:29:49.581564 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. 
Support for CPUShares= will be removed soon. Dec 13 14:29:49.581595 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:29:49.581622 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:29:49.581652 kernel: kauditd_printk_skb: 51 callbacks suppressed Dec 13 14:29:49.581675 kernel: audit: type=1334 audit(1734100188.705:89): prog-id=12 op=LOAD Dec 13 14:29:49.581698 kernel: audit: type=1334 audit(1734100188.705:90): prog-id=3 op=UNLOAD Dec 13 14:29:49.581726 kernel: audit: type=1334 audit(1734100188.711:91): prog-id=13 op=LOAD Dec 13 14:29:49.581748 kernel: audit: type=1334 audit(1734100188.718:92): prog-id=14 op=LOAD Dec 13 14:29:49.581771 kernel: audit: type=1334 audit(1734100188.718:93): prog-id=4 op=UNLOAD Dec 13 14:29:49.581794 kernel: audit: type=1334 audit(1734100188.718:94): prog-id=5 op=UNLOAD Dec 13 14:29:49.581816 kernel: audit: type=1334 audit(1734100188.725:95): prog-id=15 op=LOAD Dec 13 14:29:49.581836 kernel: audit: type=1334 audit(1734100188.725:96): prog-id=12 op=UNLOAD Dec 13 14:29:49.581858 kernel: audit: type=1334 audit(1734100188.753:97): prog-id=16 op=LOAD Dec 13 14:29:49.581879 kernel: audit: type=1334 audit(1734100188.759:98): prog-id=17 op=LOAD Dec 13 14:29:49.581915 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 13 14:29:49.581939 systemd[1]: Stopped initrd-switch-root.service. Dec 13 14:29:49.581962 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 13 14:29:49.581986 systemd[1]: Created slice system-addon\x2dconfig.slice. Dec 13 14:29:49.582011 systemd[1]: Created slice system-addon\x2drun.slice. Dec 13 14:29:49.582036 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Dec 13 14:29:49.582060 systemd[1]: Created slice system-getty.slice. Dec 13 14:29:49.582083 systemd[1]: Created slice system-modprobe.slice. Dec 13 14:29:49.582111 systemd[1]: Created slice system-serial\x2dgetty.slice. Dec 13 14:29:49.582134 systemd[1]: Created slice system-system\x2dcloudinit.slice. Dec 13 14:29:49.582158 systemd[1]: Created slice system-systemd\x2dfsck.slice. Dec 13 14:29:49.582180 systemd[1]: Created slice user.slice. Dec 13 14:29:49.582204 systemd[1]: Started systemd-ask-password-console.path. Dec 13 14:29:49.582230 systemd[1]: Started systemd-ask-password-wall.path. Dec 13 14:29:49.582253 systemd[1]: Set up automount boot.automount. Dec 13 14:29:49.582277 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Dec 13 14:29:49.582302 systemd[1]: Stopped target initrd-switch-root.target. Dec 13 14:29:49.582329 systemd[1]: Stopped target initrd-fs.target. Dec 13 14:29:49.582352 systemd[1]: Stopped target initrd-root-fs.target. Dec 13 14:29:49.582376 systemd[1]: Reached target integritysetup.target. Dec 13 14:29:49.582400 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 14:29:49.583506 systemd[1]: Reached target remote-fs.target. Dec 13 14:29:49.583549 systemd[1]: Reached target slices.target. Dec 13 14:29:49.583571 systemd[1]: Reached target swap.target. Dec 13 14:29:49.583591 systemd[1]: Reached target torcx.target. Dec 13 14:29:49.583617 systemd[1]: Reached target veritysetup.target. Dec 13 14:29:49.583648 systemd[1]: Listening on systemd-coredump.socket. 
Dec 13 14:29:49.583683 systemd[1]: Listening on systemd-initctl.socket. Dec 13 14:29:49.583706 systemd[1]: Listening on systemd-networkd.socket. Dec 13 14:29:49.583729 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 14:29:49.583751 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 14:29:49.583771 systemd[1]: Listening on systemd-userdbd.socket. Dec 13 14:29:49.583793 systemd[1]: Mounting dev-hugepages.mount... Dec 13 14:29:49.583815 systemd[1]: Mounting dev-mqueue.mount... Dec 13 14:29:49.583843 systemd[1]: Mounting media.mount... Dec 13 14:29:49.583927 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:29:49.583955 systemd[1]: Mounting sys-kernel-debug.mount... Dec 13 14:29:49.583978 systemd[1]: Mounting sys-kernel-tracing.mount... Dec 13 14:29:49.584000 systemd[1]: Mounting tmp.mount... Dec 13 14:29:49.584022 systemd[1]: Starting flatcar-tmpfiles.service... Dec 13 14:29:49.584045 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:29:49.584068 systemd[1]: Starting kmod-static-nodes.service... Dec 13 14:29:49.584091 systemd[1]: Starting modprobe@configfs.service... Dec 13 14:29:49.584113 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:29:49.584136 systemd[1]: Starting modprobe@drm.service... Dec 13 14:29:49.584163 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:29:49.584187 systemd[1]: Starting modprobe@fuse.service... Dec 13 14:29:49.584209 systemd[1]: Starting modprobe@loop.service... Dec 13 14:29:49.584231 kernel: fuse: init (API version 7.34) Dec 13 14:29:49.584256 kernel: loop: module loaded Dec 13 14:29:49.584279 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 14:29:49.584302 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 14:29:49.584323 systemd[1]: Stopped systemd-fsck-root.service. Dec 13 14:29:49.584346 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 14:29:49.584374 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 14:29:49.584398 systemd[1]: Stopped systemd-journald.service. Dec 13 14:29:49.584438 systemd[1]: Starting systemd-journald.service... Dec 13 14:29:49.584463 systemd[1]: Starting systemd-modules-load.service... Dec 13 14:29:49.584487 systemd[1]: Starting systemd-network-generator.service... Dec 13 14:29:49.584515 systemd-journald[1005]: Journal started Dec 13 14:29:49.584626 systemd-journald[1005]: Runtime Journal (/run/log/journal/7cfbc37b4f251c39fac92ace5eeadaa8) is 8.0M, max 148.8M, 140.8M free. 
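[Editor's note] The runtime journal noted above (/run/log/journal/7cfbc37b4f251c39fac92ace5eeadaa8) is flushed to persistent storage by systemd-journal-flush.service shortly after this point. A small sketch of reading these boot entries back out with the python-systemd bindings, assuming that package is installed; the unit name used as a filter is only an example.

# Sketch: read current-boot journal entries via python-systemd (assumes the
# 'systemd' Python bindings, packaged as python-systemd, are available).
from systemd import journal

reader = journal.Reader()
reader.this_boot()                                          # limit to this boot
reader.add_match(_SYSTEMD_UNIT="ignition-files.service")    # example filter only

for entry in reader:
    ts = entry.get("__REALTIME_TIMESTAMP")
    print(ts, entry.get("MESSAGE"))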
Dec 13 14:29:44.948000 audit: BPF prog-id=9 op=UNLOAD Dec 13 14:29:45.244000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 14:29:45.394000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 14:29:45.394000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 14:29:45.394000 audit: BPF prog-id=10 op=LOAD Dec 13 14:29:45.394000 audit: BPF prog-id=10 op=UNLOAD Dec 13 14:29:45.394000 audit: BPF prog-id=11 op=LOAD Dec 13 14:29:45.394000 audit: BPF prog-id=11 op=UNLOAD Dec 13 14:29:45.549000 audit[913]: AVC avc: denied { associate } for pid=913 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Dec 13 14:29:45.549000 audit[913]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001558b2 a1=c0000d8de0 a2=c0000e10c0 a3=32 items=0 ppid=896 pid=913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:29:45.549000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 14:29:45.559000 audit[913]: AVC avc: denied { associate } for pid=913 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Dec 13 14:29:45.559000 audit[913]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c000155989 a2=1ed a3=0 items=2 ppid=896 pid=913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:29:45.559000 audit: CWD cwd="/" Dec 13 14:29:45.559000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:45.559000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:45.559000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 14:29:48.705000 audit: BPF prog-id=12 op=LOAD Dec 13 14:29:48.705000 audit: BPF prog-id=3 op=UNLOAD Dec 13 14:29:48.711000 audit: BPF prog-id=13 op=LOAD Dec 13 14:29:48.718000 audit: BPF prog-id=14 op=LOAD Dec 13 14:29:48.718000 audit: BPF prog-id=4 op=UNLOAD Dec 13 14:29:48.718000 audit: BPF prog-id=5 op=UNLOAD Dec 13 14:29:48.725000 audit: BPF prog-id=15 op=LOAD Dec 13 14:29:48.725000 
audit: BPF prog-id=12 op=UNLOAD Dec 13 14:29:48.753000 audit: BPF prog-id=16 op=LOAD Dec 13 14:29:48.759000 audit: BPF prog-id=17 op=LOAD Dec 13 14:29:48.760000 audit: BPF prog-id=13 op=UNLOAD Dec 13 14:29:48.760000 audit: BPF prog-id=14 op=UNLOAD Dec 13 14:29:48.767000 audit: BPF prog-id=18 op=LOAD Dec 13 14:29:48.767000 audit: BPF prog-id=15 op=UNLOAD Dec 13 14:29:48.774000 audit: BPF prog-id=19 op=LOAD Dec 13 14:29:48.780000 audit: BPF prog-id=20 op=LOAD Dec 13 14:29:48.780000 audit: BPF prog-id=16 op=UNLOAD Dec 13 14:29:48.780000 audit: BPF prog-id=17 op=UNLOAD Dec 13 14:29:48.782000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:48.797000 audit: BPF prog-id=18 op=UNLOAD Dec 13 14:29:48.805000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:48.805000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:49.510000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:49.531000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:49.545000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:49.545000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:49.546000 audit: BPF prog-id=21 op=LOAD Dec 13 14:29:49.546000 audit: BPF prog-id=22 op=LOAD Dec 13 14:29:49.546000 audit: BPF prog-id=23 op=LOAD Dec 13 14:29:49.546000 audit: BPF prog-id=19 op=UNLOAD Dec 13 14:29:49.546000 audit: BPF prog-id=20 op=UNLOAD Dec 13 14:29:49.577000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 13 14:29:49.577000 audit[1005]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffe43a74ac0 a2=4000 a3=7ffe43a74b5c items=0 ppid=1 pid=1005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:29:49.577000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 13 14:29:48.705178 systemd[1]: Queued start job for default target multi-user.target. 
Dec 13 14:29:45.546755 /usr/lib/systemd/system-generators/torcx-generator[913]: time="2024-12-13T14:29:45Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:29:48.783232 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 13 14:29:45.547918 /usr/lib/systemd/system-generators/torcx-generator[913]: time="2024-12-13T14:29:45Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Dec 13 14:29:45.547954 /usr/lib/systemd/system-generators/torcx-generator[913]: time="2024-12-13T14:29:45Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Dec 13 14:29:45.548009 /usr/lib/systemd/system-generators/torcx-generator[913]: time="2024-12-13T14:29:45Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Dec 13 14:29:45.548028 /usr/lib/systemd/system-generators/torcx-generator[913]: time="2024-12-13T14:29:45Z" level=debug msg="skipped missing lower profile" missing profile=oem Dec 13 14:29:45.548096 /usr/lib/systemd/system-generators/torcx-generator[913]: time="2024-12-13T14:29:45Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Dec 13 14:29:45.548121 /usr/lib/systemd/system-generators/torcx-generator[913]: time="2024-12-13T14:29:45Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Dec 13 14:29:45.548464 /usr/lib/systemd/system-generators/torcx-generator[913]: time="2024-12-13T14:29:45Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Dec 13 14:29:45.548545 /usr/lib/systemd/system-generators/torcx-generator[913]: time="2024-12-13T14:29:45Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Dec 13 14:29:45.548570 /usr/lib/systemd/system-generators/torcx-generator[913]: time="2024-12-13T14:29:45Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Dec 13 14:29:45.549761 /usr/lib/systemd/system-generators/torcx-generator[913]: time="2024-12-13T14:29:45Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Dec 13 14:29:45.549809 /usr/lib/systemd/system-generators/torcx-generator[913]: time="2024-12-13T14:29:45Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Dec 13 14:29:45.549834 /usr/lib/systemd/system-generators/torcx-generator[913]: time="2024-12-13T14:29:45Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.6: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.6 Dec 13 14:29:45.549852 /usr/lib/systemd/system-generators/torcx-generator[913]: time="2024-12-13T14:29:45Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Dec 13 14:29:45.549873 /usr/lib/systemd/system-generators/torcx-generator[913]: time="2024-12-13T14:29:45Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.6: no such file or directory" path=/var/lib/torcx/store/3510.3.6 Dec 13 14:29:45.549893 
/usr/lib/systemd/system-generators/torcx-generator[913]: time="2024-12-13T14:29:45Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Dec 13 14:29:48.085790 /usr/lib/systemd/system-generators/torcx-generator[913]: time="2024-12-13T14:29:48Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 14:29:48.086096 /usr/lib/systemd/system-generators/torcx-generator[913]: time="2024-12-13T14:29:48Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 14:29:48.086230 /usr/lib/systemd/system-generators/torcx-generator[913]: time="2024-12-13T14:29:48Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 14:29:48.086496 /usr/lib/systemd/system-generators/torcx-generator[913]: time="2024-12-13T14:29:48Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 14:29:48.086561 /usr/lib/systemd/system-generators/torcx-generator[913]: time="2024-12-13T14:29:48Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Dec 13 14:29:48.086636 /usr/lib/systemd/system-generators/torcx-generator[913]: time="2024-12-13T14:29:48Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Dec 13 14:29:49.608467 systemd[1]: Starting systemd-remount-fs.service... Dec 13 14:29:49.622457 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 14:29:49.636440 systemd[1]: verity-setup.service: Deactivated successfully. Dec 13 14:29:49.642780 systemd[1]: Stopped verity-setup.service. Dec 13 14:29:49.647000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:49.661444 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:29:49.670449 systemd[1]: Started systemd-journald.service. Dec 13 14:29:49.677000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:49.679924 systemd[1]: Mounted dev-hugepages.mount. Dec 13 14:29:49.687762 systemd[1]: Mounted dev-mqueue.mount. Dec 13 14:29:49.694744 systemd[1]: Mounted media.mount. Dec 13 14:29:49.701793 systemd[1]: Mounted sys-kernel-debug.mount. Dec 13 14:29:49.710753 systemd[1]: Mounted sys-kernel-tracing.mount. Dec 13 14:29:49.719779 systemd[1]: Mounted tmp.mount. Dec 13 14:29:49.726936 systemd[1]: Finished flatcar-tmpfiles.service. 
Dec 13 14:29:49.734000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:49.736005 systemd[1]: Finished kmod-static-nodes.service. Dec 13 14:29:49.743000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:49.745005 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 14:29:49.745229 systemd[1]: Finished modprobe@configfs.service. Dec 13 14:29:49.752000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:49.752000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:49.754048 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:29:49.754255 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:29:49.761000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:49.761000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:49.763016 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 14:29:49.763241 systemd[1]: Finished modprobe@drm.service. Dec 13 14:29:49.770000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:49.770000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:49.772011 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:29:49.772223 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:29:49.779000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:49.779000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:49.781024 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 14:29:49.781231 systemd[1]: Finished modprobe@fuse.service. Dec 13 14:29:49.788000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:29:49.788000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:49.789985 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:29:49.790195 systemd[1]: Finished modprobe@loop.service. Dec 13 14:29:49.797000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:49.797000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:49.798994 systemd[1]: Finished systemd-modules-load.service. Dec 13 14:29:49.806000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:49.807994 systemd[1]: Finished systemd-network-generator.service. Dec 13 14:29:49.816000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:49.818054 systemd[1]: Finished systemd-remount-fs.service. Dec 13 14:29:49.825000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:49.827031 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 14:29:49.834000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:49.836354 systemd[1]: Reached target network-pre.target. Dec 13 14:29:49.846191 systemd[1]: Mounting sys-fs-fuse-connections.mount... Dec 13 14:29:49.857091 systemd[1]: Mounting sys-kernel-config.mount... Dec 13 14:29:49.864594 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 14:29:49.868275 systemd[1]: Starting systemd-hwdb-update.service... Dec 13 14:29:49.877313 systemd[1]: Starting systemd-journal-flush.service... Dec 13 14:29:49.884704 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:29:49.886452 systemd[1]: Starting systemd-random-seed.service... Dec 13 14:29:49.888551 systemd-journald[1005]: Time spent on flushing to /var/log/journal/7cfbc37b4f251c39fac92ace5eeadaa8 is 59.149ms for 1144 entries. Dec 13 14:29:49.888551 systemd-journald[1005]: System Journal (/var/log/journal/7cfbc37b4f251c39fac92ace5eeadaa8) is 8.0M, max 584.8M, 576.8M free. Dec 13 14:29:49.979330 systemd-journald[1005]: Received client request to flush runtime journal. Dec 13 14:29:49.955000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:29:49.964000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:49.901631 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:29:49.903570 systemd[1]: Starting systemd-sysctl.service... Dec 13 14:29:49.912491 systemd[1]: Starting systemd-sysusers.service... Dec 13 14:29:49.921324 systemd[1]: Starting systemd-udev-settle.service... Dec 13 14:29:49.981465 udevadm[1019]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Dec 13 14:29:49.932014 systemd[1]: Mounted sys-fs-fuse-connections.mount. Dec 13 14:29:49.941745 systemd[1]: Mounted sys-kernel-config.mount. Dec 13 14:29:49.948382 systemd[1]: Finished systemd-random-seed.service. Dec 13 14:29:49.956968 systemd[1]: Finished systemd-sysctl.service. Dec 13 14:29:49.969277 systemd[1]: Reached target first-boot-complete.target. Dec 13 14:29:49.980647 systemd[1]: Finished systemd-journal-flush.service. Dec 13 14:29:49.988000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:49.990157 systemd[1]: Finished systemd-sysusers.service. Dec 13 14:29:49.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:50.589233 systemd[1]: Finished systemd-hwdb-update.service. Dec 13 14:29:50.596000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:50.597000 audit: BPF prog-id=24 op=LOAD Dec 13 14:29:50.597000 audit: BPF prog-id=25 op=LOAD Dec 13 14:29:50.597000 audit: BPF prog-id=7 op=UNLOAD Dec 13 14:29:50.597000 audit: BPF prog-id=8 op=UNLOAD Dec 13 14:29:50.599646 systemd[1]: Starting systemd-udevd.service... Dec 13 14:29:50.623710 systemd-udevd[1022]: Using default interface naming scheme 'v252'. Dec 13 14:29:50.677778 systemd[1]: Started systemd-udevd.service. Dec 13 14:29:50.685000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:50.688000 audit: BPF prog-id=26 op=LOAD Dec 13 14:29:50.691073 systemd[1]: Starting systemd-networkd.service... Dec 13 14:29:50.703000 audit: BPF prog-id=27 op=LOAD Dec 13 14:29:50.703000 audit: BPF prog-id=28 op=LOAD Dec 13 14:29:50.703000 audit: BPF prog-id=29 op=LOAD Dec 13 14:29:50.706170 systemd[1]: Starting systemd-userdbd.service... Dec 13 14:29:50.751885 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Dec 13 14:29:50.787191 systemd[1]: Started systemd-userdbd.service. Dec 13 14:29:50.794000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:29:50.933462 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1028) Dec 13 14:29:50.937611 systemd-networkd[1037]: lo: Link UP Dec 13 14:29:50.937628 systemd-networkd[1037]: lo: Gained carrier Dec 13 14:29:50.944442 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Dec 13 14:29:50.938371 systemd-networkd[1037]: Enumeration completed Dec 13 14:29:50.938527 systemd[1]: Started systemd-networkd.service. Dec 13 14:29:50.938705 systemd-networkd[1037]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 14:29:50.940783 systemd-networkd[1037]: eth0: Link UP Dec 13 14:29:50.940791 systemd-networkd[1037]: eth0: Gained carrier Dec 13 14:29:50.952603 systemd-networkd[1037]: eth0: DHCPv4 address 10.128.0.81/32, gateway 10.128.0.1 acquired from 169.254.169.254 Dec 13 14:29:50.952000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:50.952000 audit[1023]: AVC avc: denied { confidentiality } for pid=1023 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Dec 13 14:29:50.952000 audit[1023]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=559a6366c410 a1=337fc a2=7fe53a38ebc5 a3=5 items=110 ppid=1022 pid=1023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:29:50.952000 audit: CWD cwd="/" Dec 13 14:29:50.952000 audit: PATH item=0 name=(null) inode=1042 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=1 name=(null) inode=14148 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=2 name=(null) inode=14148 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=3 name=(null) inode=14149 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=4 name=(null) inode=14148 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=5 name=(null) inode=14150 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=6 name=(null) inode=14148 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=7 name=(null) inode=14151 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=8 name=(null) 
inode=14151 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=9 name=(null) inode=14152 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=10 name=(null) inode=14151 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=11 name=(null) inode=14153 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=12 name=(null) inode=14151 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=13 name=(null) inode=14154 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=14 name=(null) inode=14151 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=15 name=(null) inode=14155 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=16 name=(null) inode=14151 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=17 name=(null) inode=14156 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=18 name=(null) inode=14148 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=19 name=(null) inode=14157 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=20 name=(null) inode=14157 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=21 name=(null) inode=14158 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=22 name=(null) inode=14157 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=23 name=(null) inode=14159 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=24 name=(null) inode=14157 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=25 name=(null) inode=14160 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=26 name=(null) inode=14157 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=27 name=(null) inode=14161 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=28 name=(null) inode=14157 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=29 name=(null) inode=14162 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=30 name=(null) inode=14148 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=31 name=(null) inode=14163 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=32 name=(null) inode=14163 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=33 name=(null) inode=14164 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=34 name=(null) inode=14163 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=35 name=(null) inode=14165 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=36 name=(null) inode=14163 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=37 name=(null) inode=14166 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=38 name=(null) inode=14163 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=39 name=(null) inode=14167 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=40 name=(null) inode=14163 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=41 name=(null) inode=14168 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=42 name=(null) inode=14148 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=43 name=(null) inode=14169 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=44 name=(null) inode=14169 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=45 name=(null) inode=14170 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=46 name=(null) inode=14169 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=47 name=(null) inode=14171 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=48 name=(null) inode=14169 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=49 name=(null) inode=14172 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=50 name=(null) inode=14169 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=51 name=(null) inode=14173 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=52 name=(null) inode=14169 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=53 name=(null) inode=14174 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:51.013459 kernel: ACPI: button: Power Button [PWRF] Dec 13 14:29:50.952000 audit: PATH item=54 name=(null) inode=1042 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=55 name=(null) inode=14175 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=56 name=(null) inode=14175 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=57 name=(null) inode=14176 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=58 name=(null) inode=14175 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=59 name=(null) inode=14177 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=60 name=(null) inode=14175 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=61 name=(null) inode=14178 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=62 name=(null) inode=14178 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=63 name=(null) inode=14179 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=64 name=(null) inode=14178 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=65 name=(null) inode=14180 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=66 name=(null) inode=14178 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=67 name=(null) inode=14181 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=68 name=(null) inode=14178 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=69 name=(null) inode=14182 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=70 name=(null) inode=14178 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=71 name=(null) inode=14183 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=72 name=(null) inode=14175 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=73 
name=(null) inode=14184 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=74 name=(null) inode=14184 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=75 name=(null) inode=14185 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=76 name=(null) inode=14184 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=77 name=(null) inode=14186 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=78 name=(null) inode=14184 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=79 name=(null) inode=14187 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=80 name=(null) inode=14184 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=81 name=(null) inode=14188 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=82 name=(null) inode=14184 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=83 name=(null) inode=14189 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=84 name=(null) inode=14175 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=85 name=(null) inode=14190 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=86 name=(null) inode=14190 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=87 name=(null) inode=14191 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=88 name=(null) inode=14190 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=89 name=(null) inode=14192 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=90 name=(null) inode=14190 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=91 name=(null) inode=14193 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=92 name=(null) inode=14190 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=93 name=(null) inode=14194 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=94 name=(null) inode=14190 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=95 name=(null) inode=14195 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=96 name=(null) inode=14175 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=97 name=(null) inode=14196 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=98 name=(null) inode=14196 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=99 name=(null) inode=14197 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=100 name=(null) inode=14196 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=101 name=(null) inode=14198 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=102 name=(null) inode=14196 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=103 name=(null) inode=14199 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=104 name=(null) inode=14196 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=105 name=(null) inode=14200 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=106 name=(null) inode=14196 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=107 name=(null) inode=14201 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PATH item=109 name=(null) inode=14202 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:29:50.952000 audit: PROCTITLE proctitle="(udev-worker)" Dec 13 14:29:51.032449 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Dec 13 14:29:51.043447 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3 Dec 13 14:29:51.043495 kernel: ACPI: button: Sleep Button [SLPF] Dec 13 14:29:51.068401 kernel: EDAC MC: Ver: 3.0.0 Dec 13 14:29:51.068550 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Dec 13 14:29:51.084465 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 14:29:51.091887 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 14:29:51.108133 systemd[1]: Finished systemd-udev-settle.service. Dec 13 14:29:51.115000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:51.118316 systemd[1]: Starting lvm2-activation-early.service... Dec 13 14:29:51.147591 lvm[1059]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 14:29:51.177850 systemd[1]: Finished lvm2-activation-early.service. Dec 13 14:29:51.186000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:51.187793 systemd[1]: Reached target cryptsetup.target. Dec 13 14:29:51.198112 systemd[1]: Starting lvm2-activation.service... Dec 13 14:29:51.204191 lvm[1060]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 14:29:51.237970 systemd[1]: Finished lvm2-activation.service. Dec 13 14:29:51.245000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:51.246799 systemd[1]: Reached target local-fs-pre.target. Dec 13 14:29:51.255597 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 14:29:51.255659 systemd[1]: Reached target local-fs.target. Dec 13 14:29:51.264584 systemd[1]: Reached target machines.target. Dec 13 14:29:51.274172 systemd[1]: Starting ldconfig.service... Dec 13 14:29:51.282682 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Dec 13 14:29:51.282779 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:29:51.284547 systemd[1]: Starting systemd-boot-update.service... Dec 13 14:29:51.293196 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Dec 13 14:29:51.304198 systemd[1]: Starting systemd-machine-id-commit.service... Dec 13 14:29:51.306565 systemd[1]: Starting systemd-sysext.service... Dec 13 14:29:51.307411 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1062 (bootctl) Dec 13 14:29:51.310715 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Dec 13 14:29:51.330000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:51.331992 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Dec 13 14:29:51.337166 systemd[1]: Unmounting usr-share-oem.mount... Dec 13 14:29:51.345025 systemd[1]: usr-share-oem.mount: Deactivated successfully. Dec 13 14:29:51.345546 systemd[1]: Unmounted usr-share-oem.mount. Dec 13 14:29:51.366487 kernel: loop0: detected capacity change from 0 to 210664 Dec 13 14:29:51.471647 systemd-fsck[1073]: fsck.fat 4.2 (2021-01-31) Dec 13 14:29:51.471647 systemd-fsck[1073]: /dev/sda1: 789 files, 119291/258078 clusters Dec 13 14:29:51.475736 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Dec 13 14:29:51.485000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:51.488825 systemd[1]: Mounting boot.mount... Dec 13 14:29:51.526981 systemd[1]: Mounted boot.mount. Dec 13 14:29:51.550381 systemd[1]: Finished systemd-boot-update.service. Dec 13 14:29:51.558000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:51.692220 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 14:29:51.693186 systemd[1]: Finished systemd-machine-id-commit.service. Dec 13 14:29:51.701000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:51.719729 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 14:29:51.746465 kernel: loop1: detected capacity change from 0 to 210664 Dec 13 14:29:51.769165 (sd-sysext)[1077]: Using extensions 'kubernetes'. Dec 13 14:29:51.770496 (sd-sysext)[1077]: Merged extensions into '/usr'. Dec 13 14:29:51.794921 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:29:51.797562 systemd[1]: Mounting usr-share-oem.mount... Dec 13 14:29:51.804987 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:29:51.807645 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:29:51.816369 systemd[1]: Starting modprobe@efi_pstore.service... 
Dec 13 14:29:51.825811 systemd[1]: Starting modprobe@loop.service... Dec 13 14:29:51.832661 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:29:51.832909 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:29:51.833111 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:29:51.837507 systemd[1]: Mounted usr-share-oem.mount. Dec 13 14:29:51.845598 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:29:51.845865 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:29:51.853000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:51.853000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:51.855328 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:29:51.855624 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:29:51.862000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:51.862000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:51.864405 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:29:51.864717 systemd[1]: Finished modprobe@loop.service. Dec 13 14:29:51.872000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:51.872000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:51.874460 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:29:51.874677 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:29:51.876260 systemd[1]: Finished systemd-sysext.service. Dec 13 14:29:51.883000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:51.886525 systemd[1]: Starting ensure-sysext.service... Dec 13 14:29:51.896290 systemd[1]: Starting systemd-tmpfiles-setup.service... Dec 13 14:29:51.908976 systemd[1]: Reloading. Dec 13 14:29:51.941612 systemd-tmpfiles[1084]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Dec 13 14:29:51.952048 systemd-tmpfiles[1084]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
Dec 13 14:29:51.969025 systemd-tmpfiles[1084]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 14:29:52.011619 ldconfig[1061]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 14:29:52.047105 /usr/lib/systemd/system-generators/torcx-generator[1104]: time="2024-12-13T14:29:52Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:29:52.048374 /usr/lib/systemd/system-generators/torcx-generator[1104]: time="2024-12-13T14:29:52Z" level=info msg="torcx already run" Dec 13 14:29:52.186344 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:29:52.186379 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:29:52.227390 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:29:52.315000 audit: BPF prog-id=30 op=LOAD Dec 13 14:29:52.315000 audit: BPF prog-id=31 op=LOAD Dec 13 14:29:52.315000 audit: BPF prog-id=24 op=UNLOAD Dec 13 14:29:52.315000 audit: BPF prog-id=25 op=UNLOAD Dec 13 14:29:52.317000 audit: BPF prog-id=32 op=LOAD Dec 13 14:29:52.317000 audit: BPF prog-id=26 op=UNLOAD Dec 13 14:29:52.318000 audit: BPF prog-id=33 op=LOAD Dec 13 14:29:52.318000 audit: BPF prog-id=21 op=UNLOAD Dec 13 14:29:52.319000 audit: BPF prog-id=34 op=LOAD Dec 13 14:29:52.319000 audit: BPF prog-id=35 op=LOAD Dec 13 14:29:52.319000 audit: BPF prog-id=22 op=UNLOAD Dec 13 14:29:52.319000 audit: BPF prog-id=23 op=UNLOAD Dec 13 14:29:52.320000 audit: BPF prog-id=36 op=LOAD Dec 13 14:29:52.320000 audit: BPF prog-id=27 op=UNLOAD Dec 13 14:29:52.320000 audit: BPF prog-id=37 op=LOAD Dec 13 14:29:52.320000 audit: BPF prog-id=38 op=LOAD Dec 13 14:29:52.320000 audit: BPF prog-id=28 op=UNLOAD Dec 13 14:29:52.320000 audit: BPF prog-id=29 op=UNLOAD Dec 13 14:29:52.325290 systemd[1]: Finished ldconfig.service. Dec 13 14:29:52.331000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:52.334333 systemd[1]: Finished systemd-tmpfiles-setup.service. Dec 13 14:29:52.342000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:52.351139 systemd[1]: Starting audit-rules.service... Dec 13 14:29:52.360630 systemd[1]: Starting clean-ca-certificates.service... Dec 13 14:29:52.370050 systemd[1]: Starting oem-gce-enable-oslogin.service... Dec 13 14:29:52.380972 systemd[1]: Starting systemd-journal-catalog-update.service... Dec 13 14:29:52.389000 audit: BPF prog-id=39 op=LOAD Dec 13 14:29:52.393104 systemd[1]: Starting systemd-resolved.service... Dec 13 14:29:52.400000 audit: BPF prog-id=40 op=LOAD Dec 13 14:29:52.403792 systemd[1]: Starting systemd-timesyncd.service... 
Dec 13 14:29:52.412862 systemd[1]: Starting systemd-update-utmp.service... Dec 13 14:29:52.421680 systemd[1]: Finished clean-ca-certificates.service. Dec 13 14:29:52.421000 audit[1175]: SYSTEM_BOOT pid=1175 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 13 14:29:52.429000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:52.431444 systemd[1]: oem-gce-enable-oslogin.service: Deactivated successfully. Dec 13 14:29:52.431715 systemd[1]: Finished oem-gce-enable-oslogin.service. Dec 13 14:29:52.436000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 13 14:29:52.436000 audit[1178]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc3731f4e0 a2=420 a3=0 items=0 ppid=1148 pid=1178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:29:52.436000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 13 14:29:52.438250 augenrules[1178]: No rules Dec 13 14:29:52.441367 systemd[1]: Finished audit-rules.service. Dec 13 14:29:52.449113 systemd[1]: Finished systemd-journal-catalog-update.service. Dec 13 14:29:52.464469 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:29:52.465009 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:29:52.467634 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:29:52.476819 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:29:52.485785 systemd[1]: Starting modprobe@loop.service... Dec 13 14:29:52.494670 systemd[1]: Starting oem-gce-enable-oslogin.service... Dec 13 14:29:52.502229 enable-oslogin[1186]: /etc/pam.d/sshd already exists. Not enabling OS Login Dec 13 14:29:52.503640 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:29:52.503985 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:29:52.506233 systemd[1]: Starting systemd-update-done.service... Dec 13 14:29:52.513559 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 14:29:52.513907 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:29:52.517453 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:29:52.517707 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:29:52.527847 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:29:52.528059 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:29:52.536159 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:29:52.536366 systemd[1]: Finished modprobe@loop.service. 
Dec 13 14:29:52.545570 systemd[1]: oem-gce-enable-oslogin.service: Deactivated successfully. Dec 13 14:29:52.545833 systemd[1]: Finished oem-gce-enable-oslogin.service. Dec 13 14:29:52.555587 systemd[1]: Finished systemd-update-done.service. Dec 13 14:29:52.567518 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:29:52.567964 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:29:52.572193 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:29:52.581600 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:29:52.590938 systemd[1]: Starting modprobe@loop.service... Dec 13 14:29:52.595715 systemd-resolved[1167]: Positive Trust Anchors: Dec 13 14:29:52.596121 systemd-resolved[1167]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 14:29:52.596287 systemd-resolved[1167]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 14:29:52.599605 systemd[1]: Starting oem-gce-enable-oslogin.service... Dec 13 14:29:52.605219 enable-oslogin[1191]: /etc/pam.d/sshd already exists. Not enabling OS Login Dec 13 14:29:52.608621 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:29:52.608876 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:29:52.609062 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 14:29:52.609209 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:29:52.612692 systemd[1]: Finished systemd-update-utmp.service. Dec 13 14:29:52.615414 systemd-resolved[1167]: Defaulting to hostname 'linux'. Dec 13 14:29:52.622062 systemd[1]: Started systemd-resolved.service. Dec 13 14:29:52.631247 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:29:52.631493 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:29:52.640227 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:29:52.640440 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:29:52.649343 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:29:52.649587 systemd[1]: Finished modprobe@loop.service. Dec 13 14:29:52.655567 systemd-timesyncd[1172]: Contacted time server 169.254.169.254:123 (169.254.169.254). Dec 13 14:29:52.655645 systemd-timesyncd[1172]: Initial clock synchronization to Fri 2024-12-13 14:29:52.906850 UTC. Dec 13 14:29:52.658944 systemd[1]: Started systemd-timesyncd.service. Dec 13 14:29:52.668695 systemd[1]: oem-gce-enable-oslogin.service: Deactivated successfully. Dec 13 14:29:52.668932 systemd[1]: Finished oem-gce-enable-oslogin.service. Dec 13 14:29:52.682870 systemd[1]: Reached target network.target. Dec 13 14:29:52.691756 systemd[1]: Reached target nss-lookup.target. 
Dec 13 14:29:52.700755 systemd[1]: Reached target time-set.target. Dec 13 14:29:52.709777 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:29:52.710261 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:29:52.712313 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:29:52.721586 systemd[1]: Starting modprobe@drm.service... Dec 13 14:29:52.730740 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:29:52.739295 systemd[1]: Starting modprobe@loop.service... Dec 13 14:29:52.748350 systemd[1]: Starting oem-gce-enable-oslogin.service... Dec 13 14:29:52.752867 enable-oslogin[1197]: /etc/pam.d/sshd already exists. Not enabling OS Login Dec 13 14:29:52.756664 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:29:52.756934 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:29:52.758793 systemd[1]: Starting systemd-networkd-wait-online.service... Dec 13 14:29:52.767602 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 14:29:52.767826 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:29:52.770375 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:29:52.770620 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:29:52.779192 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 14:29:52.779414 systemd[1]: Finished modprobe@drm.service. Dec 13 14:29:52.788180 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:29:52.788410 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:29:52.797176 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:29:52.797440 systemd[1]: Finished modprobe@loop.service. Dec 13 14:29:52.806149 systemd[1]: oem-gce-enable-oslogin.service: Deactivated successfully. Dec 13 14:29:52.806436 systemd[1]: Finished oem-gce-enable-oslogin.service. Dec 13 14:29:52.815569 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:29:52.815747 systemd[1]: Reached target sysinit.target. Dec 13 14:29:52.824716 systemd[1]: Started motdgen.path. Dec 13 14:29:52.831628 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Dec 13 14:29:52.841831 systemd[1]: Started logrotate.timer. Dec 13 14:29:52.848733 systemd[1]: Started mdadm.timer. Dec 13 14:29:52.855617 systemd[1]: Started systemd-tmpfiles-clean.timer. Dec 13 14:29:52.864590 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 14:29:52.864643 systemd[1]: Reached target paths.target. Dec 13 14:29:52.871593 systemd[1]: Reached target timers.target. Dec 13 14:29:52.879250 systemd[1]: Listening on dbus.socket. Dec 13 14:29:52.888093 systemd[1]: Starting docker.socket... Dec 13 14:29:52.899568 systemd[1]: Listening on sshd.socket. Dec 13 14:29:52.906742 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Dec 13 14:29:52.906839 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:29:52.907889 systemd[1]: Finished ensure-sysext.service. Dec 13 14:29:52.913626 systemd-networkd[1037]: eth0: Gained IPv6LL Dec 13 14:29:52.917031 systemd[1]: Finished systemd-networkd-wait-online.service. Dec 13 14:29:52.926829 systemd[1]: Listening on docker.socket. Dec 13 14:29:52.934714 systemd[1]: Reached target network-online.target. Dec 13 14:29:52.943568 systemd[1]: Reached target sockets.target. Dec 13 14:29:52.951600 systemd[1]: Reached target basic.target. Dec 13 14:29:52.958656 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 14:29:52.958705 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 14:29:52.960377 systemd[1]: Starting containerd.service... Dec 13 14:29:52.969141 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Dec 13 14:29:52.979471 systemd[1]: Starting dbus.service... Dec 13 14:29:52.990034 systemd[1]: Starting enable-oem-cloudinit.service... Dec 13 14:29:53.002416 systemd[1]: Starting extend-filesystems.service... Dec 13 14:29:53.003718 jq[1204]: false Dec 13 14:29:53.009621 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Dec 13 14:29:53.012549 systemd[1]: Starting kubelet.service... Dec 13 14:29:53.022807 systemd[1]: Starting motdgen.service... Dec 13 14:29:53.031622 systemd[1]: Starting oem-gce.service... Dec 13 14:29:53.040683 systemd[1]: Starting ssh-key-proc-cmdline.service... Dec 13 14:29:53.051058 systemd[1]: Starting sshd-keygen.service... Dec 13 14:29:53.063997 systemd[1]: Starting systemd-logind.service... Dec 13 14:29:53.071658 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:29:53.071778 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionSecurity=!tpm2). Dec 13 14:29:53.072569 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 14:29:53.073853 systemd[1]: Starting update-engine.service... Dec 13 14:29:53.082585 systemd[1]: Starting update-ssh-keys-after-ignition.service... Dec 13 14:29:53.092375 extend-filesystems[1205]: Found loop1 Dec 13 14:29:53.092375 extend-filesystems[1205]: Found sda Dec 13 14:29:53.092375 extend-filesystems[1205]: Found sda1 Dec 13 14:29:53.092375 extend-filesystems[1205]: Found sda2 Dec 13 14:29:53.092375 extend-filesystems[1205]: Found sda3 Dec 13 14:29:53.092375 extend-filesystems[1205]: Found usr Dec 13 14:29:53.092375 extend-filesystems[1205]: Found sda4 Dec 13 14:29:53.092375 extend-filesystems[1205]: Found sda6 Dec 13 14:29:53.092375 extend-filesystems[1205]: Found sda7 Dec 13 14:29:53.092375 extend-filesystems[1205]: Found sda9 Dec 13 14:29:53.092375 extend-filesystems[1205]: Checking size of /dev/sda9 Dec 13 14:29:53.283907 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks Dec 13 14:29:53.283984 kernel: loop2: detected capacity change from 0 to 2097152 Dec 13 14:29:53.095355 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
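The extend-filesystems.service entries here check /dev/sda9, and the kernel reports the root ext4 filesystem growing from 1617920 to 2538491 blocks; the resize2fs output a few lines below confirms the on-line resize. A minimal sketch of the same operation done by hand, assuming the underlying partition has already been enlarged (which Flatcar handles earlier in boot):

    # Grow a mounted ext4 filesystem to fill its partition, as
    # extend-filesystems.service does for /dev/sda9 here
    resize2fs /dev/sda9

    # Confirm the new size in filesystem blocks
    dumpe2fs -h /dev/sda9 | grep 'Block count'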
Dec 13 14:29:53.160750 dbus-daemon[1203]: [system] SELinux support is enabled Dec 13 14:29:53.284649 jq[1223]: true Dec 13 14:29:53.284814 extend-filesystems[1205]: Resized partition /dev/sda9 Dec 13 14:29:53.095669 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Dec 13 14:29:53.186797 dbus-daemon[1203]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1037 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Dec 13 14:29:53.293017 extend-filesystems[1245]: resize2fs 1.46.5 (30-Dec-2021) Dec 13 14:29:53.099313 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 14:29:53.215389 dbus-daemon[1203]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 13 14:29:53.100131 systemd[1]: Finished ssh-key-proc-cmdline.service. Dec 13 14:29:53.148615 systemd[1]: Created slice system-sshd.slice. Dec 13 14:29:53.311757 mkfs.ext4[1234]: mke2fs 1.46.5 (30-Dec-2021) Dec 13 14:29:53.311757 mkfs.ext4[1234]: Discarding device blocks: 0/262144\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008 \u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008done Dec 13 14:29:53.311757 mkfs.ext4[1234]: Creating filesystem with 262144 4k blocks and 65536 inodes Dec 13 14:29:53.311757 mkfs.ext4[1234]: Filesystem UUID: 5779f325-a968-426e-8634-471f2919d114 Dec 13 14:29:53.311757 mkfs.ext4[1234]: Superblock backups stored on blocks: Dec 13 14:29:53.311757 mkfs.ext4[1234]: 32768, 98304, 163840, 229376 Dec 13 14:29:53.311757 mkfs.ext4[1234]: Allocating group tables: 0/8\u0008\u0008\u0008 \u0008\u0008\u0008done Dec 13 14:29:53.311757 mkfs.ext4[1234]: Writing inode tables: 0/8\u0008\u0008\u0008 \u0008\u0008\u0008done Dec 13 14:29:53.311757 mkfs.ext4[1234]: Creating journal (8192 blocks): done Dec 13 14:29:53.311757 mkfs.ext4[1234]: Writing superblocks and filesystem accounting information: 0/8\u0008\u0008\u0008 \u0008\u0008\u0008done Dec 13 14:29:53.161055 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 14:29:53.161338 systemd[1]: Finished motdgen.service. Dec 13 14:29:53.317079 jq[1233]: true Dec 13 14:29:53.168864 systemd[1]: Started dbus.service. Dec 13 14:29:53.180731 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 14:29:53.180819 systemd[1]: Reached target system-config.target. Dec 13 14:29:53.189702 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 14:29:53.189736 systemd[1]: Reached target user-config.target. Dec 13 14:29:53.318231 umount[1248]: umount: /var/lib/flatcar-oem-gce.img: not mounted. Dec 13 14:29:53.225782 systemd[1]: Starting systemd-hostnamed.service... Dec 13 14:29:53.325480 kernel: EXT4-fs (loop2): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Dec 13 14:29:53.329504 update_engine[1221]: I1213 14:29:53.325860 1221 main.cc:92] Flatcar Update Engine starting Dec 13 14:29:53.332516 systemd[1]: Started update-engine.service. Dec 13 14:29:53.334745 update_engine[1221]: I1213 14:29:53.332898 1221 update_check_scheduler.cc:74] Next update check in 4m6s Dec 13 14:29:53.344641 systemd[1]: Started locksmithd.service. 
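The mkfs.ext4[1234] output above formats the OEM backing file (/var/lib/flatcar-oem-gce.img, attached as loop2) with 262144 4k blocks, i.e. 1 GiB. A rough hand-run equivalent; the exact options the unit passes are not visible in the log, so defaults are assumed:

    # Create and format a 1 GiB sparse image like the oem-gce backing file
    truncate -s 1G /var/lib/flatcar-oem-gce.img
    mkfs.ext4 -F /var/lib/flatcar-oem-gce.img   # -F: target is a regular file

    # Attach it to a loop device and mount it for inspection
    LOOPDEV=$(losetup --find --show /var/lib/flatcar-oem-gce.img)
    mount "$LOOPDEV" /mnt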
Dec 13 14:29:53.351477 kernel: EXT4-fs (sda9): resized filesystem to 2538491 Dec 13 14:29:53.366156 extend-filesystems[1245]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Dec 13 14:29:53.366156 extend-filesystems[1245]: old_desc_blocks = 1, new_desc_blocks = 2 Dec 13 14:29:53.366156 extend-filesystems[1245]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long. Dec 13 14:29:53.365634 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 14:29:53.424777 env[1232]: time="2024-12-13T14:29:53.407237493Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Dec 13 14:29:53.425080 extend-filesystems[1205]: Resized filesystem in /dev/sda9 Dec 13 14:29:53.365918 systemd[1]: Finished extend-filesystems.service. Dec 13 14:29:53.471083 bash[1272]: Updated "/home/core/.ssh/authorized_keys" Dec 13 14:29:53.472305 systemd[1]: Finished update-ssh-keys-after-ignition.service. Dec 13 14:29:53.494630 dbus-daemon[1203]: [system] Successfully activated service 'org.freedesktop.hostname1' Dec 13 14:29:53.494865 systemd[1]: Started systemd-hostnamed.service. Dec 13 14:29:53.495528 dbus-daemon[1203]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1249 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Dec 13 14:29:53.507699 systemd[1]: Starting polkit.service... Dec 13 14:29:53.546078 systemd-logind[1218]: Watching system buttons on /dev/input/event1 (Power Button) Dec 13 14:29:53.546140 systemd-logind[1218]: Watching system buttons on /dev/input/event2 (Sleep Button) Dec 13 14:29:53.546174 systemd-logind[1218]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 14:29:53.555713 systemd-logind[1218]: New seat seat0. Dec 13 14:29:53.567834 systemd[1]: Started systemd-logind.service. Dec 13 14:29:53.597704 coreos-metadata[1202]: Dec 13 14:29:53.597 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Dec 13 14:29:53.599353 env[1232]: time="2024-12-13T14:29:53.599297293Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 14:29:53.599701 env[1232]: time="2024-12-13T14:29:53.599673044Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:29:53.607601 env[1232]: time="2024-12-13T14:29:53.603871694Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 14:29:53.607601 env[1232]: time="2024-12-13T14:29:53.603938989Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:29:53.607601 env[1232]: time="2024-12-13T14:29:53.604347442Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 14:29:53.607601 env[1232]: time="2024-12-13T14:29:53.604403135Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Dec 13 14:29:53.607601 env[1232]: time="2024-12-13T14:29:53.604454960Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Dec 13 14:29:53.607601 env[1232]: time="2024-12-13T14:29:53.604475845Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 14:29:53.607601 env[1232]: time="2024-12-13T14:29:53.604649872Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:29:53.607601 env[1232]: time="2024-12-13T14:29:53.605131569Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:29:53.607601 env[1232]: time="2024-12-13T14:29:53.605408057Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 14:29:53.607601 env[1232]: time="2024-12-13T14:29:53.605458831Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 14:29:53.608620 env[1232]: time="2024-12-13T14:29:53.605579069Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Dec 13 14:29:53.608620 env[1232]: time="2024-12-13T14:29:53.605621136Z" level=info msg="metadata content store policy set" policy=shared Dec 13 14:29:53.615355 env[1232]: time="2024-12-13T14:29:53.615213465Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 14:29:53.615355 env[1232]: time="2024-12-13T14:29:53.615322537Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 14:29:53.615548 env[1232]: time="2024-12-13T14:29:53.615377136Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 14:29:53.615548 env[1232]: time="2024-12-13T14:29:53.615463336Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 14:29:53.615670 env[1232]: time="2024-12-13T14:29:53.615562537Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 14:29:53.615670 env[1232]: time="2024-12-13T14:29:53.615591868Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 14:29:53.615670 env[1232]: time="2024-12-13T14:29:53.615639486Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 14:29:53.615856 env[1232]: time="2024-12-13T14:29:53.615666428Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 14:29:53.615856 env[1232]: time="2024-12-13T14:29:53.615688378Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Dec 13 14:29:53.616010 env[1232]: time="2024-12-13T14:29:53.615730117Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 14:29:53.616079 env[1232]: time="2024-12-13T14:29:53.616016782Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Dec 13 14:29:53.616079 env[1232]: time="2024-12-13T14:29:53.616064234Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 14:29:53.616279 env[1232]: time="2024-12-13T14:29:53.616250892Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 14:29:53.616496 env[1232]: time="2024-12-13T14:29:53.616460279Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 14:29:53.617167 coreos-metadata[1202]: Dec 13 14:29:53.617 INFO Fetch failed with 404: resource not found Dec 13 14:29:53.617293 coreos-metadata[1202]: Dec 13 14:29:53.617 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Dec 13 14:29:53.619539 coreos-metadata[1202]: Dec 13 14:29:53.619 INFO Fetch successful Dec 13 14:29:53.619640 coreos-metadata[1202]: Dec 13 14:29:53.619 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Dec 13 14:29:53.619987 env[1232]: time="2024-12-13T14:29:53.619937305Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 14:29:53.620107 env[1232]: time="2024-12-13T14:29:53.620011417Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 14:29:53.620107 env[1232]: time="2024-12-13T14:29:53.620049550Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 14:29:53.620220 env[1232]: time="2024-12-13T14:29:53.620165964Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 14:29:53.620291 env[1232]: time="2024-12-13T14:29:53.620195806Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 14:29:53.620352 env[1232]: time="2024-12-13T14:29:53.620300615Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 14:29:53.620352 env[1232]: time="2024-12-13T14:29:53.620322950Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 14:29:53.620482 env[1232]: time="2024-12-13T14:29:53.620354955Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 14:29:53.620482 env[1232]: time="2024-12-13T14:29:53.620440083Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 14:29:53.620482 env[1232]: time="2024-12-13T14:29:53.620463798Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 14:29:53.620631 env[1232]: time="2024-12-13T14:29:53.620493664Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 14:29:53.620631 env[1232]: time="2024-12-13T14:29:53.620520608Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 14:29:53.620739 env[1232]: time="2024-12-13T14:29:53.620713378Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 14:29:53.620794 env[1232]: time="2024-12-13T14:29:53.620742119Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Dec 13 14:29:53.620794 env[1232]: time="2024-12-13T14:29:53.620768609Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 14:29:53.620891 env[1232]: time="2024-12-13T14:29:53.620791334Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 14:29:53.620891 env[1232]: time="2024-12-13T14:29:53.620820346Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Dec 13 14:29:53.620891 env[1232]: time="2024-12-13T14:29:53.620843856Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 14:29:53.620891 env[1232]: time="2024-12-13T14:29:53.620872717Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Dec 13 14:29:53.621063 env[1232]: time="2024-12-13T14:29:53.620924219Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Dec 13 14:29:53.621422 env[1232]: time="2024-12-13T14:29:53.621312175Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 14:29:53.625474 env[1232]: time="2024-12-13T14:29:53.621434034Z" level=info msg="Connect containerd service" Dec 13 14:29:53.625580 coreos-metadata[1202]: Dec 13 14:29:53.621 INFO Fetch failed with 404: resource not found Dec 13 14:29:53.625580 coreos-metadata[1202]: Dec 13 14:29:53.621 
INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Dec 13 14:29:53.625580 coreos-metadata[1202]: Dec 13 14:29:53.622 INFO Fetch failed with 404: resource not found Dec 13 14:29:53.625580 coreos-metadata[1202]: Dec 13 14:29:53.622 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Dec 13 14:29:53.625580 coreos-metadata[1202]: Dec 13 14:29:53.623 INFO Fetch successful Dec 13 14:29:53.628743 unknown[1202]: wrote ssh authorized keys file for user: core Dec 13 14:29:53.629646 env[1232]: time="2024-12-13T14:29:53.629600095Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 14:29:53.656063 env[1232]: time="2024-12-13T14:29:53.655991394Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 14:29:53.666364 env[1232]: time="2024-12-13T14:29:53.666287997Z" level=info msg="Start subscribing containerd event" Dec 13 14:29:53.667097 env[1232]: time="2024-12-13T14:29:53.667049839Z" level=info msg="Start recovering state" Dec 13 14:29:53.667370 env[1232]: time="2024-12-13T14:29:53.667344206Z" level=info msg="Start event monitor" Dec 13 14:29:53.667821 env[1232]: time="2024-12-13T14:29:53.667785912Z" level=info msg="Start snapshots syncer" Dec 13 14:29:53.668129 env[1232]: time="2024-12-13T14:29:53.668095048Z" level=info msg="Start cni network conf syncer for default" Dec 13 14:29:53.668409 env[1232]: time="2024-12-13T14:29:53.668380362Z" level=info msg="Start streaming server" Dec 13 14:29:53.669270 env[1232]: time="2024-12-13T14:29:53.669228943Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 14:29:53.670648 env[1232]: time="2024-12-13T14:29:53.670618104Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 14:29:53.675839 systemd[1]: Started containerd.service. Dec 13 14:29:53.676274 env[1232]: time="2024-12-13T14:29:53.676224979Z" level=info msg="containerd successfully booted in 0.294543s" Dec 13 14:29:53.678188 update-ssh-keys[1279]: Updated "/home/core/.ssh/authorized_keys" Dec 13 14:29:53.684472 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Dec 13 14:29:53.704549 polkitd[1276]: Started polkitd version 121 Dec 13 14:29:53.729952 polkitd[1276]: Loading rules from directory /etc/polkit-1/rules.d Dec 13 14:29:53.730220 polkitd[1276]: Loading rules from directory /usr/share/polkit-1/rules.d Dec 13 14:29:53.733369 polkitd[1276]: Finished loading, compiling and executing 2 rules Dec 13 14:29:53.734248 dbus-daemon[1203]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Dec 13 14:29:53.734514 systemd[1]: Started polkit.service. Dec 13 14:29:53.737803 polkitd[1276]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Dec 13 14:29:53.760579 systemd-hostnamed[1249]: Hostname set to <ci-3510-3-6-19c092b7e9e6e0ad3d89.c.flatcar-212911.internal> (transient) Dec 13 14:29:53.763203 systemd-resolved[1167]: System hostname changed to 'ci-3510-3-6-19c092b7e9e6e0ad3d89.c.flatcar-212911.internal'. Dec 13 14:29:55.288258 systemd[1]: Started kubelet.service. 
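The long "Start cri plugin with config {...}" dump above shows containerd 1.6.16's effective CRI settings: overlayfs snapshotter, runc via io.containerd.runc.v2 with SystemdCgroup:true, sandbox image registry.k8s.io/pause:3.6, and CNI config expected under /etc/cni/net.d. A sketch of a config.toml fragment that would yield those values; Flatcar generates its own file, so this is illustrative only and is written to a scratch path:

    # Illustrative containerd 1.6 CRI configuration (not the file this host uses)
    cat <<'EOF' >/tmp/containerd-cri-example.toml
    version = 2
    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.6"
      [plugins."io.containerd.grpc.v1.cri".containerd]
        snapshotter = "overlayfs"
        default_runtime_name = "runc"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
          runtime_type = "io.containerd.runc.v2"
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            SystemdCgroup = true
      [plugins."io.containerd.grpc.v1.cri".cni]
        bin_dir = "/opt/cni/bin"
        conf_dir = "/etc/cni/net.d"
    EOF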
Dec 13 14:29:55.942094 sshd_keygen[1231]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 14:29:56.017938 locksmithd[1266]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 14:29:56.026719 systemd[1]: Finished sshd-keygen.service. Dec 13 14:29:56.037007 systemd[1]: Starting issuegen.service... Dec 13 14:29:56.046876 systemd[1]: Started sshd@0-10.128.0.81:22-139.178.68.195:50772.service. Dec 13 14:29:56.057825 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 14:29:56.058106 systemd[1]: Finished issuegen.service. Dec 13 14:29:56.069745 systemd[1]: Starting systemd-user-sessions.service... Dec 13 14:29:56.086076 systemd[1]: Finished systemd-user-sessions.service. Dec 13 14:29:56.097307 systemd[1]: Started getty@tty1.service. Dec 13 14:29:56.107080 systemd[1]: Started serial-getty@ttyS0.service. Dec 13 14:29:56.116466 systemd[1]: Reached target getty.target. Dec 13 14:29:56.430754 sshd[1306]: Accepted publickey for core from 139.178.68.195 port 50772 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk Dec 13 14:29:56.434220 sshd[1306]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:29:56.459766 systemd[1]: Created slice user-500.slice. Dec 13 14:29:56.468681 systemd[1]: Starting user-runtime-dir@500.service... Dec 13 14:29:56.480627 systemd-logind[1218]: New session 1 of user core. Dec 13 14:29:56.489621 systemd[1]: Finished user-runtime-dir@500.service. Dec 13 14:29:56.501068 systemd[1]: Starting user@500.service... Dec 13 14:29:56.523364 (systemd)[1316]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:29:56.761662 systemd[1316]: Queued start job for default target default.target. Dec 13 14:29:56.762529 systemd[1316]: Reached target paths.target. Dec 13 14:29:56.762577 systemd[1316]: Reached target sockets.target. Dec 13 14:29:56.762616 systemd[1316]: Reached target timers.target. Dec 13 14:29:56.762638 systemd[1316]: Reached target basic.target. Dec 13 14:29:56.762718 systemd[1316]: Reached target default.target. Dec 13 14:29:56.762775 systemd[1316]: Startup finished in 224ms. Dec 13 14:29:56.762813 systemd[1]: Started user@500.service. Dec 13 14:29:56.767480 kubelet[1292]: E1213 14:29:56.767379 1292 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:29:56.771381 systemd[1]: Started session-1.scope. Dec 13 14:29:56.779157 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:29:56.779382 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:29:56.779784 systemd[1]: kubelet.service: Consumed 1.469s CPU time. Dec 13 14:29:57.009166 systemd[1]: Started sshd@1-10.128.0.81:22-139.178.68.195:55958.service. Dec 13 14:29:57.350828 sshd[1325]: Accepted publickey for core from 139.178.68.195 port 55958 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk Dec 13 14:29:57.351879 sshd[1325]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:29:57.360867 systemd[1]: Started session-2.scope. Dec 13 14:29:57.362659 systemd-logind[1218]: New session 2 of user core. 
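sshd-keygen generated fresh RSA/ECDSA/ED25519 host keys, sshd.socket hands each connection to a per-connection unit (sshd@0-10.128.0.81:22-...), and pam_unix plus systemd-logind then open a session for the core user. A short sketch with standard OpenSSH/systemd tooling for checking the keys and sessions involved (nothing Flatcar-specific is assumed):

    # Generate any missing host key types (what sshd-keygen effectively ensures)
    ssh-keygen -A

    # Print the fingerprint clients will be shown for the ED25519 host key
    ssh-keygen -lf /etc/ssh/ssh_host_ed25519_key.pub

    # List the logind sessions opened for user core
    loginctl list-sessions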
Dec 13 14:29:57.574312 sshd[1325]: pam_unix(sshd:session): session closed for user core Dec 13 14:29:57.580083 systemd[1]: sshd@1-10.128.0.81:22-139.178.68.195:55958.service: Deactivated successfully. Dec 13 14:29:57.581346 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 14:29:57.582681 systemd-logind[1218]: Session 2 logged out. Waiting for processes to exit. Dec 13 14:29:57.584049 systemd-logind[1218]: Removed session 2. Dec 13 14:29:57.621299 systemd[1]: Started sshd@2-10.128.0.81:22-139.178.68.195:55970.service. Dec 13 14:29:57.931173 sshd[1331]: Accepted publickey for core from 139.178.68.195 port 55970 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk Dec 13 14:29:57.932689 sshd[1331]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:29:57.941203 systemd[1]: Started session-3.scope. Dec 13 14:29:57.942510 systemd-logind[1218]: New session 3 of user core. Dec 13 14:29:58.152765 sshd[1331]: pam_unix(sshd:session): session closed for user core Dec 13 14:29:58.156988 systemd[1]: sshd@2-10.128.0.81:22-139.178.68.195:55970.service: Deactivated successfully. Dec 13 14:29:58.158164 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 14:29:58.160661 systemd-logind[1218]: Session 3 logged out. Waiting for processes to exit. Dec 13 14:29:58.162492 systemd-logind[1218]: Removed session 3. Dec 13 14:29:59.191146 systemd[1]: var-lib-flatcar\x2doem\x2dgce.mount: Deactivated successfully. Dec 13 14:30:01.117466 kernel: loop2: detected capacity change from 0 to 2097152 Dec 13 14:30:01.142030 systemd-nspawn[1337]: Spawning container oem-gce on /var/lib/flatcar-oem-gce.img. Dec 13 14:30:01.142030 systemd-nspawn[1337]: Press ^] three times within 1s to kill container. Dec 13 14:30:01.155495 kernel: EXT4-fs (loop2): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Dec 13 14:30:01.178616 systemd[1]: tmp-unifiedFVvBZI.mount: Deactivated successfully. Dec 13 14:30:01.244326 systemd[1]: Started oem-gce.service. Dec 13 14:30:01.244886 systemd[1]: Reached target multi-user.target. Dec 13 14:30:01.247836 systemd[1]: Starting systemd-update-utmp-runlevel.service... Dec 13 14:30:01.259419 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Dec 13 14:30:01.259702 systemd[1]: Finished systemd-update-utmp-runlevel.service. Dec 13 14:30:01.261581 systemd[1]: Startup finished in 1.038s (kernel) + 7.305s (initrd) + 16.143s (userspace) = 24.488s. Dec 13 14:30:01.292002 systemd-nspawn[1337]: + '[' -e /etc/default/instance_configs.cfg.template ']' Dec 13 14:30:01.292002 systemd-nspawn[1337]: + echo -e '[InstanceSetup]\nset_host_keys = false' Dec 13 14:30:01.292363 systemd-nspawn[1337]: + /usr/bin/google_instance_setup Dec 13 14:30:01.948500 instance-setup[1343]: INFO Running google_set_multiqueue. Dec 13 14:30:01.969041 instance-setup[1343]: INFO Set channels for eth0 to 2. Dec 13 14:30:01.972808 instance-setup[1343]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1. Dec 13 14:30:01.974259 instance-setup[1343]: INFO /proc/irq/31/smp_affinity_list: real affinity 0 Dec 13 14:30:01.975150 instance-setup[1343]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1. Dec 13 14:30:01.976476 instance-setup[1343]: INFO /proc/irq/32/smp_affinity_list: real affinity 0 Dec 13 14:30:01.976918 instance-setup[1343]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1. 
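oem-gce.service boots a systemd-nspawn container named oem-gce directly from /var/lib/flatcar-oem-gce.img, and google_instance_setup then runs inside it. A hand-run equivalent under stated assumptions; the real unit passes bind mounts and other options that do not appear in this log:

    # Boot a container from the ext4 image, roughly as oem-gce.service does
    systemd-nspawn --image=/var/lib/flatcar-oem-gce.img --machine=oem-gce

    # From another shell: list or stop the container
    machinectl list
    machinectl terminate oem-gce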
Dec 13 14:30:01.978676 instance-setup[1343]: INFO /proc/irq/33/smp_affinity_list: real affinity 1 Dec 13 14:30:01.979176 instance-setup[1343]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1. Dec 13 14:30:01.980867 instance-setup[1343]: INFO /proc/irq/34/smp_affinity_list: real affinity 1 Dec 13 14:30:01.993158 instance-setup[1343]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Dec 13 14:30:01.993326 instance-setup[1343]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Dec 13 14:30:02.034970 systemd-nspawn[1337]: + /usr/bin/google_metadata_script_runner --script-type startup Dec 13 14:30:02.373729 startup-script[1374]: INFO Starting startup scripts. Dec 13 14:30:02.387676 startup-script[1374]: INFO No startup scripts found in metadata. Dec 13 14:30:02.387839 startup-script[1374]: INFO Finished running startup scripts. Dec 13 14:30:02.421731 systemd-nspawn[1337]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Dec 13 14:30:02.421731 systemd-nspawn[1337]: + daemon_pids=() Dec 13 14:30:02.422535 systemd-nspawn[1337]: + for d in accounts clock_skew network Dec 13 14:30:02.422535 systemd-nspawn[1337]: + daemon_pids+=($!) Dec 13 14:30:02.422535 systemd-nspawn[1337]: + for d in accounts clock_skew network Dec 13 14:30:02.422535 systemd-nspawn[1337]: + daemon_pids+=($!) Dec 13 14:30:02.422535 systemd-nspawn[1337]: + for d in accounts clock_skew network Dec 13 14:30:02.422806 systemd-nspawn[1337]: + daemon_pids+=($!) Dec 13 14:30:02.422806 systemd-nspawn[1337]: + NOTIFY_SOCKET=/run/systemd/notify Dec 13 14:30:02.422806 systemd-nspawn[1337]: + /usr/bin/systemd-notify --ready Dec 13 14:30:02.423237 systemd-nspawn[1337]: + /usr/bin/google_accounts_daemon Dec 13 14:30:02.423608 systemd-nspawn[1337]: + /usr/bin/google_clock_skew_daemon Dec 13 14:30:02.423708 systemd-nspawn[1337]: + /usr/bin/google_network_daemon Dec 13 14:30:02.480660 systemd-nspawn[1337]: + wait -n 36 37 38 Dec 13 14:30:03.037560 google-networking[1379]: INFO Starting Google Networking daemon. Dec 13 14:30:03.094996 google-clock-skew[1378]: INFO Starting Google Clock Skew daemon. Dec 13 14:30:03.109953 google-clock-skew[1378]: INFO Clock drift token has changed: 0. Dec 13 14:30:03.115221 systemd-nspawn[1337]: hwclock: Cannot access the Hardware Clock via any known method. Dec 13 14:30:03.115385 systemd-nspawn[1337]: hwclock: Use the --verbose option to see the details of our search for an access method. Dec 13 14:30:03.116065 google-clock-skew[1378]: WARNING Failed to sync system time with hardware clock. Dec 13 14:30:03.205276 groupadd[1389]: group added to /etc/group: name=google-sudoers, GID=1000 Dec 13 14:30:03.209149 groupadd[1389]: group added to /etc/gshadow: name=google-sudoers Dec 13 14:30:03.213026 groupadd[1389]: new group: name=google-sudoers, GID=1000 Dec 13 14:30:03.226414 google-accounts[1377]: INFO Starting Google Accounts daemon. Dec 13 14:30:03.252283 google-accounts[1377]: WARNING OS Login not installed. Dec 13 14:30:03.253403 google-accounts[1377]: INFO Creating a new user account for 0. Dec 13 14:30:03.259096 systemd-nspawn[1337]: useradd: invalid user name '0': use --badname to ignore Dec 13 14:30:03.259970 google-accounts[1377]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. Dec 13 14:30:07.030648 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 14:30:07.030977 systemd[1]: Stopped kubelet.service. 
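google_set_multiqueue pins each virtio-net queue interrupt to a CPU and programs per-queue transmit packet steering (XPS) masks; the IRQ numbers and masks below are copied from the log lines above and will differ on other boots. The raw kernel interface it is driving looks like this:

    # Pin virtio1's queue IRQs to CPUs 0 and 1, matching the log above
    echo 0 > /proc/irq/31/smp_affinity_list
    echo 0 > /proc/irq/32/smp_affinity_list
    echo 1 > /proc/irq/33/smp_affinity_list
    echo 1 > /proc/irq/34/smp_affinity_list

    # XPS: tx queue 0 -> CPU 0 (mask 0x1), tx queue 1 -> CPU 1 (mask 0x2)
    echo 1 > /sys/class/net/eth0/queues/tx-0/xps_cpus
    echo 2 > /sys/class/net/eth0/queues/tx-1/xps_cpus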
Dec 13 14:30:07.031047 systemd[1]: kubelet.service: Consumed 1.469s CPU time. Dec 13 14:30:07.033304 systemd[1]: Starting kubelet.service... Dec 13 14:30:07.281454 systemd[1]: Started kubelet.service. Dec 13 14:30:07.343646 kubelet[1403]: E1213 14:30:07.343591 1403 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:30:07.347858 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:30:07.348103 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:30:08.289195 systemd[1]: Started sshd@3-10.128.0.81:22-139.178.68.195:35860.service. Dec 13 14:30:08.574749 sshd[1410]: Accepted publickey for core from 139.178.68.195 port 35860 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk Dec 13 14:30:08.576761 sshd[1410]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:30:08.583541 systemd[1]: Started session-4.scope. Dec 13 14:30:08.584351 systemd-logind[1218]: New session 4 of user core. Dec 13 14:30:08.788266 sshd[1410]: pam_unix(sshd:session): session closed for user core Dec 13 14:30:08.792559 systemd[1]: sshd@3-10.128.0.81:22-139.178.68.195:35860.service: Deactivated successfully. Dec 13 14:30:08.793670 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 14:30:08.794608 systemd-logind[1218]: Session 4 logged out. Waiting for processes to exit. Dec 13 14:30:08.795911 systemd-logind[1218]: Removed session 4. Dec 13 14:30:08.834694 systemd[1]: Started sshd@4-10.128.0.81:22-139.178.68.195:35872.service. Dec 13 14:30:09.123270 sshd[1416]: Accepted publickey for core from 139.178.68.195 port 35872 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk Dec 13 14:30:09.125243 sshd[1416]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:30:09.131499 systemd-logind[1218]: New session 5 of user core. Dec 13 14:30:09.132536 systemd[1]: Started session-5.scope. Dec 13 14:30:09.332778 sshd[1416]: pam_unix(sshd:session): session closed for user core Dec 13 14:30:09.336861 systemd[1]: sshd@4-10.128.0.81:22-139.178.68.195:35872.service: Deactivated successfully. Dec 13 14:30:09.337967 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 14:30:09.338831 systemd-logind[1218]: Session 5 logged out. Waiting for processes to exit. Dec 13 14:30:09.340268 systemd-logind[1218]: Removed session 5. Dec 13 14:30:09.379355 systemd[1]: Started sshd@5-10.128.0.81:22-139.178.68.195:35874.service. Dec 13 14:30:09.667576 sshd[1422]: Accepted publickey for core from 139.178.68.195 port 35874 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk Dec 13 14:30:09.669658 sshd[1422]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:30:09.676467 systemd[1]: Started session-6.scope. Dec 13 14:30:09.677279 systemd-logind[1218]: New session 6 of user core. Dec 13 14:30:09.884314 sshd[1422]: pam_unix(sshd:session): session closed for user core Dec 13 14:30:09.888698 systemd[1]: sshd@5-10.128.0.81:22-139.178.68.195:35874.service: Deactivated successfully. Dec 13 14:30:09.889796 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 14:30:09.890668 systemd-logind[1218]: Session 6 logged out. Waiting for processes to exit. 
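The kubelet keeps exiting because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-style node that file is written during provisioning, so the crash loop clears once that runs. A minimal, hypothetical KubeletConfiguration that would satisfy the config-load step only; the field values are assumptions, not what this node eventually uses:

    # Hypothetical minimal kubelet config; real clusters generate this file
    mkdir -p /var/lib/kubelet
    cat <<'EOF' >/var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    EOF
    systemctl restart kubelet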
Dec 13 14:30:09.891922 systemd-logind[1218]: Removed session 6. Dec 13 14:30:09.930486 systemd[1]: Started sshd@6-10.128.0.81:22-139.178.68.195:35878.service. Dec 13 14:30:10.218918 sshd[1428]: Accepted publickey for core from 139.178.68.195 port 35878 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk Dec 13 14:30:10.220866 sshd[1428]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:30:10.228118 systemd[1]: Started session-7.scope. Dec 13 14:30:10.228737 systemd-logind[1218]: New session 7 of user core. Dec 13 14:30:10.418359 sudo[1431]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 14:30:10.418836 sudo[1431]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 14:30:10.437693 systemd[1]: Starting coreos-metadata.service... Dec 13 14:30:10.488214 coreos-metadata[1435]: Dec 13 14:30:10.487 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1 Dec 13 14:30:10.489982 coreos-metadata[1435]: Dec 13 14:30:10.489 INFO Fetch successful Dec 13 14:30:10.490113 coreos-metadata[1435]: Dec 13 14:30:10.489 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1 Dec 13 14:30:10.491242 coreos-metadata[1435]: Dec 13 14:30:10.491 INFO Fetch successful Dec 13 14:30:10.491516 coreos-metadata[1435]: Dec 13 14:30:10.491 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1 Dec 13 14:30:10.492321 coreos-metadata[1435]: Dec 13 14:30:10.492 INFO Fetch successful Dec 13 14:30:10.492633 coreos-metadata[1435]: Dec 13 14:30:10.492 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1 Dec 13 14:30:10.493644 coreos-metadata[1435]: Dec 13 14:30:10.493 INFO Fetch successful Dec 13 14:30:10.505621 systemd[1]: Finished coreos-metadata.service. Dec 13 14:30:11.531238 systemd[1]: Stopped kubelet.service. Dec 13 14:30:11.535984 systemd[1]: Starting kubelet.service... Dec 13 14:30:11.565539 systemd[1]: Reloading. Dec 13 14:30:11.710498 /usr/lib/systemd/system-generators/torcx-generator[1493]: time="2024-12-13T14:30:11Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:30:11.713873 /usr/lib/systemd/system-generators/torcx-generator[1493]: time="2024-12-13T14:30:11Z" level=info msg="torcx already run" Dec 13 14:30:11.838543 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:30:11.838571 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:30:11.862944 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:30:12.015341 systemd[1]: Started kubelet.service. Dec 13 14:30:12.018888 systemd[1]: Stopping kubelet.service... Dec 13 14:30:12.019388 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 14:30:12.019672 systemd[1]: Stopped kubelet.service. Dec 13 14:30:12.021920 systemd[1]: Starting kubelet.service... 
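coreos-metadata.service resolves the instance hostname, internal and external IPs, and machine type from the metadata server before the kubelet is restarted. The same queries can be reproduced with curl; the only requirement of the GCE metadata API is the Metadata-Flavor header:

    # Query the GCE metadata server directly, as coreos-metadata does above
    curl -s -H 'Metadata-Flavor: Google' \
      http://169.254.169.254/computeMetadata/v1/instance/hostname
    curl -s -H 'Metadata-Flavor: Google' \
      http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip
    curl -s -H 'Metadata-Flavor: Google' \
      http://169.254.169.254/computeMetadata/v1/instance/machine-type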
Dec 13 14:30:12.221258 systemd[1]: Started kubelet.service. Dec 13 14:30:12.275403 kubelet[1544]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 14:30:12.275856 kubelet[1544]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 14:30:12.275925 kubelet[1544]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 14:30:12.276105 kubelet[1544]: I1213 14:30:12.276068 1544 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 14:30:12.716943 kubelet[1544]: I1213 14:30:12.716790 1544 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Dec 13 14:30:12.716943 kubelet[1544]: I1213 14:30:12.716831 1544 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 14:30:12.717543 kubelet[1544]: I1213 14:30:12.717498 1544 server.go:927] "Client rotation is on, will bootstrap in background" Dec 13 14:30:12.754613 kubelet[1544]: I1213 14:30:12.754502 1544 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 14:30:12.774408 kubelet[1544]: I1213 14:30:12.774362 1544 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 14:30:12.774785 kubelet[1544]: I1213 14:30:12.774745 1544 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 14:30:12.775082 kubelet[1544]: I1213 14:30:12.774787 1544 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"10.128.0.81","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 14:30:12.776281 kubelet[1544]: I1213 14:30:12.776239 1544 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 14:30:12.776281 kubelet[1544]: I1213 14:30:12.776277 1544 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 14:30:12.776526 kubelet[1544]: I1213 14:30:12.776494 1544 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:30:12.778116 kubelet[1544]: I1213 14:30:12.778047 1544 kubelet.go:400] "Attempting to sync node with API server" Dec 13 14:30:12.778116 kubelet[1544]: I1213 14:30:12.778077 1544 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 14:30:12.778116 kubelet[1544]: I1213 14:30:12.778112 1544 kubelet.go:312] "Adding apiserver pod source" Dec 13 14:30:12.778377 kubelet[1544]: I1213 14:30:12.778139 1544 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 14:30:12.778788 kubelet[1544]: E1213 14:30:12.778738 1544 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:30:12.778897 kubelet[1544]: E1213 14:30:12.778843 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:30:12.784379 kubelet[1544]: I1213 14:30:12.784332 1544 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 14:30:12.799361 kubelet[1544]: I1213 14:30:12.799304 1544 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 14:30:12.799559 kubelet[1544]: W1213 14:30:12.799412 1544 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Dec 13 14:30:12.800164 kubelet[1544]: I1213 14:30:12.800117 1544 server.go:1264] "Started kubelet" Dec 13 14:30:12.801691 kubelet[1544]: I1213 14:30:12.801627 1544 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 14:30:12.803292 kubelet[1544]: I1213 14:30:12.803245 1544 server.go:455] "Adding debug handlers to kubelet server" Dec 13 14:30:12.807276 kubelet[1544]: I1213 14:30:12.807207 1544 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 14:30:12.807708 kubelet[1544]: I1213 14:30:12.807688 1544 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 14:30:12.815096 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Dec 13 14:30:12.815361 kubelet[1544]: I1213 14:30:12.815328 1544 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 14:30:12.822760 kubelet[1544]: E1213 14:30:12.822718 1544 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 14:30:12.823975 kubelet[1544]: E1213 14:30:12.823737 1544 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.128.0.81\" not found" Dec 13 14:30:12.823975 kubelet[1544]: I1213 14:30:12.823814 1544 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 14:30:12.824160 kubelet[1544]: I1213 14:30:12.824027 1544 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Dec 13 14:30:12.824160 kubelet[1544]: I1213 14:30:12.824135 1544 reconciler.go:26] "Reconciler: start to sync state" Dec 13 14:30:12.825699 kubelet[1544]: I1213 14:30:12.825376 1544 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 14:30:12.829071 kubelet[1544]: I1213 14:30:12.829038 1544 factory.go:221] Registration of the containerd container factory successfully Dec 13 14:30:12.829222 kubelet[1544]: I1213 14:30:12.829205 1544 factory.go:221] Registration of the systemd container factory successfully Dec 13 14:30:12.832130 kubelet[1544]: E1213 14:30:12.832093 1544 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.128.0.81\" not found" node="10.128.0.81" Dec 13 14:30:12.853502 kubelet[1544]: I1213 14:30:12.853391 1544 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 14:30:12.853502 kubelet[1544]: I1213 14:30:12.853414 1544 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 14:30:12.853502 kubelet[1544]: I1213 14:30:12.853458 1544 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:30:12.858563 kubelet[1544]: I1213 14:30:12.858538 1544 policy_none.go:49] "None policy: Start" Dec 13 14:30:12.860110 kubelet[1544]: I1213 14:30:12.860089 1544 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 14:30:12.860298 kubelet[1544]: I1213 14:30:12.860283 1544 state_mem.go:35] "Initializing new in-memory state store" Dec 13 14:30:12.869732 systemd[1]: Created slice kubepods.slice. Dec 13 14:30:12.881456 systemd[1]: Created slice kubepods-burstable.slice. Dec 13 14:30:12.890842 systemd[1]: Created slice kubepods-besteffort.slice. 
Dec 13 14:30:12.900057 kubelet[1544]: I1213 14:30:12.900022 1544 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 14:30:12.900500 kubelet[1544]: I1213 14:30:12.900449 1544 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 14:30:12.900769 kubelet[1544]: I1213 14:30:12.900752 1544 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 14:30:12.907404 kubelet[1544]: E1213 14:30:12.907383 1544 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.128.0.81\" not found" Dec 13 14:30:12.925711 kubelet[1544]: I1213 14:30:12.925677 1544 kubelet_node_status.go:73] "Attempting to register node" node="10.128.0.81" Dec 13 14:30:12.932957 kubelet[1544]: I1213 14:30:12.932922 1544 kubelet_node_status.go:76] "Successfully registered node" node="10.128.0.81" Dec 13 14:30:12.945611 kubelet[1544]: I1213 14:30:12.945564 1544 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Dec 13 14:30:12.946108 env[1232]: time="2024-12-13T14:30:12.946028008Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 14:30:12.946732 kubelet[1544]: I1213 14:30:12.946619 1544 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Dec 13 14:30:12.993323 kubelet[1544]: I1213 14:30:12.990725 1544 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 14:30:12.993892 kubelet[1544]: I1213 14:30:12.993848 1544 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 14:30:12.993892 kubelet[1544]: I1213 14:30:12.993883 1544 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 14:30:12.994095 kubelet[1544]: I1213 14:30:12.993911 1544 kubelet.go:2337] "Starting kubelet main sync loop" Dec 13 14:30:12.994095 kubelet[1544]: E1213 14:30:12.993980 1544 kubelet.go:2361] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Dec 13 14:30:13.244593 sudo[1431]: pam_unix(sudo:session): session closed for user root Dec 13 14:30:13.288995 sshd[1428]: pam_unix(sshd:session): session closed for user core Dec 13 14:30:13.293674 systemd-logind[1218]: Session 7 logged out. Waiting for processes to exit. Dec 13 14:30:13.293949 systemd[1]: sshd@6-10.128.0.81:22-139.178.68.195:35878.service: Deactivated successfully. Dec 13 14:30:13.295071 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 14:30:13.296366 systemd-logind[1218]: Removed session 7. 
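Both containerd (earlier: "no network config found in /etc/cni/net.d") and the kubelet ("No cni config template is specified, wait for other system components to drop the config") are waiting for a CNI plugin to install its configuration; on this node the cilium pod admitted below will do that. Purely to illustrate the expected file format, a hypothetical bridge conflist using the 192.168.1.0/24 pod CIDR from the log; Cilium writes its own, different file:

    # Hypothetical CNI config; normally the CNI plugin (here Cilium) drops this
    cat <<'EOF' >/etc/cni/net.d/10-bridge.conflist
    {
      "cniVersion": "0.4.0",
      "name": "example-bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "192.168.1.0/24" }
        }
      ]
    }
    EOF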
Dec 13 14:30:13.720686 kubelet[1544]: I1213 14:30:13.720157 1544 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Dec 13 14:30:13.720686 kubelet[1544]: W1213 14:30:13.720449 1544 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Dec 13 14:30:13.720686 kubelet[1544]: W1213 14:30:13.720503 1544 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Dec 13 14:30:13.721372 kubelet[1544]: W1213 14:30:13.721072 1544 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Dec 13 14:30:13.779743 kubelet[1544]: I1213 14:30:13.779705 1544 apiserver.go:52] "Watching apiserver" Dec 13 14:30:13.779955 kubelet[1544]: E1213 14:30:13.779725 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:30:13.784599 kubelet[1544]: I1213 14:30:13.784547 1544 topology_manager.go:215] "Topology Admit Handler" podUID="e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a" podNamespace="kube-system" podName="cilium-gx9r7" Dec 13 14:30:13.784806 kubelet[1544]: I1213 14:30:13.784779 1544 topology_manager.go:215] "Topology Admit Handler" podUID="694b9481-e699-4eda-920e-a367a2208211" podNamespace="kube-system" podName="kube-proxy-rmxq2" Dec 13 14:30:13.793031 systemd[1]: Created slice kubepods-besteffort-pod694b9481_e699_4eda_920e_a367a2208211.slice. Dec 13 14:30:13.806663 systemd[1]: Created slice kubepods-burstable-pode6b0ebde_e32a_4cff_a94b_b2cb07fdbe8a.slice. 
Dec 13 14:30:13.824921 kubelet[1544]: I1213 14:30:13.824878 1544 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Dec 13 14:30:13.832512 kubelet[1544]: I1213 14:30:13.832450 1544 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a-xtables-lock\") pod \"cilium-gx9r7\" (UID: \"e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a\") " pod="kube-system/cilium-gx9r7" Dec 13 14:30:13.832512 kubelet[1544]: I1213 14:30:13.832509 1544 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a-host-proc-sys-kernel\") pod \"cilium-gx9r7\" (UID: \"e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a\") " pod="kube-system/cilium-gx9r7" Dec 13 14:30:13.832784 kubelet[1544]: I1213 14:30:13.832541 1544 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a-hubble-tls\") pod \"cilium-gx9r7\" (UID: \"e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a\") " pod="kube-system/cilium-gx9r7" Dec 13 14:30:13.832784 kubelet[1544]: I1213 14:30:13.832566 1544 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/694b9481-e699-4eda-920e-a367a2208211-kube-proxy\") pod \"kube-proxy-rmxq2\" (UID: \"694b9481-e699-4eda-920e-a367a2208211\") " pod="kube-system/kube-proxy-rmxq2" Dec 13 14:30:13.832784 kubelet[1544]: I1213 14:30:13.832591 1544 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a-cilium-run\") pod \"cilium-gx9r7\" (UID: \"e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a\") " pod="kube-system/cilium-gx9r7" Dec 13 14:30:13.832784 kubelet[1544]: I1213 14:30:13.832617 1544 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a-lib-modules\") pod \"cilium-gx9r7\" (UID: \"e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a\") " pod="kube-system/cilium-gx9r7" Dec 13 14:30:13.832784 kubelet[1544]: I1213 14:30:13.832656 1544 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a-clustermesh-secrets\") pod \"cilium-gx9r7\" (UID: \"e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a\") " pod="kube-system/cilium-gx9r7" Dec 13 14:30:13.832784 kubelet[1544]: I1213 14:30:13.832686 1544 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a-cilium-cgroup\") pod \"cilium-gx9r7\" (UID: \"e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a\") " pod="kube-system/cilium-gx9r7" Dec 13 14:30:13.833126 kubelet[1544]: I1213 14:30:13.832726 1544 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a-etc-cni-netd\") pod \"cilium-gx9r7\" (UID: \"e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a\") " pod="kube-system/cilium-gx9r7" Dec 13 14:30:13.833126 
kubelet[1544]: I1213 14:30:13.832755 1544 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76q96\" (UniqueName: \"kubernetes.io/projected/e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a-kube-api-access-76q96\") pod \"cilium-gx9r7\" (UID: \"e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a\") " pod="kube-system/cilium-gx9r7" Dec 13 14:30:13.833126 kubelet[1544]: I1213 14:30:13.832782 1544 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/694b9481-e699-4eda-920e-a367a2208211-lib-modules\") pod \"kube-proxy-rmxq2\" (UID: \"694b9481-e699-4eda-920e-a367a2208211\") " pod="kube-system/kube-proxy-rmxq2" Dec 13 14:30:13.833126 kubelet[1544]: I1213 14:30:13.832817 1544 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fztv4\" (UniqueName: \"kubernetes.io/projected/694b9481-e699-4eda-920e-a367a2208211-kube-api-access-fztv4\") pod \"kube-proxy-rmxq2\" (UID: \"694b9481-e699-4eda-920e-a367a2208211\") " pod="kube-system/kube-proxy-rmxq2" Dec 13 14:30:13.833126 kubelet[1544]: I1213 14:30:13.832846 1544 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a-bpf-maps\") pod \"cilium-gx9r7\" (UID: \"e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a\") " pod="kube-system/cilium-gx9r7" Dec 13 14:30:13.833376 kubelet[1544]: I1213 14:30:13.832875 1544 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a-host-proc-sys-net\") pod \"cilium-gx9r7\" (UID: \"e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a\") " pod="kube-system/cilium-gx9r7" Dec 13 14:30:13.833376 kubelet[1544]: I1213 14:30:13.832903 1544 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a-cilium-config-path\") pod \"cilium-gx9r7\" (UID: \"e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a\") " pod="kube-system/cilium-gx9r7" Dec 13 14:30:13.833376 kubelet[1544]: I1213 14:30:13.832930 1544 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/694b9481-e699-4eda-920e-a367a2208211-xtables-lock\") pod \"kube-proxy-rmxq2\" (UID: \"694b9481-e699-4eda-920e-a367a2208211\") " pod="kube-system/kube-proxy-rmxq2" Dec 13 14:30:13.833376 kubelet[1544]: I1213 14:30:13.832975 1544 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a-hostproc\") pod \"cilium-gx9r7\" (UID: \"e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a\") " pod="kube-system/cilium-gx9r7" Dec 13 14:30:13.833376 kubelet[1544]: I1213 14:30:13.833005 1544 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a-cni-path\") pod \"cilium-gx9r7\" (UID: \"e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a\") " pod="kube-system/cilium-gx9r7" Dec 13 14:30:14.104039 env[1232]: time="2024-12-13T14:30:14.103883187Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-proxy-rmxq2,Uid:694b9481-e699-4eda-920e-a367a2208211,Namespace:kube-system,Attempt:0,}" Dec 13 14:30:14.115714 env[1232]: time="2024-12-13T14:30:14.115649060Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gx9r7,Uid:e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a,Namespace:kube-system,Attempt:0,}" Dec 13 14:30:14.699437 env[1232]: time="2024-12-13T14:30:14.699352072Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:14.701998 env[1232]: time="2024-12-13T14:30:14.701922641Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:14.703669 env[1232]: time="2024-12-13T14:30:14.703625518Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:14.706970 env[1232]: time="2024-12-13T14:30:14.706879775Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:14.708154 env[1232]: time="2024-12-13T14:30:14.708102560Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:14.709153 env[1232]: time="2024-12-13T14:30:14.709115737Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:14.711469 env[1232]: time="2024-12-13T14:30:14.711405725Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:14.712457 env[1232]: time="2024-12-13T14:30:14.712392858Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:14.748814 env[1232]: time="2024-12-13T14:30:14.747015482Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:30:14.748814 env[1232]: time="2024-12-13T14:30:14.747068260Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:30:14.748814 env[1232]: time="2024-12-13T14:30:14.747098868Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:30:14.748814 env[1232]: time="2024-12-13T14:30:14.747310764Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a57b252c95e03752e0744963b3b3973f288440034bd37a2b001704f3be5a51dd pid=1603 runtime=io.containerd.runc.v2 Dec 13 14:30:14.754461 env[1232]: time="2024-12-13T14:30:14.754179145Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:30:14.754461 env[1232]: time="2024-12-13T14:30:14.754235696Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:30:14.754461 env[1232]: time="2024-12-13T14:30:14.754257566Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:30:14.754776 env[1232]: time="2024-12-13T14:30:14.754557019Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/45ef6df6b27f41bf48389413485b7e83106d3407fd145986577b10e635c98efa pid=1606 runtime=io.containerd.runc.v2 Dec 13 14:30:14.774934 systemd[1]: Started cri-containerd-a57b252c95e03752e0744963b3b3973f288440034bd37a2b001704f3be5a51dd.scope. Dec 13 14:30:14.780545 kubelet[1544]: E1213 14:30:14.780485 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:30:14.822550 env[1232]: time="2024-12-13T14:30:14.822481633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gx9r7,Uid:e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a,Namespace:kube-system,Attempt:0,} returns sandbox id \"a57b252c95e03752e0744963b3b3973f288440034bd37a2b001704f3be5a51dd\"" Dec 13 14:30:14.827982 env[1232]: time="2024-12-13T14:30:14.827514449Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 13 14:30:14.834372 systemd[1]: Started cri-containerd-45ef6df6b27f41bf48389413485b7e83106d3407fd145986577b10e635c98efa.scope. Dec 13 14:30:14.872190 env[1232]: time="2024-12-13T14:30:14.871808689Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rmxq2,Uid:694b9481-e699-4eda-920e-a367a2208211,Namespace:kube-system,Attempt:0,} returns sandbox id \"45ef6df6b27f41bf48389413485b7e83106d3407fd145986577b10e635c98efa\"" Dec 13 14:30:14.948274 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount191126507.mount: Deactivated successfully. Dec 13 14:30:15.780793 kubelet[1544]: E1213 14:30:15.780725 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:30:16.781734 kubelet[1544]: E1213 14:30:16.781663 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:30:17.782619 kubelet[1544]: E1213 14:30:17.782569 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:30:18.783316 kubelet[1544]: E1213 14:30:18.783201 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:30:19.784323 kubelet[1544]: E1213 14:30:19.784273 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:30:20.784974 kubelet[1544]: E1213 14:30:20.784915 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:30:20.924599 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3963501272.mount: Deactivated successfully. 
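The reconciler_common entries further up attach cilium-gx9r7's volumes (hostPath mounts such as bpf-maps and cni-path, the clustermesh-secrets secret, the cilium-config-path configmap, and a projected API token) before the two RunPodSandbox calls above return their sandbox ids. As a hedged illustration of how a couple of those volumes are typically declared with the Kubernetes Python client; the host paths below follow the stock Cilium chart defaults and are assumptions, not values taken from this log:

from kubernetes import client

# Hypothetical reconstruction of two of the volumes named in the reconciler entries.
bpf_maps = client.V1Volume(
    name="bpf-maps",
    host_path=client.V1HostPathVolumeSource(path="/sys/fs/bpf", type="DirectoryOrCreate"),
)
cilium_config = client.V1Volume(
    name="cilium-config-path",
    config_map=client.V1ConfigMapVolumeSource(name="cilium-config"),
)
mounts = [
    client.V1VolumeMount(name="bpf-maps", mount_path="/sys/fs/bpf"),
    client.V1VolumeMount(name="cilium-config-path", mount_path="/tmp/cilium/config-map"),
]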
Dec 13 14:30:21.785258 kubelet[1544]: E1213 14:30:21.785144 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:30:22.785516 kubelet[1544]: E1213 14:30:22.785398 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:30:23.786308 kubelet[1544]: E1213 14:30:23.786231 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:30:23.794323 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Dec 13 14:30:24.270238 env[1232]: time="2024-12-13T14:30:24.269729865Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:24.273312 env[1232]: time="2024-12-13T14:30:24.273243861Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:24.276134 env[1232]: time="2024-12-13T14:30:24.276070235Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:24.277024 env[1232]: time="2024-12-13T14:30:24.276973583Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Dec 13 14:30:24.280456 env[1232]: time="2024-12-13T14:30:24.280401381Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\"" Dec 13 14:30:24.282134 env[1232]: time="2024-12-13T14:30:24.282076173Z" level=info msg="CreateContainer within sandbox \"a57b252c95e03752e0744963b3b3973f288440034bd37a2b001704f3be5a51dd\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 14:30:24.309580 env[1232]: time="2024-12-13T14:30:24.309519322Z" level=info msg="CreateContainer within sandbox \"a57b252c95e03752e0744963b3b3973f288440034bd37a2b001704f3be5a51dd\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"04e6aa09c66fa25225b0f3768f0386e11e65444a0dbcc6300fa9bd6c8163633d\"" Dec 13 14:30:24.310797 env[1232]: time="2024-12-13T14:30:24.310753498Z" level=info msg="StartContainer for \"04e6aa09c66fa25225b0f3768f0386e11e65444a0dbcc6300fa9bd6c8163633d\"" Dec 13 14:30:24.347624 systemd[1]: run-containerd-runc-k8s.io-04e6aa09c66fa25225b0f3768f0386e11e65444a0dbcc6300fa9bd6c8163633d-runc.It8DJT.mount: Deactivated successfully. Dec 13 14:30:24.350650 systemd[1]: Started cri-containerd-04e6aa09c66fa25225b0f3768f0386e11e65444a0dbcc6300fa9bd6c8163633d.scope. Dec 13 14:30:24.399185 env[1232]: time="2024-12-13T14:30:24.399131830Z" level=info msg="StartContainer for \"04e6aa09c66fa25225b0f3768f0386e11e65444a0dbcc6300fa9bd6c8163633d\" returns successfully" Dec 13 14:30:24.410632 systemd[1]: cri-containerd-04e6aa09c66fa25225b0f3768f0386e11e65444a0dbcc6300fa9bd6c8163633d.scope: Deactivated successfully. 
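The PullImage request for the cilium image is issued at 14:30:14.827 (further up) and returns its image reference at 14:30:24.276 above, so the pull took roughly 9.4 seconds. A small sketch of that arithmetic, with the containerd timestamps truncated to microseconds:

from datetime import datetime

fmt = "%Y-%m-%dT%H:%M:%S.%fZ"
issued = datetime.strptime("2024-12-13T14:30:14.827514Z", fmt)
returned = datetime.strptime("2024-12-13T14:30:24.276973Z", fmt)
print(f"cilium image pull took {(returned - issued).total_seconds():.3f} s")  # ~9.449 s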
Dec 13 14:30:24.787160 kubelet[1544]: E1213 14:30:24.787082 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:30:25.298341 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-04e6aa09c66fa25225b0f3768f0386e11e65444a0dbcc6300fa9bd6c8163633d-rootfs.mount: Deactivated successfully. Dec 13 14:30:25.660363 systemd[1]: Started sshd@7-10.128.0.81:22-92.255.85.189:61516.service. Dec 13 14:30:25.787538 kubelet[1544]: E1213 14:30:25.787486 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:30:26.235970 env[1232]: time="2024-12-13T14:30:26.235659622Z" level=info msg="shim disconnected" id=04e6aa09c66fa25225b0f3768f0386e11e65444a0dbcc6300fa9bd6c8163633d Dec 13 14:30:26.235970 env[1232]: time="2024-12-13T14:30:26.235737072Z" level=warning msg="cleaning up after shim disconnected" id=04e6aa09c66fa25225b0f3768f0386e11e65444a0dbcc6300fa9bd6c8163633d namespace=k8s.io Dec 13 14:30:26.235970 env[1232]: time="2024-12-13T14:30:26.235754229Z" level=info msg="cleaning up dead shim" Dec 13 14:30:26.250187 env[1232]: time="2024-12-13T14:30:26.250128561Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:30:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1725 runtime=io.containerd.runc.v2\n" Dec 13 14:30:26.788559 kubelet[1544]: E1213 14:30:26.788512 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:30:27.041626 env[1232]: time="2024-12-13T14:30:27.041171911Z" level=info msg="CreateContainer within sandbox \"a57b252c95e03752e0744963b3b3973f288440034bd37a2b001704f3be5a51dd\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 14:30:27.084371 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1398938780.mount: Deactivated successfully. Dec 13 14:30:27.098078 env[1232]: time="2024-12-13T14:30:27.098025884Z" level=info msg="CreateContainer within sandbox \"a57b252c95e03752e0744963b3b3973f288440034bd37a2b001704f3be5a51dd\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b36477b479ea10d7afbbc8e86d8b29a334cd7c7efa37a54facc12a73aa26afcd\"" Dec 13 14:30:27.099084 env[1232]: time="2024-12-13T14:30:27.099042086Z" level=info msg="StartContainer for \"b36477b479ea10d7afbbc8e86d8b29a334cd7c7efa37a54facc12a73aa26afcd\"" Dec 13 14:30:27.157097 systemd[1]: Started cri-containerd-b36477b479ea10d7afbbc8e86d8b29a334cd7c7efa37a54facc12a73aa26afcd.scope. Dec 13 14:30:27.226656 env[1232]: time="2024-12-13T14:30:27.226589702Z" level=info msg="StartContainer for \"b36477b479ea10d7afbbc8e86d8b29a334cd7c7efa37a54facc12a73aa26afcd\" returns successfully" Dec 13 14:30:27.246586 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 14:30:27.246955 systemd[1]: Stopped systemd-sysctl.service. Dec 13 14:30:27.247173 systemd[1]: Stopping systemd-sysctl.service... Dec 13 14:30:27.251184 systemd[1]: Starting systemd-sysctl.service... Dec 13 14:30:27.260141 systemd[1]: cri-containerd-b36477b479ea10d7afbbc8e86d8b29a334cd7c7efa37a54facc12a73aa26afcd.scope: Deactivated successfully. Dec 13 14:30:27.273665 systemd[1]: Finished systemd-sysctl.service. 
Dec 13 14:30:27.389327 env[1232]: time="2024-12-13T14:30:27.388240722Z" level=info msg="shim disconnected" id=b36477b479ea10d7afbbc8e86d8b29a334cd7c7efa37a54facc12a73aa26afcd Dec 13 14:30:27.389327 env[1232]: time="2024-12-13T14:30:27.388312789Z" level=warning msg="cleaning up after shim disconnected" id=b36477b479ea10d7afbbc8e86d8b29a334cd7c7efa37a54facc12a73aa26afcd namespace=k8s.io Dec 13 14:30:27.389327 env[1232]: time="2024-12-13T14:30:27.388328243Z" level=info msg="cleaning up dead shim" Dec 13 14:30:27.411059 env[1232]: time="2024-12-13T14:30:27.411000775Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:30:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1790 runtime=io.containerd.runc.v2\n" Dec 13 14:30:27.499014 sshd[1723]: Invalid user user from 92.255.85.189 port 61516 Dec 13 14:30:27.695332 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b36477b479ea10d7afbbc8e86d8b29a334cd7c7efa37a54facc12a73aa26afcd-rootfs.mount: Deactivated successfully. Dec 13 14:30:27.695519 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2895065247.mount: Deactivated successfully. Dec 13 14:30:27.718916 sshd[1723]: Failed password for invalid user user from 92.255.85.189 port 61516 ssh2 Dec 13 14:30:27.789159 kubelet[1544]: E1213 14:30:27.789106 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:30:27.885858 sshd[1723]: Connection closed by invalid user user 92.255.85.189 port 61516 [preauth] Dec 13 14:30:27.887984 systemd[1]: sshd@7-10.128.0.81:22-92.255.85.189:61516.service: Deactivated successfully. Dec 13 14:30:28.043793 env[1232]: time="2024-12-13T14:30:28.043660487Z" level=info msg="CreateContainer within sandbox \"a57b252c95e03752e0744963b3b3973f288440034bd37a2b001704f3be5a51dd\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 14:30:28.084329 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount959573997.mount: Deactivated successfully. Dec 13 14:30:28.105626 env[1232]: time="2024-12-13T14:30:28.105556067Z" level=info msg="CreateContainer within sandbox \"a57b252c95e03752e0744963b3b3973f288440034bd37a2b001704f3be5a51dd\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ba64650d8fe17db1aa1ff7db254226e7429ff550887d9d5d52f8131a2cb1e25e\"" Dec 13 14:30:28.106634 env[1232]: time="2024-12-13T14:30:28.106591684Z" level=info msg="StartContainer for \"ba64650d8fe17db1aa1ff7db254226e7429ff550887d9d5d52f8131a2cb1e25e\"" Dec 13 14:30:28.144321 systemd[1]: Started cri-containerd-ba64650d8fe17db1aa1ff7db254226e7429ff550887d9d5d52f8131a2cb1e25e.scope. Dec 13 14:30:28.197655 env[1232]: time="2024-12-13T14:30:28.197599198Z" level=info msg="StartContainer for \"ba64650d8fe17db1aa1ff7db254226e7429ff550887d9d5d52f8131a2cb1e25e\" returns successfully" Dec 13 14:30:28.202223 systemd[1]: cri-containerd-ba64650d8fe17db1aa1ff7db254226e7429ff550887d9d5d52f8131a2cb1e25e.scope: Deactivated successfully. 
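The three sshd entries above record a password-guessing attempt against the node's public address: an invalid user name, one failed password, and a pre-auth disconnect, all from 92.255.85.189. A hedged sketch of picking such attempts out of a journal dump; the regular expression matches only the "Invalid user" form shown here and is not an exhaustive sshd parser:

import re

pattern = re.compile(r"sshd\[\d+\]: Invalid user (\S+) from (\S+) port (\d+)")

line = "Dec 13 14:30:27.499014 sshd[1723]: Invalid user user from 92.255.85.189 port 61516"
match = pattern.search(line)
if match:
    user, ip, port = match.groups()
    print(f"rejected pre-auth login as {user!r} from {ip}:{port}")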
Dec 13 14:30:28.272781 env[1232]: time="2024-12-13T14:30:28.271554003Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:28.385972 env[1232]: time="2024-12-13T14:30:28.385172874Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:28.388709 env[1232]: time="2024-12-13T14:30:28.388642796Z" level=info msg="shim disconnected" id=ba64650d8fe17db1aa1ff7db254226e7429ff550887d9d5d52f8131a2cb1e25e Dec 13 14:30:28.388709 env[1232]: time="2024-12-13T14:30:28.388696673Z" level=warning msg="cleaning up after shim disconnected" id=ba64650d8fe17db1aa1ff7db254226e7429ff550887d9d5d52f8131a2cb1e25e namespace=k8s.io Dec 13 14:30:28.388709 env[1232]: time="2024-12-13T14:30:28.388712344Z" level=info msg="cleaning up dead shim" Dec 13 14:30:28.389738 env[1232]: time="2024-12-13T14:30:28.389701788Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:28.395190 env[1232]: time="2024-12-13T14:30:28.395141303Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:28.395916 env[1232]: time="2024-12-13T14:30:28.395871604Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\"" Dec 13 14:30:28.399709 env[1232]: time="2024-12-13T14:30:28.399660320Z" level=info msg="CreateContainer within sandbox \"45ef6df6b27f41bf48389413485b7e83106d3407fd145986577b10e635c98efa\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 14:30:28.411543 env[1232]: time="2024-12-13T14:30:28.411476549Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:30:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1849 runtime=io.containerd.runc.v2\n" Dec 13 14:30:28.420756 env[1232]: time="2024-12-13T14:30:28.420694500Z" level=info msg="CreateContainer within sandbox \"45ef6df6b27f41bf48389413485b7e83106d3407fd145986577b10e635c98efa\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f58f1448dd5b856a95496e581379d8c0a5cf002a284b2f17345f11275ca18531\"" Dec 13 14:30:28.421626 env[1232]: time="2024-12-13T14:30:28.421578699Z" level=info msg="StartContainer for \"f58f1448dd5b856a95496e581379d8c0a5cf002a284b2f17345f11275ca18531\"" Dec 13 14:30:28.446017 systemd[1]: Started cri-containerd-f58f1448dd5b856a95496e581379d8c0a5cf002a284b2f17345f11275ca18531.scope. 
Dec 13 14:30:28.498663 env[1232]: time="2024-12-13T14:30:28.498585066Z" level=info msg="StartContainer for \"f58f1448dd5b856a95496e581379d8c0a5cf002a284b2f17345f11275ca18531\" returns successfully" Dec 13 14:30:28.791041 kubelet[1544]: E1213 14:30:28.790924 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:30:29.049948 env[1232]: time="2024-12-13T14:30:29.049812056Z" level=info msg="CreateContainer within sandbox \"a57b252c95e03752e0744963b3b3973f288440034bd37a2b001704f3be5a51dd\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 14:30:29.055726 kubelet[1544]: I1213 14:30:29.055637 1544 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rmxq2" podStartSLOduration=3.531574859 podStartE2EDuration="17.055613765s" podCreationTimestamp="2024-12-13 14:30:12 +0000 UTC" firstStartedPulling="2024-12-13 14:30:14.873514394 +0000 UTC m=+2.646204804" lastFinishedPulling="2024-12-13 14:30:28.39755328 +0000 UTC m=+16.170243710" observedRunningTime="2024-12-13 14:30:29.055489585 +0000 UTC m=+16.828180027" watchObservedRunningTime="2024-12-13 14:30:29.055613765 +0000 UTC m=+16.828304201" Dec 13 14:30:29.068702 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2227141144.mount: Deactivated successfully. Dec 13 14:30:29.079328 env[1232]: time="2024-12-13T14:30:29.079261612Z" level=info msg="CreateContainer within sandbox \"a57b252c95e03752e0744963b3b3973f288440034bd37a2b001704f3be5a51dd\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"570c20cfe82be5005e622dc35270580a793aca3a52a2567b60319076ea7c1ac9\"" Dec 13 14:30:29.080119 env[1232]: time="2024-12-13T14:30:29.080079049Z" level=info msg="StartContainer for \"570c20cfe82be5005e622dc35270580a793aca3a52a2567b60319076ea7c1ac9\"" Dec 13 14:30:29.107264 systemd[1]: Started cri-containerd-570c20cfe82be5005e622dc35270580a793aca3a52a2567b60319076ea7c1ac9.scope. Dec 13 14:30:29.147319 systemd[1]: cri-containerd-570c20cfe82be5005e622dc35270580a793aca3a52a2567b60319076ea7c1ac9.scope: Deactivated successfully. 
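The pod_startup_latency_tracker entry further up reports two figures for kube-proxy-rmxq2: podStartE2EDuration, measured from the pod's creation timestamp, and podStartSLOduration, which additionally excludes the time spent pulling images. Both can be reproduced from the timestamps in that same entry (copied below, truncated to microseconds):

from datetime import datetime, timezone

fmt = "%Y-%m-%d %H:%M:%S.%f %z"
created = datetime(2024, 12, 13, 14, 30, 12, tzinfo=timezone.utc)
pull_start = datetime.strptime("2024-12-13 14:30:14.873514 +0000", fmt)
pull_end = datetime.strptime("2024-12-13 14:30:28.397553 +0000", fmt)
running = datetime.strptime("2024-12-13 14:30:29.055613 +0000", fmt)

e2e = (running - created).total_seconds()            # ~17.056 s  (podStartE2EDuration)
slo = e2e - (pull_end - pull_start).total_seconds()  # ~3.532 s   (podStartSLOduration)
print(f"e2e={e2e:.3f}s slo={slo:.3f}s")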
Dec 13 14:30:29.151623 env[1232]: time="2024-12-13T14:30:29.151356736Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode6b0ebde_e32a_4cff_a94b_b2cb07fdbe8a.slice/cri-containerd-570c20cfe82be5005e622dc35270580a793aca3a52a2567b60319076ea7c1ac9.scope/memory.events\": no such file or directory" Dec 13 14:30:29.154865 env[1232]: time="2024-12-13T14:30:29.154806083Z" level=info msg="StartContainer for \"570c20cfe82be5005e622dc35270580a793aca3a52a2567b60319076ea7c1ac9\" returns successfully" Dec 13 14:30:29.182141 env[1232]: time="2024-12-13T14:30:29.182049171Z" level=info msg="shim disconnected" id=570c20cfe82be5005e622dc35270580a793aca3a52a2567b60319076ea7c1ac9 Dec 13 14:30:29.182141 env[1232]: time="2024-12-13T14:30:29.182113750Z" level=warning msg="cleaning up after shim disconnected" id=570c20cfe82be5005e622dc35270580a793aca3a52a2567b60319076ea7c1ac9 namespace=k8s.io Dec 13 14:30:29.182141 env[1232]: time="2024-12-13T14:30:29.182130199Z" level=info msg="cleaning up dead shim" Dec 13 14:30:29.194784 env[1232]: time="2024-12-13T14:30:29.194706336Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:30:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2063 runtime=io.containerd.runc.v2\n" Dec 13 14:30:29.694644 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-570c20cfe82be5005e622dc35270580a793aca3a52a2567b60319076ea7c1ac9-rootfs.mount: Deactivated successfully. Dec 13 14:30:29.791384 kubelet[1544]: E1213 14:30:29.791311 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:30:30.056292 env[1232]: time="2024-12-13T14:30:30.056152856Z" level=info msg="CreateContainer within sandbox \"a57b252c95e03752e0744963b3b3973f288440034bd37a2b001704f3be5a51dd\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 14:30:30.083670 env[1232]: time="2024-12-13T14:30:30.083599084Z" level=info msg="CreateContainer within sandbox \"a57b252c95e03752e0744963b3b3973f288440034bd37a2b001704f3be5a51dd\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"160fd5cc25ed8afc366265ee518fa7cceaf33c2450e1c9ec33fc57c49be43d43\"" Dec 13 14:30:30.084298 env[1232]: time="2024-12-13T14:30:30.084243888Z" level=info msg="StartContainer for \"160fd5cc25ed8afc366265ee518fa7cceaf33c2450e1c9ec33fc57c49be43d43\"" Dec 13 14:30:30.122332 systemd[1]: Started cri-containerd-160fd5cc25ed8afc366265ee518fa7cceaf33c2450e1c9ec33fc57c49be43d43.scope. Dec 13 14:30:30.176105 env[1232]: time="2024-12-13T14:30:30.176041604Z" level=info msg="StartContainer for \"160fd5cc25ed8afc366265ee518fa7cceaf33c2450e1c9ec33fc57c49be43d43\" returns successfully" Dec 13 14:30:30.386668 kubelet[1544]: I1213 14:30:30.386176 1544 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 14:30:30.695326 systemd[1]: run-containerd-runc-k8s.io-160fd5cc25ed8afc366265ee518fa7cceaf33c2450e1c9ec33fc57c49be43d43-runc.tCsrxD.mount: Deactivated successfully. 
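Between 14:30:24 and 14:30:29 the same cilium sandbox runs mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs and clean-cilium-state one after another, each created, started, and immediately deactivated, before the long-running cilium-agent container comes up above and the node reports ready. That matches the cilium pod's init-container chain; a hedged sketch of inspecting it with the Kubernetes Python client (assumes a reachable kubeconfig, which this log does not show):

from kubernetes import client, config

config.load_kube_config()
pod = client.CoreV1Api().read_namespaced_pod("cilium-gx9r7", "kube-system")
for c in pod.spec.init_containers or []:
    print("init:", c.name)   # mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state
for s in pod.status.init_container_statuses or []:
    outcome = s.state.terminated.exit_code if s.state.terminated else "running"
    print("status:", s.name, outcome)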
Dec 13 14:30:30.713874 kernel: Initializing XFRM netlink socket Dec 13 14:30:30.791683 kubelet[1544]: E1213 14:30:30.791633 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:30:31.078664 kubelet[1544]: I1213 14:30:31.078465 1544 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-gx9r7" podStartSLOduration=9.625887057 podStartE2EDuration="19.078350494s" podCreationTimestamp="2024-12-13 14:30:12 +0000 UTC" firstStartedPulling="2024-12-13 14:30:14.826674337 +0000 UTC m=+2.599364757" lastFinishedPulling="2024-12-13 14:30:24.279137772 +0000 UTC m=+12.051828194" observedRunningTime="2024-12-13 14:30:31.077499914 +0000 UTC m=+18.850190351" watchObservedRunningTime="2024-12-13 14:30:31.078350494 +0000 UTC m=+18.851040932" Dec 13 14:30:31.493309 kubelet[1544]: I1213 14:30:31.492906 1544 topology_manager.go:215] "Topology Admit Handler" podUID="a2aaf7b9-05f8-4f41-9715-6ca91cd897ab" podNamespace="default" podName="nginx-deployment-85f456d6dd-pz2xn" Dec 13 14:30:31.500797 systemd[1]: Created slice kubepods-besteffort-poda2aaf7b9_05f8_4f41_9715_6ca91cd897ab.slice. Dec 13 14:30:31.552806 kubelet[1544]: I1213 14:30:31.552723 1544 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zcrp\" (UniqueName: \"kubernetes.io/projected/a2aaf7b9-05f8-4f41-9715-6ca91cd897ab-kube-api-access-6zcrp\") pod \"nginx-deployment-85f456d6dd-pz2xn\" (UID: \"a2aaf7b9-05f8-4f41-9715-6ca91cd897ab\") " pod="default/nginx-deployment-85f456d6dd-pz2xn" Dec 13 14:30:31.792950 kubelet[1544]: E1213 14:30:31.792753 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:30:31.804919 env[1232]: time="2024-12-13T14:30:31.804842422Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-pz2xn,Uid:a2aaf7b9-05f8-4f41-9715-6ca91cd897ab,Namespace:default,Attempt:0,}" Dec 13 14:30:32.382463 systemd-networkd[1037]: cilium_host: Link UP Dec 13 14:30:32.390602 systemd-networkd[1037]: cilium_net: Link UP Dec 13 14:30:32.391728 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Dec 13 14:30:32.399542 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Dec 13 14:30:32.399563 systemd-networkd[1037]: cilium_net: Gained carrier Dec 13 14:30:32.400554 systemd-networkd[1037]: cilium_host: Gained carrier Dec 13 14:30:32.405721 systemd-networkd[1037]: cilium_net: Gained IPv6LL Dec 13 14:30:32.529734 systemd-networkd[1037]: cilium_host: Gained IPv6LL Dec 13 14:30:32.540610 systemd-networkd[1037]: cilium_vxlan: Link UP Dec 13 14:30:32.540628 systemd-networkd[1037]: cilium_vxlan: Gained carrier Dec 13 14:30:32.778905 kubelet[1544]: E1213 14:30:32.778828 1544 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:30:32.793775 kubelet[1544]: E1213 14:30:32.793694 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:30:32.811461 kernel: NET: Registered PF_ALG protocol family Dec 13 14:30:33.667191 systemd-networkd[1037]: lxc_health: Link UP Dec 13 14:30:33.683358 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Dec 13 14:30:33.686548 systemd-networkd[1037]: lxc_health: Gained carrier Dec 13 14:30:33.686804 systemd-networkd[1037]: cilium_vxlan: Gained IPv6LL Dec 13 
14:30:33.794496 kubelet[1544]: E1213 14:30:33.794380 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:30:34.367958 systemd-networkd[1037]: lxcad411104d066: Link UP Dec 13 14:30:34.388557 kernel: eth0: renamed from tmp954aa Dec 13 14:30:34.405657 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcad411104d066: link becomes ready Dec 13 14:30:34.405928 systemd-networkd[1037]: lxcad411104d066: Gained carrier Dec 13 14:30:34.795007 kubelet[1544]: E1213 14:30:34.794906 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:30:35.282074 systemd-networkd[1037]: lxc_health: Gained IPv6LL Dec 13 14:30:35.602001 systemd-networkd[1037]: lxcad411104d066: Gained IPv6LL Dec 13 14:30:35.795099 kubelet[1544]: E1213 14:30:35.795052 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:30:36.796500 kubelet[1544]: E1213 14:30:36.796433 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:30:37.797218 kubelet[1544]: E1213 14:30:37.797156 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:30:38.782560 update_engine[1221]: I1213 14:30:38.782498 1221 update_attempter.cc:509] Updating boot flags... Dec 13 14:30:38.800662 kubelet[1544]: E1213 14:30:38.800620 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:30:39.067792 env[1232]: time="2024-12-13T14:30:39.067357634Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:30:39.067792 env[1232]: time="2024-12-13T14:30:39.067437367Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:30:39.068414 env[1232]: time="2024-12-13T14:30:39.067458906Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:30:39.068619 env[1232]: time="2024-12-13T14:30:39.068537158Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/954aa88b9ba522738c14a952b127a06060a7059b637e3dad5ebda3a3786c5371 pid=2596 runtime=io.containerd.runc.v2 Dec 13 14:30:39.100133 systemd[1]: run-containerd-runc-k8s.io-954aa88b9ba522738c14a952b127a06060a7059b637e3dad5ebda3a3786c5371-runc.gm2Olw.mount: Deactivated successfully. Dec 13 14:30:39.105892 systemd[1]: Started cri-containerd-954aa88b9ba522738c14a952b127a06060a7059b637e3dad5ebda3a3786c5371.scope. 
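The networkd and kernel entries above bring up Cilium's datapath devices: cilium_host/cilium_net as the host-side pair, cilium_vxlan for the overlay, lxc_health for the health endpoint, and one lxc* veth per pod (lxcad411104d066 belongs to the nginx pod whose tmp954aa interface is renamed to eth0 and whose sandbox starts above). A hedged sketch of checking those devices from the node itself, reading sysfs only:

import pathlib

for dev in ("cilium_host", "cilium_net", "cilium_vxlan", "lxc_health", "lxcad411104d066"):
    state = pathlib.Path("/sys/class/net", dev, "operstate")
    print(dev, state.read_text().strip() if state.exists() else "missing")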
Dec 13 14:30:39.164080 env[1232]: time="2024-12-13T14:30:39.164022321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-pz2xn,Uid:a2aaf7b9-05f8-4f41-9715-6ca91cd897ab,Namespace:default,Attempt:0,} returns sandbox id \"954aa88b9ba522738c14a952b127a06060a7059b637e3dad5ebda3a3786c5371\"" Dec 13 14:30:39.166823 env[1232]: time="2024-12-13T14:30:39.166782282Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Dec 13 14:30:39.804536 kubelet[1544]: E1213 14:30:39.804463 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:30:40.805251 kubelet[1544]: E1213 14:30:40.805175 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:30:41.790810 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount861180208.mount: Deactivated successfully. Dec 13 14:30:41.805819 kubelet[1544]: E1213 14:30:41.805738 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:30:42.806675 kubelet[1544]: E1213 14:30:42.806576 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:30:43.569192 env[1232]: time="2024-12-13T14:30:43.569099912Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:43.572456 env[1232]: time="2024-12-13T14:30:43.572367666Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:43.575525 env[1232]: time="2024-12-13T14:30:43.575465169Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:43.578349 env[1232]: time="2024-12-13T14:30:43.578301768Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:43.579851 env[1232]: time="2024-12-13T14:30:43.579806844Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\"" Dec 13 14:30:43.584274 env[1232]: time="2024-12-13T14:30:43.584229211Z" level=info msg="CreateContainer within sandbox \"954aa88b9ba522738c14a952b127a06060a7059b637e3dad5ebda3a3786c5371\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Dec 13 14:30:43.611499 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3760754560.mount: Deactivated successfully. Dec 13 14:30:43.624100 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2166285862.mount: Deactivated successfully. 
Dec 13 14:30:43.628850 env[1232]: time="2024-12-13T14:30:43.628786593Z" level=info msg="CreateContainer within sandbox \"954aa88b9ba522738c14a952b127a06060a7059b637e3dad5ebda3a3786c5371\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"b0cd3509219b3b5af06c7510e3b523e11cc654f1e7254c882d045c4de704ee72\"" Dec 13 14:30:43.629790 env[1232]: time="2024-12-13T14:30:43.629738835Z" level=info msg="StartContainer for \"b0cd3509219b3b5af06c7510e3b523e11cc654f1e7254c882d045c4de704ee72\"" Dec 13 14:30:43.657040 systemd[1]: Started cri-containerd-b0cd3509219b3b5af06c7510e3b523e11cc654f1e7254c882d045c4de704ee72.scope. Dec 13 14:30:43.714617 env[1232]: time="2024-12-13T14:30:43.714557970Z" level=info msg="StartContainer for \"b0cd3509219b3b5af06c7510e3b523e11cc654f1e7254c882d045c4de704ee72\" returns successfully" Dec 13 14:30:43.807144 kubelet[1544]: E1213 14:30:43.807071 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:30:44.109985 kubelet[1544]: I1213 14:30:44.109896 1544 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-85f456d6dd-pz2xn" podStartSLOduration=8.694326724 podStartE2EDuration="13.109868118s" podCreationTimestamp="2024-12-13 14:30:31 +0000 UTC" firstStartedPulling="2024-12-13 14:30:39.16635395 +0000 UTC m=+26.939044378" lastFinishedPulling="2024-12-13 14:30:43.581895347 +0000 UTC m=+31.354585772" observedRunningTime="2024-12-13 14:30:44.109725852 +0000 UTC m=+31.882416290" watchObservedRunningTime="2024-12-13 14:30:44.109868118 +0000 UTC m=+31.882558549" Dec 13 14:30:44.807860 kubelet[1544]: E1213 14:30:44.807772 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:30:45.808267 kubelet[1544]: E1213 14:30:45.808187 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:30:46.809255 kubelet[1544]: E1213 14:30:46.809180 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:30:47.810384 kubelet[1544]: E1213 14:30:47.810309 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:30:48.811108 kubelet[1544]: E1213 14:30:48.811037 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:30:49.811781 kubelet[1544]: E1213 14:30:49.811709 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:30:50.812800 kubelet[1544]: E1213 14:30:50.812723 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:30:51.814051 kubelet[1544]: E1213 14:30:51.813978 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:30:51.953097 kubelet[1544]: I1213 14:30:51.953034 1544 topology_manager.go:215] "Topology Admit Handler" podUID="6ed84e47-9cef-476e-a388-2a803ae82672" podNamespace="default" podName="nfs-server-provisioner-0" Dec 13 14:30:51.960931 systemd[1]: Created slice kubepods-besteffort-pod6ed84e47_9cef_476e_a388_2a803ae82672.slice. 
Dec 13 14:30:52.006147 kubelet[1544]: I1213 14:30:52.006073 1544 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/6ed84e47-9cef-476e-a388-2a803ae82672-data\") pod \"nfs-server-provisioner-0\" (UID: \"6ed84e47-9cef-476e-a388-2a803ae82672\") " pod="default/nfs-server-provisioner-0" Dec 13 14:30:52.006147 kubelet[1544]: I1213 14:30:52.006143 1544 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhhk7\" (UniqueName: \"kubernetes.io/projected/6ed84e47-9cef-476e-a388-2a803ae82672-kube-api-access-zhhk7\") pod \"nfs-server-provisioner-0\" (UID: \"6ed84e47-9cef-476e-a388-2a803ae82672\") " pod="default/nfs-server-provisioner-0" Dec 13 14:30:52.265605 env[1232]: time="2024-12-13T14:30:52.265543273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:6ed84e47-9cef-476e-a388-2a803ae82672,Namespace:default,Attempt:0,}" Dec 13 14:30:52.314494 systemd-networkd[1037]: lxcc94625c8c3bb: Link UP Dec 13 14:30:52.323475 kernel: eth0: renamed from tmp888b0 Dec 13 14:30:52.340370 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 14:30:52.355535 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcc94625c8c3bb: link becomes ready Dec 13 14:30:52.356882 systemd-networkd[1037]: lxcc94625c8c3bb: Gained carrier Dec 13 14:30:52.556680 env[1232]: time="2024-12-13T14:30:52.556111922Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:30:52.556680 env[1232]: time="2024-12-13T14:30:52.556244562Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:30:52.556680 env[1232]: time="2024-12-13T14:30:52.556287213Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:30:52.557203 env[1232]: time="2024-12-13T14:30:52.557126820Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/888b0502e12c449fefb62bb21058e803c719c9e5cd756383b28f951febdcabd6 pid=2718 runtime=io.containerd.runc.v2 Dec 13 14:30:52.588786 systemd[1]: Started cri-containerd-888b0502e12c449fefb62bb21058e803c719c9e5cd756383b28f951febdcabd6.scope. Dec 13 14:30:52.655450 env[1232]: time="2024-12-13T14:30:52.655374693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:6ed84e47-9cef-476e-a388-2a803ae82672,Namespace:default,Attempt:0,} returns sandbox id \"888b0502e12c449fefb62bb21058e803c719c9e5cd756383b28f951febdcabd6\"" Dec 13 14:30:52.658238 env[1232]: time="2024-12-13T14:30:52.658170932Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Dec 13 14:30:52.778956 kubelet[1544]: E1213 14:30:52.778884 1544 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:30:52.815071 kubelet[1544]: E1213 14:30:52.814841 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:30:53.123631 systemd[1]: run-containerd-runc-k8s.io-888b0502e12c449fefb62bb21058e803c719c9e5cd756383b28f951febdcabd6-runc.dWdxbo.mount: Deactivated successfully. 
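Unlike the cilium pod, nfs-server-provisioner-0 is attached only an emptyDir scratch volume ("data") and a projected service-account token in the reconciler entries above. A hedged sketch of that pair with the Kubernetes Python client (the token path below is the conventional default, not read from this log):

from kubernetes import client

data_volume = client.V1Volume(name="data", empty_dir=client.V1EmptyDirVolumeSource())
token_volume = client.V1Volume(
    name="kube-api-access-zhhk7",
    projected=client.V1ProjectedVolumeSource(sources=[
        client.V1VolumeProjection(
            service_account_token=client.V1ServiceAccountTokenProjection(path="token")
        ),
    ]),
)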
Dec 13 14:30:53.815828 kubelet[1544]: E1213 14:30:53.815775 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:30:53.905589 systemd-networkd[1037]: lxcc94625c8c3bb: Gained IPv6LL Dec 13 14:30:54.816302 kubelet[1544]: E1213 14:30:54.816246 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:30:55.425302 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1731706827.mount: Deactivated successfully. Dec 13 14:30:55.817442 kubelet[1544]: E1213 14:30:55.817294 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:30:56.818393 kubelet[1544]: E1213 14:30:56.818316 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:30:57.818984 kubelet[1544]: E1213 14:30:57.818926 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:30:57.888742 env[1232]: time="2024-12-13T14:30:57.888667083Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:57.896747 env[1232]: time="2024-12-13T14:30:57.896687686Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:57.899568 env[1232]: time="2024-12-13T14:30:57.899511800Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:57.901947 env[1232]: time="2024-12-13T14:30:57.901904248Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:57.902857 env[1232]: time="2024-12-13T14:30:57.902806243Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Dec 13 14:30:57.907687 env[1232]: time="2024-12-13T14:30:57.907618067Z" level=info msg="CreateContainer within sandbox \"888b0502e12c449fefb62bb21058e803c719c9e5cd756383b28f951febdcabd6\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Dec 13 14:30:57.924925 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount559276486.mount: Deactivated successfully. 
Dec 13 14:30:57.930629 env[1232]: time="2024-12-13T14:30:57.930574381Z" level=info msg="CreateContainer within sandbox \"888b0502e12c449fefb62bb21058e803c719c9e5cd756383b28f951febdcabd6\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"80b717e78d75b69e7a4633ead26ee73d5248092e9ecc1d38bb55faad04198901\"" Dec 13 14:30:57.931780 env[1232]: time="2024-12-13T14:30:57.931739080Z" level=info msg="StartContainer for \"80b717e78d75b69e7a4633ead26ee73d5248092e9ecc1d38bb55faad04198901\"" Dec 13 14:30:57.970150 systemd[1]: run-containerd-runc-k8s.io-80b717e78d75b69e7a4633ead26ee73d5248092e9ecc1d38bb55faad04198901-runc.XYYce5.mount: Deactivated successfully. Dec 13 14:30:57.976179 systemd[1]: Started cri-containerd-80b717e78d75b69e7a4633ead26ee73d5248092e9ecc1d38bb55faad04198901.scope. Dec 13 14:30:58.016372 env[1232]: time="2024-12-13T14:30:58.016313017Z" level=info msg="StartContainer for \"80b717e78d75b69e7a4633ead26ee73d5248092e9ecc1d38bb55faad04198901\" returns successfully" Dec 13 14:30:58.178451 kubelet[1544]: I1213 14:30:58.178358 1544 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.9310702819999999 podStartE2EDuration="7.178338916s" podCreationTimestamp="2024-12-13 14:30:51 +0000 UTC" firstStartedPulling="2024-12-13 14:30:52.657671371 +0000 UTC m=+40.430361799" lastFinishedPulling="2024-12-13 14:30:57.90494001 +0000 UTC m=+45.677630433" observedRunningTime="2024-12-13 14:30:58.177869343 +0000 UTC m=+45.950559777" watchObservedRunningTime="2024-12-13 14:30:58.178338916 +0000 UTC m=+45.951029354" Dec 13 14:30:58.819233 kubelet[1544]: E1213 14:30:58.819158 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:30:59.819480 kubelet[1544]: E1213 14:30:59.819407 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:31:00.819857 kubelet[1544]: E1213 14:31:00.819790 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:31:01.821015 kubelet[1544]: E1213 14:31:01.820942 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:31:02.821158 kubelet[1544]: E1213 14:31:02.821091 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:31:03.821920 kubelet[1544]: E1213 14:31:03.821852 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:31:04.823112 kubelet[1544]: E1213 14:31:04.823042 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:31:05.823840 kubelet[1544]: E1213 14:31:05.823770 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:31:06.824601 kubelet[1544]: E1213 14:31:06.824527 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:31:07.557044 kubelet[1544]: I1213 14:31:07.556991 1544 topology_manager.go:215] "Topology Admit Handler" podUID="905b0972-acce-4742-ba0c-092bda9175fb" podNamespace="default" podName="test-pod-1" Dec 13 14:31:07.564604 systemd[1]: Created slice 
kubepods-besteffort-pod905b0972_acce_4742_ba0c_092bda9175fb.slice. Dec 13 14:31:07.615407 kubelet[1544]: I1213 14:31:07.615357 1544 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-61a18537-f1c2-4a66-b27f-3b99ef380632\" (UniqueName: \"kubernetes.io/nfs/905b0972-acce-4742-ba0c-092bda9175fb-pvc-61a18537-f1c2-4a66-b27f-3b99ef380632\") pod \"test-pod-1\" (UID: \"905b0972-acce-4742-ba0c-092bda9175fb\") " pod="default/test-pod-1" Dec 13 14:31:07.615663 kubelet[1544]: I1213 14:31:07.615430 1544 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsknm\" (UniqueName: \"kubernetes.io/projected/905b0972-acce-4742-ba0c-092bda9175fb-kube-api-access-xsknm\") pod \"test-pod-1\" (UID: \"905b0972-acce-4742-ba0c-092bda9175fb\") " pod="default/test-pod-1" Dec 13 14:31:07.765482 kernel: FS-Cache: Loaded Dec 13 14:31:07.825733 kubelet[1544]: E1213 14:31:07.825518 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:31:07.829469 kernel: RPC: Registered named UNIX socket transport module. Dec 13 14:31:07.829602 kernel: RPC: Registered udp transport module. Dec 13 14:31:07.829645 kernel: RPC: Registered tcp transport module. Dec 13 14:31:07.834306 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Dec 13 14:31:07.920464 kernel: FS-Cache: Netfs 'nfs' registered for caching Dec 13 14:31:08.149076 kernel: NFS: Registering the id_resolver key type Dec 13 14:31:08.149273 kernel: Key type id_resolver registered Dec 13 14:31:08.149325 kernel: Key type id_legacy registered Dec 13 14:31:08.224193 nfsidmap[2844]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'c.flatcar-212911.internal' Dec 13 14:31:08.236567 nfsidmap[2845]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'c.flatcar-212911.internal' Dec 13 14:31:08.470863 env[1232]: time="2024-12-13T14:31:08.470796348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:905b0972-acce-4742-ba0c-092bda9175fb,Namespace:default,Attempt:0,}" Dec 13 14:31:08.523269 systemd-networkd[1037]: lxcdbda50293231: Link UP Dec 13 14:31:08.536584 kernel: eth0: renamed from tmpc8e5c Dec 13 14:31:08.548583 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 14:31:08.548713 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcdbda50293231: link becomes ready Dec 13 14:31:08.555783 systemd-networkd[1037]: lxcdbda50293231: Gained carrier Dec 13 14:31:08.725484 env[1232]: time="2024-12-13T14:31:08.725358940Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:31:08.725484 env[1232]: time="2024-12-13T14:31:08.725435096Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:31:08.725828 env[1232]: time="2024-12-13T14:31:08.725464552Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:31:08.727394 env[1232]: time="2024-12-13T14:31:08.726118401Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c8e5cb022a9db49a2a2c1776314c35e9c183f9fe5ca604c285c189e53c4fe962 pid=2869 runtime=io.containerd.runc.v2 Dec 13 14:31:08.759078 systemd[1]: Started cri-containerd-c8e5cb022a9db49a2a2c1776314c35e9c183f9fe5ca604c285c189e53c4fe962.scope. Dec 13 14:31:08.820880 env[1232]: time="2024-12-13T14:31:08.820240811Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:905b0972-acce-4742-ba0c-092bda9175fb,Namespace:default,Attempt:0,} returns sandbox id \"c8e5cb022a9db49a2a2c1776314c35e9c183f9fe5ca604c285c189e53c4fe962\"" Dec 13 14:31:08.823160 env[1232]: time="2024-12-13T14:31:08.823107675Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Dec 13 14:31:08.826253 kubelet[1544]: E1213 14:31:08.826187 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:31:09.022120 env[1232]: time="2024-12-13T14:31:09.022054855Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:31:09.024680 env[1232]: time="2024-12-13T14:31:09.024629423Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:31:09.027498 env[1232]: time="2024-12-13T14:31:09.027452380Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:31:09.030100 env[1232]: time="2024-12-13T14:31:09.030057323Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:31:09.031027 env[1232]: time="2024-12-13T14:31:09.030974434Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\"" Dec 13 14:31:09.034470 env[1232]: time="2024-12-13T14:31:09.034394903Z" level=info msg="CreateContainer within sandbox \"c8e5cb022a9db49a2a2c1776314c35e9c183f9fe5ca604c285c189e53c4fe962\" for container &ContainerMetadata{Name:test,Attempt:0,}" Dec 13 14:31:09.058263 env[1232]: time="2024-12-13T14:31:09.058197322Z" level=info msg="CreateContainer within sandbox \"c8e5cb022a9db49a2a2c1776314c35e9c183f9fe5ca604c285c189e53c4fe962\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"928badd632b1dad04bc939456f585c33ab66b8e584bb2364c6d9326b0435515a\"" Dec 13 14:31:09.059411 env[1232]: time="2024-12-13T14:31:09.059361335Z" level=info msg="StartContainer for \"928badd632b1dad04bc939456f585c33ab66b8e584bb2364c6d9326b0435515a\"" Dec 13 14:31:09.088266 systemd[1]: Started cri-containerd-928badd632b1dad04bc939456f585c33ab66b8e584bb2364c6d9326b0435515a.scope. 
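The nfsidmap warnings a little further up ("does not map into domain 'c.flatcar-212911.internal'") mean the NFSv4 id mapper on this node cannot translate owner names sent by nfs-server-provisioner because the two sides disagree on the NFSv4 domain; when the mapping fails, those names fall back to nobody. One conventional remedy is pinning Domain in /etc/idmapd.conf; the sketch below only generates such a file, and the domain value is an assumption, not taken from this system:

import configparser

idmapd = configparser.ConfigParser()
idmapd.optionxform = str            # keep the conventional "Domain" capitalisation
idmapd["General"] = {"Domain": "default.svc.cluster.local"}
with open("idmapd.conf.example", "w") as fh:
    idmapd.write(fh)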
Dec 13 14:31:09.137481 env[1232]: time="2024-12-13T14:31:09.136860866Z" level=info msg="StartContainer for \"928badd632b1dad04bc939456f585c33ab66b8e584bb2364c6d9326b0435515a\" returns successfully" Dec 13 14:31:09.204568 kubelet[1544]: I1213 14:31:09.204412 1544 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=16.994107535 podStartE2EDuration="17.204383873s" podCreationTimestamp="2024-12-13 14:30:52 +0000 UTC" firstStartedPulling="2024-12-13 14:31:08.82244309 +0000 UTC m=+56.595133521" lastFinishedPulling="2024-12-13 14:31:09.032719425 +0000 UTC m=+56.805409859" observedRunningTime="2024-12-13 14:31:09.204006218 +0000 UTC m=+56.976696657" watchObservedRunningTime="2024-12-13 14:31:09.204383873 +0000 UTC m=+56.977074308" Dec 13 14:31:09.827144 kubelet[1544]: E1213 14:31:09.827066 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:31:10.417874 systemd-networkd[1037]: lxcdbda50293231: Gained IPv6LL Dec 13 14:31:10.828182 kubelet[1544]: E1213 14:31:10.828108 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:31:11.828540 kubelet[1544]: E1213 14:31:11.828447 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:31:12.553832 systemd[1]: run-containerd-runc-k8s.io-160fd5cc25ed8afc366265ee518fa7cceaf33c2450e1c9ec33fc57c49be43d43-runc.y34mUl.mount: Deactivated successfully. Dec 13 14:31:12.579221 env[1232]: time="2024-12-13T14:31:12.579138241Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 14:31:12.587181 env[1232]: time="2024-12-13T14:31:12.587122344Z" level=info msg="StopContainer for \"160fd5cc25ed8afc366265ee518fa7cceaf33c2450e1c9ec33fc57c49be43d43\" with timeout 2 (s)" Dec 13 14:31:12.587597 env[1232]: time="2024-12-13T14:31:12.587554382Z" level=info msg="Stop container \"160fd5cc25ed8afc366265ee518fa7cceaf33c2450e1c9ec33fc57c49be43d43\" with signal terminated" Dec 13 14:31:12.597246 systemd-networkd[1037]: lxc_health: Link DOWN Dec 13 14:31:12.597262 systemd-networkd[1037]: lxc_health: Lost carrier Dec 13 14:31:12.626212 systemd[1]: cri-containerd-160fd5cc25ed8afc366265ee518fa7cceaf33c2450e1c9ec33fc57c49be43d43.scope: Deactivated successfully. Dec 13 14:31:12.626708 systemd[1]: cri-containerd-160fd5cc25ed8afc366265ee518fa7cceaf33c2450e1c9ec33fc57c49be43d43.scope: Consumed 8.651s CPU time. Dec 13 14:31:12.656389 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-160fd5cc25ed8afc366265ee518fa7cceaf33c2450e1c9ec33fc57c49be43d43-rootfs.mount: Deactivated successfully. 
Dec 13 14:31:12.779290 kubelet[1544]: E1213 14:31:12.779220 1544 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:31:12.829355 kubelet[1544]: E1213 14:31:12.829234 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:31:12.932465 kubelet[1544]: E1213 14:31:12.932401 1544 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 14:31:13.831026 kubelet[1544]: E1213 14:31:13.830750 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:31:14.088529 kubelet[1544]: I1213 14:31:14.088367 1544 setters.go:580] "Node became not ready" node="10.128.0.81" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T14:31:14Z","lastTransitionTime":"2024-12-13T14:31:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Dec 13 14:31:14.460911 env[1232]: time="2024-12-13T14:31:14.460827446Z" level=info msg="shim disconnected" id=160fd5cc25ed8afc366265ee518fa7cceaf33c2450e1c9ec33fc57c49be43d43 Dec 13 14:31:14.461543 env[1232]: time="2024-12-13T14:31:14.460915901Z" level=warning msg="cleaning up after shim disconnected" id=160fd5cc25ed8afc366265ee518fa7cceaf33c2450e1c9ec33fc57c49be43d43 namespace=k8s.io Dec 13 14:31:14.461543 env[1232]: time="2024-12-13T14:31:14.460933200Z" level=info msg="cleaning up dead shim" Dec 13 14:31:14.473078 env[1232]: time="2024-12-13T14:31:14.473019166Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:31:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3003 runtime=io.containerd.runc.v2\n" Dec 13 14:31:14.475968 env[1232]: time="2024-12-13T14:31:14.475905052Z" level=info msg="StopContainer for \"160fd5cc25ed8afc366265ee518fa7cceaf33c2450e1c9ec33fc57c49be43d43\" returns successfully" Dec 13 14:31:14.477000 env[1232]: time="2024-12-13T14:31:14.476940154Z" level=info msg="StopPodSandbox for \"a57b252c95e03752e0744963b3b3973f288440034bd37a2b001704f3be5a51dd\"" Dec 13 14:31:14.477158 env[1232]: time="2024-12-13T14:31:14.477029151Z" level=info msg="Container to stop \"160fd5cc25ed8afc366265ee518fa7cceaf33c2450e1c9ec33fc57c49be43d43\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:31:14.477158 env[1232]: time="2024-12-13T14:31:14.477054289Z" level=info msg="Container to stop \"04e6aa09c66fa25225b0f3768f0386e11e65444a0dbcc6300fa9bd6c8163633d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:31:14.477158 env[1232]: time="2024-12-13T14:31:14.477074672Z" level=info msg="Container to stop \"ba64650d8fe17db1aa1ff7db254226e7429ff550887d9d5d52f8131a2cb1e25e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:31:14.477158 env[1232]: time="2024-12-13T14:31:14.477093081Z" level=info msg="Container to stop \"570c20cfe82be5005e622dc35270580a793aca3a52a2567b60319076ea7c1ac9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:31:14.477158 env[1232]: time="2024-12-13T14:31:14.477114883Z" level=info msg="Container to stop \"b36477b479ea10d7afbbc8e86d8b29a334cd7c7efa37a54facc12a73aa26afcd\" must be in running or unknown state, current 
state \"CONTAINER_EXITED\"" Dec 13 14:31:14.480452 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a57b252c95e03752e0744963b3b3973f288440034bd37a2b001704f3be5a51dd-shm.mount: Deactivated successfully. Dec 13 14:31:14.490346 systemd[1]: cri-containerd-a57b252c95e03752e0744963b3b3973f288440034bd37a2b001704f3be5a51dd.scope: Deactivated successfully. Dec 13 14:31:14.520753 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a57b252c95e03752e0744963b3b3973f288440034bd37a2b001704f3be5a51dd-rootfs.mount: Deactivated successfully. Dec 13 14:31:14.526796 env[1232]: time="2024-12-13T14:31:14.526736903Z" level=info msg="shim disconnected" id=a57b252c95e03752e0744963b3b3973f288440034bd37a2b001704f3be5a51dd Dec 13 14:31:14.527678 env[1232]: time="2024-12-13T14:31:14.527642137Z" level=warning msg="cleaning up after shim disconnected" id=a57b252c95e03752e0744963b3b3973f288440034bd37a2b001704f3be5a51dd namespace=k8s.io Dec 13 14:31:14.527873 env[1232]: time="2024-12-13T14:31:14.527829451Z" level=info msg="cleaning up dead shim" Dec 13 14:31:14.539301 env[1232]: time="2024-12-13T14:31:14.539239800Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:31:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3035 runtime=io.containerd.runc.v2\n" Dec 13 14:31:14.539745 env[1232]: time="2024-12-13T14:31:14.539696319Z" level=info msg="TearDown network for sandbox \"a57b252c95e03752e0744963b3b3973f288440034bd37a2b001704f3be5a51dd\" successfully" Dec 13 14:31:14.539745 env[1232]: time="2024-12-13T14:31:14.539737288Z" level=info msg="StopPodSandbox for \"a57b252c95e03752e0744963b3b3973f288440034bd37a2b001704f3be5a51dd\" returns successfully" Dec 13 14:31:14.564175 kubelet[1544]: I1213 14:31:14.564125 1544 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a-xtables-lock\") pod \"e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a\" (UID: \"e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a\") " Dec 13 14:31:14.564175 kubelet[1544]: I1213 14:31:14.564180 1544 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a-cilium-run\") pod \"e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a\" (UID: \"e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a\") " Dec 13 14:31:14.564531 kubelet[1544]: I1213 14:31:14.564209 1544 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a-cilium-cgroup\") pod \"e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a\" (UID: \"e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a\") " Dec 13 14:31:14.564531 kubelet[1544]: I1213 14:31:14.564234 1544 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a-lib-modules\") pod \"e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a\" (UID: \"e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a\") " Dec 13 14:31:14.564531 kubelet[1544]: I1213 14:31:14.564267 1544 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a-clustermesh-secrets\") pod \"e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a\" (UID: \"e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a\") " Dec 13 14:31:14.564531 kubelet[1544]: I1213 14:31:14.564291 1544 reconciler_common.go:161] 
"operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a-etc-cni-netd\") pod \"e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a\" (UID: \"e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a\") " Dec 13 14:31:14.564531 kubelet[1544]: I1213 14:31:14.564317 1544 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-76q96\" (UniqueName: \"kubernetes.io/projected/e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a-kube-api-access-76q96\") pod \"e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a\" (UID: \"e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a\") " Dec 13 14:31:14.564531 kubelet[1544]: I1213 14:31:14.564346 1544 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a-host-proc-sys-net\") pod \"e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a\" (UID: \"e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a\") " Dec 13 14:31:14.564864 kubelet[1544]: I1213 14:31:14.564371 1544 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a-hostproc\") pod \"e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a\" (UID: \"e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a\") " Dec 13 14:31:14.564864 kubelet[1544]: I1213 14:31:14.564393 1544 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a-cni-path\") pod \"e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a\" (UID: \"e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a\") " Dec 13 14:31:14.564864 kubelet[1544]: I1213 14:31:14.564443 1544 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a-host-proc-sys-kernel\") pod \"e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a\" (UID: \"e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a\") " Dec 13 14:31:14.564864 kubelet[1544]: I1213 14:31:14.564476 1544 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a-hubble-tls\") pod \"e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a\" (UID: \"e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a\") " Dec 13 14:31:14.564864 kubelet[1544]: I1213 14:31:14.564502 1544 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a-bpf-maps\") pod \"e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a\" (UID: \"e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a\") " Dec 13 14:31:14.564864 kubelet[1544]: I1213 14:31:14.564533 1544 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a-cilium-config-path\") pod \"e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a\" (UID: \"e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a\") " Dec 13 14:31:14.568804 kubelet[1544]: I1213 14:31:14.567628 1544 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a" (UID: "e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 14:31:14.568804 kubelet[1544]: I1213 14:31:14.567710 1544 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a" (UID: "e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:31:14.568804 kubelet[1544]: I1213 14:31:14.567749 1544 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a-hostproc" (OuterVolumeSpecName: "hostproc") pod "e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a" (UID: "e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:31:14.568804 kubelet[1544]: I1213 14:31:14.567776 1544 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a-cni-path" (OuterVolumeSpecName: "cni-path") pod "e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a" (UID: "e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:31:14.568804 kubelet[1544]: I1213 14:31:14.567798 1544 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a" (UID: "e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:31:14.569789 kubelet[1544]: I1213 14:31:14.569287 1544 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a" (UID: "e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:31:14.569789 kubelet[1544]: I1213 14:31:14.569320 1544 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a" (UID: "e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:31:14.569789 kubelet[1544]: I1213 14:31:14.569287 1544 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a" (UID: "e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:31:14.569789 kubelet[1544]: I1213 14:31:14.569355 1544 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a" (UID: "e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:31:14.569789 kubelet[1544]: I1213 14:31:14.569380 1544 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a" (UID: "e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:31:14.570211 kubelet[1544]: I1213 14:31:14.569408 1544 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a" (UID: "e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:31:14.575794 systemd[1]: var-lib-kubelet-pods-e6b0ebde\x2de32a\x2d4cff\x2da94b\x2db2cb07fdbe8a-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 14:31:14.579817 kubelet[1544]: I1213 14:31:14.577256 1544 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a" (UID: "e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:31:14.579817 kubelet[1544]: I1213 14:31:14.577951 1544 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a" (UID: "e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 14:31:14.581303 systemd[1]: var-lib-kubelet-pods-e6b0ebde\x2de32a\x2d4cff\x2da94b\x2db2cb07fdbe8a-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 14:31:14.585914 systemd[1]: var-lib-kubelet-pods-e6b0ebde\x2de32a\x2d4cff\x2da94b\x2db2cb07fdbe8a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d76q96.mount: Deactivated successfully. Dec 13 14:31:14.588250 kubelet[1544]: I1213 14:31:14.588191 1544 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a-kube-api-access-76q96" (OuterVolumeSpecName: "kube-api-access-76q96") pod "e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a" (UID: "e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a"). InnerVolumeSpecName "kube-api-access-76q96". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:31:14.664981 kubelet[1544]: I1213 14:31:14.664926 1544 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a-lib-modules\") on node \"10.128.0.81\" DevicePath \"\"" Dec 13 14:31:14.664981 kubelet[1544]: I1213 14:31:14.664976 1544 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a-clustermesh-secrets\") on node \"10.128.0.81\" DevicePath \"\"" Dec 13 14:31:14.664981 kubelet[1544]: I1213 14:31:14.664992 1544 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a-etc-cni-netd\") on node \"10.128.0.81\" DevicePath \"\"" Dec 13 14:31:14.665302 kubelet[1544]: I1213 14:31:14.665005 1544 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-76q96\" (UniqueName: \"kubernetes.io/projected/e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a-kube-api-access-76q96\") on node \"10.128.0.81\" DevicePath \"\"" Dec 13 14:31:14.665302 kubelet[1544]: I1213 14:31:14.665020 1544 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a-cni-path\") on node \"10.128.0.81\" DevicePath \"\"" Dec 13 14:31:14.665302 kubelet[1544]: I1213 14:31:14.665035 1544 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a-host-proc-sys-kernel\") on node \"10.128.0.81\" DevicePath \"\"" Dec 13 14:31:14.665302 kubelet[1544]: I1213 14:31:14.665047 1544 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a-hubble-tls\") on node \"10.128.0.81\" DevicePath \"\"" Dec 13 14:31:14.665302 kubelet[1544]: I1213 14:31:14.665058 1544 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a-bpf-maps\") on node \"10.128.0.81\" DevicePath \"\"" Dec 13 14:31:14.665302 kubelet[1544]: I1213 14:31:14.665070 1544 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a-host-proc-sys-net\") on node \"10.128.0.81\" DevicePath \"\"" Dec 13 14:31:14.665302 kubelet[1544]: I1213 14:31:14.665081 1544 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a-hostproc\") on node \"10.128.0.81\" DevicePath \"\"" Dec 13 14:31:14.665302 kubelet[1544]: I1213 14:31:14.665092 1544 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a-cilium-config-path\") on node \"10.128.0.81\" DevicePath \"\"" Dec 13 14:31:14.665614 kubelet[1544]: I1213 14:31:14.665105 1544 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a-xtables-lock\") on node \"10.128.0.81\" DevicePath \"\"" Dec 13 14:31:14.665614 kubelet[1544]: I1213 14:31:14.665116 1544 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a-cilium-run\") on node 
\"10.128.0.81\" DevicePath \"\"" Dec 13 14:31:14.665614 kubelet[1544]: I1213 14:31:14.665127 1544 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a-cilium-cgroup\") on node \"10.128.0.81\" DevicePath \"\"" Dec 13 14:31:14.831268 kubelet[1544]: E1213 14:31:14.831125 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:31:15.002732 systemd[1]: Removed slice kubepods-burstable-pode6b0ebde_e32a_4cff_a94b_b2cb07fdbe8a.slice. Dec 13 14:31:15.002903 systemd[1]: kubepods-burstable-pode6b0ebde_e32a_4cff_a94b_b2cb07fdbe8a.slice: Consumed 8.813s CPU time. Dec 13 14:31:15.212583 kubelet[1544]: I1213 14:31:15.212551 1544 scope.go:117] "RemoveContainer" containerID="160fd5cc25ed8afc366265ee518fa7cceaf33c2450e1c9ec33fc57c49be43d43" Dec 13 14:31:15.216794 env[1232]: time="2024-12-13T14:31:15.216725289Z" level=info msg="RemoveContainer for \"160fd5cc25ed8afc366265ee518fa7cceaf33c2450e1c9ec33fc57c49be43d43\"" Dec 13 14:31:15.222773 env[1232]: time="2024-12-13T14:31:15.222714335Z" level=info msg="RemoveContainer for \"160fd5cc25ed8afc366265ee518fa7cceaf33c2450e1c9ec33fc57c49be43d43\" returns successfully" Dec 13 14:31:15.223159 kubelet[1544]: I1213 14:31:15.223095 1544 scope.go:117] "RemoveContainer" containerID="570c20cfe82be5005e622dc35270580a793aca3a52a2567b60319076ea7c1ac9" Dec 13 14:31:15.225357 env[1232]: time="2024-12-13T14:31:15.224873664Z" level=info msg="RemoveContainer for \"570c20cfe82be5005e622dc35270580a793aca3a52a2567b60319076ea7c1ac9\"" Dec 13 14:31:15.229211 env[1232]: time="2024-12-13T14:31:15.229153068Z" level=info msg="RemoveContainer for \"570c20cfe82be5005e622dc35270580a793aca3a52a2567b60319076ea7c1ac9\" returns successfully" Dec 13 14:31:15.229441 kubelet[1544]: I1213 14:31:15.229392 1544 scope.go:117] "RemoveContainer" containerID="ba64650d8fe17db1aa1ff7db254226e7429ff550887d9d5d52f8131a2cb1e25e" Dec 13 14:31:15.230999 env[1232]: time="2024-12-13T14:31:15.230943695Z" level=info msg="RemoveContainer for \"ba64650d8fe17db1aa1ff7db254226e7429ff550887d9d5d52f8131a2cb1e25e\"" Dec 13 14:31:15.235369 env[1232]: time="2024-12-13T14:31:15.235324402Z" level=info msg="RemoveContainer for \"ba64650d8fe17db1aa1ff7db254226e7429ff550887d9d5d52f8131a2cb1e25e\" returns successfully" Dec 13 14:31:15.235714 kubelet[1544]: I1213 14:31:15.235686 1544 scope.go:117] "RemoveContainer" containerID="b36477b479ea10d7afbbc8e86d8b29a334cd7c7efa37a54facc12a73aa26afcd" Dec 13 14:31:15.239701 env[1232]: time="2024-12-13T14:31:15.239253994Z" level=info msg="RemoveContainer for \"b36477b479ea10d7afbbc8e86d8b29a334cd7c7efa37a54facc12a73aa26afcd\"" Dec 13 14:31:15.243246 env[1232]: time="2024-12-13T14:31:15.243159005Z" level=info msg="RemoveContainer for \"b36477b479ea10d7afbbc8e86d8b29a334cd7c7efa37a54facc12a73aa26afcd\" returns successfully" Dec 13 14:31:15.244738 kubelet[1544]: I1213 14:31:15.244710 1544 scope.go:117] "RemoveContainer" containerID="04e6aa09c66fa25225b0f3768f0386e11e65444a0dbcc6300fa9bd6c8163633d" Dec 13 14:31:15.246516 env[1232]: time="2024-12-13T14:31:15.246474900Z" level=info msg="RemoveContainer for \"04e6aa09c66fa25225b0f3768f0386e11e65444a0dbcc6300fa9bd6c8163633d\"" Dec 13 14:31:15.254902 env[1232]: time="2024-12-13T14:31:15.254841192Z" level=info msg="RemoveContainer for \"04e6aa09c66fa25225b0f3768f0386e11e65444a0dbcc6300fa9bd6c8163633d\" returns successfully" Dec 13 14:31:15.255288 kubelet[1544]: I1213 14:31:15.255247 
1544 scope.go:117] "RemoveContainer" containerID="160fd5cc25ed8afc366265ee518fa7cceaf33c2450e1c9ec33fc57c49be43d43" Dec 13 14:31:15.255698 env[1232]: time="2024-12-13T14:31:15.255605381Z" level=error msg="ContainerStatus for \"160fd5cc25ed8afc366265ee518fa7cceaf33c2450e1c9ec33fc57c49be43d43\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"160fd5cc25ed8afc366265ee518fa7cceaf33c2450e1c9ec33fc57c49be43d43\": not found" Dec 13 14:31:15.255902 kubelet[1544]: E1213 14:31:15.255867 1544 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"160fd5cc25ed8afc366265ee518fa7cceaf33c2450e1c9ec33fc57c49be43d43\": not found" containerID="160fd5cc25ed8afc366265ee518fa7cceaf33c2450e1c9ec33fc57c49be43d43" Dec 13 14:31:15.256018 kubelet[1544]: I1213 14:31:15.255912 1544 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"160fd5cc25ed8afc366265ee518fa7cceaf33c2450e1c9ec33fc57c49be43d43"} err="failed to get container status \"160fd5cc25ed8afc366265ee518fa7cceaf33c2450e1c9ec33fc57c49be43d43\": rpc error: code = NotFound desc = an error occurred when try to find container \"160fd5cc25ed8afc366265ee518fa7cceaf33c2450e1c9ec33fc57c49be43d43\": not found" Dec 13 14:31:15.256082 kubelet[1544]: I1213 14:31:15.256023 1544 scope.go:117] "RemoveContainer" containerID="570c20cfe82be5005e622dc35270580a793aca3a52a2567b60319076ea7c1ac9" Dec 13 14:31:15.256337 env[1232]: time="2024-12-13T14:31:15.256253872Z" level=error msg="ContainerStatus for \"570c20cfe82be5005e622dc35270580a793aca3a52a2567b60319076ea7c1ac9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"570c20cfe82be5005e622dc35270580a793aca3a52a2567b60319076ea7c1ac9\": not found" Dec 13 14:31:15.256688 kubelet[1544]: E1213 14:31:15.256644 1544 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"570c20cfe82be5005e622dc35270580a793aca3a52a2567b60319076ea7c1ac9\": not found" containerID="570c20cfe82be5005e622dc35270580a793aca3a52a2567b60319076ea7c1ac9" Dec 13 14:31:15.256822 kubelet[1544]: I1213 14:31:15.256693 1544 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"570c20cfe82be5005e622dc35270580a793aca3a52a2567b60319076ea7c1ac9"} err="failed to get container status \"570c20cfe82be5005e622dc35270580a793aca3a52a2567b60319076ea7c1ac9\": rpc error: code = NotFound desc = an error occurred when try to find container \"570c20cfe82be5005e622dc35270580a793aca3a52a2567b60319076ea7c1ac9\": not found" Dec 13 14:31:15.256822 kubelet[1544]: I1213 14:31:15.256726 1544 scope.go:117] "RemoveContainer" containerID="ba64650d8fe17db1aa1ff7db254226e7429ff550887d9d5d52f8131a2cb1e25e" Dec 13 14:31:15.257093 env[1232]: time="2024-12-13T14:31:15.256979396Z" level=error msg="ContainerStatus for \"ba64650d8fe17db1aa1ff7db254226e7429ff550887d9d5d52f8131a2cb1e25e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ba64650d8fe17db1aa1ff7db254226e7429ff550887d9d5d52f8131a2cb1e25e\": not found" Dec 13 14:31:15.257331 kubelet[1544]: E1213 14:31:15.257289 1544 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"ba64650d8fe17db1aa1ff7db254226e7429ff550887d9d5d52f8131a2cb1e25e\": not found" containerID="ba64650d8fe17db1aa1ff7db254226e7429ff550887d9d5d52f8131a2cb1e25e" Dec 13 14:31:15.257587 kubelet[1544]: I1213 14:31:15.257328 1544 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ba64650d8fe17db1aa1ff7db254226e7429ff550887d9d5d52f8131a2cb1e25e"} err="failed to get container status \"ba64650d8fe17db1aa1ff7db254226e7429ff550887d9d5d52f8131a2cb1e25e\": rpc error: code = NotFound desc = an error occurred when try to find container \"ba64650d8fe17db1aa1ff7db254226e7429ff550887d9d5d52f8131a2cb1e25e\": not found" Dec 13 14:31:15.257587 kubelet[1544]: I1213 14:31:15.257354 1544 scope.go:117] "RemoveContainer" containerID="b36477b479ea10d7afbbc8e86d8b29a334cd7c7efa37a54facc12a73aa26afcd" Dec 13 14:31:15.257728 env[1232]: time="2024-12-13T14:31:15.257614957Z" level=error msg="ContainerStatus for \"b36477b479ea10d7afbbc8e86d8b29a334cd7c7efa37a54facc12a73aa26afcd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b36477b479ea10d7afbbc8e86d8b29a334cd7c7efa37a54facc12a73aa26afcd\": not found" Dec 13 14:31:15.257852 kubelet[1544]: E1213 14:31:15.257822 1544 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b36477b479ea10d7afbbc8e86d8b29a334cd7c7efa37a54facc12a73aa26afcd\": not found" containerID="b36477b479ea10d7afbbc8e86d8b29a334cd7c7efa37a54facc12a73aa26afcd" Dec 13 14:31:15.257929 kubelet[1544]: I1213 14:31:15.257860 1544 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b36477b479ea10d7afbbc8e86d8b29a334cd7c7efa37a54facc12a73aa26afcd"} err="failed to get container status \"b36477b479ea10d7afbbc8e86d8b29a334cd7c7efa37a54facc12a73aa26afcd\": rpc error: code = NotFound desc = an error occurred when try to find container \"b36477b479ea10d7afbbc8e86d8b29a334cd7c7efa37a54facc12a73aa26afcd\": not found" Dec 13 14:31:15.257929 kubelet[1544]: I1213 14:31:15.257886 1544 scope.go:117] "RemoveContainer" containerID="04e6aa09c66fa25225b0f3768f0386e11e65444a0dbcc6300fa9bd6c8163633d" Dec 13 14:31:15.258236 env[1232]: time="2024-12-13T14:31:15.258160804Z" level=error msg="ContainerStatus for \"04e6aa09c66fa25225b0f3768f0386e11e65444a0dbcc6300fa9bd6c8163633d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"04e6aa09c66fa25225b0f3768f0386e11e65444a0dbcc6300fa9bd6c8163633d\": not found" Dec 13 14:31:15.258392 kubelet[1544]: E1213 14:31:15.258362 1544 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"04e6aa09c66fa25225b0f3768f0386e11e65444a0dbcc6300fa9bd6c8163633d\": not found" containerID="04e6aa09c66fa25225b0f3768f0386e11e65444a0dbcc6300fa9bd6c8163633d" Dec 13 14:31:15.258487 kubelet[1544]: I1213 14:31:15.258398 1544 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"04e6aa09c66fa25225b0f3768f0386e11e65444a0dbcc6300fa9bd6c8163633d"} err="failed to get container status \"04e6aa09c66fa25225b0f3768f0386e11e65444a0dbcc6300fa9bd6c8163633d\": rpc error: code = NotFound desc = an error occurred when try to find container \"04e6aa09c66fa25225b0f3768f0386e11e65444a0dbcc6300fa9bd6c8163633d\": not found" Dec 13 14:31:15.788434 kubelet[1544]: I1213 14:31:15.788372 1544 
topology_manager.go:215] "Topology Admit Handler" podUID="64409cb0-483f-495b-ac87-c00538d411a2" podNamespace="kube-system" podName="cilium-operator-599987898-b6p4d" Dec 13 14:31:15.788674 kubelet[1544]: E1213 14:31:15.788464 1544 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a" containerName="mount-bpf-fs" Dec 13 14:31:15.788674 kubelet[1544]: E1213 14:31:15.788482 1544 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a" containerName="clean-cilium-state" Dec 13 14:31:15.788674 kubelet[1544]: E1213 14:31:15.788492 1544 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a" containerName="cilium-agent" Dec 13 14:31:15.788674 kubelet[1544]: E1213 14:31:15.788503 1544 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a" containerName="mount-cgroup" Dec 13 14:31:15.788674 kubelet[1544]: E1213 14:31:15.788513 1544 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a" containerName="apply-sysctl-overwrites" Dec 13 14:31:15.788674 kubelet[1544]: I1213 14:31:15.788542 1544 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a" containerName="cilium-agent" Dec 13 14:31:15.795555 systemd[1]: Created slice kubepods-besteffort-pod64409cb0_483f_495b_ac87_c00538d411a2.slice. Dec 13 14:31:15.809540 kubelet[1544]: I1213 14:31:15.809496 1544 topology_manager.go:215] "Topology Admit Handler" podUID="d5e996e4-0280-4a62-8be5-9c3260ca9a9b" podNamespace="kube-system" podName="cilium-8j4zv" Dec 13 14:31:15.816552 systemd[1]: Created slice kubepods-burstable-podd5e996e4_0280_4a62_8be5_9c3260ca9a9b.slice. 
Dec 13 14:31:15.831797 kubelet[1544]: E1213 14:31:15.831727 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:31:15.874633 kubelet[1544]: I1213 14:31:15.874586 1544 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d5e996e4-0280-4a62-8be5-9c3260ca9a9b-cilium-cgroup\") pod \"cilium-8j4zv\" (UID: \"d5e996e4-0280-4a62-8be5-9c3260ca9a9b\") " pod="kube-system/cilium-8j4zv" Dec 13 14:31:15.874979 kubelet[1544]: I1213 14:31:15.874934 1544 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d5e996e4-0280-4a62-8be5-9c3260ca9a9b-cilium-run\") pod \"cilium-8j4zv\" (UID: \"d5e996e4-0280-4a62-8be5-9c3260ca9a9b\") " pod="kube-system/cilium-8j4zv" Dec 13 14:31:15.875095 kubelet[1544]: I1213 14:31:15.874982 1544 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d5e996e4-0280-4a62-8be5-9c3260ca9a9b-clustermesh-secrets\") pod \"cilium-8j4zv\" (UID: \"d5e996e4-0280-4a62-8be5-9c3260ca9a9b\") " pod="kube-system/cilium-8j4zv" Dec 13 14:31:15.875095 kubelet[1544]: I1213 14:31:15.875009 1544 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d5e996e4-0280-4a62-8be5-9c3260ca9a9b-host-proc-sys-net\") pod \"cilium-8j4zv\" (UID: \"d5e996e4-0280-4a62-8be5-9c3260ca9a9b\") " pod="kube-system/cilium-8j4zv" Dec 13 14:31:15.875095 kubelet[1544]: I1213 14:31:15.875036 1544 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d5e996e4-0280-4a62-8be5-9c3260ca9a9b-hubble-tls\") pod \"cilium-8j4zv\" (UID: \"d5e996e4-0280-4a62-8be5-9c3260ca9a9b\") " pod="kube-system/cilium-8j4zv" Dec 13 14:31:15.875095 kubelet[1544]: I1213 14:31:15.875061 1544 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d5e996e4-0280-4a62-8be5-9c3260ca9a9b-lib-modules\") pod \"cilium-8j4zv\" (UID: \"d5e996e4-0280-4a62-8be5-9c3260ca9a9b\") " pod="kube-system/cilium-8j4zv" Dec 13 14:31:15.875095 kubelet[1544]: I1213 14:31:15.875085 1544 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d5e996e4-0280-4a62-8be5-9c3260ca9a9b-cni-path\") pod \"cilium-8j4zv\" (UID: \"d5e996e4-0280-4a62-8be5-9c3260ca9a9b\") " pod="kube-system/cilium-8j4zv" Dec 13 14:31:15.875377 kubelet[1544]: I1213 14:31:15.875110 1544 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d5e996e4-0280-4a62-8be5-9c3260ca9a9b-xtables-lock\") pod \"cilium-8j4zv\" (UID: \"d5e996e4-0280-4a62-8be5-9c3260ca9a9b\") " pod="kube-system/cilium-8j4zv" Dec 13 14:31:15.875377 kubelet[1544]: I1213 14:31:15.875144 1544 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d5e996e4-0280-4a62-8be5-9c3260ca9a9b-cilium-config-path\") pod \"cilium-8j4zv\" (UID: \"d5e996e4-0280-4a62-8be5-9c3260ca9a9b\") " pod="kube-system/cilium-8j4zv" Dec 13 
14:31:15.875377 kubelet[1544]: I1213 14:31:15.875176 1544 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d5e996e4-0280-4a62-8be5-9c3260ca9a9b-cilium-ipsec-secrets\") pod \"cilium-8j4zv\" (UID: \"d5e996e4-0280-4a62-8be5-9c3260ca9a9b\") " pod="kube-system/cilium-8j4zv" Dec 13 14:31:15.875377 kubelet[1544]: I1213 14:31:15.875203 1544 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d5e996e4-0280-4a62-8be5-9c3260ca9a9b-hostproc\") pod \"cilium-8j4zv\" (UID: \"d5e996e4-0280-4a62-8be5-9c3260ca9a9b\") " pod="kube-system/cilium-8j4zv" Dec 13 14:31:15.875377 kubelet[1544]: I1213 14:31:15.875229 1544 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwnrn\" (UniqueName: \"kubernetes.io/projected/d5e996e4-0280-4a62-8be5-9c3260ca9a9b-kube-api-access-dwnrn\") pod \"cilium-8j4zv\" (UID: \"d5e996e4-0280-4a62-8be5-9c3260ca9a9b\") " pod="kube-system/cilium-8j4zv" Dec 13 14:31:15.875705 kubelet[1544]: I1213 14:31:15.875261 1544 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d5e996e4-0280-4a62-8be5-9c3260ca9a9b-host-proc-sys-kernel\") pod \"cilium-8j4zv\" (UID: \"d5e996e4-0280-4a62-8be5-9c3260ca9a9b\") " pod="kube-system/cilium-8j4zv" Dec 13 14:31:15.875705 kubelet[1544]: I1213 14:31:15.875289 1544 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d5e996e4-0280-4a62-8be5-9c3260ca9a9b-etc-cni-netd\") pod \"cilium-8j4zv\" (UID: \"d5e996e4-0280-4a62-8be5-9c3260ca9a9b\") " pod="kube-system/cilium-8j4zv" Dec 13 14:31:15.875705 kubelet[1544]: I1213 14:31:15.875320 1544 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/64409cb0-483f-495b-ac87-c00538d411a2-cilium-config-path\") pod \"cilium-operator-599987898-b6p4d\" (UID: \"64409cb0-483f-495b-ac87-c00538d411a2\") " pod="kube-system/cilium-operator-599987898-b6p4d" Dec 13 14:31:15.875705 kubelet[1544]: I1213 14:31:15.875354 1544 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xzd7\" (UniqueName: \"kubernetes.io/projected/64409cb0-483f-495b-ac87-c00538d411a2-kube-api-access-4xzd7\") pod \"cilium-operator-599987898-b6p4d\" (UID: \"64409cb0-483f-495b-ac87-c00538d411a2\") " pod="kube-system/cilium-operator-599987898-b6p4d" Dec 13 14:31:15.875705 kubelet[1544]: I1213 14:31:15.875383 1544 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d5e996e4-0280-4a62-8be5-9c3260ca9a9b-bpf-maps\") pod \"cilium-8j4zv\" (UID: \"d5e996e4-0280-4a62-8be5-9c3260ca9a9b\") " pod="kube-system/cilium-8j4zv" Dec 13 14:31:16.100404 env[1232]: time="2024-12-13T14:31:16.100244452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-b6p4d,Uid:64409cb0-483f-495b-ac87-c00538d411a2,Namespace:kube-system,Attempt:0,}" Dec 13 14:31:16.121026 env[1232]: time="2024-12-13T14:31:16.120919147Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:31:16.121263 env[1232]: time="2024-12-13T14:31:16.120978658Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:31:16.121263 env[1232]: time="2024-12-13T14:31:16.120996039Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:31:16.121481 env[1232]: time="2024-12-13T14:31:16.121234088Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/39eb808c1bad8f4d815e2336fef7ac38b5dd5940b50c40d1790a05c56d38e238 pid=3063 runtime=io.containerd.runc.v2 Dec 13 14:31:16.125335 env[1232]: time="2024-12-13T14:31:16.125285848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8j4zv,Uid:d5e996e4-0280-4a62-8be5-9c3260ca9a9b,Namespace:kube-system,Attempt:0,}" Dec 13 14:31:16.141411 systemd[1]: Started cri-containerd-39eb808c1bad8f4d815e2336fef7ac38b5dd5940b50c40d1790a05c56d38e238.scope. Dec 13 14:31:16.162593 env[1232]: time="2024-12-13T14:31:16.162490923Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:31:16.163104 env[1232]: time="2024-12-13T14:31:16.163022893Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:31:16.163468 env[1232]: time="2024-12-13T14:31:16.163370928Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:31:16.164061 env[1232]: time="2024-12-13T14:31:16.163996915Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/eadda8216f6d1a2bffd5e543817daa368a11becc73acc000d00590c362881bbf pid=3089 runtime=io.containerd.runc.v2 Dec 13 14:31:16.189254 systemd[1]: Started cri-containerd-eadda8216f6d1a2bffd5e543817daa368a11becc73acc000d00590c362881bbf.scope. 
Dec 13 14:31:16.240822 env[1232]: time="2024-12-13T14:31:16.240763337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-b6p4d,Uid:64409cb0-483f-495b-ac87-c00538d411a2,Namespace:kube-system,Attempt:0,} returns sandbox id \"39eb808c1bad8f4d815e2336fef7ac38b5dd5940b50c40d1790a05c56d38e238\"" Dec 13 14:31:16.243351 env[1232]: time="2024-12-13T14:31:16.243302271Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 14:31:16.253518 env[1232]: time="2024-12-13T14:31:16.253471437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8j4zv,Uid:d5e996e4-0280-4a62-8be5-9c3260ca9a9b,Namespace:kube-system,Attempt:0,} returns sandbox id \"eadda8216f6d1a2bffd5e543817daa368a11becc73acc000d00590c362881bbf\"" Dec 13 14:31:16.257673 env[1232]: time="2024-12-13T14:31:16.257616190Z" level=info msg="CreateContainer within sandbox \"eadda8216f6d1a2bffd5e543817daa368a11becc73acc000d00590c362881bbf\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 14:31:16.272356 env[1232]: time="2024-12-13T14:31:16.272284134Z" level=info msg="CreateContainer within sandbox \"eadda8216f6d1a2bffd5e543817daa368a11becc73acc000d00590c362881bbf\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8d888a40e58d8b9762faaa5f1cac65d9d95a35d5216b69f3bfe82b1524665db3\"" Dec 13 14:31:16.273302 env[1232]: time="2024-12-13T14:31:16.273255074Z" level=info msg="StartContainer for \"8d888a40e58d8b9762faaa5f1cac65d9d95a35d5216b69f3bfe82b1524665db3\"" Dec 13 14:31:16.295657 systemd[1]: Started cri-containerd-8d888a40e58d8b9762faaa5f1cac65d9d95a35d5216b69f3bfe82b1524665db3.scope. Dec 13 14:31:16.312225 systemd[1]: cri-containerd-8d888a40e58d8b9762faaa5f1cac65d9d95a35d5216b69f3bfe82b1524665db3.scope: Deactivated successfully. 
Dec 13 14:31:16.328904 env[1232]: time="2024-12-13T14:31:16.328822164Z" level=info msg="shim disconnected" id=8d888a40e58d8b9762faaa5f1cac65d9d95a35d5216b69f3bfe82b1524665db3 Dec 13 14:31:16.328904 env[1232]: time="2024-12-13T14:31:16.328905729Z" level=warning msg="cleaning up after shim disconnected" id=8d888a40e58d8b9762faaa5f1cac65d9d95a35d5216b69f3bfe82b1524665db3 namespace=k8s.io Dec 13 14:31:16.329273 env[1232]: time="2024-12-13T14:31:16.328920353Z" level=info msg="cleaning up dead shim" Dec 13 14:31:16.347327 env[1232]: time="2024-12-13T14:31:16.347234487Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:31:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3164 runtime=io.containerd.runc.v2\ntime=\"2024-12-13T14:31:16Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/8d888a40e58d8b9762faaa5f1cac65d9d95a35d5216b69f3bfe82b1524665db3/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Dec 13 14:31:16.347832 env[1232]: time="2024-12-13T14:31:16.347674982Z" level=error msg="copy shim log" error="read /proc/self/fd/67: file already closed" Dec 13 14:31:16.348544 env[1232]: time="2024-12-13T14:31:16.348485161Z" level=error msg="Failed to pipe stdout of container \"8d888a40e58d8b9762faaa5f1cac65d9d95a35d5216b69f3bfe82b1524665db3\"" error="reading from a closed fifo" Dec 13 14:31:16.354636 env[1232]: time="2024-12-13T14:31:16.354477725Z" level=error msg="Failed to pipe stderr of container \"8d888a40e58d8b9762faaa5f1cac65d9d95a35d5216b69f3bfe82b1524665db3\"" error="reading from a closed fifo" Dec 13 14:31:16.358397 env[1232]: time="2024-12-13T14:31:16.358314778Z" level=error msg="StartContainer for \"8d888a40e58d8b9762faaa5f1cac65d9d95a35d5216b69f3bfe82b1524665db3\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Dec 13 14:31:16.358870 kubelet[1544]: E1213 14:31:16.358777 1544 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="8d888a40e58d8b9762faaa5f1cac65d9d95a35d5216b69f3bfe82b1524665db3" Dec 13 14:31:16.359348 kubelet[1544]: E1213 14:31:16.359307 1544 kuberuntime_manager.go:1256] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Dec 13 14:31:16.359348 kubelet[1544]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Dec 13 14:31:16.359348 kubelet[1544]: rm /hostbin/cilium-mount Dec 13 14:31:16.359614 kubelet[1544]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dwnrn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-8j4zv_kube-system(d5e996e4-0280-4a62-8be5-9c3260ca9a9b): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Dec 13 14:31:16.359614 kubelet[1544]: E1213 14:31:16.359364 1544 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-8j4zv" podUID="d5e996e4-0280-4a62-8be5-9c3260ca9a9b" Dec 13 14:31:16.832910 kubelet[1544]: E1213 14:31:16.832851 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:31:17.001855 kubelet[1544]: I1213 14:31:17.000939 1544 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a" path="/var/lib/kubelet/pods/e6b0ebde-e32a-4cff-a94b-b2cb07fdbe8a/volumes" Dec 13 14:31:17.124398 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1132042464.mount: Deactivated successfully. Dec 13 14:31:17.219909 env[1232]: time="2024-12-13T14:31:17.219853404Z" level=info msg="StopPodSandbox for \"eadda8216f6d1a2bffd5e543817daa368a11becc73acc000d00590c362881bbf\"" Dec 13 14:31:17.220462 env[1232]: time="2024-12-13T14:31:17.219938846Z" level=info msg="Container to stop \"8d888a40e58d8b9762faaa5f1cac65d9d95a35d5216b69f3bfe82b1524665db3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:31:17.226283 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-eadda8216f6d1a2bffd5e543817daa368a11becc73acc000d00590c362881bbf-shm.mount: Deactivated successfully. 
Dec 13 14:31:17.241051 systemd[1]: cri-containerd-eadda8216f6d1a2bffd5e543817daa368a11becc73acc000d00590c362881bbf.scope: Deactivated successfully. Dec 13 14:31:17.278456 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eadda8216f6d1a2bffd5e543817daa368a11becc73acc000d00590c362881bbf-rootfs.mount: Deactivated successfully. Dec 13 14:31:17.289108 env[1232]: time="2024-12-13T14:31:17.289047689Z" level=info msg="shim disconnected" id=eadda8216f6d1a2bffd5e543817daa368a11becc73acc000d00590c362881bbf Dec 13 14:31:17.289518 env[1232]: time="2024-12-13T14:31:17.289479882Z" level=warning msg="cleaning up after shim disconnected" id=eadda8216f6d1a2bffd5e543817daa368a11becc73acc000d00590c362881bbf namespace=k8s.io Dec 13 14:31:17.289518 env[1232]: time="2024-12-13T14:31:17.289514569Z" level=info msg="cleaning up dead shim" Dec 13 14:31:17.301845 env[1232]: time="2024-12-13T14:31:17.301795317Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:31:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3195 runtime=io.containerd.runc.v2\n" Dec 13 14:31:17.302487 env[1232]: time="2024-12-13T14:31:17.302444646Z" level=info msg="TearDown network for sandbox \"eadda8216f6d1a2bffd5e543817daa368a11becc73acc000d00590c362881bbf\" successfully" Dec 13 14:31:17.302819 env[1232]: time="2024-12-13T14:31:17.302767843Z" level=info msg="StopPodSandbox for \"eadda8216f6d1a2bffd5e543817daa368a11becc73acc000d00590c362881bbf\" returns successfully" Dec 13 14:31:17.390540 kubelet[1544]: I1213 14:31:17.389624 1544 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d5e996e4-0280-4a62-8be5-9c3260ca9a9b-etc-cni-netd\") pod \"d5e996e4-0280-4a62-8be5-9c3260ca9a9b\" (UID: \"d5e996e4-0280-4a62-8be5-9c3260ca9a9b\") " Dec 13 14:31:17.390540 kubelet[1544]: I1213 14:31:17.389679 1544 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d5e996e4-0280-4a62-8be5-9c3260ca9a9b-cilium-run\") pod \"d5e996e4-0280-4a62-8be5-9c3260ca9a9b\" (UID: \"d5e996e4-0280-4a62-8be5-9c3260ca9a9b\") " Dec 13 14:31:17.390540 kubelet[1544]: I1213 14:31:17.389748 1544 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d5e996e4-0280-4a62-8be5-9c3260ca9a9b-hubble-tls\") pod \"d5e996e4-0280-4a62-8be5-9c3260ca9a9b\" (UID: \"d5e996e4-0280-4a62-8be5-9c3260ca9a9b\") " Dec 13 14:31:17.390540 kubelet[1544]: I1213 14:31:17.389845 1544 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d5e996e4-0280-4a62-8be5-9c3260ca9a9b-cilium-ipsec-secrets\") pod \"d5e996e4-0280-4a62-8be5-9c3260ca9a9b\" (UID: \"d5e996e4-0280-4a62-8be5-9c3260ca9a9b\") " Dec 13 14:31:17.390540 kubelet[1544]: I1213 14:31:17.389878 1544 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d5e996e4-0280-4a62-8be5-9c3260ca9a9b-hostproc\") pod \"d5e996e4-0280-4a62-8be5-9c3260ca9a9b\" (UID: \"d5e996e4-0280-4a62-8be5-9c3260ca9a9b\") " Dec 13 14:31:17.390540 kubelet[1544]: I1213 14:31:17.389923 1544 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d5e996e4-0280-4a62-8be5-9c3260ca9a9b-host-proc-sys-kernel\") pod \"d5e996e4-0280-4a62-8be5-9c3260ca9a9b\" (UID: 
\"d5e996e4-0280-4a62-8be5-9c3260ca9a9b\") " Dec 13 14:31:17.390540 kubelet[1544]: I1213 14:31:17.389957 1544 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwnrn\" (UniqueName: \"kubernetes.io/projected/d5e996e4-0280-4a62-8be5-9c3260ca9a9b-kube-api-access-dwnrn\") pod \"d5e996e4-0280-4a62-8be5-9c3260ca9a9b\" (UID: \"d5e996e4-0280-4a62-8be5-9c3260ca9a9b\") " Dec 13 14:31:17.390540 kubelet[1544]: I1213 14:31:17.390003 1544 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d5e996e4-0280-4a62-8be5-9c3260ca9a9b-cilium-cgroup\") pod \"d5e996e4-0280-4a62-8be5-9c3260ca9a9b\" (UID: \"d5e996e4-0280-4a62-8be5-9c3260ca9a9b\") " Dec 13 14:31:17.390540 kubelet[1544]: I1213 14:31:17.390032 1544 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d5e996e4-0280-4a62-8be5-9c3260ca9a9b-bpf-maps\") pod \"d5e996e4-0280-4a62-8be5-9c3260ca9a9b\" (UID: \"d5e996e4-0280-4a62-8be5-9c3260ca9a9b\") " Dec 13 14:31:17.390540 kubelet[1544]: I1213 14:31:17.390197 1544 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d5e996e4-0280-4a62-8be5-9c3260ca9a9b-host-proc-sys-net\") pod \"d5e996e4-0280-4a62-8be5-9c3260ca9a9b\" (UID: \"d5e996e4-0280-4a62-8be5-9c3260ca9a9b\") " Dec 13 14:31:17.390540 kubelet[1544]: I1213 14:31:17.390332 1544 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d5e996e4-0280-4a62-8be5-9c3260ca9a9b-cni-path\") pod \"d5e996e4-0280-4a62-8be5-9c3260ca9a9b\" (UID: \"d5e996e4-0280-4a62-8be5-9c3260ca9a9b\") " Dec 13 14:31:17.390540 kubelet[1544]: I1213 14:31:17.390371 1544 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d5e996e4-0280-4a62-8be5-9c3260ca9a9b-cilium-config-path\") pod \"d5e996e4-0280-4a62-8be5-9c3260ca9a9b\" (UID: \"d5e996e4-0280-4a62-8be5-9c3260ca9a9b\") " Dec 13 14:31:17.391339 kubelet[1544]: I1213 14:31:17.390560 1544 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d5e996e4-0280-4a62-8be5-9c3260ca9a9b-clustermesh-secrets\") pod \"d5e996e4-0280-4a62-8be5-9c3260ca9a9b\" (UID: \"d5e996e4-0280-4a62-8be5-9c3260ca9a9b\") " Dec 13 14:31:17.391339 kubelet[1544]: I1213 14:31:17.390588 1544 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d5e996e4-0280-4a62-8be5-9c3260ca9a9b-lib-modules\") pod \"d5e996e4-0280-4a62-8be5-9c3260ca9a9b\" (UID: \"d5e996e4-0280-4a62-8be5-9c3260ca9a9b\") " Dec 13 14:31:17.391339 kubelet[1544]: I1213 14:31:17.390613 1544 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d5e996e4-0280-4a62-8be5-9c3260ca9a9b-xtables-lock\") pod \"d5e996e4-0280-4a62-8be5-9c3260ca9a9b\" (UID: \"d5e996e4-0280-4a62-8be5-9c3260ca9a9b\") " Dec 13 14:31:17.391339 kubelet[1544]: I1213 14:31:17.390760 1544 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5e996e4-0280-4a62-8be5-9c3260ca9a9b-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "d5e996e4-0280-4a62-8be5-9c3260ca9a9b" (UID: "d5e996e4-0280-4a62-8be5-9c3260ca9a9b"). 
InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:31:17.391339 kubelet[1544]: I1213 14:31:17.390825 1544 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5e996e4-0280-4a62-8be5-9c3260ca9a9b-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "d5e996e4-0280-4a62-8be5-9c3260ca9a9b" (UID: "d5e996e4-0280-4a62-8be5-9c3260ca9a9b"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:31:17.391339 kubelet[1544]: I1213 14:31:17.390853 1544 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5e996e4-0280-4a62-8be5-9c3260ca9a9b-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "d5e996e4-0280-4a62-8be5-9c3260ca9a9b" (UID: "d5e996e4-0280-4a62-8be5-9c3260ca9a9b"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:31:17.391705 kubelet[1544]: I1213 14:31:17.391459 1544 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5e996e4-0280-4a62-8be5-9c3260ca9a9b-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "d5e996e4-0280-4a62-8be5-9c3260ca9a9b" (UID: "d5e996e4-0280-4a62-8be5-9c3260ca9a9b"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:31:17.393259 kubelet[1544]: I1213 14:31:17.393218 1544 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5e996e4-0280-4a62-8be5-9c3260ca9a9b-hostproc" (OuterVolumeSpecName: "hostproc") pod "d5e996e4-0280-4a62-8be5-9c3260ca9a9b" (UID: "d5e996e4-0280-4a62-8be5-9c3260ca9a9b"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:31:17.393392 kubelet[1544]: I1213 14:31:17.393278 1544 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5e996e4-0280-4a62-8be5-9c3260ca9a9b-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "d5e996e4-0280-4a62-8be5-9c3260ca9a9b" (UID: "d5e996e4-0280-4a62-8be5-9c3260ca9a9b"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:31:17.395570 kubelet[1544]: I1213 14:31:17.393548 1544 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5e996e4-0280-4a62-8be5-9c3260ca9a9b-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "d5e996e4-0280-4a62-8be5-9c3260ca9a9b" (UID: "d5e996e4-0280-4a62-8be5-9c3260ca9a9b"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:31:17.395570 kubelet[1544]: I1213 14:31:17.393593 1544 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5e996e4-0280-4a62-8be5-9c3260ca9a9b-cni-path" (OuterVolumeSpecName: "cni-path") pod "d5e996e4-0280-4a62-8be5-9c3260ca9a9b" (UID: "d5e996e4-0280-4a62-8be5-9c3260ca9a9b"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:31:17.396389 kubelet[1544]: I1213 14:31:17.396357 1544 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5e996e4-0280-4a62-8be5-9c3260ca9a9b-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "d5e996e4-0280-4a62-8be5-9c3260ca9a9b" (UID: "d5e996e4-0280-4a62-8be5-9c3260ca9a9b"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:31:17.404970 kubelet[1544]: I1213 14:31:17.404924 1544 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5e996e4-0280-4a62-8be5-9c3260ca9a9b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "d5e996e4-0280-4a62-8be5-9c3260ca9a9b" (UID: "d5e996e4-0280-4a62-8be5-9c3260ca9a9b"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:31:17.406929 kubelet[1544]: I1213 14:31:17.406894 1544 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5e996e4-0280-4a62-8be5-9c3260ca9a9b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d5e996e4-0280-4a62-8be5-9c3260ca9a9b" (UID: "d5e996e4-0280-4a62-8be5-9c3260ca9a9b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 14:31:17.407467 kubelet[1544]: I1213 14:31:17.407405 1544 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5e996e4-0280-4a62-8be5-9c3260ca9a9b-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "d5e996e4-0280-4a62-8be5-9c3260ca9a9b" (UID: "d5e996e4-0280-4a62-8be5-9c3260ca9a9b"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 14:31:17.407626 kubelet[1544]: I1213 14:31:17.407525 1544 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5e996e4-0280-4a62-8be5-9c3260ca9a9b-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "d5e996e4-0280-4a62-8be5-9c3260ca9a9b" (UID: "d5e996e4-0280-4a62-8be5-9c3260ca9a9b"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:31:17.408602 kubelet[1544]: I1213 14:31:17.408571 1544 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5e996e4-0280-4a62-8be5-9c3260ca9a9b-kube-api-access-dwnrn" (OuterVolumeSpecName: "kube-api-access-dwnrn") pod "d5e996e4-0280-4a62-8be5-9c3260ca9a9b" (UID: "d5e996e4-0280-4a62-8be5-9c3260ca9a9b"). InnerVolumeSpecName "kube-api-access-dwnrn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:31:17.411950 kubelet[1544]: I1213 14:31:17.411914 1544 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5e996e4-0280-4a62-8be5-9c3260ca9a9b-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "d5e996e4-0280-4a62-8be5-9c3260ca9a9b" (UID: "d5e996e4-0280-4a62-8be5-9c3260ca9a9b"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 14:31:17.491356 kubelet[1544]: I1213 14:31:17.491303 1544 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d5e996e4-0280-4a62-8be5-9c3260ca9a9b-hostproc\") on node \"10.128.0.81\" DevicePath \"\"" Dec 13 14:31:17.491356 kubelet[1544]: I1213 14:31:17.491353 1544 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d5e996e4-0280-4a62-8be5-9c3260ca9a9b-host-proc-sys-kernel\") on node \"10.128.0.81\" DevicePath \"\"" Dec 13 14:31:17.491356 kubelet[1544]: I1213 14:31:17.491369 1544 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d5e996e4-0280-4a62-8be5-9c3260ca9a9b-cilium-cgroup\") on node \"10.128.0.81\" DevicePath \"\"" Dec 13 14:31:17.491697 kubelet[1544]: I1213 14:31:17.491382 1544 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d5e996e4-0280-4a62-8be5-9c3260ca9a9b-bpf-maps\") on node \"10.128.0.81\" DevicePath \"\"" Dec 13 14:31:17.491697 kubelet[1544]: I1213 14:31:17.491395 1544 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d5e996e4-0280-4a62-8be5-9c3260ca9a9b-host-proc-sys-net\") on node \"10.128.0.81\" DevicePath \"\"" Dec 13 14:31:17.491697 kubelet[1544]: I1213 14:31:17.491411 1544 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d5e996e4-0280-4a62-8be5-9c3260ca9a9b-cni-path\") on node \"10.128.0.81\" DevicePath \"\"" Dec 13 14:31:17.491697 kubelet[1544]: I1213 14:31:17.491437 1544 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d5e996e4-0280-4a62-8be5-9c3260ca9a9b-cilium-config-path\") on node \"10.128.0.81\" DevicePath \"\"" Dec 13 14:31:17.491697 kubelet[1544]: I1213 14:31:17.491452 1544 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-dwnrn\" (UniqueName: \"kubernetes.io/projected/d5e996e4-0280-4a62-8be5-9c3260ca9a9b-kube-api-access-dwnrn\") on node \"10.128.0.81\" DevicePath \"\"" Dec 13 14:31:17.491697 kubelet[1544]: I1213 14:31:17.491463 1544 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d5e996e4-0280-4a62-8be5-9c3260ca9a9b-clustermesh-secrets\") on node \"10.128.0.81\" DevicePath \"\"" Dec 13 14:31:17.491697 kubelet[1544]: I1213 14:31:17.491475 1544 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d5e996e4-0280-4a62-8be5-9c3260ca9a9b-lib-modules\") on node \"10.128.0.81\" DevicePath \"\"" Dec 13 14:31:17.491697 kubelet[1544]: I1213 14:31:17.491486 1544 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d5e996e4-0280-4a62-8be5-9c3260ca9a9b-xtables-lock\") on node \"10.128.0.81\" DevicePath \"\"" Dec 13 14:31:17.491697 kubelet[1544]: I1213 14:31:17.491498 1544 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d5e996e4-0280-4a62-8be5-9c3260ca9a9b-cilium-run\") on node \"10.128.0.81\" DevicePath \"\"" Dec 13 14:31:17.491697 kubelet[1544]: I1213 14:31:17.491509 1544 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d5e996e4-0280-4a62-8be5-9c3260ca9a9b-hubble-tls\") on node 
\"10.128.0.81\" DevicePath \"\"" Dec 13 14:31:17.491697 kubelet[1544]: I1213 14:31:17.491548 1544 reconciler_common.go:289] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d5e996e4-0280-4a62-8be5-9c3260ca9a9b-cilium-ipsec-secrets\") on node \"10.128.0.81\" DevicePath \"\"" Dec 13 14:31:17.491697 kubelet[1544]: I1213 14:31:17.491561 1544 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d5e996e4-0280-4a62-8be5-9c3260ca9a9b-etc-cni-netd\") on node \"10.128.0.81\" DevicePath \"\"" Dec 13 14:31:17.833305 kubelet[1544]: E1213 14:31:17.833189 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:31:17.933369 kubelet[1544]: E1213 14:31:17.933269 1544 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 14:31:17.991583 systemd[1]: var-lib-kubelet-pods-d5e996e4\x2d0280\x2d4a62\x2d8be5\x2d9c3260ca9a9b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddwnrn.mount: Deactivated successfully. Dec 13 14:31:17.991731 systemd[1]: var-lib-kubelet-pods-d5e996e4\x2d0280\x2d4a62\x2d8be5\x2d9c3260ca9a9b-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 14:31:17.991846 systemd[1]: var-lib-kubelet-pods-d5e996e4\x2d0280\x2d4a62\x2d8be5\x2d9c3260ca9a9b-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 14:31:17.991940 systemd[1]: var-lib-kubelet-pods-d5e996e4\x2d0280\x2d4a62\x2d8be5\x2d9c3260ca9a9b-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Dec 13 14:31:18.234718 kubelet[1544]: I1213 14:31:18.234676 1544 scope.go:117] "RemoveContainer" containerID="8d888a40e58d8b9762faaa5f1cac65d9d95a35d5216b69f3bfe82b1524665db3" Dec 13 14:31:18.240804 systemd[1]: Removed slice kubepods-burstable-podd5e996e4_0280_4a62_8be5_9c3260ca9a9b.slice. 
Dec 13 14:31:18.245642 env[1232]: time="2024-12-13T14:31:18.245597093Z" level=info msg="RemoveContainer for \"8d888a40e58d8b9762faaa5f1cac65d9d95a35d5216b69f3bfe82b1524665db3\"" Dec 13 14:31:18.252490 env[1232]: time="2024-12-13T14:31:18.252409207Z" level=info msg="RemoveContainer for \"8d888a40e58d8b9762faaa5f1cac65d9d95a35d5216b69f3bfe82b1524665db3\" returns successfully" Dec 13 14:31:18.294048 kubelet[1544]: I1213 14:31:18.293996 1544 topology_manager.go:215] "Topology Admit Handler" podUID="ea141fd2-7463-4622-bee4-10236e73e5dd" podNamespace="kube-system" podName="cilium-f5ngp" Dec 13 14:31:18.294266 kubelet[1544]: E1213 14:31:18.294110 1544 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d5e996e4-0280-4a62-8be5-9c3260ca9a9b" containerName="mount-cgroup" Dec 13 14:31:18.294266 kubelet[1544]: I1213 14:31:18.294156 1544 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5e996e4-0280-4a62-8be5-9c3260ca9a9b" containerName="mount-cgroup" Dec 13 14:31:18.296853 env[1232]: time="2024-12-13T14:31:18.296783799Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:31:18.300232 env[1232]: time="2024-12-13T14:31:18.300171901Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:31:18.302224 systemd[1]: Created slice kubepods-burstable-podea141fd2_7463_4622_bee4_10236e73e5dd.slice. Dec 13 14:31:18.305881 env[1232]: time="2024-12-13T14:31:18.305829006Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:31:18.306153 env[1232]: time="2024-12-13T14:31:18.305817969Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Dec 13 14:31:18.310123 env[1232]: time="2024-12-13T14:31:18.310064585Z" level=info msg="CreateContainer within sandbox \"39eb808c1bad8f4d815e2336fef7ac38b5dd5940b50c40d1790a05c56d38e238\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 13 14:31:18.328287 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3023018204.mount: Deactivated successfully. Dec 13 14:31:18.337915 env[1232]: time="2024-12-13T14:31:18.337867583Z" level=info msg="CreateContainer within sandbox \"39eb808c1bad8f4d815e2336fef7ac38b5dd5940b50c40d1790a05c56d38e238\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"ab62d9a5b3dd5e5251a2538ae41ef643d7ff1717e30da23ea7a6ae1007a88b3a\"" Dec 13 14:31:18.338800 env[1232]: time="2024-12-13T14:31:18.338711656Z" level=info msg="StartContainer for \"ab62d9a5b3dd5e5251a2538ae41ef643d7ff1717e30da23ea7a6ae1007a88b3a\"" Dec 13 14:31:18.369652 systemd[1]: Started cri-containerd-ab62d9a5b3dd5e5251a2538ae41ef643d7ff1717e30da23ea7a6ae1007a88b3a.scope. 
Dec 13 14:31:18.396974 kubelet[1544]: I1213 14:31:18.396917 1544 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ea141fd2-7463-4622-bee4-10236e73e5dd-cilium-run\") pod \"cilium-f5ngp\" (UID: \"ea141fd2-7463-4622-bee4-10236e73e5dd\") " pod="kube-system/cilium-f5ngp" Dec 13 14:31:18.396974 kubelet[1544]: I1213 14:31:18.396974 1544 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ea141fd2-7463-4622-bee4-10236e73e5dd-bpf-maps\") pod \"cilium-f5ngp\" (UID: \"ea141fd2-7463-4622-bee4-10236e73e5dd\") " pod="kube-system/cilium-f5ngp" Dec 13 14:31:18.397210 kubelet[1544]: I1213 14:31:18.397003 1544 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ea141fd2-7463-4622-bee4-10236e73e5dd-cilium-cgroup\") pod \"cilium-f5ngp\" (UID: \"ea141fd2-7463-4622-bee4-10236e73e5dd\") " pod="kube-system/cilium-f5ngp" Dec 13 14:31:18.397210 kubelet[1544]: I1213 14:31:18.397028 1544 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ea141fd2-7463-4622-bee4-10236e73e5dd-cni-path\") pod \"cilium-f5ngp\" (UID: \"ea141fd2-7463-4622-bee4-10236e73e5dd\") " pod="kube-system/cilium-f5ngp" Dec 13 14:31:18.397210 kubelet[1544]: I1213 14:31:18.397052 1544 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ea141fd2-7463-4622-bee4-10236e73e5dd-xtables-lock\") pod \"cilium-f5ngp\" (UID: \"ea141fd2-7463-4622-bee4-10236e73e5dd\") " pod="kube-system/cilium-f5ngp" Dec 13 14:31:18.397210 kubelet[1544]: I1213 14:31:18.397076 1544 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ea141fd2-7463-4622-bee4-10236e73e5dd-clustermesh-secrets\") pod \"cilium-f5ngp\" (UID: \"ea141fd2-7463-4622-bee4-10236e73e5dd\") " pod="kube-system/cilium-f5ngp" Dec 13 14:31:18.397210 kubelet[1544]: I1213 14:31:18.397102 1544 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ea141fd2-7463-4622-bee4-10236e73e5dd-etc-cni-netd\") pod \"cilium-f5ngp\" (UID: \"ea141fd2-7463-4622-bee4-10236e73e5dd\") " pod="kube-system/cilium-f5ngp" Dec 13 14:31:18.397210 kubelet[1544]: I1213 14:31:18.397125 1544 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ea141fd2-7463-4622-bee4-10236e73e5dd-cilium-ipsec-secrets\") pod \"cilium-f5ngp\" (UID: \"ea141fd2-7463-4622-bee4-10236e73e5dd\") " pod="kube-system/cilium-f5ngp" Dec 13 14:31:18.397210 kubelet[1544]: I1213 14:31:18.397152 1544 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ea141fd2-7463-4622-bee4-10236e73e5dd-host-proc-sys-kernel\") pod \"cilium-f5ngp\" (UID: \"ea141fd2-7463-4622-bee4-10236e73e5dd\") " pod="kube-system/cilium-f5ngp" Dec 13 14:31:18.397210 kubelet[1544]: I1213 14:31:18.397180 1544 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" 
(UniqueName: \"kubernetes.io/host-path/ea141fd2-7463-4622-bee4-10236e73e5dd-host-proc-sys-net\") pod \"cilium-f5ngp\" (UID: \"ea141fd2-7463-4622-bee4-10236e73e5dd\") " pod="kube-system/cilium-f5ngp" Dec 13 14:31:18.397210 kubelet[1544]: I1213 14:31:18.397209 1544 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ea141fd2-7463-4622-bee4-10236e73e5dd-hubble-tls\") pod \"cilium-f5ngp\" (UID: \"ea141fd2-7463-4622-bee4-10236e73e5dd\") " pod="kube-system/cilium-f5ngp" Dec 13 14:31:18.397733 kubelet[1544]: I1213 14:31:18.397234 1544 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ea141fd2-7463-4622-bee4-10236e73e5dd-hostproc\") pod \"cilium-f5ngp\" (UID: \"ea141fd2-7463-4622-bee4-10236e73e5dd\") " pod="kube-system/cilium-f5ngp" Dec 13 14:31:18.397733 kubelet[1544]: I1213 14:31:18.397261 1544 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ea141fd2-7463-4622-bee4-10236e73e5dd-lib-modules\") pod \"cilium-f5ngp\" (UID: \"ea141fd2-7463-4622-bee4-10236e73e5dd\") " pod="kube-system/cilium-f5ngp" Dec 13 14:31:18.397733 kubelet[1544]: I1213 14:31:18.397288 1544 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ea141fd2-7463-4622-bee4-10236e73e5dd-cilium-config-path\") pod \"cilium-f5ngp\" (UID: \"ea141fd2-7463-4622-bee4-10236e73e5dd\") " pod="kube-system/cilium-f5ngp" Dec 13 14:31:18.397733 kubelet[1544]: I1213 14:31:18.397318 1544 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6s2wp\" (UniqueName: \"kubernetes.io/projected/ea141fd2-7463-4622-bee4-10236e73e5dd-kube-api-access-6s2wp\") pod \"cilium-f5ngp\" (UID: \"ea141fd2-7463-4622-bee4-10236e73e5dd\") " pod="kube-system/cilium-f5ngp" Dec 13 14:31:18.417701 env[1232]: time="2024-12-13T14:31:18.417627648Z" level=info msg="StartContainer for \"ab62d9a5b3dd5e5251a2538ae41ef643d7ff1717e30da23ea7a6ae1007a88b3a\" returns successfully" Dec 13 14:31:18.610355 env[1232]: time="2024-12-13T14:31:18.610204887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-f5ngp,Uid:ea141fd2-7463-4622-bee4-10236e73e5dd,Namespace:kube-system,Attempt:0,}" Dec 13 14:31:18.634883 env[1232]: time="2024-12-13T14:31:18.634773776Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:31:18.635123 env[1232]: time="2024-12-13T14:31:18.634838637Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:31:18.635123 env[1232]: time="2024-12-13T14:31:18.634858161Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:31:18.635123 env[1232]: time="2024-12-13T14:31:18.635048935Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8e6a863fa3545132b136e82164f54c4dea2683edd77f07d843dfac1ef239cc6f pid=3262 runtime=io.containerd.runc.v2 Dec 13 14:31:18.665839 systemd[1]: Started cri-containerd-8e6a863fa3545132b136e82164f54c4dea2683edd77f07d843dfac1ef239cc6f.scope. 
Dec 13 14:31:18.717109 env[1232]: time="2024-12-13T14:31:18.717053748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-f5ngp,Uid:ea141fd2-7463-4622-bee4-10236e73e5dd,Namespace:kube-system,Attempt:0,} returns sandbox id \"8e6a863fa3545132b136e82164f54c4dea2683edd77f07d843dfac1ef239cc6f\"" Dec 13 14:31:18.721357 env[1232]: time="2024-12-13T14:31:18.721309139Z" level=info msg="CreateContainer within sandbox \"8e6a863fa3545132b136e82164f54c4dea2683edd77f07d843dfac1ef239cc6f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 14:31:18.738594 env[1232]: time="2024-12-13T14:31:18.738531050Z" level=info msg="CreateContainer within sandbox \"8e6a863fa3545132b136e82164f54c4dea2683edd77f07d843dfac1ef239cc6f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"fdbd3ad7d1d8b2353d1d6866490e2e06d9abdee9f01122133500f40395de5ddd\"" Dec 13 14:31:18.739582 env[1232]: time="2024-12-13T14:31:18.739541553Z" level=info msg="StartContainer for \"fdbd3ad7d1d8b2353d1d6866490e2e06d9abdee9f01122133500f40395de5ddd\"" Dec 13 14:31:18.763258 systemd[1]: Started cri-containerd-fdbd3ad7d1d8b2353d1d6866490e2e06d9abdee9f01122133500f40395de5ddd.scope. Dec 13 14:31:18.804925 env[1232]: time="2024-12-13T14:31:18.804854067Z" level=info msg="StartContainer for \"fdbd3ad7d1d8b2353d1d6866490e2e06d9abdee9f01122133500f40395de5ddd\" returns successfully" Dec 13 14:31:18.814689 systemd[1]: cri-containerd-fdbd3ad7d1d8b2353d1d6866490e2e06d9abdee9f01122133500f40395de5ddd.scope: Deactivated successfully. Dec 13 14:31:18.833941 kubelet[1544]: E1213 14:31:18.833891 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:31:18.999727 kubelet[1544]: I1213 14:31:18.999668 1544 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5e996e4-0280-4a62-8be5-9c3260ca9a9b" path="/var/lib/kubelet/pods/d5e996e4-0280-4a62-8be5-9c3260ca9a9b/volumes" Dec 13 14:31:19.012742 env[1232]: time="2024-12-13T14:31:19.012563919Z" level=info msg="shim disconnected" id=fdbd3ad7d1d8b2353d1d6866490e2e06d9abdee9f01122133500f40395de5ddd Dec 13 14:31:19.012944 env[1232]: time="2024-12-13T14:31:19.012726614Z" level=warning msg="cleaning up after shim disconnected" id=fdbd3ad7d1d8b2353d1d6866490e2e06d9abdee9f01122133500f40395de5ddd namespace=k8s.io Dec 13 14:31:19.012944 env[1232]: time="2024-12-13T14:31:19.012758150Z" level=info msg="cleaning up dead shim" Dec 13 14:31:19.024792 env[1232]: time="2024-12-13T14:31:19.024737126Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:31:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3347 runtime=io.containerd.runc.v2\n" Dec 13 14:31:19.246152 env[1232]: time="2024-12-13T14:31:19.246086535Z" level=info msg="CreateContainer within sandbox \"8e6a863fa3545132b136e82164f54c4dea2683edd77f07d843dfac1ef239cc6f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 14:31:19.266270 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1722500116.mount: Deactivated successfully. 
Dec 13 14:31:19.273409 kubelet[1544]: I1213 14:31:19.273336 1544 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-b6p4d" podStartSLOduration=2.20844293 podStartE2EDuration="4.273311612s" podCreationTimestamp="2024-12-13 14:31:15 +0000 UTC" firstStartedPulling="2024-12-13 14:31:16.242488107 +0000 UTC m=+64.015178527" lastFinishedPulling="2024-12-13 14:31:18.307356793 +0000 UTC m=+66.080047209" observedRunningTime="2024-12-13 14:31:19.251855088 +0000 UTC m=+67.024545528" watchObservedRunningTime="2024-12-13 14:31:19.273311612 +0000 UTC m=+67.046002050" Dec 13 14:31:19.278009 env[1232]: time="2024-12-13T14:31:19.277950107Z" level=info msg="CreateContainer within sandbox \"8e6a863fa3545132b136e82164f54c4dea2683edd77f07d843dfac1ef239cc6f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3f441f6cda2bfed608ac19d28bde47a883f3ad2895a82a420895db7a79f2903d\"" Dec 13 14:31:19.278894 env[1232]: time="2024-12-13T14:31:19.278852182Z" level=info msg="StartContainer for \"3f441f6cda2bfed608ac19d28bde47a883f3ad2895a82a420895db7a79f2903d\"" Dec 13 14:31:19.306570 systemd[1]: Started cri-containerd-3f441f6cda2bfed608ac19d28bde47a883f3ad2895a82a420895db7a79f2903d.scope. Dec 13 14:31:19.352397 env[1232]: time="2024-12-13T14:31:19.350234029Z" level=info msg="StartContainer for \"3f441f6cda2bfed608ac19d28bde47a883f3ad2895a82a420895db7a79f2903d\" returns successfully" Dec 13 14:31:19.358643 systemd[1]: cri-containerd-3f441f6cda2bfed608ac19d28bde47a883f3ad2895a82a420895db7a79f2903d.scope: Deactivated successfully. Dec 13 14:31:19.387015 env[1232]: time="2024-12-13T14:31:19.386957432Z" level=info msg="shim disconnected" id=3f441f6cda2bfed608ac19d28bde47a883f3ad2895a82a420895db7a79f2903d Dec 13 14:31:19.387320 env[1232]: time="2024-12-13T14:31:19.387295880Z" level=warning msg="cleaning up after shim disconnected" id=3f441f6cda2bfed608ac19d28bde47a883f3ad2895a82a420895db7a79f2903d namespace=k8s.io Dec 13 14:31:19.387448 env[1232]: time="2024-12-13T14:31:19.387407339Z" level=info msg="cleaning up dead shim" Dec 13 14:31:19.398740 env[1232]: time="2024-12-13T14:31:19.398675427Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:31:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3408 runtime=io.containerd.runc.v2\n" Dec 13 14:31:19.434796 kubelet[1544]: W1213 14:31:19.434707 1544 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd5e996e4_0280_4a62_8be5_9c3260ca9a9b.slice/cri-containerd-8d888a40e58d8b9762faaa5f1cac65d9d95a35d5216b69f3bfe82b1524665db3.scope WatchSource:0}: container "8d888a40e58d8b9762faaa5f1cac65d9d95a35d5216b69f3bfe82b1524665db3" in namespace "k8s.io": not found Dec 13 14:31:19.834589 kubelet[1544]: E1213 14:31:19.834524 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:31:19.991958 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3f441f6cda2bfed608ac19d28bde47a883f3ad2895a82a420895db7a79f2903d-rootfs.mount: Deactivated successfully. Dec 13 14:31:20.254895 env[1232]: time="2024-12-13T14:31:20.254826821Z" level=info msg="CreateContainer within sandbox \"8e6a863fa3545132b136e82164f54c4dea2683edd77f07d843dfac1ef239cc6f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 14:31:20.278832 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3285194019.mount: Deactivated successfully. 
Dec 13 14:31:20.295609 env[1232]: time="2024-12-13T14:31:20.295542809Z" level=info msg="CreateContainer within sandbox \"8e6a863fa3545132b136e82164f54c4dea2683edd77f07d843dfac1ef239cc6f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"bef44c2b7e9f5176a6974b2800ec10bb69a15467c9391acb95d3aa6bd0a723b4\"" Dec 13 14:31:20.296399 env[1232]: time="2024-12-13T14:31:20.296351048Z" level=info msg="StartContainer for \"bef44c2b7e9f5176a6974b2800ec10bb69a15467c9391acb95d3aa6bd0a723b4\"" Dec 13 14:31:20.330542 systemd[1]: Started cri-containerd-bef44c2b7e9f5176a6974b2800ec10bb69a15467c9391acb95d3aa6bd0a723b4.scope. Dec 13 14:31:20.375501 env[1232]: time="2024-12-13T14:31:20.374370356Z" level=info msg="StartContainer for \"bef44c2b7e9f5176a6974b2800ec10bb69a15467c9391acb95d3aa6bd0a723b4\" returns successfully" Dec 13 14:31:20.379015 systemd[1]: cri-containerd-bef44c2b7e9f5176a6974b2800ec10bb69a15467c9391acb95d3aa6bd0a723b4.scope: Deactivated successfully. Dec 13 14:31:20.412647 env[1232]: time="2024-12-13T14:31:20.412573314Z" level=info msg="shim disconnected" id=bef44c2b7e9f5176a6974b2800ec10bb69a15467c9391acb95d3aa6bd0a723b4 Dec 13 14:31:20.412647 env[1232]: time="2024-12-13T14:31:20.412630616Z" level=warning msg="cleaning up after shim disconnected" id=bef44c2b7e9f5176a6974b2800ec10bb69a15467c9391acb95d3aa6bd0a723b4 namespace=k8s.io Dec 13 14:31:20.412647 env[1232]: time="2024-12-13T14:31:20.412648475Z" level=info msg="cleaning up dead shim" Dec 13 14:31:20.425532 env[1232]: time="2024-12-13T14:31:20.425400897Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:31:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3468 runtime=io.containerd.runc.v2\n" Dec 13 14:31:20.835297 kubelet[1544]: E1213 14:31:20.835225 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:31:20.992228 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bef44c2b7e9f5176a6974b2800ec10bb69a15467c9391acb95d3aa6bd0a723b4-rootfs.mount: Deactivated successfully. Dec 13 14:31:21.260959 env[1232]: time="2024-12-13T14:31:21.260888942Z" level=info msg="CreateContainer within sandbox \"8e6a863fa3545132b136e82164f54c4dea2683edd77f07d843dfac1ef239cc6f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 14:31:21.283336 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3142287588.mount: Deactivated successfully. Dec 13 14:31:21.295041 env[1232]: time="2024-12-13T14:31:21.294979033Z" level=info msg="CreateContainer within sandbox \"8e6a863fa3545132b136e82164f54c4dea2683edd77f07d843dfac1ef239cc6f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"cbeee283dbb18477c61db77f5e6d068749067e8f0efda40f76ebee1ebb6264bd\"" Dec 13 14:31:21.296032 env[1232]: time="2024-12-13T14:31:21.295988889Z" level=info msg="StartContainer for \"cbeee283dbb18477c61db77f5e6d068749067e8f0efda40f76ebee1ebb6264bd\"" Dec 13 14:31:21.322711 systemd[1]: Started cri-containerd-cbeee283dbb18477c61db77f5e6d068749067e8f0efda40f76ebee1ebb6264bd.scope. Dec 13 14:31:21.364153 systemd[1]: cri-containerd-cbeee283dbb18477c61db77f5e6d068749067e8f0efda40f76ebee1ebb6264bd.scope: Deactivated successfully. 
Dec 13 14:31:21.368665 env[1232]: time="2024-12-13T14:31:21.368550594Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podea141fd2_7463_4622_bee4_10236e73e5dd.slice/cri-containerd-cbeee283dbb18477c61db77f5e6d068749067e8f0efda40f76ebee1ebb6264bd.scope/memory.events\": no such file or directory" Dec 13 14:31:21.368860 env[1232]: time="2024-12-13T14:31:21.368794345Z" level=info msg="StartContainer for \"cbeee283dbb18477c61db77f5e6d068749067e8f0efda40f76ebee1ebb6264bd\" returns successfully" Dec 13 14:31:21.397396 env[1232]: time="2024-12-13T14:31:21.397334652Z" level=info msg="shim disconnected" id=cbeee283dbb18477c61db77f5e6d068749067e8f0efda40f76ebee1ebb6264bd Dec 13 14:31:21.397396 env[1232]: time="2024-12-13T14:31:21.397395717Z" level=warning msg="cleaning up after shim disconnected" id=cbeee283dbb18477c61db77f5e6d068749067e8f0efda40f76ebee1ebb6264bd namespace=k8s.io Dec 13 14:31:21.397866 env[1232]: time="2024-12-13T14:31:21.397411751Z" level=info msg="cleaning up dead shim" Dec 13 14:31:21.409338 env[1232]: time="2024-12-13T14:31:21.409284590Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:31:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3521 runtime=io.containerd.runc.v2\n" Dec 13 14:31:21.836323 kubelet[1544]: E1213 14:31:21.836246 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:31:21.992473 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cbeee283dbb18477c61db77f5e6d068749067e8f0efda40f76ebee1ebb6264bd-rootfs.mount: Deactivated successfully. Dec 13 14:31:22.267018 env[1232]: time="2024-12-13T14:31:22.266957436Z" level=info msg="CreateContainer within sandbox \"8e6a863fa3545132b136e82164f54c4dea2683edd77f07d843dfac1ef239cc6f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 14:31:22.300566 env[1232]: time="2024-12-13T14:31:22.300510359Z" level=info msg="CreateContainer within sandbox \"8e6a863fa3545132b136e82164f54c4dea2683edd77f07d843dfac1ef239cc6f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f9a0933a0f7ca4a8643a837ef8d4dec1378d422c5ed3b30fec21b08b7352f6ba\"" Dec 13 14:31:22.301562 env[1232]: time="2024-12-13T14:31:22.301471105Z" level=info msg="StartContainer for \"f9a0933a0f7ca4a8643a837ef8d4dec1378d422c5ed3b30fec21b08b7352f6ba\"" Dec 13 14:31:22.343326 systemd[1]: Started cri-containerd-f9a0933a0f7ca4a8643a837ef8d4dec1378d422c5ed3b30fec21b08b7352f6ba.scope. 
Dec 13 14:31:22.389366 env[1232]: time="2024-12-13T14:31:22.389310551Z" level=info msg="StartContainer for \"f9a0933a0f7ca4a8643a837ef8d4dec1378d422c5ed3b30fec21b08b7352f6ba\" returns successfully" Dec 13 14:31:22.550667 kubelet[1544]: W1213 14:31:22.550317 1544 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podea141fd2_7463_4622_bee4_10236e73e5dd.slice/cri-containerd-fdbd3ad7d1d8b2353d1d6866490e2e06d9abdee9f01122133500f40395de5ddd.scope WatchSource:0}: task fdbd3ad7d1d8b2353d1d6866490e2e06d9abdee9f01122133500f40395de5ddd not found: not found Dec 13 14:31:22.824490 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Dec 13 14:31:22.837455 kubelet[1544]: E1213 14:31:22.837357 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:31:22.996724 systemd[1]: run-containerd-runc-k8s.io-f9a0933a0f7ca4a8643a837ef8d4dec1378d422c5ed3b30fec21b08b7352f6ba-runc.gog7XH.mount: Deactivated successfully. Dec 13 14:31:23.837665 kubelet[1544]: E1213 14:31:23.837592 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:31:24.838173 kubelet[1544]: E1213 14:31:24.838122 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:31:25.663454 kubelet[1544]: W1213 14:31:25.661140 1544 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podea141fd2_7463_4622_bee4_10236e73e5dd.slice/cri-containerd-3f441f6cda2bfed608ac19d28bde47a883f3ad2895a82a420895db7a79f2903d.scope WatchSource:0}: task 3f441f6cda2bfed608ac19d28bde47a883f3ad2895a82a420895db7a79f2903d not found: not found Dec 13 14:31:25.839193 kubelet[1544]: E1213 14:31:25.839126 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:31:25.894985 systemd-networkd[1037]: lxc_health: Link UP Dec 13 14:31:25.910460 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Dec 13 14:31:25.913016 systemd-networkd[1037]: lxc_health: Gained carrier Dec 13 14:31:26.639932 kubelet[1544]: I1213 14:31:26.639819 1544 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-f5ngp" podStartSLOduration=8.639792603 podStartE2EDuration="8.639792603s" podCreationTimestamp="2024-12-13 14:31:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:31:23.307964505 +0000 UTC m=+71.080654942" watchObservedRunningTime="2024-12-13 14:31:26.639792603 +0000 UTC m=+74.412483041" Dec 13 14:31:26.840087 kubelet[1544]: E1213 14:31:26.840022 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:31:27.185780 systemd-networkd[1037]: lxc_health: Gained IPv6LL Dec 13 14:31:27.841966 kubelet[1544]: E1213 14:31:27.841893 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:31:27.910806 systemd[1]: run-containerd-runc-k8s.io-f9a0933a0f7ca4a8643a837ef8d4dec1378d422c5ed3b30fec21b08b7352f6ba-runc.i5hXhZ.mount: Deactivated successfully. 
Dec 13 14:31:28.771764 kubelet[1544]: W1213 14:31:28.771708 1544 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podea141fd2_7463_4622_bee4_10236e73e5dd.slice/cri-containerd-bef44c2b7e9f5176a6974b2800ec10bb69a15467c9391acb95d3aa6bd0a723b4.scope WatchSource:0}: task bef44c2b7e9f5176a6974b2800ec10bb69a15467c9391acb95d3aa6bd0a723b4 not found: not found Dec 13 14:31:28.843177 kubelet[1544]: E1213 14:31:28.843072 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:31:29.843971 kubelet[1544]: E1213 14:31:29.843921 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:31:30.845087 kubelet[1544]: E1213 14:31:30.845039 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:31:31.846894 kubelet[1544]: E1213 14:31:31.846825 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:31:31.883952 kubelet[1544]: W1213 14:31:31.883892 1544 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podea141fd2_7463_4622_bee4_10236e73e5dd.slice/cri-containerd-cbeee283dbb18477c61db77f5e6d068749067e8f0efda40f76ebee1ebb6264bd.scope WatchSource:0}: task cbeee283dbb18477c61db77f5e6d068749067e8f0efda40f76ebee1ebb6264bd not found: not found Dec 13 14:31:32.412890 systemd[1]: run-containerd-runc-k8s.io-f9a0933a0f7ca4a8643a837ef8d4dec1378d422c5ed3b30fec21b08b7352f6ba-runc.nDsEWu.mount: Deactivated successfully. Dec 13 14:31:32.778907 kubelet[1544]: E1213 14:31:32.778837 1544 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:31:32.847044 kubelet[1544]: E1213 14:31:32.846961 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:31:33.847695 kubelet[1544]: E1213 14:31:33.847625 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:31:34.848855 kubelet[1544]: E1213 14:31:34.848782 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:31:35.849625 kubelet[1544]: E1213 14:31:35.849553 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"