Dec 13 14:25:27.134579 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Dec 13 12:55:10 -00 2024
Dec 13 14:25:27.134612 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e
Dec 13 14:25:27.134627 kernel: BIOS-provided physical RAM map:
Dec 13 14:25:27.134639 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 13 14:25:27.134648 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 13 14:25:27.134658 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 13 14:25:27.134674 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable
Dec 13 14:25:27.134685 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved
Dec 13 14:25:27.134696 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved
Dec 13 14:25:27.134707 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 13 14:25:27.134720 kernel: NX (Execute Disable) protection: active
Dec 13 14:25:27.134731 kernel: SMBIOS 2.7 present.
Dec 13 14:25:27.134742 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Dec 13 14:25:27.134754 kernel: Hypervisor detected: KVM
Dec 13 14:25:27.134772 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 13 14:25:27.134785 kernel: kvm-clock: cpu 0, msr 4d19a001, primary cpu clock
Dec 13 14:25:27.134798 kernel: kvm-clock: using sched offset of 7791836604 cycles
Dec 13 14:25:27.134812 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 13 14:25:27.134825 kernel: tsc: Detected 2499.996 MHz processor
Dec 13 14:25:27.134838 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 14:25:27.134854 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 14:25:27.134867 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000
Dec 13 14:25:27.134880 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 14:25:27.134893 kernel: Using GB pages for direct mapping
Dec 13 14:25:27.134906 kernel: ACPI: Early table checksum verification disabled
Dec 13 14:25:27.134918 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON)
Dec 13 14:25:27.134932 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001)
Dec 13 14:25:27.134945 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Dec 13 14:25:27.134958 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Dec 13 14:25:27.134973 kernel: ACPI: FACS 0x000000007D9EFF40 000040
Dec 13 14:25:27.134986 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Dec 13 14:25:27.134998 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Dec 13 14:25:27.135011 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Dec 13 14:25:27.135024 kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Dec 13 14:25:27.135037 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Dec 13 14:25:27.135049 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Dec 13 14:25:27.135062 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Dec 13 14:25:27.135078 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3]
Dec 13 14:25:27.135091 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488]
Dec 13 14:25:27.135104 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f]
Dec 13 14:25:27.135122 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39]
Dec 13 14:25:27.135152 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645]
Dec 13 14:25:27.135166 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf]
Dec 13 14:25:27.135180 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b]
Dec 13 14:25:27.135197 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7]
Dec 13 14:25:27.135211 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037]
Dec 13 14:25:27.135225 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba]
Dec 13 14:25:27.135239 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Dec 13 14:25:27.135252 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Dec 13 14:25:27.135266 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Dec 13 14:25:27.135280 kernel: NUMA: Initialized distance table, cnt=1
Dec 13 14:25:27.135293 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff]
Dec 13 14:25:27.135310 kernel: Zone ranges:
Dec 13 14:25:27.135324 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 14:25:27.135338 kernel: DMA32 [mem 0x0000000001000000-0x000000007d9e9fff]
Dec 13 14:25:27.135352 kernel: Normal empty
Dec 13 14:25:27.135365 kernel: Movable zone start for each node
Dec 13 14:25:27.135379 kernel: Early memory node ranges
Dec 13 14:25:27.135393 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Dec 13 14:25:27.135407 kernel: node 0: [mem 0x0000000000100000-0x000000007d9e9fff]
Dec 13 14:25:27.135420 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff]
Dec 13 14:25:27.135437 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 14:25:27.135450 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 13 14:25:27.135464 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges
Dec 13 14:25:27.135478 kernel: ACPI: PM-Timer IO Port: 0xb008
Dec 13 14:25:27.135491 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 13 14:25:27.135505 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Dec 13 14:25:27.135519 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 13 14:25:27.135531 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 14:25:27.135545 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 13 14:25:27.135563 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 13 14:25:27.135577 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 14:25:27.135590 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 13 14:25:27.135664 kernel: TSC deadline timer available
Dec 13 14:25:27.135677 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Dec 13 14:25:27.135692 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Dec 13 14:25:27.135705 kernel: Booting paravirtualized kernel on KVM
Dec 13 14:25:27.135720 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 14:25:27.135734 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Dec 13 14:25:27.135751 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576
Dec 13 14:25:27.135766 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152
Dec 13 14:25:27.135780 kernel: pcpu-alloc: [0] 0 1
Dec 13 14:25:27.135793 kernel: kvm-guest: stealtime: cpu 0, msr 7b61c0c0
Dec 13 14:25:27.135807 kernel: kvm-guest: PV spinlocks enabled
Dec 13 14:25:27.135820 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 13 14:25:27.135834 kernel: Built 1 zonelists, mobility grouping on. Total pages: 506242
Dec 13 14:25:27.135848 kernel: Policy zone: DMA32
Dec 13 14:25:27.135865 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e
Dec 13 14:25:27.135883 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 14:25:27.135897 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 14:25:27.135911 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 13 14:25:27.135926 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 14:25:27.135940 kernel: Memory: 1934420K/2057760K available (12294K kernel code, 2275K rwdata, 13716K rodata, 47472K init, 4112K bss, 123080K reserved, 0K cma-reserved)
Dec 13 14:25:27.135955 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 13 14:25:27.135969 kernel: Kernel/User page tables isolation: enabled
Dec 13 14:25:27.135983 kernel: ftrace: allocating 34549 entries in 135 pages
Dec 13 14:25:27.136000 kernel: ftrace: allocated 135 pages with 4 groups
Dec 13 14:25:27.136014 kernel: rcu: Hierarchical RCU implementation.
Dec 13 14:25:27.136029 kernel: rcu: RCU event tracing is enabled.
Dec 13 14:25:27.136043 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 13 14:25:27.136057 kernel: Rude variant of Tasks RCU enabled.
Dec 13 14:25:27.136071 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 14:25:27.136086 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 14:25:27.136100 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 13 14:25:27.136114 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Dec 13 14:25:27.136166 kernel: random: crng init done
Dec 13 14:25:27.136181 kernel: Console: colour VGA+ 80x25
Dec 13 14:25:27.136195 kernel: printk: console [ttyS0] enabled
Dec 13 14:25:27.136209 kernel: ACPI: Core revision 20210730
Dec 13 14:25:27.136224 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Dec 13 14:25:27.136238 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 14:25:27.136252 kernel: x2apic enabled
Dec 13 14:25:27.136266 kernel: Switched APIC routing to physical x2apic.
Dec 13 14:25:27.136279 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Dec 13 14:25:27.136295 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996)
Dec 13 14:25:27.136309 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Dec 13 14:25:27.136323 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Dec 13 14:25:27.136338 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 14:25:27.136363 kernel: Spectre V2 : Mitigation: Retpolines
Dec 13 14:25:27.136380 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 14:25:27.136395 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 13 14:25:27.136410 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Dec 13 14:25:27.136426 kernel: RETBleed: Vulnerable
Dec 13 14:25:27.136440 kernel: Speculative Store Bypass: Vulnerable
Dec 13 14:25:27.136454 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Dec 13 14:25:27.136469 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Dec 13 14:25:27.136484 kernel: GDS: Unknown: Dependent on hypervisor status
Dec 13 14:25:27.136499 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 14:25:27.136516 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 14:25:27.136531 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 14:25:27.136545 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Dec 13 14:25:27.136561 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Dec 13 14:25:27.136576 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Dec 13 14:25:27.136594 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Dec 13 14:25:27.136609 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Dec 13 14:25:27.136623 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Dec 13 14:25:27.136638 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 13 14:25:27.136653 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Dec 13 14:25:27.136668 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Dec 13 14:25:27.136684 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Dec 13 14:25:27.136698 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Dec 13 14:25:27.136713 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Dec 13 14:25:27.136728 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Dec 13 14:25:27.136742 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Dec 13 14:25:27.136757 kernel: Freeing SMP alternatives memory: 32K
Dec 13 14:25:27.136775 kernel: pid_max: default: 32768 minimum: 301
Dec 13 14:25:27.136790 kernel: LSM: Security Framework initializing
Dec 13 14:25:27.136805 kernel: SELinux: Initializing.
Dec 13 14:25:27.136820 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 14:25:27.136835 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 14:25:27.136851 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Dec 13 14:25:27.136866 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Dec 13 14:25:27.136881 kernel: signal: max sigframe size: 3632
Dec 13 14:25:27.136896 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 14:25:27.136911 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Dec 13 14:25:27.136929 kernel: smp: Bringing up secondary CPUs ...
Dec 13 14:25:27.136943 kernel: x86: Booting SMP configuration:
Dec 13 14:25:27.137123 kernel: .... node #0, CPUs: #1
Dec 13 14:25:27.137148 kernel: kvm-clock: cpu 1, msr 4d19a041, secondary cpu clock
Dec 13 14:25:27.137163 kernel: kvm-guest: stealtime: cpu 1, msr 7b71c0c0
Dec 13 14:25:27.137179 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Dec 13 14:25:27.137195 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Dec 13 14:25:27.137209 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 14:25:27.137224 kernel: smpboot: Max logical packages: 1
Dec 13 14:25:27.137244 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS)
Dec 13 14:25:27.137259 kernel: devtmpfs: initialized
Dec 13 14:25:27.137274 kernel: x86/mm: Memory block size: 128MB
Dec 13 14:25:27.137289 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 14:25:27.137304 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 13 14:25:27.137319 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 14:25:27.137334 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 14:25:27.137348 kernel: audit: initializing netlink subsys (disabled)
Dec 13 14:25:27.137362 kernel: audit: type=2000 audit(1734099926.878:1): state=initialized audit_enabled=0 res=1
Dec 13 14:25:27.137379 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 14:25:27.137394 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 14:25:27.137408 kernel: cpuidle: using governor menu
Dec 13 14:25:27.137424 kernel: ACPI: bus type PCI registered
Dec 13 14:25:27.137439 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 14:25:27.137454 kernel: dca service started, version 1.12.1
Dec 13 14:25:27.137470 kernel: PCI: Using configuration type 1 for base access
Dec 13 14:25:27.137485 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 14:25:27.137500 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 14:25:27.137518 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 14:25:27.137533 kernel: ACPI: Added _OSI(Module Device)
Dec 13 14:25:27.137547 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 14:25:27.137563 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 14:25:27.137578 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 14:25:27.137592 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Dec 13 14:25:27.137607 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Dec 13 14:25:27.137622 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Dec 13 14:25:27.137637 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Dec 13 14:25:27.137655 kernel: ACPI: Interpreter enabled
Dec 13 14:25:27.137670 kernel: ACPI: PM: (supports S0 S5)
Dec 13 14:25:27.137685 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 14:25:27.137700 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 14:25:27.137715 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Dec 13 14:25:27.137730 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 14:25:27.137966 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 14:25:27.138100 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
Dec 13 14:25:27.138125 kernel: acpiphp: Slot [3] registered
Dec 13 14:25:27.138152 kernel: acpiphp: Slot [4] registered
Dec 13 14:25:27.138167 kernel: acpiphp: Slot [5] registered
Dec 13 14:25:27.138182 kernel: acpiphp: Slot [6] registered
Dec 13 14:25:27.138197 kernel: acpiphp: Slot [7] registered
Dec 13 14:25:27.138212 kernel: acpiphp: Slot [8] registered
Dec 13 14:25:27.138332 kernel: acpiphp: Slot [9] registered
Dec 13 14:25:27.138349 kernel: acpiphp: Slot [10] registered
Dec 13 14:25:27.138364 kernel: acpiphp: Slot [11] registered
Dec 13 14:25:27.138399 kernel: acpiphp: Slot [12] registered
Dec 13 14:25:27.138413 kernel: acpiphp: Slot [13] registered
Dec 13 14:25:27.138428 kernel: acpiphp: Slot [14] registered
Dec 13 14:25:27.138443 kernel: acpiphp: Slot [15] registered
Dec 13 14:25:27.138458 kernel: acpiphp: Slot [16] registered
Dec 13 14:25:27.138479 kernel: acpiphp: Slot [17] registered
Dec 13 14:25:27.138493 kernel: acpiphp: Slot [18] registered
Dec 13 14:25:27.138508 kernel: acpiphp: Slot [19] registered
Dec 13 14:25:27.138522 kernel: acpiphp: Slot [20] registered
Dec 13 14:25:27.138541 kernel: acpiphp: Slot [21] registered
Dec 13 14:25:27.138556 kernel: acpiphp: Slot [22] registered
Dec 13 14:25:27.138571 kernel: acpiphp: Slot [23] registered
Dec 13 14:25:27.138586 kernel: acpiphp: Slot [24] registered
Dec 13 14:25:27.138601 kernel: acpiphp: Slot [25] registered
Dec 13 14:25:27.138615 kernel: acpiphp: Slot [26] registered
Dec 13 14:25:27.138630 kernel: acpiphp: Slot [27] registered
Dec 13 14:25:27.138645 kernel: acpiphp: Slot [28] registered
Dec 13 14:25:27.138660 kernel: acpiphp: Slot [29] registered
Dec 13 14:25:27.138675 kernel: acpiphp: Slot [30] registered
Dec 13 14:25:27.138692 kernel: acpiphp: Slot [31] registered
Dec 13 14:25:27.138707 kernel: PCI host bridge to bus 0000:00
Dec 13 14:25:27.138852 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 13 14:25:27.139078 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 13 14:25:27.139217 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 14:25:27.139331 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Dec 13 14:25:27.139440 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 14:25:27.139585 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Dec 13 14:25:27.139722 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Dec 13 14:25:27.139857 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Dec 13 14:25:27.139984 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Dec 13 14:25:27.140110 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB
Dec 13 14:25:27.140247 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Dec 13 14:25:27.140454 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Dec 13 14:25:27.140588 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Dec 13 14:25:27.140715 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Dec 13 14:25:27.140843 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Dec 13 14:25:27.140967 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Dec 13 14:25:27.141156 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Dec 13 14:25:27.141588 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref]
Dec 13 14:25:27.141785 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Dec 13 14:25:27.141921 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 13 14:25:27.142063 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Dec 13 14:25:27.142204 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff]
Dec 13 14:25:27.142393 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Dec 13 14:25:27.142532 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff]
Dec 13 14:25:27.142552 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 13 14:25:27.142572 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 13 14:25:27.142641 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 14:25:27.142657 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 13 14:25:27.142673 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Dec 13 14:25:27.142727 kernel: iommu: Default domain type: Translated
Dec 13 14:25:27.142744 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 14:25:27.142880 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Dec 13 14:25:27.143086 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 13 14:25:27.143284 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Dec 13 14:25:27.143307 kernel: vgaarb: loaded
Dec 13 14:25:27.143321 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 13 14:25:27.143334 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Dec 13 14:25:27.143348 kernel: PTP clock support registered
Dec 13 14:25:27.143360 kernel: PCI: Using ACPI for IRQ routing
Dec 13 14:25:27.143375 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 14:25:27.143389 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 13 14:25:27.143403 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff]
Dec 13 14:25:27.143420 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Dec 13 14:25:27.143434 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Dec 13 14:25:27.143448 kernel: clocksource: Switched to clocksource kvm-clock
Dec 13 14:25:27.143462 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 14:25:27.143477 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 14:25:27.143491 kernel: pnp: PnP ACPI init
Dec 13 14:25:27.143505 kernel: pnp: PnP ACPI: found 5 devices
Dec 13 14:25:27.143520 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 14:25:27.143534 kernel: NET: Registered PF_INET protocol family
Dec 13 14:25:27.143707 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 14:25:27.143725 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Dec 13 14:25:27.143739 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 14:25:27.143752 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 14:25:27.143766 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
Dec 13 14:25:27.143780 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Dec 13 14:25:27.143792 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 14:25:27.143806 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 14:25:27.143821 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 14:25:27.143840 kernel: NET: Registered PF_XDP protocol family
Dec 13 14:25:27.143969 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 13 14:25:27.144080 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 13 14:25:27.144196 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 14:25:27.144305 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Dec 13 14:25:27.144466 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Dec 13 14:25:27.148398 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds
Dec 13 14:25:27.148445 kernel: PCI: CLS 0 bytes, default 64
Dec 13 14:25:27.148460 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Dec 13 14:25:27.148474 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Dec 13 14:25:27.148488 kernel: clocksource: Switched to clocksource tsc
Dec 13 14:25:27.148501 kernel: Initialise system trusted keyrings
Dec 13 14:25:27.148513 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Dec 13 14:25:27.148526 kernel: Key type asymmetric registered
Dec 13 14:25:27.148539 kernel: Asymmetric key parser 'x509' registered
Dec 13 14:25:27.148553 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Dec 13 14:25:27.148568 kernel: io scheduler mq-deadline registered
Dec 13 14:25:27.148801 kernel: io scheduler kyber registered
Dec 13 14:25:27.148815 kernel: io scheduler bfq registered
Dec 13 14:25:27.148829 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 13 14:25:27.148842 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 14:25:27.148855 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 13 14:25:27.148867 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 13 14:25:27.148880 kernel: i8042: Warning: Keylock active
Dec 13 14:25:27.148893 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 13 14:25:27.148910 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 13 14:25:27.149062 kernel: rtc_cmos 00:00: RTC can wake from S4
Dec 13 14:25:27.149187 kernel: rtc_cmos 00:00: registered as rtc0
Dec 13 14:25:27.150558 kernel: rtc_cmos 00:00: setting system clock to 2024-12-13T14:25:26 UTC (1734099926)
Dec 13 14:25:27.150673 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Dec 13 14:25:27.150688 kernel: intel_pstate: CPU model not supported
Dec 13 14:25:27.150701 kernel: NET: Registered PF_INET6 protocol family
Dec 13 14:25:27.150712 kernel: Segment Routing with IPv6
Dec 13 14:25:27.150731 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 14:25:27.150743 kernel: NET: Registered PF_PACKET protocol family
Dec 13 14:25:27.150755 kernel: Key type dns_resolver registered
Dec 13 14:25:27.150768 kernel: IPI shorthand broadcast: enabled
Dec 13 14:25:27.150782 kernel: sched_clock: Marking stable (397013798, 276160865)->(804349588, -131174925)
Dec 13 14:25:27.150795 kernel: registered taskstats version 1
Dec 13 14:25:27.150809 kernel: Loading compiled-in X.509 certificates
Dec 13 14:25:27.154492 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: e1d88c9e01f5bb2adeb5b99325e46e5ca8dff115'
Dec 13 14:25:27.154509 kernel: Key type .fscrypt registered
Dec 13 14:25:27.154529 kernel: Key type fscrypt-provisioning registered
Dec 13 14:25:27.154543 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 14:25:27.154555 kernel: ima: Allocated hash algorithm: sha1
Dec 13 14:25:27.154568 kernel: ima: No architecture policies found
Dec 13 14:25:27.154582 kernel: clk: Disabling unused clocks
Dec 13 14:25:27.154595 kernel: Freeing unused kernel image (initmem) memory: 47472K
Dec 13 14:25:27.154610 kernel: Write protecting the kernel read-only data: 28672k
Dec 13 14:25:27.154624 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Dec 13 14:25:27.154637 kernel: Freeing unused kernel image (rodata/data gap) memory: 620K
Dec 13 14:25:27.154655 kernel: Run /init as init process
Dec 13 14:25:27.154668 kernel: with arguments:
Dec 13 14:25:27.154682 kernel: /init
Dec 13 14:25:27.154695 kernel: with environment:
Dec 13 14:25:27.154709 kernel: HOME=/
Dec 13 14:25:27.154722 kernel: TERM=linux
Dec 13 14:25:27.154735 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 14:25:27.154753 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 14:25:27.154776 systemd[1]: Detected virtualization amazon.
Dec 13 14:25:27.154792 systemd[1]: Detected architecture x86-64.
Dec 13 14:25:27.154808 systemd[1]: Running in initrd.
Dec 13 14:25:27.154826 systemd[1]: No hostname configured, using default hostname.
Dec 13 14:25:27.154857 systemd[1]: Hostname set to .
Dec 13 14:25:27.154880 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 14:25:27.154896 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 13 14:25:27.154912 systemd[1]: Queued start job for default target initrd.target.
Dec 13 14:25:27.154928 systemd[1]: Started systemd-ask-password-console.path.
Dec 13 14:25:27.154945 systemd[1]: Reached target cryptsetup.target.
Dec 13 14:25:27.154961 systemd[1]: Reached target paths.target.
Dec 13 14:25:27.155476 systemd[1]: Reached target slices.target.
Dec 13 14:25:27.155494 systemd[1]: Reached target swap.target.
Dec 13 14:25:27.155510 systemd[1]: Reached target timers.target.
Dec 13 14:25:27.155533 systemd[1]: Listening on iscsid.socket.
Dec 13 14:25:27.155550 systemd[1]: Listening on iscsiuio.socket.
Dec 13 14:25:27.155567 systemd[1]: Listening on systemd-journald-audit.socket.
Dec 13 14:25:27.155583 systemd[1]: Listening on systemd-journald-dev-log.socket.
Dec 13 14:25:27.155601 systemd[1]: Listening on systemd-journald.socket.
Dec 13 14:25:27.155621 systemd[1]: Listening on systemd-networkd.socket.
Dec 13 14:25:27.155638 systemd[1]: Listening on systemd-udevd-control.socket.
Dec 13 14:25:27.155655 systemd[1]: Listening on systemd-udevd-kernel.socket.
Dec 13 14:25:27.155675 systemd[1]: Reached target sockets.target.
Dec 13 14:25:27.155691 systemd[1]: Starting kmod-static-nodes.service...
Dec 13 14:25:27.155707 systemd[1]: Finished network-cleanup.service.
Dec 13 14:25:27.155724 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 14:25:27.155741 systemd[1]: Starting systemd-journald.service...
Dec 13 14:25:27.155758 systemd[1]: Starting systemd-modules-load.service...
Dec 13 14:25:27.155776 systemd[1]: Starting systemd-resolved.service...
Dec 13 14:25:27.155793 systemd[1]: Starting systemd-vconsole-setup.service...
Dec 13 14:25:27.156692 systemd-journald[185]: Journal started
Dec 13 14:25:27.161983 systemd-journald[185]: Runtime Journal (/run/log/journal/ec2c78f672b69d7abc696efa246d06e8) is 4.8M, max 38.7M, 33.9M free.
Dec 13 14:25:27.200338 systemd[1]: Started systemd-journald.service.
Dec 13 14:25:27.200417 kernel: audit: type=1130 audit(1734099927.182:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:27.182000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:27.183691 systemd[1]: Finished kmod-static-nodes.service.
Dec 13 14:25:27.372278 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 14:25:27.372361 kernel: Bridge firewalling registered
Dec 13 14:25:27.372381 kernel: SCSI subsystem initialized
Dec 13 14:25:27.372401 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 14:25:27.372423 kernel: device-mapper: uevent: version 1.0.3
Dec 13 14:25:27.372439 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Dec 13 14:25:27.372457 kernel: audit: type=1130 audit(1734099927.366:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:27.366000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:27.186128 systemd-modules-load[186]: Inserted module 'overlay'
Dec 13 14:25:27.247007 systemd-modules-load[186]: Inserted module 'br_netfilter'
Dec 13 14:25:27.268085 systemd-resolved[187]: Positive Trust Anchors:
Dec 13 14:25:27.381366 kernel: audit: type=1130 audit(1734099927.374:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:27.374000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:27.268096 systemd-resolved[187]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 14:25:27.388432 kernel: audit: type=1130 audit(1734099927.381:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:27.381000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:27.268159 systemd-resolved[187]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Dec 13 14:25:27.284496 systemd-resolved[187]: Defaulting to hostname 'linux'.
Dec 13 14:25:27.399996 kernel: audit: type=1130 audit(1734099927.394:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:27.394000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:27.300819 systemd-modules-load[186]: Inserted module 'dm_multipath'
Dec 13 14:25:27.367657 systemd[1]: Started systemd-resolved.service.
Dec 13 14:25:27.381323 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 14:25:27.401000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:27.382892 systemd[1]: Finished systemd-modules-load.service.
Dec 13 14:25:27.401342 systemd[1]: Finished systemd-vconsole-setup.service.
Dec 13 14:25:27.403239 systemd[1]: Reached target nss-lookup.target.
Dec 13 14:25:27.409821 kernel: audit: type=1130 audit(1734099927.401:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:27.411674 systemd[1]: Starting dracut-cmdline-ask.service...
Dec 13 14:25:27.414346 systemd[1]: Starting systemd-sysctl.service...
Dec 13 14:25:27.415725 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Dec 13 14:25:27.427506 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Dec 13 14:25:27.426000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:27.432149 kernel: audit: type=1130 audit(1734099927.426:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:27.434025 systemd[1]: Finished systemd-sysctl.service.
Dec 13 14:25:27.432000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:27.438205 kernel: audit: type=1130 audit(1734099927.432:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:27.444248 systemd[1]: Finished dracut-cmdline-ask.service.
Dec 13 14:25:27.444000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:27.446179 systemd[1]: Starting dracut-cmdline.service...
Dec 13 14:25:27.450358 kernel: audit: type=1130 audit(1734099927.444:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:27.457562 dracut-cmdline[208]: dracut-dracut-053
Dec 13 14:25:27.460773 dracut-cmdline[208]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e
Dec 13 14:25:27.555187 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 14:25:27.601158 kernel: iscsi: registered transport (tcp)
Dec 13 14:25:27.629176 kernel: iscsi: registered transport (qla4xxx)
Dec 13 14:25:27.629251 kernel: QLogic iSCSI HBA Driver
Dec 13 14:25:27.691289 systemd[1]: Finished dracut-cmdline.service.
Dec 13 14:25:27.690000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:27.694369 systemd[1]: Starting dracut-pre-udev.service...
Dec 13 14:25:27.752192 kernel: raid6: avx512x4 gen() 16475 MB/s
Dec 13 14:25:27.769186 kernel: raid6: avx512x4 xor() 7044 MB/s
Dec 13 14:25:27.786191 kernel: raid6: avx512x2 gen() 13896 MB/s
Dec 13 14:25:27.803190 kernel: raid6: avx512x2 xor() 21461 MB/s
Dec 13 14:25:27.820189 kernel: raid6: avx512x1 gen() 14942 MB/s
Dec 13 14:25:27.837191 kernel: raid6: avx512x1 xor() 19469 MB/s
Dec 13 14:25:27.854186 kernel: raid6: avx2x4 gen() 16929 MB/s
Dec 13 14:25:27.871186 kernel: raid6: avx2x4 xor() 7013 MB/s
Dec 13 14:25:27.888190 kernel: raid6: avx2x2 gen() 11416 MB/s
Dec 13 14:25:27.905191 kernel: raid6: avx2x2 xor() 13762 MB/s
Dec 13 14:25:27.922272 kernel: raid6: avx2x1 gen() 4792 MB/s
Dec 13 14:25:27.940254 kernel: raid6: avx2x1 xor() 5839 MB/s
Dec 13 14:25:27.959513 kernel: raid6: sse2x4 gen() 5097 MB/s
Dec 13 14:25:27.978260 kernel: raid6: sse2x4 xor() 3566 MB/s
Dec 13 14:25:27.996526 kernel: raid6: sse2x2 gen() 4220 MB/s
Dec 13 14:25:28.016779 kernel: raid6: sse2x2 xor() 3586 MB/s
Dec 13 14:25:28.033917 kernel: raid6: sse2x1 gen() 4151 MB/s
Dec 13 14:25:28.053573 kernel: raid6: sse2x1 xor() 2233 MB/s
Dec 13 14:25:28.053651 kernel: raid6: using algorithm avx2x4 gen() 16929 MB/s
Dec 13 14:25:28.053669 kernel: raid6: .... xor() 7013 MB/s, rmw enabled
Dec 13 14:25:28.053697 kernel: raid6: using avx512x2 recovery algorithm
Dec 13 14:25:28.101165 kernel: xor: automatically using best checksumming function avx
Dec 13 14:25:28.218161 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
Dec 13 14:25:28.230200 systemd[1]: Finished dracut-pre-udev.service.
Dec 13 14:25:28.230000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:28.231000 audit: BPF prog-id=7 op=LOAD
Dec 13 14:25:28.231000 audit: BPF prog-id=8 op=LOAD
Dec 13 14:25:28.232993 systemd[1]: Starting systemd-udevd.service...
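The raid6 lines above are the kernel benchmarking each candidate P/Q syndrome implementation at boot and picking the fastest gen() variant. A minimal sketch of that selection, using only the throughput numbers reported in the log (the selection-by-maximum is the observable behavior here, not the kernel's actual code):

```python
# gen() throughputs (MB/s) exactly as reported by the raid6 benchmark above.
gen_mbps = {
    "avx512x4": 16475, "avx512x2": 13896, "avx512x1": 14942,
    "avx2x4": 16929, "avx2x2": 11416, "avx2x1": 4792,
    "sse2x4": 5097, "sse2x2": 4220, "sse2x1": 4151,
}

# Picking the implementation with the highest gen() rate reproduces the
# kernel's choice logged as: "raid6: using algorithm avx2x4 gen() 16929 MB/s"
best = max(gen_mbps, key=gen_mbps.get)
print(best, gen_mbps[best])  # → avx2x4 16929
```

Note that the recovery algorithm is chosen separately, which is why the log shows avx2x4 for generation but avx512x2 for recovery.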
Dec 13 14:25:28.256783 systemd-udevd[385]: Using default interface naming scheme 'v252'.
Dec 13 14:25:28.279749 systemd[1]: Started systemd-udevd.service.
Dec 13 14:25:28.280000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:28.285694 systemd[1]: Starting dracut-pre-trigger.service...
Dec 13 14:25:28.331877 dracut-pre-trigger[391]: rd.md=0: removing MD RAID activation
Dec 13 14:25:28.376832 systemd[1]: Finished dracut-pre-trigger.service.
Dec 13 14:25:28.377000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:28.380037 systemd[1]: Starting systemd-udev-trigger.service...
Dec 13 14:25:28.438507 systemd[1]: Finished systemd-udev-trigger.service.
Dec 13 14:25:28.438000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:28.531152 kernel: cryptd: max_cpu_qlen set to 1000
Dec 13 14:25:28.542154 kernel: ena 0000:00:05.0: ENA device version: 0.10
Dec 13 14:25:28.561861 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Dec 13 14:25:28.562030 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Dec 13 14:25:28.562731 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:9c:dc:62:9a:57
Dec 13 14:25:28.566755 (udev-worker)[432]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 14:25:28.807461 kernel: AVX2 version of gcm_enc/dec engaged.
Dec 13 14:25:28.807498 kernel: AES CTR mode by8 optimization enabled
Dec 13 14:25:28.807515 kernel: nvme nvme0: pci function 0000:00:04.0
Dec 13 14:25:28.807743 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Dec 13 14:25:28.807764 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Dec 13 14:25:28.807917 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 13 14:25:28.807936 kernel: GPT:9289727 != 16777215
Dec 13 14:25:28.807954 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 13 14:25:28.807971 kernel: GPT:9289727 != 16777215
Dec 13 14:25:28.807986 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 14:25:28.808006 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 14:25:28.808023 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (443)
Dec 13 14:25:28.714742 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Dec 13 14:25:28.829550 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Dec 13 14:25:28.840591 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Dec 13 14:25:28.860071 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Dec 13 14:25:28.867241 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Dec 13 14:25:28.875507 systemd[1]: Starting disk-uuid.service...
Dec 13 14:25:28.891374 disk-uuid[593]: Primary Header is updated.
Dec 13 14:25:28.891374 disk-uuid[593]: Secondary Entries is updated.
Dec 13 14:25:28.891374 disk-uuid[593]: Secondary Header is updated.
Dec 13 14:25:28.896256 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 14:25:28.901158 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 14:25:28.909268 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 14:25:29.913083 disk-uuid[594]: The operation has completed successfully.
Dec 13 14:25:29.914532 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 14:25:30.108567 systemd[1]: disk-uuid.service: Deactivated successfully.
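The GPT warnings above ("GPT:9289727 != 16777215") are benign for a freshly provisioned image: the backup GPT header sits where the end of the original disk image was, not at the end of the EBS volume, and disk-uuid then rewrites the secondary header/entries. A small sketch of the arithmetic behind those two LBAs, assuming 512-byte logical sectors (the numbers themselves come from the log):

```python
# Numbers from the GPT warning: backup header at LBA 9289727,
# disk's actual last LBA 16777215. Sector size is an assumption (512 B).
SECTOR = 512
alt_header_lba = 9289727
last_lba = 16777215

image_gib = (alt_header_lba + 1) * SECTOR / 2**30   # size the image was built for
disk_gib = (last_lba + 1) * SECTOR / 2**30          # size of the attached volume
print(f"image built for ~{image_gib:.2f} GiB, volume is {disk_gib:.0f} GiB")
# → image built for ~4.43 GiB, volume is 8 GiB
```

In other words, a ~4.4 GiB image was written onto an 8 GiB volume, which is exactly the situation the kernel is warning about.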
Dec 13 14:25:30.107000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:30.107000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:30.108771 systemd[1]: Finished disk-uuid.service.
Dec 13 14:25:30.124331 systemd[1]: Starting verity-setup.service...
Dec 13 14:25:30.156409 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Dec 13 14:25:30.282995 systemd[1]: Found device dev-mapper-usr.device.
Dec 13 14:25:30.287547 systemd[1]: Mounting sysusr-usr.mount...
Dec 13 14:25:30.290216 systemd[1]: Finished verity-setup.service.
Dec 13 14:25:30.290000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:30.494091 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Dec 13 14:25:30.494543 systemd[1]: Mounted sysusr-usr.mount.
Dec 13 14:25:30.494934 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Dec 13 14:25:30.495988 systemd[1]: Starting ignition-setup.service...
Dec 13 14:25:30.507145 systemd[1]: Starting parse-ip-for-networkd.service...
Dec 13 14:25:30.532244 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 14:25:30.532369 kernel: BTRFS info (device nvme0n1p6): using free space tree
Dec 13 14:25:30.532390 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Dec 13 14:25:30.559155 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Dec 13 14:25:30.577057 systemd[1]: mnt-oem.mount: Deactivated successfully.
Dec 13 14:25:30.602621 systemd[1]: Finished ignition-setup.service.
Dec 13 14:25:30.602000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:30.605053 systemd[1]: Starting ignition-fetch-offline.service...
Dec 13 14:25:30.635430 systemd[1]: Finished parse-ip-for-networkd.service.
Dec 13 14:25:30.637000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:30.638000 audit: BPF prog-id=9 op=LOAD
Dec 13 14:25:30.640262 systemd[1]: Starting systemd-networkd.service...
Dec 13 14:25:30.670326 systemd-networkd[1106]: lo: Link UP
Dec 13 14:25:30.670336 systemd-networkd[1106]: lo: Gained carrier
Dec 13 14:25:30.670844 systemd-networkd[1106]: Enumeration completed
Dec 13 14:25:30.673522 systemd[1]: Started systemd-networkd.service.
Dec 13 14:25:30.676115 systemd-networkd[1106]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 14:25:30.678384 systemd[1]: Reached target network.target.
Dec 13 14:25:30.675000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:30.680266 systemd[1]: Starting iscsiuio.service...
Dec 13 14:25:30.684062 systemd-networkd[1106]: eth0: Link UP
Dec 13 14:25:30.684165 systemd-networkd[1106]: eth0: Gained carrier
Dec 13 14:25:30.692936 systemd[1]: Started iscsiuio.service.
Dec 13 14:25:30.693000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:30.697469 systemd[1]: Starting iscsid.service...
Dec 13 14:25:30.702602 iscsid[1111]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Dec 13 14:25:30.702602 iscsid[1111]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier].
Dec 13 14:25:30.702602 iscsid[1111]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Dec 13 14:25:30.702602 iscsid[1111]: If using hardware iscsi like qla4xxx this message can be ignored.
Dec 13 14:25:30.714103 iscsid[1111]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Dec 13 14:25:30.714103 iscsid[1111]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Dec 13 14:25:30.711493 systemd[1]: Started iscsid.service.
Dec 13 14:25:30.716000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:30.716280 systemd-networkd[1106]: eth0: DHCPv4 address 172.31.28.77/20, gateway 172.31.16.1 acquired from 172.31.16.1
Dec 13 14:25:30.721083 systemd[1]: Starting dracut-initqueue.service...
Dec 13 14:25:30.747376 systemd[1]: Finished dracut-initqueue.service.
Dec 13 14:25:30.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:30.749466 systemd[1]: Reached target remote-fs-pre.target.
Dec 13 14:25:30.753912 systemd[1]: Reached target remote-cryptsetup.target.
Dec 13 14:25:30.754943 systemd[1]: Reached target remote-fs.target.
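The iscsid warnings above are harmless on this instance (no software-iSCSI targets are used), but if one were needed, the file iscsid asks for is a single `InitiatorName=` line. A sketch of generating such a file; it writes to a scratch directory rather than the real /etc/iscsi, and the IQN value is a placeholder, not anything from this host:

```python
import tempfile
from pathlib import Path

# Scratch stand-in for /etc/iscsi; a real system would write
# /etc/iscsi/initiatorname.iscsi (as root) and restart iscsid.
conf_dir = Path(tempfile.mkdtemp())
initiator_file = conf_dir / "initiatorname.iscsi"

# Placeholder IQN in the iqn.yyyy-mm.<reversed-domain>:<identifier> shape
# that iscsid's warning describes.
initiator_file.write_text("InitiatorName=iqn.2024-12.io.example:flatcar-node\n")
print(initiator_file.read_text().strip())
```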
Dec 13 14:25:30.760246 systemd[1]: Starting dracut-pre-mount.service...
Dec 13 14:25:30.777523 systemd[1]: Finished dracut-pre-mount.service.
Dec 13 14:25:30.776000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:31.552203 ignition[1082]: Ignition 2.14.0
Dec 13 14:25:31.552221 ignition[1082]: Stage: fetch-offline
Dec 13 14:25:31.552363 ignition[1082]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 14:25:31.552417 ignition[1082]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Dec 13 14:25:31.570881 ignition[1082]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 14:25:31.571503 ignition[1082]: Ignition finished successfully
Dec 13 14:25:31.576541 systemd[1]: Finished ignition-fetch-offline.service.
Dec 13 14:25:31.586273 kernel: kauditd_printk_skb: 18 callbacks suppressed
Dec 13 14:25:31.586342 kernel: audit: type=1130 audit(1734099931.577:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:31.577000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:31.580517 systemd[1]: Starting ignition-fetch.service...
Dec 13 14:25:31.598245 ignition[1130]: Ignition 2.14.0
Dec 13 14:25:31.598259 ignition[1130]: Stage: fetch
Dec 13 14:25:31.598596 ignition[1130]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 14:25:31.598632 ignition[1130]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Dec 13 14:25:31.608538 ignition[1130]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 14:25:31.610309 ignition[1130]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 14:25:31.654357 ignition[1130]: INFO : PUT result: OK
Dec 13 14:25:31.658681 ignition[1130]: DEBUG : parsed url from cmdline: ""
Dec 13 14:25:31.658681 ignition[1130]: INFO : no config URL provided
Dec 13 14:25:31.658681 ignition[1130]: INFO : reading system config file "/usr/lib/ignition/user.ign"
Dec 13 14:25:31.658681 ignition[1130]: INFO : no config at "/usr/lib/ignition/user.ign"
Dec 13 14:25:31.667416 ignition[1130]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 14:25:31.667416 ignition[1130]: INFO : PUT result: OK
Dec 13 14:25:31.667416 ignition[1130]: INFO : GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Dec 13 14:25:31.672012 ignition[1130]: INFO : GET result: OK
Dec 13 14:25:31.673552 ignition[1130]: DEBUG : parsing config with SHA512: b4bad46c14b64ae478e57294a837e92f25884868847ff993beb15aaf21ab2e3a3bda6dc0180fc14b64251e629c2f6e377ee8f2b0b4c51921798fdca5c904d0d1
Dec 13 14:25:31.702870 unknown[1130]: fetched base config from "system"
Dec 13 14:25:31.702885 unknown[1130]: fetched base config from "system"
Dec 13 14:25:31.702893 unknown[1130]: fetched user config from "aws"
Dec 13 14:25:31.711628 ignition[1130]: fetch: fetch complete
Dec 13 14:25:31.711642 ignition[1130]: fetch: fetch passed
Dec 13 14:25:31.711727 ignition[1130]: Ignition finished successfully
Dec 13 14:25:31.715348 systemd[1]: Finished ignition-fetch.service.
Dec 13 14:25:31.716313 systemd[1]: Starting ignition-kargs.service...
Dec 13 14:25:31.724310 kernel: audit: type=1130 audit(1734099931.714:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:31.714000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:31.732775 ignition[1136]: Ignition 2.14.0
Dec 13 14:25:31.732788 ignition[1136]: Stage: kargs
Dec 13 14:25:31.733209 ignition[1136]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 14:25:31.733232 ignition[1136]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Dec 13 14:25:31.744054 ignition[1136]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 14:25:31.746015 ignition[1136]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 14:25:31.747677 ignition[1136]: INFO : PUT result: OK
Dec 13 14:25:31.751634 ignition[1136]: kargs: kargs passed
Dec 13 14:25:31.751753 ignition[1136]: Ignition finished successfully
Dec 13 14:25:31.754316 systemd[1]: Finished ignition-kargs.service.
Dec 13 14:25:31.760197 kernel: audit: type=1130 audit(1734099931.754:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:31.754000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:31.757850 systemd[1]: Starting ignition-disks.service...
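The repeated "PUT …/latest/api/token" followed by "GET …/2019-10-01/user-data" in the fetch stage is the IMDSv2 flow: Ignition first obtains a session token with a PUT, then fetches user-data with that token. A sketch of the request sequence it performs; the URLs are taken from the log, the header names are the standard IMDSv2 ones, and the token TTL value is an assumption rather than anything the log shows:

```python
# Build (without sending) the IMDSv2 request sequence visible in the log:
# a token PUT, then a user-data GET carrying that token.
IMDS = "http://169.254.169.254"

def imdsv2_requests(token="<session-token>"):
    # 21600 s is a commonly used TTL, not a value from this boot log.
    return [
        ("PUT", f"{IMDS}/latest/api/token",
         {"X-aws-ec2-metadata-token-ttl-seconds": "21600"}),
        ("GET", f"{IMDS}/2019-10-01/user-data",
         {"X-aws-ec2-metadata-token": token}),
    ]

for method, url, _headers in imdsv2_requests():
    print(method, url)
```

The kargs, disks, mount, and files stages below repeat the same token PUT before each of their metadata accesses, which is why the "PUT result: OK" line recurs per stage.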
Dec 13 14:25:31.770714 ignition[1142]: Ignition 2.14.0
Dec 13 14:25:31.770728 ignition[1142]: Stage: disks
Dec 13 14:25:31.770935 ignition[1142]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 14:25:31.770969 ignition[1142]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Dec 13 14:25:31.780737 ignition[1142]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 14:25:31.782071 ignition[1142]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 14:25:31.784023 ignition[1142]: INFO : PUT result: OK
Dec 13 14:25:31.787557 ignition[1142]: disks: disks passed
Dec 13 14:25:31.787611 ignition[1142]: Ignition finished successfully
Dec 13 14:25:31.793708 systemd[1]: Finished ignition-disks.service.
Dec 13 14:25:31.806665 kernel: audit: type=1130 audit(1734099931.800:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:31.800000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:31.806647 systemd[1]: Reached target initrd-root-device.target.
Dec 13 14:25:31.808739 systemd[1]: Reached target local-fs-pre.target.
Dec 13 14:25:31.810535 systemd[1]: Reached target local-fs.target.
Dec 13 14:25:31.812389 systemd[1]: Reached target sysinit.target.
Dec 13 14:25:31.814322 systemd[1]: Reached target basic.target.
Dec 13 14:25:31.816252 systemd[1]: Starting systemd-fsck-root.service...
Dec 13 14:25:31.826465 systemd-networkd[1106]: eth0: Gained IPv6LL
Dec 13 14:25:31.860794 systemd-fsck[1150]: ROOT: clean, 621/553520 files, 56021/553472 blocks
Dec 13 14:25:31.867282 systemd[1]: Finished systemd-fsck-root.service.
Dec 13 14:25:31.878781 kernel: audit: type=1130 audit(1734099931.867:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:31.867000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:31.870418 systemd[1]: Mounting sysroot.mount...
Dec 13 14:25:31.894161 kernel: EXT4-fs (nvme0n1p9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Dec 13 14:25:31.895528 systemd[1]: Mounted sysroot.mount.
Dec 13 14:25:31.895788 systemd[1]: Reached target initrd-root-fs.target.
Dec 13 14:25:31.914295 systemd[1]: Mounting sysroot-usr.mount...
Dec 13 14:25:31.916085 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Dec 13 14:25:31.916237 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 14:25:31.916281 systemd[1]: Reached target ignition-diskful.target.
Dec 13 14:25:31.929090 systemd[1]: Mounted sysroot-usr.mount.
Dec 13 14:25:31.946352 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Dec 13 14:25:31.953379 systemd[1]: Starting initrd-setup-root.service...
Dec 13 14:25:31.965155 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1167)
Dec 13 14:25:31.970653 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 14:25:31.970783 kernel: BTRFS info (device nvme0n1p6): using free space tree
Dec 13 14:25:31.971032 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Dec 13 14:25:31.979168 initrd-setup-root[1172]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 14:25:31.984993 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Dec 13 14:25:31.989678 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Dec 13 14:25:32.009997 initrd-setup-root[1198]: cut: /sysroot/etc/group: No such file or directory
Dec 13 14:25:32.017288 initrd-setup-root[1206]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 14:25:32.023955 initrd-setup-root[1214]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 14:25:32.267330 systemd[1]: Finished initrd-setup-root.service.
Dec 13 14:25:32.273293 kernel: audit: type=1130 audit(1734099932.266:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:32.266000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:32.268679 systemd[1]: Starting ignition-mount.service...
Dec 13 14:25:32.276986 systemd[1]: Starting sysroot-boot.service...
Dec 13 14:25:32.288037 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
Dec 13 14:25:32.288177 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
Dec 13 14:25:32.333262 ignition[1233]: INFO : Ignition 2.14.0
Dec 13 14:25:32.337371 ignition[1233]: INFO : Stage: mount
Dec 13 14:25:32.337371 ignition[1233]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 14:25:32.337371 ignition[1233]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Dec 13 14:25:32.343671 systemd[1]: Finished sysroot-boot.service.
Dec 13 14:25:32.344000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:32.350198 kernel: audit: type=1130 audit(1734099932.344:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:32.352498 ignition[1233]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 14:25:32.354560 ignition[1233]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 14:25:32.356747 ignition[1233]: INFO : PUT result: OK
Dec 13 14:25:32.360067 ignition[1233]: INFO : mount: mount passed
Dec 13 14:25:32.361192 ignition[1233]: INFO : Ignition finished successfully
Dec 13 14:25:32.363146 systemd[1]: Finished ignition-mount.service.
Dec 13 14:25:32.372234 kernel: audit: type=1130 audit(1734099932.363:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:32.363000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:32.365328 systemd[1]: Starting ignition-files.service...
Dec 13 14:25:32.377446 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Dec 13 14:25:32.391154 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1242)
Dec 13 14:25:32.395050 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 14:25:32.395101 kernel: BTRFS info (device nvme0n1p6): using free space tree
Dec 13 14:25:32.395113 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Dec 13 14:25:32.400161 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Dec 13 14:25:32.403596 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Dec 13 14:25:32.416695 ignition[1261]: INFO : Ignition 2.14.0
Dec 13 14:25:32.416695 ignition[1261]: INFO : Stage: files
Dec 13 14:25:32.419012 ignition[1261]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 14:25:32.419012 ignition[1261]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Dec 13 14:25:32.432006 ignition[1261]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 14:25:32.433635 ignition[1261]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 14:25:32.435303 ignition[1261]: INFO : PUT result: OK
Dec 13 14:25:32.466986 ignition[1261]: DEBUG : files: compiled without relabeling support, skipping
Dec 13 14:25:32.472109 ignition[1261]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 13 14:25:32.473941 ignition[1261]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 14:25:32.501848 ignition[1261]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 14:25:32.503597 ignition[1261]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 13 14:25:32.506205 unknown[1261]: wrote ssh authorized keys file for user: core
Dec 13 14:25:32.508972 ignition[1261]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 14:25:32.518156 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/eks/bootstrap.sh"
Dec 13 14:25:32.522002 ignition[1261]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Dec 13 14:25:32.547105 ignition[1261]: INFO : op(1): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3639944528"
Dec 13 14:25:32.550469 kernel: BTRFS info: devid 1 device path /dev/nvme0n1p6 changed to /dev/disk/by-label/OEM scanned by ignition (1266)
Dec 13 14:25:32.550522 ignition[1261]: CRITICAL : op(1): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3639944528": device or resource busy
Dec 13 14:25:32.550522 ignition[1261]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3639944528", trying btrfs: device or resource busy
Dec 13 14:25:32.550522 ignition[1261]: INFO : op(2): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3639944528"
Dec 13 14:25:32.550522 ignition[1261]: INFO : op(2): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3639944528"
Dec 13 14:25:32.560339 ignition[1261]: INFO : op(3): [started] unmounting "/mnt/oem3639944528"
Dec 13 14:25:32.560339 ignition[1261]: INFO : op(3): [finished] unmounting "/mnt/oem3639944528"
Dec 13 14:25:32.563427 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/eks/bootstrap.sh"
Dec 13 14:25:32.563427 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Dec 13 14:25:32.563427 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 14:25:32.563427 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 14:25:32.563427 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 14:25:32.563427 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 14:25:32.563427 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 14:25:32.563427 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json"
Dec 13 14:25:32.563427 ignition[1261]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Dec 13 14:25:32.642157 ignition[1261]: INFO : op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1023366876"
Dec 13 14:25:32.642157 ignition[1261]: CRITICAL : op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1023366876": device or resource busy
Dec 13 14:25:32.642157 ignition[1261]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1023366876", trying btrfs: device or resource busy
Dec 13 14:25:32.642157 ignition[1261]: INFO : op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1023366876"
Dec 13 14:25:32.642157 ignition[1261]: INFO : op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1023366876"
Dec 13 14:25:32.642157 ignition[1261]: INFO : op(6): [started] unmounting "/mnt/oem1023366876"
Dec 13 14:25:32.642157 ignition[1261]: INFO : op(6): [finished] unmounting "/mnt/oem1023366876"
Dec 13 14:25:32.642157 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json"
Dec 13 14:25:32.642157 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/amazon/ssm/seelog.xml"
Dec 13 14:25:32.658413 ignition[1261]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Dec 13 14:25:32.658413 ignition[1261]: INFO : op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3338418285"
Dec 13 14:25:32.658413 ignition[1261]: CRITICAL : op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3338418285": device or resource busy
Dec 13 14:25:32.658413 ignition[1261]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3338418285", trying btrfs: device or resource busy
Dec 13 14:25:32.658413 ignition[1261]: INFO : op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3338418285"
Dec 13 14:25:32.658413 ignition[1261]: INFO : op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3338418285"
Dec 13 14:25:32.658413 ignition[1261]: INFO : op(9): [started] unmounting "/mnt/oem3338418285"
Dec 13 14:25:32.658413 ignition[1261]: INFO : op(9): [finished] unmounting "/mnt/oem3338418285"
Dec 13 14:25:32.658413 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/amazon/ssm/seelog.xml"
Dec 13 14:25:32.658413 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/systemd/system/nvidia.service"
Dec 13 14:25:32.658413 ignition[1261]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Dec 13 14:25:32.697821 ignition[1261]: INFO : op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem532515964"
Dec 13 14:25:32.700066 ignition[1261]: CRITICAL : op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem532515964": device or resource busy
Dec 13 14:25:32.700066 ignition[1261]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem532515964", trying btrfs: device or resource busy
Dec 13 14:25:32.700066 ignition[1261]: INFO : op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem532515964"
Dec 13 14:25:32.708660 ignition[1261]: INFO : op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem532515964"
Dec 13 14:25:32.708660 ignition[1261]: INFO : op(c): [started] unmounting "/mnt/oem532515964"
Dec 13 14:25:32.708660 ignition[1261]: INFO : op(c): [finished] unmounting "/mnt/oem532515964"
Dec 13 14:25:32.713767 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service"
Dec 13 14:25:32.713767 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 14:25:32.713767 ignition[1261]: INFO : GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Dec 13 14:25:33.206294 ignition[1261]: INFO : GET result: OK
Dec 13 14:25:33.613341 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 14:25:33.613341 ignition[1261]: INFO : files: op(b): [started] processing unit "coreos-metadata-sshkeys@.service"
Dec 13 14:25:33.613341 ignition[1261]: INFO : files: op(b): [finished] processing unit "coreos-metadata-sshkeys@.service"
Dec 13 14:25:33.613341 ignition[1261]: INFO : files: op(c): [started] processing unit "amazon-ssm-agent.service"
Dec 13 14:25:33.622415 ignition[1261]: INFO : files: op(c): op(d): [started] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service"
Dec 13 14:25:33.622415 ignition[1261]: INFO : files: op(c): op(d): [finished] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service"
Dec 13 14:25:33.622415 ignition[1261]: INFO : files: op(c): [finished] processing unit "amazon-ssm-agent.service"
Dec 13 14:25:33.622415 ignition[1261]: INFO : files: op(e): [started] processing unit "nvidia.service"
Dec 13 14:25:33.622415 ignition[1261]: INFO : files: op(e): [finished] processing unit "nvidia.service"
Dec 13 14:25:33.622415 ignition[1261]: INFO : files: op(f): [started] setting preset to enabled for "amazon-ssm-agent.service"
Dec 13 14:25:33.622415 ignition[1261]: INFO : files: op(f): [finished] setting preset to enabled for "amazon-ssm-agent.service"
Dec 13 14:25:33.622415 ignition[1261]: INFO : files: op(10): [started] setting preset to enabled for "nvidia.service"
Dec 13 14:25:33.622415 ignition[1261]: INFO : files: op(10): [finished] setting preset to enabled for "nvidia.service"
Dec 13 14:25:33.622415 ignition[1261]: INFO : files: op(11): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Dec 13 14:25:33.622415 ignition[1261]: INFO : files: op(11): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Dec 13 14:25:33.646343 ignition[1261]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 14:25:33.648569 ignition[1261]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 14:25:33.648569 ignition[1261]: INFO : files: files passed
Dec 13 14:25:33.648569 ignition[1261]: INFO : Ignition finished successfully
Dec 13 14:25:33.653926 systemd[1]: Finished ignition-files.service.
Dec 13 14:25:33.660349 kernel: audit: type=1130 audit(1734099933.654:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:33.654000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:33.662767 systemd[1]: Starting initrd-setup-root-after-ignition.service...
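The file, link, and unit operations logged by the `files` stage above are driven by an Ignition config that the log itself never shows. As a purely illustrative sketch (not the config this node actually used), a Butane source producing a similar subset of operations could look like the following; the variant/version pair, the inline file content, and the assumption that both units are enabled via preset only are all mine:

```yaml
# Hypothetical Butane source -- NOT recovered from this log.
variant: flatcar
version: 1.0.0
storage:
  files:
    - path: /etc/flatcar/update.conf
      contents:
        # the log records only the path; this value is assumed
        inline: REBOOT_STRATEGY=off
  links:
    - path: /etc/extensions/kubernetes.raw
      target: /opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw
systemd:
  units:
    - name: nvidia.service
      enabled: true
    - name: amazon-ssm-agent.service
      enabled: true
```

Butane transpiles such a source into the JSON that Ignition consumes at first boot; whether the real config carried unit bodies or only enablement presets is not recoverable from the log.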
Dec 13 14:25:33.664054 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Dec 13 14:25:33.665147 systemd[1]: Starting ignition-quench.service...
Dec 13 14:25:33.681304 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 14:25:33.681441 systemd[1]: Finished ignition-quench.service.
Dec 13 14:25:33.691340 kernel: audit: type=1130 audit(1734099933.684:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:33.684000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:33.684000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:33.703539 initrd-setup-root-after-ignition[1286]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 14:25:33.704704 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Dec 13 14:25:33.706000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:33.707811 systemd[1]: Reached target ignition-complete.target.
Dec 13 14:25:33.712346 systemd[1]: Starting initrd-parse-etc.service...
Dec 13 14:25:33.732298 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 14:25:33.732429 systemd[1]: Finished initrd-parse-etc.service.
Dec 13 14:25:33.735558 systemd[1]: Reached target initrd-fs.target.
Dec 13 14:25:33.734000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:33.734000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:33.737230 systemd[1]: Reached target initrd.target.
Dec 13 14:25:33.740025 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Dec 13 14:25:33.744646 systemd[1]: Starting dracut-pre-pivot.service...
Dec 13 14:25:33.763849 systemd[1]: Finished dracut-pre-pivot.service.
Dec 13 14:25:33.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:33.768945 systemd[1]: Starting initrd-cleanup.service...
Dec 13 14:25:33.787316 systemd[1]: Stopped target nss-lookup.target.
Dec 13 14:25:33.791045 systemd[1]: Stopped target remote-cryptsetup.target.
Dec 13 14:25:33.793949 systemd[1]: Stopped target timers.target.
Dec 13 14:25:33.797000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:33.796590 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 14:25:33.796749 systemd[1]: Stopped dracut-pre-pivot.service.
Dec 13 14:25:33.799113 systemd[1]: Stopped target initrd.target.
Dec 13 14:25:33.805951 systemd[1]: Stopped target basic.target.
Dec 13 14:25:33.811435 systemd[1]: Stopped target ignition-complete.target.
Dec 13 14:25:33.813928 systemd[1]: Stopped target ignition-diskful.target.
Dec 13 14:25:33.816031 systemd[1]: Stopped target initrd-root-device.target.
Dec 13 14:25:33.818490 systemd[1]: Stopped target remote-fs.target.
Dec 13 14:25:33.820439 systemd[1]: Stopped target remote-fs-pre.target.
Dec 13 14:25:33.821655 systemd[1]: Stopped target sysinit.target.
Dec 13 14:25:33.825305 systemd[1]: Stopped target local-fs.target.
Dec 13 14:25:33.827530 systemd[1]: Stopped target local-fs-pre.target.
Dec 13 14:25:33.829630 systemd[1]: Stopped target swap.target.
Dec 13 14:25:33.831424 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 14:25:33.833018 systemd[1]: Stopped dracut-pre-mount.service.
Dec 13 14:25:33.833000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:33.835279 systemd[1]: Stopped target cryptsetup.target.
Dec 13 14:25:33.837249 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 14:25:33.838508 systemd[1]: Stopped dracut-initqueue.service.
Dec 13 14:25:33.839000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:33.841033 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 14:25:33.842939 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Dec 13 14:25:33.844000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:33.845452 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 14:25:33.847015 systemd[1]: Stopped ignition-files.service.
Dec 13 14:25:33.847000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:33.849864 systemd[1]: Stopping ignition-mount.service...
Dec 13 14:25:33.865000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:33.851155 systemd[1]: Stopping iscsiuio.service...
Dec 13 14:25:33.853258 systemd[1]: Stopping sysroot-boot.service...
Dec 13 14:25:33.870000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:33.860623 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 14:25:33.861620 systemd[1]: Stopped systemd-udev-trigger.service.
Dec 13 14:25:33.867086 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 14:25:33.867278 systemd[1]: Stopped dracut-pre-trigger.service.
Dec 13 14:25:33.877167 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 14:25:33.880401 systemd[1]: Finished initrd-cleanup.service.
Dec 13 14:25:33.889000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:33.889000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:33.896227 ignition[1299]: INFO : Ignition 2.14.0
Dec 13 14:25:33.896227 ignition[1299]: INFO : Stage: umount
Dec 13 14:25:33.896227 ignition[1299]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 14:25:33.896227 ignition[1299]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Dec 13 14:25:33.897000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:33.897012 systemd[1]: iscsiuio.service: Deactivated successfully.
Dec 13 14:25:33.897155 systemd[1]: Stopped iscsiuio.service.
Dec 13 14:25:33.908728 ignition[1299]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 14:25:33.908728 ignition[1299]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 14:25:33.912784 ignition[1299]: INFO : PUT result: OK
Dec 13 14:25:33.913791 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 14:25:33.921257 ignition[1299]: INFO : umount: umount passed
Dec 13 14:25:33.921257 ignition[1299]: INFO : Ignition finished successfully
Dec 13 14:25:33.923655 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 14:25:33.924000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:33.923748 systemd[1]: Stopped ignition-mount.service.
Dec 13 14:25:33.929000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:33.926702 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 14:25:33.931000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:33.926766 systemd[1]: Stopped ignition-disks.service.
Dec 13 14:25:33.930410 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 14:25:33.936000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:33.931515 systemd[1]: Stopped ignition-kargs.service.
Dec 13 14:25:33.938000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:33.933628 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 13 14:25:33.934742 systemd[1]: Stopped ignition-fetch.service.
Dec 13 14:25:33.937308 systemd[1]: Stopped target network.target.
Dec 13 14:25:33.938332 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 14:25:33.938408 systemd[1]: Stopped ignition-fetch-offline.service.
Dec 13 14:25:33.939572 systemd[1]: Stopped target paths.target.
Dec 13 14:25:33.940540 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 14:25:33.944986 systemd[1]: Stopped systemd-ask-password-console.path.
Dec 13 14:25:33.955000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:33.946059 systemd[1]: Stopped target slices.target.
Dec 13 14:25:33.947683 systemd[1]: Stopped target sockets.target.
Dec 13 14:25:33.949631 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 14:25:33.949668 systemd[1]: Closed iscsid.socket.
Dec 13 14:25:33.952814 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 14:25:33.952868 systemd[1]: Closed iscsiuio.socket.
Dec 13 14:25:33.955892 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 14:25:33.955951 systemd[1]: Stopped ignition-setup.service.
Dec 13 14:25:33.959040 systemd[1]: Stopping systemd-networkd.service...
Dec 13 14:25:33.962094 systemd[1]: Stopping systemd-resolved.service...
Dec 13 14:25:33.968090 systemd-networkd[1106]: eth0: DHCPv6 lease lost
Dec 13 14:25:33.971748 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 14:25:33.971000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:33.972202 systemd[1]: Stopped systemd-networkd.service.
Dec 13 14:25:33.975000 audit: BPF prog-id=9 op=UNLOAD
Dec 13 14:25:33.972469 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 14:25:33.972499 systemd[1]: Closed systemd-networkd.socket.
Dec 13 14:25:33.980415 systemd[1]: Stopping network-cleanup.service...
Dec 13 14:25:33.986794 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 14:25:33.986886 systemd[1]: Stopped parse-ip-for-networkd.service.
Dec 13 14:25:33.988000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:33.989569 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 14:25:33.989640 systemd[1]: Stopped systemd-sysctl.service.
Dec 13 14:25:33.992000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:33.994079 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 14:25:33.994180 systemd[1]: Stopped systemd-modules-load.service.
Dec 13 14:25:33.995000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:33.997553 systemd[1]: Stopping systemd-udevd.service...
Dec 13 14:25:34.008180 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 13 14:25:34.009569 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 14:25:34.012166 systemd[1]: Stopped systemd-resolved.service.
Dec 13 14:25:34.013000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:34.015000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:34.017000 audit: BPF prog-id=6 op=UNLOAD
Dec 13 14:25:34.015297 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 14:25:34.015424 systemd[1]: Stopped systemd-udevd.service.
Dec 13 14:25:34.026000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:34.028000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:34.029000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:34.018834 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 14:25:34.018903 systemd[1]: Closed systemd-udevd-control.socket.
Dec 13 14:25:34.025049 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 14:25:34.025117 systemd[1]: Closed systemd-udevd-kernel.socket.
Dec 13 14:25:34.026454 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 14:25:34.039000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:34.026783 systemd[1]: Stopped dracut-pre-udev.service.
Dec 13 14:25:34.041000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:34.028102 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 14:25:34.028224 systemd[1]: Stopped dracut-cmdline.service.
Dec 13 14:25:34.029347 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 14:25:34.029415 systemd[1]: Stopped dracut-cmdline-ask.service.
Dec 13 14:25:34.032606 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Dec 13 14:25:34.056000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:34.038716 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 13 14:25:34.038797 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service.
Dec 13 14:25:34.041969 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 14:25:34.042028 systemd[1]: Stopped kmod-static-nodes.service.
Dec 13 14:25:34.042466 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 14:25:34.042811 systemd[1]: Stopped systemd-vconsole-setup.service.
Dec 13 14:25:34.086000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:34.066708 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Dec 13 14:25:34.070810 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 14:25:34.096000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:34.096000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:34.073953 systemd[1]: Stopped network-cleanup.service.
Dec 13 14:25:34.087613 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 14:25:34.087713 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Dec 13 14:25:34.249060 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 14:25:34.249194 systemd[1]: Stopped sysroot-boot.service.
Dec 13 14:25:34.250000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:34.253000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:34.251451 systemd[1]: Reached target initrd-switch-root.target.
Dec 13 14:25:34.252690 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 14:25:34.252776 systemd[1]: Stopped initrd-setup-root.service.
Dec 13 14:25:34.255873 systemd[1]: Starting initrd-switch-root.service...
Dec 13 14:25:34.275049 systemd[1]: Switching root.
Dec 13 14:25:34.302185 systemd-journald[185]: Received SIGTERM from PID 1 (systemd).
Dec 13 14:25:34.302270 iscsid[1111]: iscsid shutting down.
Dec 13 14:25:34.303721 systemd-journald[185]: Journal stopped
Dec 13 14:25:42.358827 kernel: SELinux: Class mctp_socket not defined in policy.
Dec 13 14:25:42.358914 kernel: SELinux: Class anon_inode not defined in policy.
Dec 13 14:25:42.358936 kernel: SELinux: the above unknown classes and permissions will be allowed
Dec 13 14:25:42.358958 kernel: SELinux: policy capability network_peer_controls=1
Dec 13 14:25:42.358977 kernel: SELinux: policy capability open_perms=1
Dec 13 14:25:42.358995 kernel: SELinux: policy capability extended_socket_class=1
Dec 13 14:25:42.359014 kernel: SELinux: policy capability always_check_network=0
Dec 13 14:25:42.359038 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 13 14:25:42.359061 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 13 14:25:42.359085 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 13 14:25:42.359106 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 13 14:25:42.359143 systemd[1]: Successfully loaded SELinux policy in 122.949ms.
Dec 13 14:25:42.362841 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.375ms.
Dec 13 14:25:42.362869 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 14:25:42.362892 systemd[1]: Detected virtualization amazon.
Dec 13 14:25:42.362919 systemd[1]: Detected architecture x86-64.
Dec 13 14:25:42.362960 systemd[1]: Detected first boot.
Dec 13 14:25:42.362981 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 14:25:42.363002 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Dec 13 14:25:42.363025 systemd[1]: Populated /etc with preset unit settings.
Dec 13 14:25:42.363051 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 14:25:42.363080 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 14:25:42.363103 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 14:25:42.363125 kernel: kauditd_printk_skb: 56 callbacks suppressed
Dec 13 14:25:42.365107 kernel: audit: type=1334 audit(1734099941.987:88): prog-id=12 op=LOAD
Dec 13 14:25:42.369186 kernel: audit: type=1334 audit(1734099941.987:89): prog-id=3 op=UNLOAD
Dec 13 14:25:42.369230 kernel: audit: type=1334 audit(1734099941.988:90): prog-id=13 op=LOAD
Dec 13 14:25:42.369250 kernel: audit: type=1334 audit(1734099941.990:91): prog-id=14 op=LOAD
Dec 13 14:25:42.369269 kernel: audit: type=1334 audit(1734099941.990:92): prog-id=4 op=UNLOAD
Dec 13 14:25:42.369289 kernel: audit: type=1334 audit(1734099941.990:93): prog-id=5 op=UNLOAD
Dec 13 14:25:42.369308 kernel: audit: type=1334 audit(1734099941.991:94): prog-id=15 op=LOAD
Dec 13 14:25:42.369328 kernel: audit: type=1334 audit(1734099941.991:95): prog-id=12 op=UNLOAD
Dec 13 14:25:42.369347 kernel: audit: type=1334 audit(1734099941.995:96): prog-id=16 op=LOAD
Dec 13 14:25:42.369366 kernel: audit: type=1334 audit(1734099941.996:97): prog-id=17 op=LOAD
Dec 13 14:25:42.369390 systemd[1]: iscsid.service: Deactivated successfully.
Dec 13 14:25:42.369415 systemd[1]: Stopped iscsid.service.
Dec 13 14:25:42.369436 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 13 14:25:42.369457 systemd[1]: Stopped initrd-switch-root.service.
Dec 13 14:25:42.369478 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 13 14:25:42.369499 systemd[1]: Created slice system-addon\x2dconfig.slice.
Dec 13 14:25:42.369521 systemd[1]: Created slice system-addon\x2drun.slice.
Dec 13 14:25:42.369545 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
Dec 13 14:25:42.369566 systemd[1]: Created slice system-getty.slice.
Dec 13 14:25:42.369586 systemd[1]: Created slice system-modprobe.slice.
Dec 13 14:25:42.369607 systemd[1]: Created slice system-serial\x2dgetty.slice.
Dec 13 14:25:42.369628 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Dec 13 14:25:42.369649 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Dec 13 14:25:42.369670 systemd[1]: Created slice user.slice.
Dec 13 14:25:42.369692 systemd[1]: Started systemd-ask-password-console.path.
Dec 13 14:25:42.369713 systemd[1]: Started systemd-ask-password-wall.path.
Dec 13 14:25:42.369738 systemd[1]: Set up automount boot.automount.
Dec 13 14:25:42.369759 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Dec 13 14:25:42.369784 systemd[1]: Stopped target initrd-switch-root.target.
Dec 13 14:25:42.369806 systemd[1]: Stopped target initrd-fs.target.
Dec 13 14:25:42.369831 systemd[1]: Stopped target initrd-root-fs.target.
Dec 13 14:25:42.369852 systemd[1]: Reached target integritysetup.target.
Dec 13 14:25:42.369872 systemd[1]: Reached target remote-cryptsetup.target.
Dec 13 14:25:42.369893 systemd[1]: Reached target remote-fs.target.
Dec 13 14:25:42.369913 systemd[1]: Reached target slices.target.
Dec 13 14:25:42.369933 systemd[1]: Reached target swap.target.
Dec 13 14:25:42.369957 systemd[1]: Reached target torcx.target.
Dec 13 14:25:42.369978 systemd[1]: Reached target veritysetup.target.
Dec 13 14:25:42.369999 systemd[1]: Listening on systemd-coredump.socket.
Dec 13 14:25:42.370020 systemd[1]: Listening on systemd-initctl.socket.
Dec 13 14:25:42.370040 systemd[1]: Listening on systemd-networkd.socket.
Dec 13 14:25:42.370061 systemd[1]: Listening on systemd-udevd-control.socket.
Dec 13 14:25:42.370082 systemd[1]: Listening on systemd-udevd-kernel.socket.
Dec 13 14:25:42.370102 systemd[1]: Listening on systemd-userdbd.socket.
Dec 13 14:25:42.370123 systemd[1]: Mounting dev-hugepages.mount...
Dec 13 14:25:42.372651 systemd[1]: Mounting dev-mqueue.mount...
Dec 13 14:25:42.372681 systemd[1]: Mounting media.mount...
Dec 13 14:25:42.372702 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 14:25:42.372724 systemd[1]: Mounting sys-kernel-debug.mount...
Dec 13 14:25:42.372746 systemd[1]: Mounting sys-kernel-tracing.mount...
Dec 13 14:25:42.372768 systemd[1]: Mounting tmp.mount...
Dec 13 14:25:42.372785 systemd[1]: Starting flatcar-tmpfiles.service...
Dec 13 14:25:42.372804 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 14:25:42.372824 systemd[1]: Starting kmod-static-nodes.service...
Dec 13 14:25:42.372849 systemd[1]: Starting modprobe@configfs.service...
Dec 13 14:25:42.372869 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 14:25:42.372888 systemd[1]: Starting modprobe@drm.service...
Dec 13 14:25:42.372906 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 14:25:42.372924 systemd[1]: Starting modprobe@fuse.service...
Dec 13 14:25:42.372945 systemd[1]: Starting modprobe@loop.service...
Dec 13 14:25:42.372962 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 13 14:25:42.372979 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 13 14:25:42.372996 systemd[1]: Stopped systemd-fsck-root.service.
Dec 13 14:25:42.373017 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 13 14:25:42.373036 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 13 14:25:42.373054 systemd[1]: Stopped systemd-journald.service.
Dec 13 14:25:42.373072 systemd[1]: Starting systemd-journald.service...
Dec 13 14:25:42.373089 systemd[1]: Starting systemd-modules-load.service...
Dec 13 14:25:42.373107 systemd[1]: Starting systemd-network-generator.service...
Dec 13 14:25:42.373125 systemd[1]: Starting systemd-remount-fs.service...
Dec 13 14:25:42.373158 systemd[1]: Starting systemd-udev-trigger.service...
Dec 13 14:25:42.373177 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 13 14:25:42.373199 systemd[1]: Stopped verity-setup.service.
Dec 13 14:25:42.373218 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 14:25:42.373237 systemd[1]: Mounted dev-hugepages.mount.
Dec 13 14:25:42.373255 systemd[1]: Mounted dev-mqueue.mount.
Dec 13 14:25:42.373273 systemd[1]: Mounted media.mount.
Dec 13 14:25:42.373290 systemd[1]: Mounted sys-kernel-debug.mount.
Dec 13 14:25:42.373308 systemd[1]: Mounted sys-kernel-tracing.mount.
Dec 13 14:25:42.373337 systemd[1]: Mounted tmp.mount.
Dec 13 14:25:42.373360 systemd[1]: Finished kmod-static-nodes.service.
Dec 13 14:25:42.373378 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 14:25:42.373397 systemd[1]: Finished modprobe@configfs.service.
Dec 13 14:25:42.373415 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 14:25:42.373433 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 14:25:42.373451 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 14:25:42.373474 systemd[1]: Finished modprobe@drm.service.
Dec 13 14:25:42.373492 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 14:25:42.373511 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 14:25:42.373529 systemd[1]: Finished systemd-modules-load.service.
Dec 13 14:25:42.373547 systemd[1]: Finished systemd-network-generator.service.
Dec 13 14:25:42.373567 systemd[1]: Finished systemd-remount-fs.service.
Dec 13 14:25:42.373585 systemd[1]: Reached target network-pre.target.
Dec 13 14:25:42.373603 systemd[1]: Mounting sys-kernel-config.mount...
Dec 13 14:25:42.373622 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 13 14:25:42.373643 systemd[1]: Starting systemd-hwdb-update.service...
Dec 13 14:25:42.373661 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 14:25:42.373679 kernel: loop: module loaded
Dec 13 14:25:42.373699 systemd[1]: Starting systemd-random-seed.service...
Dec 13 14:25:42.373716 systemd[1]: Starting systemd-sysctl.service...
Dec 13 14:25:42.373734 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 14:25:42.373756 systemd[1]: Finished modprobe@loop.service.
Dec 13 14:25:42.373774 systemd[1]: Mounted sys-kernel-config.mount.
Dec 13 14:25:42.373793 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 14:25:42.373814 systemd[1]: Finished systemd-random-seed.service.
Dec 13 14:25:42.373838 systemd-journald[1410]: Journal started
Dec 13 14:25:42.377462 systemd-journald[1410]: Runtime Journal (/run/log/journal/ec2c78f672b69d7abc696efa246d06e8) is 4.8M, max 38.7M, 33.9M free.
Dec 13 14:25:42.377550 systemd[1]: Reached target first-boot-complete.target.
Dec 13 14:25:35.392000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 13 14:25:35.634000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Dec 13 14:25:35.634000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Dec 13 14:25:35.634000 audit: BPF prog-id=10 op=LOAD
Dec 13 14:25:35.634000 audit: BPF prog-id=10 op=UNLOAD
Dec 13 14:25:35.634000 audit: BPF prog-id=11 op=LOAD
Dec 13 14:25:35.634000 audit: BPF prog-id=11 op=UNLOAD
Dec 13 14:25:36.016000 audit[1333]: AVC avc: denied { associate } for pid=1333 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Dec 13 14:25:36.016000 audit[1333]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001058e2 a1=c00002ae40 a2=c000029100 a3=32 items=0 ppid=1316 pid=1333 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:25:36.016000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Dec 13 14:25:36.018000 audit[1333]: AVC avc: denied { associate } for pid=1333 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Dec 13 14:25:36.018000 audit[1333]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001059b9 a2=1ed a3=0 items=2 ppid=1316 pid=1333 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:25:36.018000 audit: CWD cwd="/"
Dec 13 14:25:36.018000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:25:36.018000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:25:36.018000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Dec 13 14:25:41.987000 audit: BPF prog-id=12 op=LOAD
Dec 13 14:25:41.987000 audit: BPF prog-id=3 op=UNLOAD
Dec 13 14:25:41.988000 audit: BPF prog-id=13 op=LOAD
Dec 13 14:25:41.990000 audit: BPF prog-id=14 op=LOAD
Dec 13 14:25:41.990000 audit: BPF prog-id=4 op=UNLOAD
Dec 13 14:25:41.990000 audit: BPF prog-id=5 op=UNLOAD
Dec 13 14:25:41.991000 audit: BPF prog-id=15 op=LOAD
Dec 13 14:25:41.991000 audit: BPF prog-id=12 op=UNLOAD
Dec 13 14:25:41.995000 audit: BPF prog-id=16 op=LOAD
Dec 13 14:25:41.996000 audit: BPF prog-id=17 op=LOAD
Dec 13 14:25:41.997000 audit: BPF prog-id=13 op=UNLOAD
Dec 13 14:25:41.997000 audit: BPF prog-id=14 op=UNLOAD
Dec 13 14:25:41.998000 audit: BPF prog-id=18 op=LOAD
Dec 13 14:25:41.998000 audit: BPF prog-id=15 op=UNLOAD
Dec 13 14:25:41.999000 audit: BPF prog-id=19 op=LOAD
Dec 13 14:25:42.000000 audit: BPF prog-id=20 op=LOAD
Dec 13 14:25:42.000000 audit: BPF prog-id=16 op=UNLOAD
Dec 13 14:25:42.000000 audit: BPF prog-id=17 op=UNLOAD
Dec 13 14:25:42.001000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:42.004000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:42.009000 audit: BPF prog-id=18 op=UNLOAD
Dec 13 14:25:42.010000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:42.010000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:42.210000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:42.213000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:42.216000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:42.216000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:42.217000 audit: BPF prog-id=21 op=LOAD
Dec 13 14:25:42.217000 audit: BPF prog-id=22 op=LOAD
Dec 13 14:25:42.217000 audit: BPF prog-id=23 op=LOAD
Dec 13 14:25:42.217000 audit: BPF prog-id=19 op=UNLOAD
Dec 13 14:25:42.217000 audit: BPF prog-id=20 op=UNLOAD
Dec 13 14:25:42.248000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:42.275000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:42.279000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:42.279000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:42.284000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:42.285000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:42.291000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:42.291000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:42.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:42.393118 systemd[1]: Started systemd-journald.service.
Dec 13 14:25:42.393226 kernel: fuse: init (API version 7.34)
Dec 13 14:25:42.297000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:42.301000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:42.393468 systemd-journald[1410]: Time spent on flushing to /var/log/journal/ec2c78f672b69d7abc696efa246d06e8 is 76.466ms for 1144 entries.
Dec 13 14:25:42.393468 systemd-journald[1410]: System Journal (/var/log/journal/ec2c78f672b69d7abc696efa246d06e8) is 8.0M, max 195.6M, 187.6M free.
Dec 13 14:25:42.525826 systemd-journald[1410]: Received client request to flush runtime journal.
Dec 13 14:25:42.306000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:42.313000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:42.350000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:42.350000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:42.355000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Dec 13 14:25:42.355000 audit[1410]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffe32d937e0 a2=4000 a3=7ffe32d9387c items=0 ppid=1 pid=1410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:25:42.355000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Dec 13 14:25:42.373000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:42.378000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:42.395000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:42.395000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:42.415000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:42.499000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:42.507000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:42.526000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:36.002407 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2024-12-13T14:25:36Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 14:25:41.984678 systemd[1]: Queued start job for default target multi-user.target.
Dec 13 14:25:36.003049 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2024-12-13T14:25:36Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Dec 13 14:25:42.002394 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 13 14:25:36.003079 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2024-12-13T14:25:36Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Dec 13 14:25:42.382415 systemd[1]: Starting systemd-journal-flush.service...
Dec 13 14:25:36.003124 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2024-12-13T14:25:36Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Dec 13 14:25:42.393008 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 13 14:25:36.003154 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2024-12-13T14:25:36Z" level=debug msg="skipped missing lower profile" missing profile=oem
Dec 13 14:25:42.395056 systemd[1]: Finished modprobe@fuse.service.
Dec 13 14:25:36.003201 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2024-12-13T14:25:36Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Dec 13 14:25:42.398307 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Dec 13 14:25:36.003220 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2024-12-13T14:25:36Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Dec 13 14:25:42.403153 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Dec 13 14:25:36.003495 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2024-12-13T14:25:36Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Dec 13 14:25:42.416013 systemd[1]: Finished systemd-sysctl.service.
Dec 13 14:25:36.003548 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2024-12-13T14:25:36Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Dec 13 14:25:42.499083 systemd[1]: Finished flatcar-tmpfiles.service.
Dec 13 14:25:36.003568 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2024-12-13T14:25:36Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Dec 13 14:25:42.501929 systemd[1]: Starting systemd-sysusers.service...
Dec 13 14:25:36.004368 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2024-12-13T14:25:36Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Dec 13 14:25:42.507005 systemd[1]: Finished systemd-udev-trigger.service.
Dec 13 14:25:36.004426 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2024-12-13T14:25:36Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Dec 13 14:25:42.509730 systemd[1]: Starting systemd-udev-settle.service...
Dec 13 14:25:36.004456 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2024-12-13T14:25:36Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.6: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.6
Dec 13 14:25:42.526997 systemd[1]: Finished systemd-journal-flush.service.
Dec 13 14:25:36.004479 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2024-12-13T14:25:36Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Dec 13 14:25:36.004504 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2024-12-13T14:25:36Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.6: no such file or directory" path=/var/lib/torcx/store/3510.3.6
Dec 13 14:25:36.004525 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2024-12-13T14:25:36Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Dec 13 14:25:41.185346 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2024-12-13T14:25:41Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 14:25:41.185596 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2024-12-13T14:25:41Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 14:25:41.185705 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2024-12-13T14:25:41Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 14:25:41.185888 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2024-12-13T14:25:41Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 14:25:41.185936 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2024-12-13T14:25:41Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Dec 13 14:25:41.187241 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2024-12-13T14:25:41Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Dec 13 14:25:42.535507 udevadm[1450]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Dec 13 14:25:42.709009 systemd[1]: Finished systemd-sysusers.service.
Dec 13 14:25:42.708000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:42.711829 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Dec 13 14:25:42.813206 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Dec 13 14:25:42.813000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:43.225704 systemd[1]: Finished systemd-hwdb-update.service.
Dec 13 14:25:43.225000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:43.231000 audit: BPF prog-id=24 op=LOAD
Dec 13 14:25:43.231000 audit: BPF prog-id=25 op=LOAD
Dec 13 14:25:43.231000 audit: BPF prog-id=7 op=UNLOAD
Dec 13 14:25:43.231000 audit: BPF prog-id=8 op=UNLOAD
Dec 13 14:25:43.234369 systemd[1]: Starting systemd-udevd.service...
Dec 13 14:25:43.259294 systemd-udevd[1453]: Using default interface naming scheme 'v252'.
Dec 13 14:25:43.291742 systemd[1]: Started systemd-udevd.service.
Dec 13 14:25:43.291000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:43.293000 audit: BPF prog-id=26 op=LOAD
Dec 13 14:25:43.295689 systemd[1]: Starting systemd-networkd.service...
Dec 13 14:25:43.319000 audit: BPF prog-id=27 op=LOAD
Dec 13 14:25:43.319000 audit: BPF prog-id=28 op=LOAD
Dec 13 14:25:43.319000 audit: BPF prog-id=29 op=LOAD
Dec 13 14:25:43.321378 systemd[1]: Starting systemd-userdbd.service...
Dec 13 14:25:43.357301 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped.
Dec 13 14:25:43.407248 systemd[1]: Started systemd-userdbd.service.
Dec 13 14:25:43.407000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:43.425704 (udev-worker)[1455]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 14:25:43.481257 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Dec 13 14:25:43.490170 kernel: ACPI: button: Power Button [PWRF]
Dec 13 14:25:43.494522 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4
Dec 13 14:25:43.494710 kernel: ACPI: button: Sleep Button [SLPF]
Dec 13 14:25:43.482000 audit[1468]: AVC avc: denied { confidentiality } for pid=1468 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Dec 13 14:25:43.482000 audit[1468]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55c620c16550 a1=337fc a2=7f36d1a4cbc5 a3=5 items=110 ppid=1453 pid=1468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:25:43.482000 audit: CWD cwd="/"
Dec 13 14:25:43.482000 audit: PATH item=0 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:25:43.482000 audit: PATH item=1 name=(null) inode=15123 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:25:43.482000 audit: PATH item=2 name=(null) inode=15123 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:25:43.482000 audit: PATH item=3 name=(null) inode=15124 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:25:43.482000 audit: PATH item=4 name=(null) inode=15123 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:25:43.482000 audit: PATH item=5 name=(null) inode=15125 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:25:43.482000 audit: PATH item=6 name=(null) inode=15123 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:25:43.482000 audit: PATH item=7 name=(null) inode=15126 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:25:43.482000 audit: PATH item=8 name=(null) inode=15126 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:25:43.482000 audit: PATH item=9 name=(null) inode=15127 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:25:43.482000 audit: PATH item=10 name=(null) inode=15126 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:25:43.482000 audit: PATH item=11 name=(null) inode=15128 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:25:43.482000 audit: PATH item=12 name=(null) inode=15126 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:25:43.482000 audit: PATH item=13 name=(null) inode=15129 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:25:43.482000 audit: PATH item=14 name=(null) inode=15126 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:25:43.482000 audit: PATH item=15 name=(null) inode=15130 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:25:43.482000 audit: PATH item=16 name=(null) inode=15126 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:25:43.482000 audit: PATH item=17 name=(null) inode=15131 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:25:43.482000 audit: PATH item=18 name=(null) inode=15123 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:25:43.482000 audit: PATH item=19 name=(null) inode=15132 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:25:43.482000 audit: PATH item=20 name=(null) inode=15132 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:25:43.482000 audit: PATH item=21 name=(null) inode=15133 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:25:43.482000 audit: PATH item=22 name=(null) inode=15132 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:25:43.482000 audit:
PATH item=23 name=(null) inode=15134 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:43.482000 audit: PATH item=24 name=(null) inode=15132 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:43.482000 audit: PATH item=25 name=(null) inode=15135 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:43.482000 audit: PATH item=26 name=(null) inode=15132 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:43.482000 audit: PATH item=27 name=(null) inode=15136 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:43.482000 audit: PATH item=28 name=(null) inode=15132 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:43.482000 audit: PATH item=29 name=(null) inode=15137 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:43.482000 audit: PATH item=30 name=(null) inode=15123 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:43.482000 audit: PATH item=31 name=(null) inode=15138 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:43.482000 audit: PATH item=32 name=(null) inode=15138 
dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:43.482000 audit: PATH item=33 name=(null) inode=15139 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:43.482000 audit: PATH item=34 name=(null) inode=15138 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:43.482000 audit: PATH item=35 name=(null) inode=15140 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:43.482000 audit: PATH item=36 name=(null) inode=15138 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:43.482000 audit: PATH item=37 name=(null) inode=15141 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:43.482000 audit: PATH item=38 name=(null) inode=15138 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:43.482000 audit: PATH item=39 name=(null) inode=15142 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:43.482000 audit: PATH item=40 name=(null) inode=15138 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:43.482000 audit: PATH item=41 name=(null) inode=15143 dev=00:0b mode=0100440 ouid=0 ogid=0 
rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:43.482000 audit: PATH item=42 name=(null) inode=15123 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:43.482000 audit: PATH item=43 name=(null) inode=15144 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:43.482000 audit: PATH item=44 name=(null) inode=15144 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:43.482000 audit: PATH item=45 name=(null) inode=15145 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:43.482000 audit: PATH item=46 name=(null) inode=15144 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:43.482000 audit: PATH item=47 name=(null) inode=15146 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:43.482000 audit: PATH item=48 name=(null) inode=15144 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:43.482000 audit: PATH item=49 name=(null) inode=15147 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:43.482000 audit: PATH item=50 name=(null) inode=15144 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:43.482000 audit: PATH item=51 name=(null) inode=15148 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:43.482000 audit: PATH item=52 name=(null) inode=15144 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:43.482000 audit: PATH item=53 name=(null) inode=15149 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:43.482000 audit: PATH item=54 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:43.482000 audit: PATH item=55 name=(null) inode=15150 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:43.482000 audit: PATH item=56 name=(null) inode=15150 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:43.482000 audit: PATH item=57 name=(null) inode=15151 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:43.482000 audit: PATH item=58 name=(null) inode=15150 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:43.482000 audit: PATH item=59 name=(null) inode=15152 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:43.482000 audit: PATH item=60 name=(null) inode=15150 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:43.482000 audit: PATH item=61 name=(null) inode=15153 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:43.482000 audit: PATH item=62 name=(null) inode=15153 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:43.482000 audit: PATH item=63 name=(null) inode=15154 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:43.482000 audit: PATH item=64 name=(null) inode=15153 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:43.482000 audit: PATH item=65 name=(null) inode=15155 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:43.482000 audit: PATH item=66 name=(null) inode=15153 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:43.482000 audit: PATH item=67 name=(null) inode=15156 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:43.482000 audit: PATH item=68 name=(null) inode=15153 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Dec 13 14:25:43.482000 audit: PATH item=69 name=(null) inode=15157 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:43.482000 audit: PATH item=70 name=(null) inode=15153 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:43.482000 audit: PATH item=71 name=(null) inode=15158 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:43.482000 audit: PATH item=72 name=(null) inode=15150 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:43.482000 audit: PATH item=73 name=(null) inode=15159 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:43.482000 audit: PATH item=74 name=(null) inode=15159 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:43.482000 audit: PATH item=75 name=(null) inode=15160 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:43.482000 audit: PATH item=76 name=(null) inode=15159 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:43.482000 audit: PATH item=77 name=(null) inode=15161 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 
14:25:43.482000 audit: PATH item=78 name=(null) inode=15159 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:43.482000 audit: PATH item=79 name=(null) inode=15162 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:43.482000 audit: PATH item=80 name=(null) inode=15159 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:43.482000 audit: PATH item=81 name=(null) inode=15163 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:43.482000 audit: PATH item=82 name=(null) inode=15159 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:43.482000 audit: PATH item=83 name=(null) inode=15164 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:43.482000 audit: PATH item=84 name=(null) inode=15150 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:43.482000 audit: PATH item=85 name=(null) inode=15165 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:43.482000 audit: PATH item=86 name=(null) inode=15165 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:43.482000 audit: PATH item=87 
name=(null) inode=15166 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:43.482000 audit: PATH item=88 name=(null) inode=15165 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:43.482000 audit: PATH item=89 name=(null) inode=15167 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:43.482000 audit: PATH item=90 name=(null) inode=15165 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:43.482000 audit: PATH item=91 name=(null) inode=15168 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:43.518178 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255 Dec 13 14:25:43.482000 audit: PATH item=92 name=(null) inode=15165 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:43.482000 audit: PATH item=93 name=(null) inode=15169 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:43.482000 audit: PATH item=94 name=(null) inode=15165 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:43.482000 audit: PATH item=95 name=(null) inode=15170 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Dec 13 14:25:43.482000 audit: PATH item=96 name=(null) inode=15150 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:43.482000 audit: PATH item=97 name=(null) inode=15171 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:43.482000 audit: PATH item=98 name=(null) inode=15171 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:43.482000 audit: PATH item=99 name=(null) inode=15172 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:43.482000 audit: PATH item=100 name=(null) inode=15171 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:43.482000 audit: PATH item=101 name=(null) inode=15173 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:43.482000 audit: PATH item=102 name=(null) inode=15171 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:43.482000 audit: PATH item=103 name=(null) inode=15174 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:43.482000 audit: PATH item=104 name=(null) inode=15171 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 
14:25:43.482000 audit: PATH item=105 name=(null) inode=15175 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:43.482000 audit: PATH item=106 name=(null) inode=15171 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:43.482000 audit: PATH item=107 name=(null) inode=15176 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:43.482000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:43.482000 audit: PATH item=109 name=(null) inode=15177 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:43.482000 audit: PROCTITLE proctitle="(udev-worker)" Dec 13 14:25:43.568367 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input5 Dec 13 14:25:43.569387 systemd-networkd[1462]: lo: Link UP Dec 13 14:25:43.569396 systemd-networkd[1462]: lo: Gained carrier Dec 13 14:25:43.569982 systemd-networkd[1462]: Enumeration completed Dec 13 14:25:43.570000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:43.570098 systemd[1]: Started systemd-networkd.service. Dec 13 14:25:43.571363 systemd-networkd[1462]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 14:25:43.574116 systemd[1]: Starting systemd-networkd-wait-online.service... 
Dec 13 14:25:43.580334 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 14:25:43.581570 systemd-networkd[1462]: eth0: Link UP Dec 13 14:25:43.581817 systemd-networkd[1462]: eth0: Gained carrier Dec 13 14:25:43.595449 systemd-networkd[1462]: eth0: DHCPv4 address 172.31.28.77/20, gateway 172.31.16.1 acquired from 172.31.16.1 Dec 13 14:25:43.650158 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/nvme0n1p6 scanned by (udev-worker) (1459) Dec 13 14:25:43.660159 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 14:25:43.786047 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 14:25:43.875165 systemd[1]: Finished systemd-udev-settle.service. Dec 13 14:25:43.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:43.878842 systemd[1]: Starting lvm2-activation-early.service... Dec 13 14:25:43.965012 lvm[1567]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 14:25:43.995576 systemd[1]: Finished lvm2-activation-early.service. Dec 13 14:25:43.995000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:43.996916 systemd[1]: Reached target cryptsetup.target. Dec 13 14:25:44.000172 systemd[1]: Starting lvm2-activation.service... Dec 13 14:25:44.005846 lvm[1568]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 14:25:44.033267 systemd[1]: Finished lvm2-activation.service. Dec 13 14:25:44.034000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 13 14:25:44.035983 systemd[1]: Reached target local-fs-pre.target. Dec 13 14:25:44.037200 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 14:25:44.037244 systemd[1]: Reached target local-fs.target. Dec 13 14:25:44.039355 systemd[1]: Reached target machines.target. Dec 13 14:25:44.042600 systemd[1]: Starting ldconfig.service... Dec 13 14:25:44.044308 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:25:44.044379 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:25:44.046195 systemd[1]: Starting systemd-boot-update.service... Dec 13 14:25:44.050474 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Dec 13 14:25:44.054335 systemd[1]: Starting systemd-machine-id-commit.service... Dec 13 14:25:44.069984 systemd[1]: Starting systemd-sysext.service... Dec 13 14:25:44.093436 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1570 (bootctl) Dec 13 14:25:44.096082 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Dec 13 14:25:44.114928 systemd[1]: Unmounting usr-share-oem.mount... Dec 13 14:25:44.124000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:44.125458 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Dec 13 14:25:44.139179 systemd[1]: usr-share-oem.mount: Deactivated successfully. Dec 13 14:25:44.139677 systemd[1]: Unmounted usr-share-oem.mount. 
Dec 13 14:25:44.169167 kernel: loop0: detected capacity change from 0 to 211296 Dec 13 14:25:44.339156 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 14:25:44.341739 systemd-fsck[1579]: fsck.fat 4.2 (2021-01-31) Dec 13 14:25:44.341739 systemd-fsck[1579]: /dev/nvme0n1p1: 789 files, 119291/258078 clusters Dec 13 14:25:44.344719 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Dec 13 14:25:44.344000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:44.347717 systemd[1]: Mounting boot.mount... Dec 13 14:25:44.365183 kernel: loop1: detected capacity change from 0 to 211296 Dec 13 14:25:44.365557 systemd[1]: Mounted boot.mount. Dec 13 14:25:44.393575 (sd-sysext)[1587]: Using extensions 'kubernetes'. Dec 13 14:25:44.394329 (sd-sysext)[1587]: Merged extensions into '/usr'. Dec 13 14:25:44.403000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:44.403042 systemd[1]: Finished systemd-boot-update.service. Dec 13 14:25:44.422233 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:25:44.424213 systemd[1]: Mounting usr-share-oem.mount... Dec 13 14:25:44.425839 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:25:44.428029 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:25:44.431317 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:25:44.441310 systemd[1]: Starting modprobe@loop.service... 
Dec 13 14:25:44.444069 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:25:44.444439 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:25:44.444725 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:25:44.449286 systemd[1]: Mounted usr-share-oem.mount. Dec 13 14:25:44.450781 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:25:44.450960 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:25:44.451000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:44.451000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:44.452494 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:25:44.452918 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:25:44.454463 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:25:44.452000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:44.452000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:44.454616 systemd[1]: Finished modprobe@loop.service. 
Dec 13 14:25:44.454000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:44.454000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:44.456342 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 14:25:44.456448 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 14:25:44.458123 systemd[1]: Finished systemd-sysext.service.
Dec 13 14:25:44.458000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:44.460451 systemd[1]: Starting ensure-sysext.service...
Dec 13 14:25:44.465170 systemd[1]: Starting systemd-tmpfiles-setup.service...
Dec 13 14:25:44.478691 systemd[1]: Reloading.
Dec 13 14:25:44.489190 systemd-tmpfiles[1602]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Dec 13 14:25:44.512309 systemd-tmpfiles[1602]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 13 14:25:44.518770 systemd-tmpfiles[1602]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 13 14:25:44.625096 /usr/lib/systemd/system-generators/torcx-generator[1622]: time="2024-12-13T14:25:44Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 14:25:44.625713 /usr/lib/systemd/system-generators/torcx-generator[1622]: time="2024-12-13T14:25:44Z" level=info msg="torcx already run"
Dec 13 14:25:44.814331 systemd-networkd[1462]: eth0: Gained IPv6LL
Dec 13 14:25:44.847954 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 14:25:44.848389 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 14:25:44.882848 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 14:25:44.959829 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 13 14:25:44.964000 audit: BPF prog-id=30 op=LOAD
Dec 13 14:25:44.964000 audit: BPF prog-id=21 op=UNLOAD
Dec 13 14:25:44.964000 audit: BPF prog-id=31 op=LOAD
Dec 13 14:25:44.964000 audit: BPF prog-id=32 op=LOAD
Dec 13 14:25:44.964000 audit: BPF prog-id=22 op=UNLOAD
Dec 13 14:25:44.964000 audit: BPF prog-id=23 op=UNLOAD
Dec 13 14:25:44.966000 audit: BPF prog-id=33 op=LOAD
Dec 13 14:25:44.966000 audit: BPF prog-id=26 op=UNLOAD
Dec 13 14:25:44.968000 audit: BPF prog-id=34 op=LOAD
Dec 13 14:25:44.968000 audit: BPF prog-id=27 op=UNLOAD
Dec 13 14:25:44.969000 audit: BPF prog-id=35 op=LOAD
Dec 13 14:25:44.969000 audit: BPF prog-id=36 op=LOAD
Dec 13 14:25:44.969000 audit: BPF prog-id=28 op=UNLOAD
Dec 13 14:25:44.969000 audit: BPF prog-id=29 op=UNLOAD
Dec 13 14:25:44.970000 audit: BPF prog-id=37 op=LOAD
Dec 13 14:25:44.970000 audit: BPF prog-id=38 op=LOAD
Dec 13 14:25:44.970000 audit: BPF prog-id=24 op=UNLOAD
Dec 13 14:25:44.970000 audit: BPF prog-id=25 op=UNLOAD
Dec 13 14:25:44.974881 systemd[1]: Finished systemd-networkd-wait-online.service.
Dec 13 14:25:44.974000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:44.976728 systemd[1]: Finished systemd-machine-id-commit.service.
Dec 13 14:25:44.976000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:44.978984 systemd[1]: Finished systemd-tmpfiles-setup.service.
Dec 13 14:25:44.979000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:44.986811 systemd[1]: Starting audit-rules.service...
Dec 13 14:25:44.989125 systemd[1]: Starting clean-ca-certificates.service...
Dec 13 14:25:44.993000 audit: BPF prog-id=39 op=LOAD
Dec 13 14:25:44.992478 systemd[1]: Starting systemd-journal-catalog-update.service...
Dec 13 14:25:44.996303 systemd[1]: Starting systemd-resolved.service...
Dec 13 14:25:44.998000 audit: BPF prog-id=40 op=LOAD
Dec 13 14:25:45.000389 systemd[1]: Starting systemd-timesyncd.service...
Dec 13 14:25:45.004614 systemd[1]: Starting systemd-update-utmp.service...
Dec 13 14:25:45.030377 systemd[1]: Finished clean-ca-certificates.service.
Dec 13 14:25:45.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:45.032021 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 14:25:45.035315 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 14:25:45.037736 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 14:25:45.040962 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 14:25:45.043935 systemd[1]: Starting modprobe@loop.service...
Dec 13 14:25:45.045018 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 14:25:45.045234 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:25:45.045478 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 14:25:45.045000 audit[1681]: SYSTEM_BOOT pid=1681 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:45.047000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:45.047000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:45.047087 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 14:25:45.047295 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 14:25:45.048941 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 14:25:45.049100 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 14:25:45.049000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:45.049000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:45.054101 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 14:25:45.056041 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 14:25:45.056369 systemd[1]: Finished modprobe@loop.service.
Dec 13 14:25:45.056000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:45.056000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:45.058024 systemd[1]: Finished systemd-update-utmp.service.
Dec 13 14:25:45.058000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:45.059646 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 14:25:45.062726 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 14:25:45.064674 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 14:25:45.070736 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 14:25:45.073679 systemd[1]: Starting modprobe@loop.service...
Dec 13 14:25:45.075146 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 14:25:45.075465 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:25:45.075638 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 14:25:45.080728 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 14:25:45.080917 systemd[1]: Finished modprobe@loop.service.
Dec 13 14:25:45.080000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:45.081000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:45.088323 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 14:25:45.090313 systemd[1]: Starting modprobe@drm.service...
Dec 13 14:25:45.093048 systemd[1]: Starting modprobe@loop.service...
Dec 13 14:25:45.094214 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 14:25:45.094393 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:25:45.094629 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 14:25:45.106206 systemd[1]: Finished ensure-sysext.service.
Dec 13 14:25:45.106000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:45.115315 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 14:25:45.115522 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 14:25:45.116000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:45.116000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:45.117675 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 14:25:45.117843 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 14:25:45.117000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:45.117000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:45.119087 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 14:25:45.132685 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 14:25:45.132710 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 14:25:45.142494 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 14:25:45.142697 systemd[1]: Finished modprobe@loop.service.
Dec 13 14:25:45.142000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:45.142000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:45.143976 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 14:25:45.145086 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 14:25:45.145269 systemd[1]: Finished modprobe@drm.service.
Dec 13 14:25:45.145000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:45.145000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:45.165827 systemd[1]: Finished systemd-journal-catalog-update.service.
Dec 13 14:25:45.165000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:45.180666 systemd-resolved[1679]: Positive Trust Anchors:
Dec 13 14:25:45.180690 systemd-resolved[1679]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 14:25:45.180744 systemd-resolved[1679]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Dec 13 14:25:45.217966 systemd-resolved[1679]: Defaulting to hostname 'linux'.
Dec 13 14:25:45.218372 systemd[1]: Started systemd-timesyncd.service.
Dec 13 14:25:45.218000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:45.219680 systemd[1]: Reached target time-set.target.
Dec 13 14:25:45.222109 systemd[1]: Started systemd-resolved.service.
Dec 13 14:25:45.221000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:45.223212 systemd[1]: Reached target network.target.
Dec 13 14:25:45.224086 systemd[1]: Reached target network-online.target.
Dec 13 14:25:45.225205 systemd[1]: Reached target nss-lookup.target.
Dec 13 14:25:45.254257 augenrules[1705]: No rules
Dec 13 14:25:45.253000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Dec 13 14:25:45.253000 audit[1705]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffcdc710f20 a2=420 a3=0 items=0 ppid=1676 pid=1705 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:25:45.253000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Dec 13 14:25:45.255371 systemd[1]: Finished audit-rules.service.
Dec 13 14:25:46.425069 systemd-timesyncd[1680]: Contacted time server 96.126.122.39:123 (0.flatcar.pool.ntp.org).
Dec 13 14:25:46.425070 systemd-resolved[1679]: Clock change detected. Flushing caches.
Dec 13 14:25:46.425315 systemd-timesyncd[1680]: Initial clock synchronization to Fri 2024-12-13 14:25:46.424887 UTC.
Dec 13 14:25:46.625986 ldconfig[1569]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 13 14:25:46.635846 systemd[1]: Finished ldconfig.service.
Dec 13 14:25:46.638220 systemd[1]: Starting systemd-update-done.service...
Dec 13 14:25:46.663735 systemd[1]: Finished systemd-update-done.service.
Dec 13 14:25:46.666460 systemd[1]: Reached target sysinit.target.
Dec 13 14:25:46.668546 systemd[1]: Started motdgen.path.
Dec 13 14:25:46.672484 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Dec 13 14:25:46.676505 systemd[1]: Started logrotate.timer.
Dec 13 14:25:46.678094 systemd[1]: Started mdadm.timer.
Dec 13 14:25:46.682735 systemd[1]: Started systemd-tmpfiles-clean.timer.
Dec 13 14:25:46.686674 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 13 14:25:46.686729 systemd[1]: Reached target paths.target.
Dec 13 14:25:46.687679 systemd[1]: Reached target timers.target.
Dec 13 14:25:46.688841 systemd[1]: Listening on dbus.socket.
Dec 13 14:25:46.690892 systemd[1]: Starting docker.socket...
Dec 13 14:25:46.694584 systemd[1]: Listening on sshd.socket.
Dec 13 14:25:46.695634 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:25:46.696130 systemd[1]: Listening on docker.socket.
Dec 13 14:25:46.697286 systemd[1]: Reached target sockets.target.
Dec 13 14:25:46.698278 systemd[1]: Reached target basic.target.
Dec 13 14:25:46.699940 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Dec 13 14:25:46.699961 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Dec 13 14:25:46.706683 systemd[1]: Started amazon-ssm-agent.service.
Dec 13 14:25:46.717791 systemd[1]: Starting containerd.service...
Dec 13 14:25:46.732301 systemd[1]: Starting coreos-metadata-sshkeys@core.service...
Dec 13 14:25:46.742516 systemd[1]: Starting dbus.service...
Dec 13 14:25:46.750471 systemd[1]: Starting enable-oem-cloudinit.service...
Dec 13 14:25:46.761001 systemd[1]: Starting extend-filesystems.service...
Dec 13 14:25:46.763364 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Dec 13 14:25:46.766505 systemd[1]: Starting kubelet.service...
Dec 13 14:25:46.770860 systemd[1]: Starting motdgen.service...
Dec 13 14:25:46.776314 systemd[1]: Started nvidia.service.
Dec 13 14:25:46.780968 systemd[1]: Starting ssh-key-proc-cmdline.service...
Dec 13 14:25:46.784526 systemd[1]: Starting sshd-keygen.service...
Dec 13 14:25:46.793828 systemd[1]: Starting systemd-logind.service...
Dec 13 14:25:46.795135 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:25:46.795254 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 13 14:25:46.849948 jq[1717]: false
Dec 13 14:25:46.796359 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 13 14:25:46.859223 jq[1726]: true
Dec 13 14:25:46.797709 systemd[1]: Starting update-engine.service...
Dec 13 14:25:46.802335 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Dec 13 14:25:46.842021 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 13 14:25:46.842304 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Dec 13 14:25:46.880686 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 13 14:25:46.880908 systemd[1]: Finished ssh-key-proc-cmdline.service.
Dec 13 14:25:47.100581 jq[1734]: true
Dec 13 14:25:47.108562 dbus-daemon[1716]: [system] SELinux support is enabled
Dec 13 14:25:47.115407 systemd[1]: Started dbus.service.
Dec 13 14:25:47.124128 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 13 14:25:47.135286 systemd[1]: Reached target system-config.target.
Dec 13 14:25:47.145452 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 13 14:25:47.145485 systemd[1]: Reached target user-config.target.
Dec 13 14:25:47.146872 systemd[1]: motdgen.service: Deactivated successfully.
Dec 13 14:25:47.147093 systemd[1]: Finished motdgen.service.
Dec 13 14:25:47.188573 extend-filesystems[1718]: Found loop1
Dec 13 14:25:47.190617 extend-filesystems[1718]: Found nvme0n1
Dec 13 14:25:47.190617 extend-filesystems[1718]: Found nvme0n1p1
Dec 13 14:25:47.190617 extend-filesystems[1718]: Found nvme0n1p2
Dec 13 14:25:47.190617 extend-filesystems[1718]: Found nvme0n1p3
Dec 13 14:25:47.190617 extend-filesystems[1718]: Found usr
Dec 13 14:25:47.190617 extend-filesystems[1718]: Found nvme0n1p4
Dec 13 14:25:47.190617 extend-filesystems[1718]: Found nvme0n1p6
Dec 13 14:25:47.190617 extend-filesystems[1718]: Found nvme0n1p7
Dec 13 14:25:47.190617 extend-filesystems[1718]: Found nvme0n1p9
Dec 13 14:25:47.190617 extend-filesystems[1718]: Checking size of /dev/nvme0n1p9
Dec 13 14:25:47.229143 dbus-daemon[1716]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1462 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Dec 13 14:25:47.236451 systemd[1]: Starting systemd-hostnamed.service...
Dec 13 14:25:47.243825 extend-filesystems[1718]: Resized partition /dev/nvme0n1p9
Dec 13 14:25:47.247239 amazon-ssm-agent[1713]: 2024/12/13 14:25:47 Failed to load instance info from vault. RegistrationKey does not exist.
Dec 13 14:25:47.276240 extend-filesystems[1774]: resize2fs 1.46.5 (30-Dec-2021)
Dec 13 14:25:47.285493 amazon-ssm-agent[1713]: Initializing new seelog logger
Dec 13 14:25:47.285493 amazon-ssm-agent[1713]: New Seelog Logger Creation Complete
Dec 13 14:25:47.285493 amazon-ssm-agent[1713]: 2024/12/13 14:25:47 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Dec 13 14:25:47.285493 amazon-ssm-agent[1713]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Dec 13 14:25:47.285493 amazon-ssm-agent[1713]: 2024/12/13 14:25:47 processing appconfig overrides
Dec 13 14:25:47.295214 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Dec 13 14:25:47.331068 update_engine[1725]: I1213 14:25:47.327720 1725 main.cc:92] Flatcar Update Engine starting
Dec 13 14:25:47.339559 systemd[1]: Started update-engine.service.
Dec 13 14:25:47.344495 systemd[1]: Started locksmithd.service.
Dec 13 14:25:47.366320 update_engine[1725]: I1213 14:25:47.345924 1725 update_check_scheduler.cc:74] Next update check in 2m26s
Dec 13 14:25:47.368329 env[1731]: time="2024-12-13T14:25:47.368268212Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Dec 13 14:25:47.379334 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Dec 13 14:25:47.413271 extend-filesystems[1774]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Dec 13 14:25:47.413271 extend-filesystems[1774]: old_desc_blocks = 1, new_desc_blocks = 1
Dec 13 14:25:47.413271 extend-filesystems[1774]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Dec 13 14:25:47.433698 extend-filesystems[1718]: Resized filesystem in /dev/nvme0n1p9
Dec 13 14:25:47.416497 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Dec 13 14:25:47.435134 bash[1781]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 14:25:47.427856 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 13 14:25:47.428089 systemd[1]: Finished extend-filesystems.service.
Dec 13 14:25:47.523444 systemd-logind[1724]: Watching system buttons on /dev/input/event1 (Power Button)
Dec 13 14:25:47.523482 systemd-logind[1724]: Watching system buttons on /dev/input/event2 (Sleep Button)
Dec 13 14:25:47.523505 systemd-logind[1724]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Dec 13 14:25:47.530032 systemd-logind[1724]: New seat seat0.
Dec 13 14:25:47.546121 systemd[1]: Started systemd-logind.service.
Dec 13 14:25:47.577228 env[1731]: time="2024-12-13T14:25:47.576984514Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Dec 13 14:25:47.577434 env[1731]: time="2024-12-13T14:25:47.577406144Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Dec 13 14:25:47.596777 systemd[1]: nvidia.service: Deactivated successfully.
Dec 13 14:25:47.597844 env[1731]: time="2024-12-13T14:25:47.597788183Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Dec 13 14:25:47.597963 env[1731]: time="2024-12-13T14:25:47.597858090Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Dec 13 14:25:47.685706 env[1731]: time="2024-12-13T14:25:47.685547701Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 14:25:47.685706 env[1731]: time="2024-12-13T14:25:47.685599131Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Dec 13 14:25:47.685706 env[1731]: time="2024-12-13T14:25:47.685634213Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Dec 13 14:25:47.685706 env[1731]: time="2024-12-13T14:25:47.685651397Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Dec 13 14:25:47.686083 env[1731]: time="2024-12-13T14:25:47.685818860Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Dec 13 14:25:47.687649 env[1731]: time="2024-12-13T14:25:47.686338901Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Dec 13 14:25:47.687649 env[1731]: time="2024-12-13T14:25:47.686680515Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 14:25:47.687649 env[1731]: time="2024-12-13T14:25:47.686710084Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Dec 13 14:25:47.687649 env[1731]: time="2024-12-13T14:25:47.686813255Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Dec 13 14:25:47.687649 env[1731]: time="2024-12-13T14:25:47.686831240Z" level=info msg="metadata content store policy set" policy=shared
Dec 13 14:25:47.698210 env[1731]: time="2024-12-13T14:25:47.695546239Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Dec 13 14:25:47.698210 env[1731]: time="2024-12-13T14:25:47.695599965Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Dec 13 14:25:47.698210 env[1731]: time="2024-12-13T14:25:47.695625337Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Dec 13 14:25:47.698210 env[1731]: time="2024-12-13T14:25:47.695681960Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Dec 13 14:25:47.698210 env[1731]: time="2024-12-13T14:25:47.695702903Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Dec 13 14:25:47.698210 env[1731]: time="2024-12-13T14:25:47.695776341Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Dec 13 14:25:47.698210 env[1731]: time="2024-12-13T14:25:47.695891536Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Dec 13 14:25:47.698210 env[1731]: time="2024-12-13T14:25:47.695922528Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Dec 13 14:25:47.698210 env[1731]: time="2024-12-13T14:25:47.695981514Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Dec 13 14:25:47.698210 env[1731]: time="2024-12-13T14:25:47.696004522Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Dec 13 14:25:47.698210 env[1731]: time="2024-12-13T14:25:47.696023031Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Dec 13 14:25:47.698210 env[1731]: time="2024-12-13T14:25:47.696041213Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Dec 13 14:25:47.698210 env[1731]: time="2024-12-13T14:25:47.696182183Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Dec 13 14:25:47.698210 env[1731]: time="2024-12-13T14:25:47.696299391Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Dec 13 14:25:47.698904 env[1731]: time="2024-12-13T14:25:47.696784209Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Dec 13 14:25:47.698904 env[1731]: time="2024-12-13T14:25:47.696824805Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Dec 13 14:25:47.698904 env[1731]: time="2024-12-13T14:25:47.696845792Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Dec 13 14:25:47.698904 env[1731]: time="2024-12-13T14:25:47.696919348Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Dec 13 14:25:47.698904 env[1731]: time="2024-12-13T14:25:47.696940171Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Dec 13 14:25:47.698904 env[1731]: time="2024-12-13T14:25:47.697013046Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Dec 13 14:25:47.698904 env[1731]: time="2024-12-13T14:25:47.697035302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Dec 13 14:25:47.698904 env[1731]: time="2024-12-13T14:25:47.697054736Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Dec 13 14:25:47.698904 env[1731]: time="2024-12-13T14:25:47.697075437Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Dec 13 14:25:47.698904 env[1731]: time="2024-12-13T14:25:47.697091202Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Dec 13 14:25:47.698904 env[1731]: time="2024-12-13T14:25:47.697107718Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Dec 13 14:25:47.698904 env[1731]: time="2024-12-13T14:25:47.697127578Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Dec 13 14:25:47.698904 env[1731]: time="2024-12-13T14:25:47.697298456Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Dec 13 14:25:47.698904 env[1731]: time="2024-12-13T14:25:47.697320930Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Dec 13 14:25:47.698904 env[1731]: time="2024-12-13T14:25:47.697340154Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Dec 13 14:25:47.699529 env[1731]: time="2024-12-13T14:25:47.697361980Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Dec 13 14:25:47.699529 env[1731]: time="2024-12-13T14:25:47.697383620Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Dec 13 14:25:47.699529 env[1731]: time="2024-12-13T14:25:47.697400367Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Dec 13 14:25:47.699529 env[1731]: time="2024-12-13T14:25:47.697426529Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Dec 13 14:25:47.699529 env[1731]: time="2024-12-13T14:25:47.697471576Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Dec 13 14:25:47.699740 env[1731]: time="2024-12-13T14:25:47.697725135Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Dec 13 14:25:47.699740 env[1731]: time="2024-12-13T14:25:47.697796452Z" level=info msg="Connect containerd service"
Dec 13 14:25:47.699740 env[1731]: time="2024-12-13T14:25:47.697839746Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Dec 13 14:25:47.718209 env[1731]: time="2024-12-13T14:25:47.718131431Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 14:25:47.719702 env[1731]: time="2024-12-13T14:25:47.719651252Z" level=info msg="Start subscribing containerd event"
Dec 13 14:25:47.719938 env[1731]: time="2024-12-13T14:25:47.719910001Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 13 14:25:47.720132 env[1731]: time="2024-12-13T14:25:47.720057526Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 13 14:25:47.720577 systemd[1]: Started containerd.service.
Dec 13 14:25:47.722088 env[1731]: time="2024-12-13T14:25:47.722065803Z" level=info msg="containerd successfully booted in 0.354653s" Dec 13 14:25:47.736287 env[1731]: time="2024-12-13T14:25:47.736245570Z" level=info msg="Start recovering state" Dec 13 14:25:47.738140 env[1731]: time="2024-12-13T14:25:47.738099762Z" level=info msg="Start event monitor" Dec 13 14:25:47.739722 env[1731]: time="2024-12-13T14:25:47.739690026Z" level=info msg="Start snapshots syncer" Dec 13 14:25:47.739854 env[1731]: time="2024-12-13T14:25:47.739830930Z" level=info msg="Start cni network conf syncer for default" Dec 13 14:25:47.741132 env[1731]: time="2024-12-13T14:25:47.741104425Z" level=info msg="Start streaming server" Dec 13 14:25:47.862763 dbus-daemon[1716]: [system] Successfully activated service 'org.freedesktop.hostname1' Dec 13 14:25:47.862951 systemd[1]: Started systemd-hostnamed.service. Dec 13 14:25:47.863403 dbus-daemon[1716]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1771 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Dec 13 14:25:47.868635 systemd[1]: Starting polkit.service... Dec 13 14:25:47.899699 polkitd[1828]: Started polkitd version 121 Dec 13 14:25:47.943522 polkitd[1828]: Loading rules from directory /etc/polkit-1/rules.d Dec 13 14:25:47.943606 polkitd[1828]: Loading rules from directory /usr/share/polkit-1/rules.d Dec 13 14:25:47.945198 polkitd[1828]: Finished loading, compiling and executing 2 rules Dec 13 14:25:47.946714 dbus-daemon[1716]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Dec 13 14:25:47.946923 systemd[1]: Started polkit.service. 
Dec 13 14:25:47.947234 polkitd[1828]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Dec 13 14:25:47.958332 amazon-ssm-agent[1713]: 2024-12-13 14:25:47 INFO Create new startup processor Dec 13 14:25:47.958693 amazon-ssm-agent[1713]: 2024-12-13 14:25:47 INFO [LongRunningPluginsManager] registered plugins: {} Dec 13 14:25:47.958805 amazon-ssm-agent[1713]: 2024-12-13 14:25:47 INFO Initializing bookkeeping folders Dec 13 14:25:47.958905 amazon-ssm-agent[1713]: 2024-12-13 14:25:47 INFO removing the completed state files Dec 13 14:25:47.958987 amazon-ssm-agent[1713]: 2024-12-13 14:25:47 INFO Initializing bookkeeping folders for long running plugins Dec 13 14:25:47.959073 amazon-ssm-agent[1713]: 2024-12-13 14:25:47 INFO Initializing replies folder for MDS reply requests that couldn't reach the service Dec 13 14:25:47.959155 amazon-ssm-agent[1713]: 2024-12-13 14:25:47 INFO Initializing healthcheck folders for long running plugins Dec 13 14:25:47.959267 amazon-ssm-agent[1713]: 2024-12-13 14:25:47 INFO Initializing locations for inventory plugin Dec 13 14:25:47.959353 amazon-ssm-agent[1713]: 2024-12-13 14:25:47 INFO Initializing default location for custom inventory Dec 13 14:25:47.959439 amazon-ssm-agent[1713]: 2024-12-13 14:25:47 INFO Initializing default location for file inventory Dec 13 14:25:47.959532 amazon-ssm-agent[1713]: 2024-12-13 14:25:47 INFO Initializing default location for role inventory Dec 13 14:25:47.959622 amazon-ssm-agent[1713]: 2024-12-13 14:25:47 INFO Init the cloudwatchlogs publisher Dec 13 14:25:47.959704 amazon-ssm-agent[1713]: 2024-12-13 14:25:47 INFO [instanceID=i-09bfd449b21ece4db] Successfully loaded platform independent plugin aws:downloadContent Dec 13 14:25:47.959787 amazon-ssm-agent[1713]: 2024-12-13 14:25:47 INFO [instanceID=i-09bfd449b21ece4db] Successfully loaded platform independent plugin aws:runPowerShellScript Dec 13 14:25:47.959874 amazon-ssm-agent[1713]: 2024-12-13 14:25:47 INFO [instanceID=i-09bfd449b21ece4db] 
Successfully loaded platform independent plugin aws:runDockerAction Dec 13 14:25:47.959958 amazon-ssm-agent[1713]: 2024-12-13 14:25:47 INFO [instanceID=i-09bfd449b21ece4db] Successfully loaded platform independent plugin aws:configureDocker Dec 13 14:25:47.960037 amazon-ssm-agent[1713]: 2024-12-13 14:25:47 INFO [instanceID=i-09bfd449b21ece4db] Successfully loaded platform independent plugin aws:refreshAssociation Dec 13 14:25:47.960125 amazon-ssm-agent[1713]: 2024-12-13 14:25:47 INFO [instanceID=i-09bfd449b21ece4db] Successfully loaded platform independent plugin aws:configurePackage Dec 13 14:25:47.960231 amazon-ssm-agent[1713]: 2024-12-13 14:25:47 INFO [instanceID=i-09bfd449b21ece4db] Successfully loaded platform independent plugin aws:runDocument Dec 13 14:25:47.960320 amazon-ssm-agent[1713]: 2024-12-13 14:25:47 INFO [instanceID=i-09bfd449b21ece4db] Successfully loaded platform independent plugin aws:softwareInventory Dec 13 14:25:47.960400 amazon-ssm-agent[1713]: 2024-12-13 14:25:47 INFO [instanceID=i-09bfd449b21ece4db] Successfully loaded platform independent plugin aws:updateSsmAgent Dec 13 14:25:47.960480 amazon-ssm-agent[1713]: 2024-12-13 14:25:47 INFO [instanceID=i-09bfd449b21ece4db] Successfully loaded platform dependent plugin aws:runShellScript Dec 13 14:25:47.960576 amazon-ssm-agent[1713]: 2024-12-13 14:25:47 INFO Starting Agent: amazon-ssm-agent - v2.3.1319.0 Dec 13 14:25:47.960663 amazon-ssm-agent[1713]: 2024-12-13 14:25:47 INFO OS: linux, Arch: amd64 Dec 13 14:25:47.962319 amazon-ssm-agent[1713]: datastore file /var/lib/amazon/ssm/i-09bfd449b21ece4db/longrunningplugins/datastore/store doesn't exist - no long running plugins to execute Dec 13 14:25:47.966723 amazon-ssm-agent[1713]: 2024-12-13 14:25:47 INFO [MessagingDeliveryService] Starting document processing engine... Dec 13 14:25:47.985333 systemd-hostnamed[1771]: Hostname set to (transient) Dec 13 14:25:47.986646 systemd-resolved[1679]: System hostname changed to 'ip-172-31-28-77'. 
Dec 13 14:25:48.050387 coreos-metadata[1715]: Dec 13 14:25:48.050 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Dec 13 14:25:48.055339 coreos-metadata[1715]: Dec 13 14:25:48.055 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys: Attempt #1 Dec 13 14:25:48.055915 coreos-metadata[1715]: Dec 13 14:25:48.055 INFO Fetch successful Dec 13 14:25:48.056048 coreos-metadata[1715]: Dec 13 14:25:48.055 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys/0/openssh-key: Attempt #1 Dec 13 14:25:48.056481 coreos-metadata[1715]: Dec 13 14:25:48.056 INFO Fetch successful Dec 13 14:25:48.064231 amazon-ssm-agent[1713]: 2024-12-13 14:25:47 INFO [MessagingDeliveryService] [EngineProcessor] Starting Dec 13 14:25:48.069831 unknown[1715]: wrote ssh authorized keys file for user: core Dec 13 14:25:48.098789 update-ssh-keys[1878]: Updated "/home/core/.ssh/authorized_keys" Dec 13 14:25:48.099421 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Dec 13 14:25:48.165764 amazon-ssm-agent[1713]: 2024-12-13 14:25:47 INFO [MessagingDeliveryService] [EngineProcessor] Initial processing Dec 13 14:25:48.260412 amazon-ssm-agent[1713]: 2024-12-13 14:25:47 INFO [MessagingDeliveryService] Starting message polling Dec 13 14:25:48.355069 amazon-ssm-agent[1713]: 2024-12-13 14:25:47 INFO [MessagingDeliveryService] Starting send replies to MDS Dec 13 14:25:48.449934 amazon-ssm-agent[1713]: 2024-12-13 14:25:47 INFO [instanceID=i-09bfd449b21ece4db] Starting association polling Dec 13 14:25:48.546328 amazon-ssm-agent[1713]: 2024-12-13 14:25:47 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Starting Dec 13 14:25:48.640413 amazon-ssm-agent[1713]: 2024-12-13 14:25:47 INFO [MessagingDeliveryService] [Association] Launching response handler Dec 13 14:25:48.736774 amazon-ssm-agent[1713]: 2024-12-13 14:25:47 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Initial processing Dec 13 14:25:48.833645 amazon-ssm-agent[1713]: 
2024-12-13 14:25:47 INFO [MessagingDeliveryService] [Association] Initializing association scheduling service Dec 13 14:25:48.837665 locksmithd[1791]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 14:25:48.927934 amazon-ssm-agent[1713]: 2024-12-13 14:25:47 INFO [MessagingDeliveryService] [Association] Association scheduling service initialized Dec 13 14:25:49.024220 amazon-ssm-agent[1713]: 2024-12-13 14:25:47 INFO [MessageGatewayService] Starting session document processing engine... Dec 13 14:25:49.099092 systemd[1]: Started kubelet.service. Dec 13 14:25:49.121222 amazon-ssm-agent[1713]: 2024-12-13 14:25:47 INFO [MessageGatewayService] [EngineProcessor] Starting Dec 13 14:25:49.216820 amazon-ssm-agent[1713]: 2024-12-13 14:25:47 INFO [MessageGatewayService] SSM Agent is trying to setup control channel for Session Manager module. Dec 13 14:25:49.313510 amazon-ssm-agent[1713]: 2024-12-13 14:25:47 INFO [MessageGatewayService] Setting up websocket for controlchannel for instance: i-09bfd449b21ece4db, requestId: e8db6002-a9aa-4093-92b2-3763e22441e0 Dec 13 14:25:49.412536 amazon-ssm-agent[1713]: 2024-12-13 14:25:47 INFO [OfflineService] Starting document processing engine... 
Dec 13 14:25:49.509640 amazon-ssm-agent[1713]: 2024-12-13 14:25:47 INFO [OfflineService] [EngineProcessor] Starting Dec 13 14:25:49.607144 amazon-ssm-agent[1713]: 2024-12-13 14:25:47 INFO [OfflineService] [EngineProcessor] Initial processing Dec 13 14:25:49.704680 amazon-ssm-agent[1713]: 2024-12-13 14:25:47 INFO [OfflineService] Starting message polling Dec 13 14:25:49.802302 amazon-ssm-agent[1713]: 2024-12-13 14:25:47 INFO [OfflineService] Starting send replies to MDS Dec 13 14:25:49.900639 amazon-ssm-agent[1713]: 2024-12-13 14:25:47 INFO [LongRunningPluginsManager] starting long running plugin manager Dec 13 14:25:49.999842 amazon-ssm-agent[1713]: 2024-12-13 14:25:47 INFO [LongRunningPluginsManager] there aren't any long running plugin to execute Dec 13 14:25:50.098368 amazon-ssm-agent[1713]: 2024-12-13 14:25:47 INFO [HealthCheck] HealthCheck reporting agent health. Dec 13 14:25:50.198108 amazon-ssm-agent[1713]: 2024-12-13 14:25:47 INFO [LongRunningPluginsManager] There are no long running plugins currently getting executed - skipping their healthcheck Dec 13 14:25:50.279908 kubelet[1921]: E1213 14:25:50.279749 1921 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:25:50.283138 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:25:50.283348 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:25:50.283736 systemd[1]: kubelet.service: Consumed 1.357s CPU time. Dec 13 14:25:50.296838 amazon-ssm-agent[1713]: 2024-12-13 14:25:47 INFO [MessageGatewayService] listening reply. 
Dec 13 14:25:50.395744 amazon-ssm-agent[1713]: 2024-12-13 14:25:47 INFO [StartupProcessor] Executing startup processor tasks Dec 13 14:25:50.494787 amazon-ssm-agent[1713]: 2024-12-13 14:25:47 INFO [StartupProcessor] Write to serial port: Amazon SSM Agent v2.3.1319.0 is running Dec 13 14:25:50.594124 amazon-ssm-agent[1713]: 2024-12-13 14:25:47 INFO [StartupProcessor] Write to serial port: OsProductName: Flatcar Container Linux by Kinvolk Dec 13 14:25:50.693549 amazon-ssm-agent[1713]: 2024-12-13 14:25:47 INFO [StartupProcessor] Write to serial port: OsVersion: 3510.3.6 Dec 13 14:25:50.793200 amazon-ssm-agent[1713]: 2024-12-13 14:25:48 INFO [MessageGatewayService] Opening websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-09bfd449b21ece4db?role=subscribe&stream=input Dec 13 14:25:50.893109 amazon-ssm-agent[1713]: 2024-12-13 14:25:48 INFO [MessageGatewayService] Successfully opened websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-09bfd449b21ece4db?role=subscribe&stream=input Dec 13 14:25:50.993074 amazon-ssm-agent[1713]: 2024-12-13 14:25:48 INFO [MessageGatewayService] Starting receiving message from control channel Dec 13 14:25:51.093494 amazon-ssm-agent[1713]: 2024-12-13 14:25:48 INFO [MessageGatewayService] [EngineProcessor] Initial processing Dec 13 14:25:51.772330 sshd_keygen[1738]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 14:25:51.805970 systemd[1]: Finished sshd-keygen.service. Dec 13 14:25:51.810255 systemd[1]: Starting issuegen.service... Dec 13 14:25:51.820471 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 14:25:51.820681 systemd[1]: Finished issuegen.service. Dec 13 14:25:51.823953 systemd[1]: Starting systemd-user-sessions.service... Dec 13 14:25:51.844333 systemd[1]: Finished systemd-user-sessions.service. Dec 13 14:25:51.852686 systemd[1]: Started getty@tty1.service. Dec 13 14:25:51.864920 systemd[1]: Started serial-getty@ttyS0.service. 
Dec 13 14:25:51.868055 systemd[1]: Reached target getty.target. Dec 13 14:25:51.869555 systemd[1]: Reached target multi-user.target. Dec 13 14:25:51.874296 systemd[1]: Starting systemd-update-utmp-runlevel.service... Dec 13 14:25:51.886324 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Dec 13 14:25:51.886536 systemd[1]: Finished systemd-update-utmp-runlevel.service. Dec 13 14:25:51.887998 systemd[1]: Startup finished in 722ms (kernel) + 8.407s (initrd) + 15.540s (userspace) = 24.670s. Dec 13 14:25:55.228684 systemd[1]: Created slice system-sshd.slice. Dec 13 14:25:55.230252 systemd[1]: Started sshd@0-172.31.28.77:22-139.178.89.65:60028.service. Dec 13 14:25:55.492766 amazon-ssm-agent[1713]: 2024-12-13 14:25:55 INFO [MessagingDeliveryService] [Association] No associations on boot. Requerying for associations after 30 seconds. Dec 13 14:25:55.563485 sshd[1942]: Accepted publickey for core from 139.178.89.65 port 60028 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk Dec 13 14:25:55.566384 sshd[1942]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:25:55.593545 systemd[1]: Created slice user-500.slice. Dec 13 14:25:55.598683 systemd[1]: Starting user-runtime-dir@500.service... Dec 13 14:25:55.611461 systemd-logind[1724]: New session 1 of user core. Dec 13 14:25:55.628079 systemd[1]: Finished user-runtime-dir@500.service. Dec 13 14:25:55.633259 systemd[1]: Starting user@500.service... Dec 13 14:25:55.645339 (systemd)[1945]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:25:55.758382 systemd[1945]: Queued start job for default target default.target. Dec 13 14:25:55.759085 systemd[1945]: Reached target paths.target. Dec 13 14:25:55.759120 systemd[1945]: Reached target sockets.target. Dec 13 14:25:55.759139 systemd[1945]: Reached target timers.target. Dec 13 14:25:55.759156 systemd[1945]: Reached target basic.target. 
Dec 13 14:25:55.759304 systemd[1]: Started user@500.service. Dec 13 14:25:55.760526 systemd[1]: Started session-1.scope. Dec 13 14:25:55.761153 systemd[1945]: Reached target default.target. Dec 13 14:25:55.761458 systemd[1945]: Startup finished in 101ms. Dec 13 14:25:55.908355 systemd[1]: Started sshd@1-172.31.28.77:22-139.178.89.65:60044.service. Dec 13 14:25:56.070621 sshd[1954]: Accepted publickey for core from 139.178.89.65 port 60044 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk Dec 13 14:25:56.072198 sshd[1954]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:25:56.077789 systemd[1]: Started session-2.scope. Dec 13 14:25:56.078300 systemd-logind[1724]: New session 2 of user core. Dec 13 14:25:56.216670 sshd[1954]: pam_unix(sshd:session): session closed for user core Dec 13 14:25:56.220046 systemd[1]: sshd@1-172.31.28.77:22-139.178.89.65:60044.service: Deactivated successfully. Dec 13 14:25:56.220952 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 14:25:56.221678 systemd-logind[1724]: Session 2 logged out. Waiting for processes to exit. Dec 13 14:25:56.222834 systemd-logind[1724]: Removed session 2. Dec 13 14:25:56.243118 systemd[1]: Started sshd@2-172.31.28.77:22-139.178.89.65:60054.service. Dec 13 14:25:56.409115 sshd[1960]: Accepted publickey for core from 139.178.89.65 port 60054 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk Dec 13 14:25:56.411170 sshd[1960]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:25:56.418804 systemd-logind[1724]: New session 3 of user core. Dec 13 14:25:56.420804 systemd[1]: Started session-3.scope. Dec 13 14:25:56.562772 sshd[1960]: pam_unix(sshd:session): session closed for user core Dec 13 14:25:56.566829 systemd[1]: sshd@2-172.31.28.77:22-139.178.89.65:60054.service: Deactivated successfully. Dec 13 14:25:56.584399 systemd[1]: session-3.scope: Deactivated successfully. 
Dec 13 14:25:56.589811 systemd-logind[1724]: Session 3 logged out. Waiting for processes to exit. Dec 13 14:25:56.611774 systemd[1]: Started sshd@3-172.31.28.77:22-139.178.89.65:60060.service. Dec 13 14:25:56.613243 systemd-logind[1724]: Removed session 3. Dec 13 14:25:56.795784 sshd[1966]: Accepted publickey for core from 139.178.89.65 port 60060 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk Dec 13 14:25:56.796832 sshd[1966]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:25:56.819317 systemd-logind[1724]: New session 4 of user core. Dec 13 14:25:56.819973 systemd[1]: Started session-4.scope. Dec 13 14:25:57.000396 sshd[1966]: pam_unix(sshd:session): session closed for user core Dec 13 14:25:57.003302 systemd[1]: sshd@3-172.31.28.77:22-139.178.89.65:60060.service: Deactivated successfully. Dec 13 14:25:57.004165 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 14:25:57.004866 systemd-logind[1724]: Session 4 logged out. Waiting for processes to exit. Dec 13 14:25:57.005801 systemd-logind[1724]: Removed session 4. Dec 13 14:25:57.038786 systemd[1]: Started sshd@4-172.31.28.77:22-139.178.89.65:60076.service. Dec 13 14:25:57.214368 sshd[1972]: Accepted publickey for core from 139.178.89.65 port 60076 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk Dec 13 14:25:57.216092 sshd[1972]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:25:57.221265 systemd-logind[1724]: New session 5 of user core. Dec 13 14:25:57.221333 systemd[1]: Started session-5.scope. Dec 13 14:25:57.365470 sudo[1975]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 14:25:57.365794 sudo[1975]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 14:25:57.380585 systemd[1]: Starting coreos-metadata.service... 
Dec 13 14:25:57.464231 coreos-metadata[1979]: Dec 13 14:25:57.464 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Dec 13 14:25:57.465074 coreos-metadata[1979]: Dec 13 14:25:57.464 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/instance-id: Attempt #1 Dec 13 14:25:57.465681 coreos-metadata[1979]: Dec 13 14:25:57.465 INFO Fetch successful Dec 13 14:25:57.465788 coreos-metadata[1979]: Dec 13 14:25:57.465 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/instance-type: Attempt #1 Dec 13 14:25:57.466164 coreos-metadata[1979]: Dec 13 14:25:57.466 INFO Fetch successful Dec 13 14:25:57.466262 coreos-metadata[1979]: Dec 13 14:25:57.466 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/local-ipv4: Attempt #1 Dec 13 14:25:57.467800 coreos-metadata[1979]: Dec 13 14:25:57.467 INFO Fetch successful Dec 13 14:25:57.467800 coreos-metadata[1979]: Dec 13 14:25:57.467 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-ipv4: Attempt #1 Dec 13 14:25:57.468336 coreos-metadata[1979]: Dec 13 14:25:57.468 INFO Fetch successful Dec 13 14:25:57.468336 coreos-metadata[1979]: Dec 13 14:25:57.468 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/placement/availability-zone: Attempt #1 Dec 13 14:25:57.468880 coreos-metadata[1979]: Dec 13 14:25:57.468 INFO Fetch successful Dec 13 14:25:57.468958 coreos-metadata[1979]: Dec 13 14:25:57.468 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/hostname: Attempt #1 Dec 13 14:25:57.469436 coreos-metadata[1979]: Dec 13 14:25:57.469 INFO Fetch successful Dec 13 14:25:57.469436 coreos-metadata[1979]: Dec 13 14:25:57.469 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-hostname: Attempt #1 Dec 13 14:25:57.470025 coreos-metadata[1979]: Dec 13 14:25:57.469 INFO Fetch successful Dec 13 14:25:57.470166 coreos-metadata[1979]: Dec 13 14:25:57.470 INFO Fetching http://169.254.169.254/2019-10-01/dynamic/instance-identity/document: Attempt #1 Dec 13 14:25:57.470703 
coreos-metadata[1979]: Dec 13 14:25:57.470 INFO Fetch successful Dec 13 14:25:57.479641 systemd[1]: Finished coreos-metadata.service. Dec 13 14:25:58.907549 systemd[1]: Stopped kubelet.service. Dec 13 14:25:58.907888 systemd[1]: kubelet.service: Consumed 1.357s CPU time. Dec 13 14:25:58.911208 systemd[1]: Starting kubelet.service... Dec 13 14:25:58.945808 systemd[1]: Reloading. Dec 13 14:25:59.143973 /usr/lib/systemd/system-generators/torcx-generator[2045]: time="2024-12-13T14:25:59Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:25:59.144020 /usr/lib/systemd/system-generators/torcx-generator[2045]: time="2024-12-13T14:25:59Z" level=info msg="torcx already run" Dec 13 14:25:59.278431 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:25:59.278454 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:25:59.309154 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:25:59.460167 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 13 14:25:59.460293 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 13 14:25:59.460563 systemd[1]: Stopped kubelet.service. Dec 13 14:25:59.468356 systemd[1]: Starting kubelet.service... Dec 13 14:26:00.108213 systemd[1]: Started kubelet.service. 
Dec 13 14:26:00.229861 kubelet[2095]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 14:26:00.230218 kubelet[2095]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 14:26:00.230271 kubelet[2095]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 14:26:00.230406 kubelet[2095]: I1213 14:26:00.230379 2095 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 14:26:00.921348 kubelet[2095]: I1213 14:26:00.921264 2095 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 14:26:00.921348 kubelet[2095]: I1213 14:26:00.921348 2095 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 14:26:00.922094 kubelet[2095]: I1213 14:26:00.922065 2095 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 14:26:00.972130 kubelet[2095]: I1213 14:26:00.972067 2095 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 14:26:01.001444 kubelet[2095]: I1213 14:26:01.001417 2095 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 14:26:01.001898 kubelet[2095]: I1213 14:26:01.001877 2095 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 14:26:01.002098 kubelet[2095]: I1213 14:26:01.002076 2095 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 14:26:01.002235 kubelet[2095]: I1213 14:26:01.002110 2095 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 14:26:01.002235 kubelet[2095]: I1213 14:26:01.002125 2095 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 14:26:01.004010 kubelet[2095]: I1213 
14:26:01.003977 2095 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:26:01.004145 kubelet[2095]: I1213 14:26:01.004126 2095 kubelet.go:396] "Attempting to sync node with API server" Dec 13 14:26:01.004222 kubelet[2095]: I1213 14:26:01.004152 2095 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 14:26:01.004222 kubelet[2095]: I1213 14:26:01.004207 2095 kubelet.go:312] "Adding apiserver pod source" Dec 13 14:26:01.004316 kubelet[2095]: I1213 14:26:01.004227 2095 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 14:26:01.004786 kubelet[2095]: E1213 14:26:01.004767 2095 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:26:01.004930 kubelet[2095]: E1213 14:26:01.004919 2095 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:26:01.006102 kubelet[2095]: I1213 14:26:01.006060 2095 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 14:26:01.009346 kubelet[2095]: I1213 14:26:01.009318 2095 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 14:26:01.012132 kubelet[2095]: W1213 14:26:01.012106 2095 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Dec 13 14:26:01.013254 kubelet[2095]: W1213 14:26:01.013234 2095 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes "172.31.28.77" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Dec 13 14:26:01.013402 kubelet[2095]: E1213 14:26:01.013390 2095 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.31.28.77" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Dec 13 14:26:01.013511 kubelet[2095]: I1213 14:26:01.013487 2095 server.go:1256] "Started kubelet" Dec 13 14:26:01.013681 kubelet[2095]: W1213 14:26:01.013668 2095 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Dec 13 14:26:01.013777 kubelet[2095]: E1213 14:26:01.013765 2095 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Dec 13 14:26:01.013935 kubelet[2095]: I1213 14:26:01.013908 2095 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 14:26:01.017645 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Dec 13 14:26:01.018218 kubelet[2095]: I1213 14:26:01.017791 2095 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 14:26:01.018707 kubelet[2095]: I1213 14:26:01.018680 2095 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 13 14:26:01.018983 kubelet[2095]: I1213 14:26:01.018962 2095 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 13 14:26:01.022136 kubelet[2095]: I1213 14:26:01.021948 2095 server.go:461] "Adding debug handlers to kubelet server"
Dec 13 14:26:01.028097 kubelet[2095]: I1213 14:26:01.027729 2095 volume_manager.go:291] "Starting Kubelet Volume Manager"
Dec 13 14:26:01.028097 kubelet[2095]: I1213 14:26:01.027862 2095 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Dec 13 14:26:01.028097 kubelet[2095]: I1213 14:26:01.027919 2095 reconciler_new.go:29] "Reconciler: start to sync state"
Dec 13 14:26:01.028887 kubelet[2095]: E1213 14:26:01.028326 2095 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 13 14:26:01.031818 kubelet[2095]: E1213 14:26:01.031786 2095 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.28.77.1810c2ba1539dfb9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.28.77,UID:172.31.28.77,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172.31.28.77,},FirstTimestamp:2024-12-13 14:26:01.013460921 +0000 UTC m=+0.873767207,LastTimestamp:2024-12-13 14:26:01.013460921 +0000 UTC m=+0.873767207,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.28.77,}"
Dec 13 14:26:01.034224 kubelet[2095]: I1213 14:26:01.034176 2095 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 13 14:26:01.036103 kubelet[2095]: I1213 14:26:01.036080 2095 factory.go:221] Registration of the containerd container factory successfully
Dec 13 14:26:01.036103 kubelet[2095]: I1213 14:26:01.036100 2095 factory.go:221] Registration of the systemd container factory successfully
Dec 13 14:26:01.064356 kubelet[2095]: E1213 14:26:01.062964 2095 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172.31.28.77\" not found" node="172.31.28.77"
Dec 13 14:26:01.067844 kubelet[2095]: I1213 14:26:01.067815 2095 cpu_manager.go:214] "Starting CPU manager" policy="none"
Dec 13 14:26:01.067844 kubelet[2095]: I1213 14:26:01.067843 2095 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Dec 13 14:26:01.068029 kubelet[2095]: I1213 14:26:01.067861 2095 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 14:26:01.099533 kubelet[2095]: I1213 14:26:01.099181 2095 policy_none.go:49] "None policy: Start"
Dec 13 14:26:01.103370 kubelet[2095]: I1213 14:26:01.103345 2095 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 13 14:26:01.103739 kubelet[2095]: I1213 14:26:01.103728 2095 state_mem.go:35] "Initializing new in-memory state store"
Dec 13 14:26:01.125734 systemd[1]: Created slice kubepods.slice.
Dec 13 14:26:01.129920 kubelet[2095]: I1213 14:26:01.128946 2095 kubelet_node_status.go:73] "Attempting to register node" node="172.31.28.77"
Dec 13 14:26:01.133155 systemd[1]: Created slice kubepods-burstable.slice.
Dec 13 14:26:01.136837 systemd[1]: Created slice kubepods-besteffort.slice.
Dec 13 14:26:01.141230 kubelet[2095]: I1213 14:26:01.141203 2095 kubelet_node_status.go:76] "Successfully registered node" node="172.31.28.77"
Dec 13 14:26:01.143887 kubelet[2095]: I1213 14:26:01.143858 2095 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 13 14:26:01.144135 kubelet[2095]: I1213 14:26:01.144119 2095 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 13 14:26:01.146378 kubelet[2095]: E1213 14:26:01.146359 2095 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.28.77\" not found"
Dec 13 14:26:01.164692 kubelet[2095]: E1213 14:26:01.164662 2095 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.28.77\" not found"
Dec 13 14:26:01.265128 kubelet[2095]: E1213 14:26:01.265002 2095 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.28.77\" not found"
Dec 13 14:26:01.298553 kubelet[2095]: I1213 14:26:01.298512 2095 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 13 14:26:01.304308 kubelet[2095]: I1213 14:26:01.304277 2095 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 13 14:26:01.304308 kubelet[2095]: I1213 14:26:01.304318 2095 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 13 14:26:01.304508 kubelet[2095]: I1213 14:26:01.304339 2095 kubelet.go:2329] "Starting kubelet main sync loop"
Dec 13 14:26:01.304508 kubelet[2095]: E1213 14:26:01.304411 2095 kubelet.go:2353] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Dec 13 14:26:01.366872 kubelet[2095]: E1213 14:26:01.366828 2095 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.28.77\" not found"
Dec 13 14:26:01.468021 kubelet[2095]: E1213 14:26:01.467968 2095 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.28.77\" not found"
Dec 13 14:26:01.568783 kubelet[2095]: E1213 14:26:01.568711 2095 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.28.77\" not found"
Dec 13 14:26:01.665675 sudo[1975]: pam_unix(sudo:session): session closed for user root
Dec 13 14:26:01.669795 kubelet[2095]: E1213 14:26:01.669670 2095 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.28.77\" not found"
Dec 13 14:26:01.690894 sshd[1972]: pam_unix(sshd:session): session closed for user core
Dec 13 14:26:01.698143 systemd[1]: sshd@4-172.31.28.77:22-139.178.89.65:60076.service: Deactivated successfully.
Dec 13 14:26:01.699133 systemd[1]: session-5.scope: Deactivated successfully.
Dec 13 14:26:01.700161 systemd-logind[1724]: Session 5 logged out. Waiting for processes to exit.
Dec 13 14:26:01.706384 systemd-logind[1724]: Removed session 5.
Dec 13 14:26:01.770416 kubelet[2095]: E1213 14:26:01.770350 2095 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.28.77\" not found"
Dec 13 14:26:01.871212 kubelet[2095]: E1213 14:26:01.871067 2095 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.28.77\" not found"
Dec 13 14:26:01.927349 kubelet[2095]: I1213 14:26:01.927297 2095 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Dec 13 14:26:01.927547 kubelet[2095]: W1213 14:26:01.927511 2095 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.CSIDriver ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received
Dec 13 14:26:01.927621 kubelet[2095]: W1213 14:26:01.927555 2095 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.RuntimeClass ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received
Dec 13 14:26:01.972234 kubelet[2095]: E1213 14:26:01.972163 2095 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.28.77\" not found"
Dec 13 14:26:02.005519 kubelet[2095]: E1213 14:26:02.005463 2095 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:26:02.072999 kubelet[2095]: E1213 14:26:02.072945 2095 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.28.77\" not found"
Dec 13 14:26:02.174214 kubelet[2095]: E1213 14:26:02.174063 2095 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.28.77\" not found"
Dec 13 14:26:02.274810 kubelet[2095]: E1213 14:26:02.274752 2095 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.28.77\" not found"
Dec 13 14:26:02.375045 kubelet[2095]: E1213 14:26:02.374998 2095 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.28.77\" not found"
Dec 13 14:26:02.476087 kubelet[2095]: E1213 14:26:02.475969 2095 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.28.77\" not found"
Dec 13 14:26:02.576932 kubelet[2095]: E1213 14:26:02.576881 2095 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.28.77\" not found"
Dec 13 14:26:02.679198 kubelet[2095]: I1213 14:26:02.679151 2095 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.2.0/24"
Dec 13 14:26:02.679643 env[1731]: time="2024-12-13T14:26:02.679588103Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Dec 13 14:26:02.680071 kubelet[2095]: I1213 14:26:02.679812 2095 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.2.0/24"
Dec 13 14:26:03.005813 kubelet[2095]: E1213 14:26:03.005772 2095 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:26:03.006001 kubelet[2095]: I1213 14:26:03.005777 2095 apiserver.go:52] "Watching apiserver"
Dec 13 14:26:03.017877 kubelet[2095]: I1213 14:26:03.017825 2095 topology_manager.go:215] "Topology Admit Handler" podUID="ccdebb72-4477-4c73-a618-5761dc2e9620" podNamespace="kube-system" podName="cilium-nfbtf"
Dec 13 14:26:03.018061 kubelet[2095]: I1213 14:26:03.017969 2095 topology_manager.go:215] "Topology Admit Handler" podUID="fa9acc6a-4295-48c6-988a-ee551d91ab4f" podNamespace="kube-system" podName="kube-proxy-9djzn"
Dec 13 14:26:03.029938 kubelet[2095]: I1213 14:26:03.029355 2095 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Dec 13 14:26:03.041754 kubelet[2095]: I1213 14:26:03.041723 2095 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ccdebb72-4477-4c73-a618-5761dc2e9620-xtables-lock\") pod \"cilium-nfbtf\" (UID: \"ccdebb72-4477-4c73-a618-5761dc2e9620\") " pod="kube-system/cilium-nfbtf"
Dec 13 14:26:03.041993 kubelet[2095]: I1213 14:26:03.041976 2095 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ccdebb72-4477-4c73-a618-5761dc2e9620-clustermesh-secrets\") pod \"cilium-nfbtf\" (UID: \"ccdebb72-4477-4c73-a618-5761dc2e9620\") " pod="kube-system/cilium-nfbtf"
Dec 13 14:26:03.042143 kubelet[2095]: I1213 14:26:03.042132 2095 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fa9acc6a-4295-48c6-988a-ee551d91ab4f-lib-modules\") pod \"kube-proxy-9djzn\" (UID: \"fa9acc6a-4295-48c6-988a-ee551d91ab4f\") " pod="kube-system/kube-proxy-9djzn"
Dec 13 14:26:03.042297 kubelet[2095]: I1213 14:26:03.042286 2095 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ccdebb72-4477-4c73-a618-5761dc2e9620-bpf-maps\") pod \"cilium-nfbtf\" (UID: \"ccdebb72-4477-4c73-a618-5761dc2e9620\") " pod="kube-system/cilium-nfbtf"
Dec 13 14:26:03.042421 kubelet[2095]: I1213 14:26:03.042411 2095 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ccdebb72-4477-4c73-a618-5761dc2e9620-hostproc\") pod \"cilium-nfbtf\" (UID: \"ccdebb72-4477-4c73-a618-5761dc2e9620\") " pod="kube-system/cilium-nfbtf"
Dec 13 14:26:03.043151 kubelet[2095]: I1213 14:26:03.043132 2095 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ccdebb72-4477-4c73-a618-5761dc2e9620-cilium-cgroup\") pod \"cilium-nfbtf\" (UID: \"ccdebb72-4477-4c73-a618-5761dc2e9620\") " pod="kube-system/cilium-nfbtf"
Dec 13 14:26:03.043364 kubelet[2095]: I1213 14:26:03.043332 2095 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ccdebb72-4477-4c73-a618-5761dc2e9620-cilium-run\") pod \"cilium-nfbtf\" (UID: \"ccdebb72-4477-4c73-a618-5761dc2e9620\") " pod="kube-system/cilium-nfbtf"
Dec 13 14:26:03.043493 kubelet[2095]: I1213 14:26:03.043481 2095 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ccdebb72-4477-4c73-a618-5761dc2e9620-etc-cni-netd\") pod \"cilium-nfbtf\" (UID: \"ccdebb72-4477-4c73-a618-5761dc2e9620\") " pod="kube-system/cilium-nfbtf"
Dec 13 14:26:03.043741 kubelet[2095]: I1213 14:26:03.043610 2095 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ccdebb72-4477-4c73-a618-5761dc2e9620-hubble-tls\") pod \"cilium-nfbtf\" (UID: \"ccdebb72-4477-4c73-a618-5761dc2e9620\") " pod="kube-system/cilium-nfbtf"
Dec 13 14:26:03.043878 kubelet[2095]: I1213 14:26:03.043864 2095 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ccdebb72-4477-4c73-a618-5761dc2e9620-host-proc-sys-net\") pod \"cilium-nfbtf\" (UID: \"ccdebb72-4477-4c73-a618-5761dc2e9620\") " pod="kube-system/cilium-nfbtf"
Dec 13 14:26:03.044004 kubelet[2095]: I1213 14:26:03.043993 2095 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ccdebb72-4477-4c73-a618-5761dc2e9620-host-proc-sys-kernel\") pod \"cilium-nfbtf\" (UID: \"ccdebb72-4477-4c73-a618-5761dc2e9620\") " pod="kube-system/cilium-nfbtf"
Dec 13 14:26:03.044146 kubelet[2095]: I1213 14:26:03.044135 2095 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rnw2\" (UniqueName: \"kubernetes.io/projected/ccdebb72-4477-4c73-a618-5761dc2e9620-kube-api-access-6rnw2\") pod \"cilium-nfbtf\" (UID: \"ccdebb72-4477-4c73-a618-5761dc2e9620\") " pod="kube-system/cilium-nfbtf"
Dec 13 14:26:03.044328 kubelet[2095]: I1213 14:26:03.044292 2095 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fa9acc6a-4295-48c6-988a-ee551d91ab4f-kube-proxy\") pod \"kube-proxy-9djzn\" (UID: \"fa9acc6a-4295-48c6-988a-ee551d91ab4f\") " pod="kube-system/kube-proxy-9djzn"
Dec 13 14:26:03.044631 systemd[1]: Created slice kubepods-besteffort-podfa9acc6a_4295_48c6_988a_ee551d91ab4f.slice.
Dec 13 14:26:03.052742 kubelet[2095]: I1213 14:26:03.052716 2095 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ccdebb72-4477-4c73-a618-5761dc2e9620-cni-path\") pod \"cilium-nfbtf\" (UID: \"ccdebb72-4477-4c73-a618-5761dc2e9620\") " pod="kube-system/cilium-nfbtf"
Dec 13 14:26:03.053021 kubelet[2095]: I1213 14:26:03.052987 2095 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ccdebb72-4477-4c73-a618-5761dc2e9620-lib-modules\") pod \"cilium-nfbtf\" (UID: \"ccdebb72-4477-4c73-a618-5761dc2e9620\") " pod="kube-system/cilium-nfbtf"
Dec 13 14:26:03.053180 kubelet[2095]: I1213 14:26:03.053167 2095 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ccdebb72-4477-4c73-a618-5761dc2e9620-cilium-config-path\") pod \"cilium-nfbtf\" (UID: \"ccdebb72-4477-4c73-a618-5761dc2e9620\") " pod="kube-system/cilium-nfbtf"
Dec 13 14:26:03.053409 kubelet[2095]: I1213 14:26:03.053394 2095 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fa9acc6a-4295-48c6-988a-ee551d91ab4f-xtables-lock\") pod \"kube-proxy-9djzn\" (UID: \"fa9acc6a-4295-48c6-988a-ee551d91ab4f\") " pod="kube-system/kube-proxy-9djzn"
Dec 13 14:26:03.053556 kubelet[2095]: I1213 14:26:03.053545 2095 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntzt7\" (UniqueName: \"kubernetes.io/projected/fa9acc6a-4295-48c6-988a-ee551d91ab4f-kube-api-access-ntzt7\") pod \"kube-proxy-9djzn\" (UID: \"fa9acc6a-4295-48c6-988a-ee551d91ab4f\") " pod="kube-system/kube-proxy-9djzn"
Dec 13 14:26:03.067363 systemd[1]: Created slice kubepods-burstable-podccdebb72_4477_4c73_a618_5761dc2e9620.slice.
Dec 13 14:26:03.365749 env[1731]: time="2024-12-13T14:26:03.365489836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9djzn,Uid:fa9acc6a-4295-48c6-988a-ee551d91ab4f,Namespace:kube-system,Attempt:0,}"
Dec 13 14:26:03.391967 env[1731]: time="2024-12-13T14:26:03.390777124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nfbtf,Uid:ccdebb72-4477-4c73-a618-5761dc2e9620,Namespace:kube-system,Attempt:0,}"
Dec 13 14:26:04.008097 kubelet[2095]: E1213 14:26:04.008052 2095 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:26:04.024454 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2746208409.mount: Deactivated successfully.
Dec 13 14:26:04.036260 env[1731]: time="2024-12-13T14:26:04.036201293Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:26:04.038038 env[1731]: time="2024-12-13T14:26:04.037992058Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:26:04.043172 env[1731]: time="2024-12-13T14:26:04.043113532Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:26:04.044705 env[1731]: time="2024-12-13T14:26:04.044668744Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:26:04.053140 env[1731]: time="2024-12-13T14:26:04.053085921Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:26:04.054955 env[1731]: time="2024-12-13T14:26:04.054906535Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:26:04.058088 env[1731]: time="2024-12-13T14:26:04.058039032Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:26:04.061954 env[1731]: time="2024-12-13T14:26:04.061906601Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:26:04.133835 env[1731]: time="2024-12-13T14:26:04.132811620Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:26:04.133835 env[1731]: time="2024-12-13T14:26:04.133513575Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:26:04.133835 env[1731]: time="2024-12-13T14:26:04.133559749Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:26:04.134135 env[1731]: time="2024-12-13T14:26:04.133908785Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6d01257bde97f98174b4fd953f41dcafc834718776cd0c1eae99626edae9926d pid=2156 runtime=io.containerd.runc.v2
Dec 13 14:26:04.134506 env[1731]: time="2024-12-13T14:26:04.134451094Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:26:04.134662 env[1731]: time="2024-12-13T14:26:04.134632840Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:26:04.137349 env[1731]: time="2024-12-13T14:26:04.137256022Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:26:04.137775 env[1731]: time="2024-12-13T14:26:04.137701806Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bc25fca941bfeb9e77bb55c0200633c725548f76e24aa8e9d6337587856b5172 pid=2154 runtime=io.containerd.runc.v2
Dec 13 14:26:04.187943 systemd[1]: Started cri-containerd-bc25fca941bfeb9e77bb55c0200633c725548f76e24aa8e9d6337587856b5172.scope.
Dec 13 14:26:04.216072 systemd[1]: Started cri-containerd-6d01257bde97f98174b4fd953f41dcafc834718776cd0c1eae99626edae9926d.scope.
Dec 13 14:26:04.239764 env[1731]: time="2024-12-13T14:26:04.239718412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9djzn,Uid:fa9acc6a-4295-48c6-988a-ee551d91ab4f,Namespace:kube-system,Attempt:0,} returns sandbox id \"bc25fca941bfeb9e77bb55c0200633c725548f76e24aa8e9d6337587856b5172\""
Dec 13 14:26:04.243347 env[1731]: time="2024-12-13T14:26:04.243306756Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\""
Dec 13 14:26:04.250482 env[1731]: time="2024-12-13T14:26:04.250427272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nfbtf,Uid:ccdebb72-4477-4c73-a618-5761dc2e9620,Namespace:kube-system,Attempt:0,} returns sandbox id \"6d01257bde97f98174b4fd953f41dcafc834718776cd0c1eae99626edae9926d\""
Dec 13 14:26:05.009032 kubelet[2095]: E1213 14:26:05.008974 2095 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:26:05.500728 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2690285994.mount: Deactivated successfully.
Dec 13 14:26:06.010015 kubelet[2095]: E1213 14:26:06.009944 2095 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:26:06.371139 env[1731]: time="2024-12-13T14:26:06.371083002Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:06.373683 env[1731]: time="2024-12-13T14:26:06.373641216Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:06.375920 env[1731]: time="2024-12-13T14:26:06.375881166Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:06.377828 env[1731]: time="2024-12-13T14:26:06.377778762Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:06.378408 env[1731]: time="2024-12-13T14:26:06.378374395Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\"" Dec 13 14:26:06.380063 env[1731]: time="2024-12-13T14:26:06.380031624Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 13 14:26:06.381458 env[1731]: time="2024-12-13T14:26:06.381421386Z" level=info msg="CreateContainer within sandbox \"bc25fca941bfeb9e77bb55c0200633c725548f76e24aa8e9d6337587856b5172\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 14:26:06.401768 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount1133135624.mount: Deactivated successfully. Dec 13 14:26:06.410684 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount198164704.mount: Deactivated successfully. Dec 13 14:26:06.440179 env[1731]: time="2024-12-13T14:26:06.440126658Z" level=info msg="CreateContainer within sandbox \"bc25fca941bfeb9e77bb55c0200633c725548f76e24aa8e9d6337587856b5172\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ff99bc4d920cf7650ef5fa2efd7aafc484c98f9b02cdede64e142894fd317ffc\"" Dec 13 14:26:06.441505 env[1731]: time="2024-12-13T14:26:06.441471518Z" level=info msg="StartContainer for \"ff99bc4d920cf7650ef5fa2efd7aafc484c98f9b02cdede64e142894fd317ffc\"" Dec 13 14:26:06.467086 systemd[1]: Started cri-containerd-ff99bc4d920cf7650ef5fa2efd7aafc484c98f9b02cdede64e142894fd317ffc.scope. Dec 13 14:26:06.524615 env[1731]: time="2024-12-13T14:26:06.524344648Z" level=info msg="StartContainer for \"ff99bc4d920cf7650ef5fa2efd7aafc484c98f9b02cdede64e142894fd317ffc\" returns successfully" Dec 13 14:26:07.010259 kubelet[2095]: E1213 14:26:07.010219 2095 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:26:08.011359 kubelet[2095]: E1213 14:26:08.011320 2095 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:26:08.096161 env[1731]: time="2024-12-13T14:26:08.087601109Z" level=error msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" failed" error="failed to pull and unpack image \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\": failed to copy: httpReadSeeker: failed open: failed to do request: Get 
\"https://cdn01.quay.io/quayio-production-s3/sha256/c4/c46932a78ea5a22a2cf94df8f536a022d6641fc90a5cc3517c5aa4c45db585d3?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAI5LUAQGPZRPNKSJA%2F20241213%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20241213T142608Z&X-Amz-Expires=600&X-Amz-SignedHeaders=host&X-Amz-Signature=2a3afc894b7653bd1101e7d67ffaa0ace5b0e027caea84f9f4493121072169b4®ion=us-east-1&namespace=cilium&repo_name=cilium&akamai_signature=exp=1734100868~hmac=6f9c640169e64c238400a06e520132b4eaf8c76050c901f992a82d7c73828d4e\": dial tcp: lookup cdn01.quay.io: no such host" Dec 13 14:26:08.096695 kubelet[2095]: E1213 14:26:08.096449 2095 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\": failed to copy: httpReadSeeker: failed open: failed to do request: Get \"https://cdn01.quay.io/quayio-production-s3/sha256/c4/c46932a78ea5a22a2cf94df8f536a022d6641fc90a5cc3517c5aa4c45db585d3?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAI5LUAQGPZRPNKSJA%2F20241213%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20241213T142608Z&X-Amz-Expires=600&X-Amz-SignedHeaders=host&X-Amz-Signature=2a3afc894b7653bd1101e7d67ffaa0ace5b0e027caea84f9f4493121072169b4®ion=us-east-1&namespace=cilium&repo_name=cilium&akamai_signature=exp=1734100868~hmac=6f9c640169e64c238400a06e520132b4eaf8c76050c901f992a82d7c73828d4e\": dial tcp: lookup cdn01.quay.io: no such host" image="quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5" Dec 13 14:26:08.096695 kubelet[2095]: E1213 14:26:08.096547 2095 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\": failed to copy: httpReadSeeker: failed open: failed to do request: Get 
\"https://cdn01.quay.io/quayio-production-s3/sha256/c4/c46932a78ea5a22a2cf94df8f536a022d6641fc90a5cc3517c5aa4c45db585d3?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAI5LUAQGPZRPNKSJA%2F20241213%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20241213T142608Z&X-Amz-Expires=600&X-Amz-SignedHeaders=host&X-Amz-Signature=2a3afc894b7653bd1101e7d67ffaa0ace5b0e027caea84f9f4493121072169b4®ion=us-east-1&namespace=cilium&repo_name=cilium&akamai_signature=exp=1734100868~hmac=6f9c640169e64c238400a06e520132b4eaf8c76050c901f992a82d7c73828d4e\": dial tcp: lookup cdn01.quay.io: no such host" image="quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5" Dec 13 14:26:08.096999 kubelet[2095]: E1213 14:26:08.096980 2095 kuberuntime_manager.go:1262] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Dec 13 14:26:08.096999 kubelet[2095]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Dec 13 14:26:08.096999 kubelet[2095]: rm /hostbin/cilium-mount Dec 13 14:26:08.097119 kubelet[2095]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-6rnw2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:unconfined_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-nfbtf_kube-system(ccdebb72-4477-4c73-a618-5761dc2e9620): ErrImagePull: failed to pull and unpack image "quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5": failed to copy: httpReadSeeker: failed open: failed to do request: Get 
"https://cdn01.quay.io/quayio-production-s3/sha256/c4/c46932a78ea5a22a2cf94df8f536a022d6641fc90a5cc3517c5aa4c45db585d3?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAI5LUAQGPZRPNKSJA%2F20241213%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20241213T142608Z&X-Amz-Expires=600&X-Amz-SignedHeaders=host&X-Amz-Signature=2a3afc894b7653bd1101e7d67ffaa0ace5b0e027caea84f9f4493121072169b4&region=us-east-1&namespace=cilium&repo_name=cilium&akamai_signature=exp=1734100868~hmac=6f9c640169e64c238400a06e520132b4eaf8c76050c901f992a82d7c73828d4e": dial tcp: lookup cdn01.quay.io: no such host Dec 13 14:26:08.097119 kubelet[2095]: E1213 14:26:08.097090 2095 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with ErrImagePull: \"failed to pull and unpack image \\\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\\\": failed to copy: httpReadSeeker: failed open: failed to do request: Get \\\"https://cdn01.quay.io/quayio-production-s3/sha256/c4/c46932a78ea5a22a2cf94df8f536a022d6641fc90a5cc3517c5aa4c45db585d3?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAI5LUAQGPZRPNKSJA%2F20241213%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20241213T142608Z&X-Amz-Expires=600&X-Amz-SignedHeaders=host&X-Amz-Signature=2a3afc894b7653bd1101e7d67ffaa0ace5b0e027caea84f9f4493121072169b4&region=us-east-1&namespace=cilium&repo_name=cilium&akamai_signature=exp=1734100868~hmac=6f9c640169e64c238400a06e520132b4eaf8c76050c901f992a82d7c73828d4e\\\": dial tcp: lookup cdn01.quay.io: no such host\"" pod="kube-system/cilium-nfbtf" podUID="ccdebb72-4477-4c73-a618-5761dc2e9620" Dec 13 14:26:08.325952 kubelet[2095]: E1213 14:26:08.325912 2095 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\\\"\"" 
pod="kube-system/cilium-nfbtf" podUID="ccdebb72-4477-4c73-a618-5761dc2e9620" Dec 13 14:26:08.353158 kubelet[2095]: I1213 14:26:08.353125 2095 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-9djzn" podStartSLOduration=5.216476841 podStartE2EDuration="7.353038536s" podCreationTimestamp="2024-12-13 14:26:01 +0000 UTC" firstStartedPulling="2024-12-13 14:26:04.242309794 +0000 UTC m=+4.102616002" lastFinishedPulling="2024-12-13 14:26:06.378871479 +0000 UTC m=+6.239177697" observedRunningTime="2024-12-13 14:26:07.351132627 +0000 UTC m=+7.211438848" watchObservedRunningTime="2024-12-13 14:26:08.353038536 +0000 UTC m=+8.213344760" Dec 13 14:26:09.011495 kubelet[2095]: E1213 14:26:09.011438 2095 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:26:10.012863 kubelet[2095]: E1213 14:26:10.012779 2095 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:26:11.013001 kubelet[2095]: E1213 14:26:11.012958 2095 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:26:12.013263 kubelet[2095]: E1213 14:26:12.013226 2095 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:26:13.013794 kubelet[2095]: E1213 14:26:13.013756 2095 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:26:14.014649 kubelet[2095]: E1213 14:26:14.014606 2095 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:26:15.015129 kubelet[2095]: E1213 14:26:15.015076 2095 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:26:16.015509 kubelet[2095]: E1213 
14:26:16.015466 2095 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:26:17.016347 kubelet[2095]: E1213 14:26:17.016305 2095 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:26:17.994600 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Dec 13 14:26:18.016721 kubelet[2095]: E1213 14:26:18.016666 2095 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:26:19.017314 kubelet[2095]: E1213 14:26:19.017270 2095 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:26:19.307997 env[1731]: time="2024-12-13T14:26:19.307949449Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 13 14:26:20.017678 kubelet[2095]: E1213 14:26:20.017628 2095 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:26:21.005041 kubelet[2095]: E1213 14:26:21.004760 2095 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:26:21.018219 kubelet[2095]: E1213 14:26:21.018041 2095 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:26:22.018881 kubelet[2095]: E1213 14:26:22.018728 2095 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:26:23.020003 kubelet[2095]: E1213 14:26:23.019906 2095 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:26:24.020432 kubelet[2095]: E1213 14:26:24.020372 2095 file_linux.go:61] "Unable to read config path" err="path does 
not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:26:25.020830 kubelet[2095]: E1213 14:26:25.020754 2095 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:26:25.523152 amazon-ssm-agent[1713]: 2024-12-13 14:26:25 INFO [MessagingDeliveryService] [Association] Schedule manager refreshed with 0 associations, 0 new associations associated Dec 13 14:26:26.021255 kubelet[2095]: E1213 14:26:26.021099 2095 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:26:27.022445 kubelet[2095]: E1213 14:26:27.022321 2095 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:26:27.721298 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2371002211.mount: Deactivated successfully. Dec 13 14:26:28.023829 kubelet[2095]: E1213 14:26:28.023346 2095 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:26:29.023591 kubelet[2095]: E1213 14:26:29.023552 2095 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:26:30.024710 kubelet[2095]: E1213 14:26:30.024670 2095 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:26:31.025692 kubelet[2095]: E1213 14:26:31.025606 2095 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:26:31.380773 env[1731]: time="2024-12-13T14:26:31.380723134Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:31.383786 env[1731]: 
time="2024-12-13T14:26:31.383744266Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:31.386470 env[1731]: time="2024-12-13T14:26:31.386291543Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:31.387222 env[1731]: time="2024-12-13T14:26:31.387173046Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Dec 13 14:26:31.390521 env[1731]: time="2024-12-13T14:26:31.390482835Z" level=info msg="CreateContainer within sandbox \"6d01257bde97f98174b4fd953f41dcafc834718776cd0c1eae99626edae9926d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 14:26:31.412785 env[1731]: time="2024-12-13T14:26:31.412662074Z" level=info msg="CreateContainer within sandbox \"6d01257bde97f98174b4fd953f41dcafc834718776cd0c1eae99626edae9926d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"93df7fb8078e990efe5a5959848752e7ba0d10d2fcabd6875526e3192e52e90e\"" Dec 13 14:26:31.414340 env[1731]: time="2024-12-13T14:26:31.414291046Z" level=info msg="StartContainer for \"93df7fb8078e990efe5a5959848752e7ba0d10d2fcabd6875526e3192e52e90e\"" Dec 13 14:26:31.453159 systemd[1]: run-containerd-runc-k8s.io-93df7fb8078e990efe5a5959848752e7ba0d10d2fcabd6875526e3192e52e90e-runc.3vci4r.mount: Deactivated successfully. Dec 13 14:26:31.462075 systemd[1]: Started cri-containerd-93df7fb8078e990efe5a5959848752e7ba0d10d2fcabd6875526e3192e52e90e.scope. 
Dec 13 14:26:31.493127 env[1731]: time="2024-12-13T14:26:31.493070702Z" level=info msg="StartContainer for \"93df7fb8078e990efe5a5959848752e7ba0d10d2fcabd6875526e3192e52e90e\" returns successfully" Dec 13 14:26:31.505000 systemd[1]: cri-containerd-93df7fb8078e990efe5a5959848752e7ba0d10d2fcabd6875526e3192e52e90e.scope: Deactivated successfully. Dec 13 14:26:31.997924 env[1731]: time="2024-12-13T14:26:31.997852294Z" level=info msg="shim disconnected" id=93df7fb8078e990efe5a5959848752e7ba0d10d2fcabd6875526e3192e52e90e Dec 13 14:26:31.997924 env[1731]: time="2024-12-13T14:26:31.997920200Z" level=warning msg="cleaning up after shim disconnected" id=93df7fb8078e990efe5a5959848752e7ba0d10d2fcabd6875526e3192e52e90e namespace=k8s.io Dec 13 14:26:31.998279 env[1731]: time="2024-12-13T14:26:31.997934087Z" level=info msg="cleaning up dead shim" Dec 13 14:26:32.026614 kubelet[2095]: E1213 14:26:32.026470 2095 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:26:32.027328 env[1731]: time="2024-12-13T14:26:32.026678058Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:26:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2435 runtime=io.containerd.runc.v2\n" Dec 13 14:26:32.218097 update_engine[1725]: I1213 14:26:32.218024 1725 update_attempter.cc:509] Updating boot flags... Dec 13 14:26:32.381980 env[1731]: time="2024-12-13T14:26:32.381932075Z" level=info msg="CreateContainer within sandbox \"6d01257bde97f98174b4fd953f41dcafc834718776cd0c1eae99626edae9926d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 14:26:32.409003 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-93df7fb8078e990efe5a5959848752e7ba0d10d2fcabd6875526e3192e52e90e-rootfs.mount: Deactivated successfully. 
Dec 13 14:26:32.420240 env[1731]: time="2024-12-13T14:26:32.419545377Z" level=info msg="CreateContainer within sandbox \"6d01257bde97f98174b4fd953f41dcafc834718776cd0c1eae99626edae9926d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e583f2955f7d2cadcd69a4ce3223947bdc9683a1fea900889fc8fb2a09cd135f\"" Dec 13 14:26:32.424447 env[1731]: time="2024-12-13T14:26:32.422073801Z" level=info msg="StartContainer for \"e583f2955f7d2cadcd69a4ce3223947bdc9683a1fea900889fc8fb2a09cd135f\"" Dec 13 14:26:32.544829 systemd[1]: run-containerd-runc-k8s.io-e583f2955f7d2cadcd69a4ce3223947bdc9683a1fea900889fc8fb2a09cd135f-runc.yUOHr5.mount: Deactivated successfully. Dec 13 14:26:32.614565 systemd[1]: Started cri-containerd-e583f2955f7d2cadcd69a4ce3223947bdc9683a1fea900889fc8fb2a09cd135f.scope. Dec 13 14:26:32.678172 env[1731]: time="2024-12-13T14:26:32.677673478Z" level=info msg="StartContainer for \"e583f2955f7d2cadcd69a4ce3223947bdc9683a1fea900889fc8fb2a09cd135f\" returns successfully" Dec 13 14:26:32.684647 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 14:26:32.684984 systemd[1]: Stopped systemd-sysctl.service. Dec 13 14:26:32.685168 systemd[1]: Stopping systemd-sysctl.service... Dec 13 14:26:32.688904 systemd[1]: Starting systemd-sysctl.service... Dec 13 14:26:32.692463 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 13 14:26:32.693658 systemd[1]: cri-containerd-e583f2955f7d2cadcd69a4ce3223947bdc9683a1fea900889fc8fb2a09cd135f.scope: Deactivated successfully. Dec 13 14:26:32.727097 systemd[1]: Finished systemd-sysctl.service. 
Dec 13 14:26:32.758440 env[1731]: time="2024-12-13T14:26:32.756951936Z" level=info msg="shim disconnected" id=e583f2955f7d2cadcd69a4ce3223947bdc9683a1fea900889fc8fb2a09cd135f Dec 13 14:26:32.758440 env[1731]: time="2024-12-13T14:26:32.757105586Z" level=warning msg="cleaning up after shim disconnected" id=e583f2955f7d2cadcd69a4ce3223947bdc9683a1fea900889fc8fb2a09cd135f namespace=k8s.io Dec 13 14:26:32.758440 env[1731]: time="2024-12-13T14:26:32.757122345Z" level=info msg="cleaning up dead shim" Dec 13 14:26:32.781463 env[1731]: time="2024-12-13T14:26:32.780881117Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:26:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2648 runtime=io.containerd.runc.v2\n" Dec 13 14:26:33.027373 kubelet[2095]: E1213 14:26:33.027162 2095 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:26:33.386905 env[1731]: time="2024-12-13T14:26:33.386857812Z" level=info msg="CreateContainer within sandbox \"6d01257bde97f98174b4fd953f41dcafc834718776cd0c1eae99626edae9926d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 14:26:33.403367 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e583f2955f7d2cadcd69a4ce3223947bdc9683a1fea900889fc8fb2a09cd135f-rootfs.mount: Deactivated successfully. 
Dec 13 14:26:33.421052 env[1731]: time="2024-12-13T14:26:33.420997951Z" level=info msg="CreateContainer within sandbox \"6d01257bde97f98174b4fd953f41dcafc834718776cd0c1eae99626edae9926d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"825e85d9f33bbae7434f028558cc84564106e599e7ab567a553966a4088f343b\"" Dec 13 14:26:33.421828 env[1731]: time="2024-12-13T14:26:33.421791024Z" level=info msg="StartContainer for \"825e85d9f33bbae7434f028558cc84564106e599e7ab567a553966a4088f343b\"" Dec 13 14:26:33.467538 systemd[1]: run-containerd-runc-k8s.io-825e85d9f33bbae7434f028558cc84564106e599e7ab567a553966a4088f343b-runc.vJfpS6.mount: Deactivated successfully. Dec 13 14:26:33.474017 systemd[1]: Started cri-containerd-825e85d9f33bbae7434f028558cc84564106e599e7ab567a553966a4088f343b.scope. Dec 13 14:26:33.525401 systemd[1]: cri-containerd-825e85d9f33bbae7434f028558cc84564106e599e7ab567a553966a4088f343b.scope: Deactivated successfully. Dec 13 14:26:33.534525 env[1731]: time="2024-12-13T14:26:33.534410223Z" level=info msg="StartContainer for \"825e85d9f33bbae7434f028558cc84564106e599e7ab567a553966a4088f343b\" returns successfully" Dec 13 14:26:33.575841 env[1731]: time="2024-12-13T14:26:33.575749765Z" level=info msg="shim disconnected" id=825e85d9f33bbae7434f028558cc84564106e599e7ab567a553966a4088f343b Dec 13 14:26:33.575841 env[1731]: time="2024-12-13T14:26:33.575823994Z" level=warning msg="cleaning up after shim disconnected" id=825e85d9f33bbae7434f028558cc84564106e599e7ab567a553966a4088f343b namespace=k8s.io Dec 13 14:26:33.575841 env[1731]: time="2024-12-13T14:26:33.575838536Z" level=info msg="cleaning up dead shim" Dec 13 14:26:33.592750 env[1731]: time="2024-12-13T14:26:33.592700480Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:26:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2738 runtime=io.containerd.runc.v2\n" Dec 13 14:26:34.028080 kubelet[2095]: E1213 14:26:34.028027 2095 file_linux.go:61] "Unable to read config 
path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:26:34.392056 env[1731]: time="2024-12-13T14:26:34.391950665Z" level=info msg="CreateContainer within sandbox \"6d01257bde97f98174b4fd953f41dcafc834718776cd0c1eae99626edae9926d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 14:26:34.402730 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-825e85d9f33bbae7434f028558cc84564106e599e7ab567a553966a4088f343b-rootfs.mount: Deactivated successfully. Dec 13 14:26:34.422057 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount208227351.mount: Deactivated successfully. Dec 13 14:26:34.434204 env[1731]: time="2024-12-13T14:26:34.434153484Z" level=info msg="CreateContainer within sandbox \"6d01257bde97f98174b4fd953f41dcafc834718776cd0c1eae99626edae9926d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4ead03930261d0bf65091eb9302df104ed0fbeb85d893d60ff775856ff9644bd\"" Dec 13 14:26:34.436452 env[1731]: time="2024-12-13T14:26:34.436414990Z" level=info msg="StartContainer for \"4ead03930261d0bf65091eb9302df104ed0fbeb85d893d60ff775856ff9644bd\"" Dec 13 14:26:34.474860 systemd[1]: Started cri-containerd-4ead03930261d0bf65091eb9302df104ed0fbeb85d893d60ff775856ff9644bd.scope. Dec 13 14:26:34.520493 systemd[1]: cri-containerd-4ead03930261d0bf65091eb9302df104ed0fbeb85d893d60ff775856ff9644bd.scope: Deactivated successfully. 
Dec 13 14:26:34.523352 env[1731]: time="2024-12-13T14:26:34.523038427Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podccdebb72_4477_4c73_a618_5761dc2e9620.slice/cri-containerd-4ead03930261d0bf65091eb9302df104ed0fbeb85d893d60ff775856ff9644bd.scope/memory.events\": no such file or directory" Dec 13 14:26:34.527112 env[1731]: time="2024-12-13T14:26:34.527056349Z" level=info msg="StartContainer for \"4ead03930261d0bf65091eb9302df104ed0fbeb85d893d60ff775856ff9644bd\" returns successfully" Dec 13 14:26:34.568541 env[1731]: time="2024-12-13T14:26:34.568479593Z" level=info msg="shim disconnected" id=4ead03930261d0bf65091eb9302df104ed0fbeb85d893d60ff775856ff9644bd Dec 13 14:26:34.568541 env[1731]: time="2024-12-13T14:26:34.568539783Z" level=warning msg="cleaning up after shim disconnected" id=4ead03930261d0bf65091eb9302df104ed0fbeb85d893d60ff775856ff9644bd namespace=k8s.io Dec 13 14:26:34.569299 env[1731]: time="2024-12-13T14:26:34.568552626Z" level=info msg="cleaning up dead shim" Dec 13 14:26:34.583030 env[1731]: time="2024-12-13T14:26:34.582609614Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:26:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2794 runtime=io.containerd.runc.v2\n" Dec 13 14:26:35.028867 kubelet[2095]: E1213 14:26:35.028824 2095 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:26:35.398448 env[1731]: time="2024-12-13T14:26:35.398398914Z" level=info msg="CreateContainer within sandbox \"6d01257bde97f98174b4fd953f41dcafc834718776cd0c1eae99626edae9926d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 14:26:35.402696 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4ead03930261d0bf65091eb9302df104ed0fbeb85d893d60ff775856ff9644bd-rootfs.mount: Deactivated successfully. 
Dec 13 14:26:35.443469 env[1731]: time="2024-12-13T14:26:35.443414787Z" level=info msg="CreateContainer within sandbox \"6d01257bde97f98174b4fd953f41dcafc834718776cd0c1eae99626edae9926d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"89832b97db3ef0e531144ca47b26667d3f78280f16be754e8813971dcf953b51\"" Dec 13 14:26:35.444703 env[1731]: time="2024-12-13T14:26:35.444637353Z" level=info msg="StartContainer for \"89832b97db3ef0e531144ca47b26667d3f78280f16be754e8813971dcf953b51\"" Dec 13 14:26:35.478369 systemd[1]: Started cri-containerd-89832b97db3ef0e531144ca47b26667d3f78280f16be754e8813971dcf953b51.scope. Dec 13 14:26:35.542561 env[1731]: time="2024-12-13T14:26:35.542496516Z" level=info msg="StartContainer for \"89832b97db3ef0e531144ca47b26667d3f78280f16be754e8813971dcf953b51\" returns successfully" Dec 13 14:26:35.770849 kubelet[2095]: I1213 14:26:35.767546 2095 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 14:26:36.030276 kubelet[2095]: E1213 14:26:36.029494 2095 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:26:36.179214 kernel: Initializing XFRM netlink socket Dec 13 14:26:36.402304 systemd[1]: run-containerd-runc-k8s.io-89832b97db3ef0e531144ca47b26667d3f78280f16be754e8813971dcf953b51-runc.ynZlzI.mount: Deactivated successfully. 
Dec 13 14:26:36.745255 kubelet[2095]: I1213 14:26:36.744866 2095 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-nfbtf" podStartSLOduration=8.609137953 podStartE2EDuration="35.744820631s" podCreationTimestamp="2024-12-13 14:26:01 +0000 UTC" firstStartedPulling="2024-12-13 14:26:04.251784413 +0000 UTC m=+4.112090614" lastFinishedPulling="2024-12-13 14:26:31.387467083 +0000 UTC m=+31.247773292" observedRunningTime="2024-12-13 14:26:36.418027332 +0000 UTC m=+36.278333557" watchObservedRunningTime="2024-12-13 14:26:36.744820631 +0000 UTC m=+36.605126854" Dec 13 14:26:36.745255 kubelet[2095]: I1213 14:26:36.745222 2095 topology_manager.go:215] "Topology Admit Handler" podUID="3650e3c3-2735-408d-b9f4-6f864fd47ab8" podNamespace="default" podName="nginx-deployment-6d5f899847-tzg27" Dec 13 14:26:36.750753 systemd[1]: Created slice kubepods-besteffort-pod3650e3c3_2735_408d_b9f4_6f864fd47ab8.slice. Dec 13 14:26:36.840382 kubelet[2095]: I1213 14:26:36.840332 2095 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ht4rf\" (UniqueName: \"kubernetes.io/projected/3650e3c3-2735-408d-b9f4-6f864fd47ab8-kube-api-access-ht4rf\") pod \"nginx-deployment-6d5f899847-tzg27\" (UID: \"3650e3c3-2735-408d-b9f4-6f864fd47ab8\") " pod="default/nginx-deployment-6d5f899847-tzg27" Dec 13 14:26:37.031499 kubelet[2095]: E1213 14:26:37.031452 2095 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:26:37.054682 env[1731]: time="2024-12-13T14:26:37.054632735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-tzg27,Uid:3650e3c3-2735-408d-b9f4-6f864fd47ab8,Namespace:default,Attempt:0,}" Dec 13 14:26:37.505403 (udev-worker)[2892]: Network interface NamePolicy= disabled on kernel command line. 
Dec 13 14:26:37.507947 (udev-worker)[2952]: Network interface NamePolicy= disabled on kernel command line. Dec 13 14:26:37.510412 systemd-networkd[1462]: cilium_host: Link UP Dec 13 14:26:37.514809 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Dec 13 14:26:37.514929 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Dec 13 14:26:37.512360 systemd-networkd[1462]: cilium_net: Link UP Dec 13 14:26:37.513516 systemd-networkd[1462]: cilium_net: Gained carrier Dec 13 14:26:37.515395 systemd-networkd[1462]: cilium_host: Gained carrier Dec 13 14:26:37.663334 (udev-worker)[2962]: Network interface NamePolicy= disabled on kernel command line. Dec 13 14:26:37.673995 systemd-networkd[1462]: cilium_vxlan: Link UP Dec 13 14:26:37.674005 systemd-networkd[1462]: cilium_vxlan: Gained carrier Dec 13 14:26:37.864304 systemd-networkd[1462]: cilium_host: Gained IPv6LL Dec 13 14:26:37.905211 kernel: NET: Registered PF_ALG protocol family Dec 13 14:26:38.031677 kubelet[2095]: E1213 14:26:38.031629 2095 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:26:38.463400 systemd-networkd[1462]: cilium_net: Gained IPv6LL Dec 13 14:26:38.679464 systemd-networkd[1462]: lxc_health: Link UP Dec 13 14:26:38.698834 systemd-networkd[1462]: lxc_health: Gained carrier Dec 13 14:26:38.699239 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Dec 13 14:26:39.032606 kubelet[2095]: E1213 14:26:39.032510 2095 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:26:39.155417 systemd-networkd[1462]: lxc63fde6202355: Link UP Dec 13 14:26:39.165363 kernel: eth0: renamed from tmp75c3e Dec 13 14:26:39.172082 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc63fde6202355: link becomes ready Dec 13 14:26:39.171439 systemd-networkd[1462]: lxc63fde6202355: Gained carrier Dec 13 14:26:39.297007 systemd-networkd[1462]: 
cilium_vxlan: Gained IPv6LL Dec 13 14:26:40.033110 kubelet[2095]: E1213 14:26:40.033048 2095 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:26:40.063494 systemd-networkd[1462]: lxc_health: Gained IPv6LL Dec 13 14:26:40.767429 systemd-networkd[1462]: lxc63fde6202355: Gained IPv6LL Dec 13 14:26:41.005119 kubelet[2095]: E1213 14:26:41.005046 2095 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:26:41.034361 kubelet[2095]: E1213 14:26:41.034288 2095 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:26:42.035727 kubelet[2095]: E1213 14:26:42.035691 2095 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:26:43.037004 kubelet[2095]: E1213 14:26:43.036968 2095 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:26:44.038617 kubelet[2095]: E1213 14:26:44.038580 2095 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:26:44.183536 env[1731]: time="2024-12-13T14:26:44.183253059Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:26:44.183536 env[1731]: time="2024-12-13T14:26:44.183297874Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:26:44.183536 env[1731]: time="2024-12-13T14:26:44.183309610Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:26:44.184095 env[1731]: time="2024-12-13T14:26:44.183559742Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/75c3e6445df11bae220bb5b2557155c5e98e074d22e6ec7b57307769d8bee579 pid=3316 runtime=io.containerd.runc.v2 Dec 13 14:26:44.209085 systemd[1]: run-containerd-runc-k8s.io-75c3e6445df11bae220bb5b2557155c5e98e074d22e6ec7b57307769d8bee579-runc.rrWF5v.mount: Deactivated successfully. Dec 13 14:26:44.218373 systemd[1]: Started cri-containerd-75c3e6445df11bae220bb5b2557155c5e98e074d22e6ec7b57307769d8bee579.scope. Dec 13 14:26:44.268138 env[1731]: time="2024-12-13T14:26:44.268097491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-tzg27,Uid:3650e3c3-2735-408d-b9f4-6f864fd47ab8,Namespace:default,Attempt:0,} returns sandbox id \"75c3e6445df11bae220bb5b2557155c5e98e074d22e6ec7b57307769d8bee579\"" Dec 13 14:26:44.270351 env[1731]: time="2024-12-13T14:26:44.270251631Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Dec 13 14:26:45.039996 kubelet[2095]: E1213 14:26:45.039947 2095 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:26:46.040895 kubelet[2095]: E1213 14:26:46.040805 2095 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:26:47.041211 kubelet[2095]: E1213 14:26:47.041138 2095 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:26:47.965895 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4281809229.mount: Deactivated successfully. 
Dec 13 14:26:48.043526 kubelet[2095]: E1213 14:26:48.043161 2095 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:26:49.044170 kubelet[2095]: E1213 14:26:49.044127 2095 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:26:50.009555 env[1731]: time="2024-12-13T14:26:50.009438086Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:50.013108 env[1731]: time="2024-12-13T14:26:50.013062164Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:50.015639 env[1731]: time="2024-12-13T14:26:50.015599521Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:50.017952 env[1731]: time="2024-12-13T14:26:50.017908255Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:50.020180 env[1731]: time="2024-12-13T14:26:50.020136696Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\"" Dec 13 14:26:50.023255 env[1731]: time="2024-12-13T14:26:50.023217045Z" level=info msg="CreateContainer within sandbox \"75c3e6445df11bae220bb5b2557155c5e98e074d22e6ec7b57307769d8bee579\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Dec 13 14:26:50.045445 kubelet[2095]: E1213 14:26:50.045324 2095 file_linux.go:61] "Unable to read config 
path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:26:50.061071 env[1731]: time="2024-12-13T14:26:50.061012721Z" level=info msg="CreateContainer within sandbox \"75c3e6445df11bae220bb5b2557155c5e98e074d22e6ec7b57307769d8bee579\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"59de04b618933bc06a14aebc09289fd4e3cfc5712f1c2c39b65fb3e18604cb1e\"" Dec 13 14:26:50.061890 env[1731]: time="2024-12-13T14:26:50.061815700Z" level=info msg="StartContainer for \"59de04b618933bc06a14aebc09289fd4e3cfc5712f1c2c39b65fb3e18604cb1e\"" Dec 13 14:26:50.100991 systemd[1]: Started cri-containerd-59de04b618933bc06a14aebc09289fd4e3cfc5712f1c2c39b65fb3e18604cb1e.scope. Dec 13 14:26:50.151756 env[1731]: time="2024-12-13T14:26:50.149830711Z" level=info msg="StartContainer for \"59de04b618933bc06a14aebc09289fd4e3cfc5712f1c2c39b65fb3e18604cb1e\" returns successfully" Dec 13 14:26:50.457210 kubelet[2095]: I1213 14:26:50.457140 2095 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-6d5f899847-tzg27" podStartSLOduration=8.705897468 podStartE2EDuration="14.457097217s" podCreationTimestamp="2024-12-13 14:26:36 +0000 UTC" firstStartedPulling="2024-12-13 14:26:44.269542648 +0000 UTC m=+44.129848863" lastFinishedPulling="2024-12-13 14:26:50.020742391 +0000 UTC m=+49.881048612" observedRunningTime="2024-12-13 14:26:50.456542576 +0000 UTC m=+50.316848797" watchObservedRunningTime="2024-12-13 14:26:50.457097217 +0000 UTC m=+50.317403421" Dec 13 14:26:51.035055 systemd[1]: run-containerd-runc-k8s.io-59de04b618933bc06a14aebc09289fd4e3cfc5712f1c2c39b65fb3e18604cb1e-runc.gZqPuf.mount: Deactivated successfully. 
Dec 13 14:26:51.046263 kubelet[2095]: E1213 14:26:51.046219 2095 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:26:52.046398 kubelet[2095]: E1213 14:26:52.046342 2095 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:26:53.048500 kubelet[2095]: E1213 14:26:53.047769 2095 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:26:54.048473 kubelet[2095]: E1213 14:26:54.048415 2095 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:26:55.049070 kubelet[2095]: E1213 14:26:55.049015 2095 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:26:56.049974 kubelet[2095]: E1213 14:26:56.049923 2095 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:26:57.051110 kubelet[2095]: E1213 14:26:57.051057 2095 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:26:57.309214 kubelet[2095]: I1213 14:26:57.308849 2095 topology_manager.go:215] "Topology Admit Handler" podUID="915ce282-0a5b-4e44-86c1-8bf563295d73" podNamespace="default" podName="nfs-server-provisioner-0"
Dec 13 14:26:57.315750 systemd[1]: Created slice kubepods-besteffort-pod915ce282_0a5b_4e44_86c1_8bf563295d73.slice.
Dec 13 14:26:57.400866 kubelet[2095]: I1213 14:26:57.400821 2095 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/915ce282-0a5b-4e44-86c1-8bf563295d73-data\") pod \"nfs-server-provisioner-0\" (UID: \"915ce282-0a5b-4e44-86c1-8bf563295d73\") " pod="default/nfs-server-provisioner-0"
Dec 13 14:26:57.400866 kubelet[2095]: I1213 14:26:57.400884 2095 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgvw6\" (UniqueName: \"kubernetes.io/projected/915ce282-0a5b-4e44-86c1-8bf563295d73-kube-api-access-qgvw6\") pod \"nfs-server-provisioner-0\" (UID: \"915ce282-0a5b-4e44-86c1-8bf563295d73\") " pod="default/nfs-server-provisioner-0"
Dec 13 14:26:57.619100 env[1731]: time="2024-12-13T14:26:57.618968542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:915ce282-0a5b-4e44-86c1-8bf563295d73,Namespace:default,Attempt:0,}"
Dec 13 14:26:57.706252 systemd-networkd[1462]: lxc10390a094d1b: Link UP
Dec 13 14:26:57.714223 kernel: eth0: renamed from tmpf6c83
Dec 13 14:26:57.724617 (udev-worker)[3426]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 14:26:57.729863 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Dec 13 14:26:57.730043 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc10390a094d1b: link becomes ready
Dec 13 14:26:57.729303 systemd-networkd[1462]: lxc10390a094d1b: Gained carrier
Dec 13 14:26:57.948404 env[1731]: time="2024-12-13T14:26:57.948248251Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:26:57.948670 env[1731]: time="2024-12-13T14:26:57.948291247Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:26:57.948670 env[1731]: time="2024-12-13T14:26:57.948312532Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:26:57.949003 env[1731]: time="2024-12-13T14:26:57.948953971Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f6c83ad8571ea03815888440001061fcc81c22a05b7bf895b35ba72f0b989b96 pid=3441 runtime=io.containerd.runc.v2
Dec 13 14:26:57.971431 systemd[1]: Started cri-containerd-f6c83ad8571ea03815888440001061fcc81c22a05b7bf895b35ba72f0b989b96.scope.
Dec 13 14:26:58.051657 kubelet[2095]: E1213 14:26:58.051594 2095 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:26:58.063212 env[1731]: time="2024-12-13T14:26:58.063159914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:915ce282-0a5b-4e44-86c1-8bf563295d73,Namespace:default,Attempt:0,} returns sandbox id \"f6c83ad8571ea03815888440001061fcc81c22a05b7bf895b35ba72f0b989b96\""
Dec 13 14:26:58.065480 env[1731]: time="2024-12-13T14:26:58.065450463Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\""
Dec 13 14:26:59.052574 kubelet[2095]: E1213 14:26:59.052481 2095 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:26:59.583541 systemd-networkd[1462]: lxc10390a094d1b: Gained IPv6LL
Dec 13 14:27:00.052899 kubelet[2095]: E1213 14:27:00.052833 2095 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:27:01.004716 kubelet[2095]: E1213 14:27:01.004601 2095 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:27:01.053484 kubelet[2095]: E1213 14:27:01.053441 2095 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:27:01.754417 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3126566580.mount: Deactivated successfully.
Dec 13 14:27:02.054538 kubelet[2095]: E1213 14:27:02.054460 2095 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:27:03.054821 kubelet[2095]: E1213 14:27:03.054775 2095 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:27:04.057699 kubelet[2095]: E1213 14:27:04.057651 2095 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:27:05.058564 kubelet[2095]: E1213 14:27:05.058521 2095 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:27:05.575805 env[1731]: time="2024-12-13T14:27:05.575748110Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:27:05.579784 env[1731]: time="2024-12-13T14:27:05.579734889Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:27:05.583098 env[1731]: time="2024-12-13T14:27:05.583056143Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:27:05.585520 env[1731]: time="2024-12-13T14:27:05.585481269Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:27:05.586224 env[1731]: time="2024-12-13T14:27:05.586177203Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\""
Dec 13 14:27:05.588983 env[1731]: time="2024-12-13T14:27:05.588947523Z" level=info msg="CreateContainer within sandbox \"f6c83ad8571ea03815888440001061fcc81c22a05b7bf895b35ba72f0b989b96\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}"
Dec 13 14:27:05.607123 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3375583278.mount: Deactivated successfully.
Dec 13 14:27:05.622723 env[1731]: time="2024-12-13T14:27:05.622666504Z" level=info msg="CreateContainer within sandbox \"f6c83ad8571ea03815888440001061fcc81c22a05b7bf895b35ba72f0b989b96\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"fa0f2283dcea0422e570b48578ec9c749d419ba5e6b934aeb1786ae00bed4ea4\""
Dec 13 14:27:05.623776 env[1731]: time="2024-12-13T14:27:05.623739066Z" level=info msg="StartContainer for \"fa0f2283dcea0422e570b48578ec9c749d419ba5e6b934aeb1786ae00bed4ea4\""
Dec 13 14:27:05.656894 systemd[1]: Started cri-containerd-fa0f2283dcea0422e570b48578ec9c749d419ba5e6b934aeb1786ae00bed4ea4.scope.
Dec 13 14:27:05.731980 env[1731]: time="2024-12-13T14:27:05.730364058Z" level=info msg="StartContainer for \"fa0f2283dcea0422e570b48578ec9c749d419ba5e6b934aeb1786ae00bed4ea4\" returns successfully"
Dec 13 14:27:06.059630 kubelet[2095]: E1213 14:27:06.059580 2095 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:27:06.498983 kubelet[2095]: I1213 14:27:06.498808 2095 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.977190223 podStartE2EDuration="9.498763135s" podCreationTimestamp="2024-12-13 14:26:57 +0000 UTC" firstStartedPulling="2024-12-13 14:26:58.064997972 +0000 UTC m=+57.925304186" lastFinishedPulling="2024-12-13 14:27:05.586570888 +0000 UTC m=+65.446877098" observedRunningTime="2024-12-13 14:27:06.498238439 +0000 UTC m=+66.358544663" watchObservedRunningTime="2024-12-13 14:27:06.498763135 +0000 UTC m=+66.359069362"
Dec 13 14:27:07.060586 kubelet[2095]: E1213 14:27:07.060529 2095 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:27:08.061202 kubelet[2095]: E1213 14:27:08.061135 2095 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:27:09.062005 kubelet[2095]: E1213 14:27:09.061945 2095 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:27:10.062978 kubelet[2095]: E1213 14:27:10.062925 2095 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:27:11.063285 kubelet[2095]: E1213 14:27:11.063228 2095 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:27:12.064340 kubelet[2095]: E1213 14:27:12.064291 2095 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:27:13.065111 kubelet[2095]: E1213 14:27:13.065055 2095 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:27:14.065285 kubelet[2095]: E1213 14:27:14.065234 2095 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:27:15.066416 kubelet[2095]: E1213 14:27:15.066362 2095 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:27:15.935465 kubelet[2095]: I1213 14:27:15.935429 2095 topology_manager.go:215] "Topology Admit Handler" podUID="8515cd16-6bf3-4cbd-85f6-9d505ed639e2" podNamespace="default" podName="test-pod-1"
Dec 13 14:27:15.941302 systemd[1]: Created slice kubepods-besteffort-pod8515cd16_6bf3_4cbd_85f6_9d505ed639e2.slice.
Dec 13 14:27:15.950921 kubelet[2095]: I1213 14:27:15.950874 2095 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-da7bd26f-224d-42c6-889d-7c5fc6933d02\" (UniqueName: \"kubernetes.io/nfs/8515cd16-6bf3-4cbd-85f6-9d505ed639e2-pvc-da7bd26f-224d-42c6-889d-7c5fc6933d02\") pod \"test-pod-1\" (UID: \"8515cd16-6bf3-4cbd-85f6-9d505ed639e2\") " pod="default/test-pod-1"
Dec 13 14:27:15.950921 kubelet[2095]: I1213 14:27:15.950927 2095 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8sg9w\" (UniqueName: \"kubernetes.io/projected/8515cd16-6bf3-4cbd-85f6-9d505ed639e2-kube-api-access-8sg9w\") pod \"test-pod-1\" (UID: \"8515cd16-6bf3-4cbd-85f6-9d505ed639e2\") " pod="default/test-pod-1"
Dec 13 14:27:16.072788 kubelet[2095]: E1213 14:27:16.071876 2095 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:27:16.113432 kernel: FS-Cache: Loaded
Dec 13 14:27:16.169878 kernel: RPC: Registered named UNIX socket transport module.
Dec 13 14:27:16.170015 kernel: RPC: Registered udp transport module.
Dec 13 14:27:16.170046 kernel: RPC: Registered tcp transport module.
Dec 13 14:27:16.170074 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Dec 13 14:27:16.254380 kernel: FS-Cache: Netfs 'nfs' registered for caching
Dec 13 14:27:16.516593 kernel: NFS: Registering the id_resolver key type
Dec 13 14:27:16.516746 kernel: Key type id_resolver registered
Dec 13 14:27:16.516787 kernel: Key type id_legacy registered
Dec 13 14:27:16.573629 nfsidmap[3634]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal'
Dec 13 14:27:16.578168 nfsidmap[3635]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal'
Dec 13 14:27:16.845483 env[1731]: time="2024-12-13T14:27:16.845434223Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:8515cd16-6bf3-4cbd-85f6-9d505ed639e2,Namespace:default,Attempt:0,}"
Dec 13 14:27:16.899992 (udev-worker)[3630]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 14:27:16.901804 (udev-worker)[3621]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 14:27:16.904596 systemd-networkd[1462]: lxc5fdfad8748cf: Link UP
Dec 13 14:27:16.916510 kernel: eth0: renamed from tmp7c604
Dec 13 14:27:16.927461 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Dec 13 14:27:16.927806 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc5fdfad8748cf: link becomes ready
Dec 13 14:27:16.928641 systemd-networkd[1462]: lxc5fdfad8748cf: Gained carrier
Dec 13 14:27:17.072945 kubelet[2095]: E1213 14:27:17.072867 2095 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:27:17.226041 env[1731]: time="2024-12-13T14:27:17.225751118Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:27:17.226041 env[1731]: time="2024-12-13T14:27:17.225805295Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:27:17.226291 env[1731]: time="2024-12-13T14:27:17.225822194Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:27:17.247735 env[1731]: time="2024-12-13T14:27:17.226679088Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7c60496d605557dcdb3270943b6d3226b4ed724c559dae50445be6fe30b69196 pid=3661 runtime=io.containerd.runc.v2
Dec 13 14:27:17.267166 systemd[1]: Started cri-containerd-7c60496d605557dcdb3270943b6d3226b4ed724c559dae50445be6fe30b69196.scope.
Dec 13 14:27:17.355414 env[1731]: time="2024-12-13T14:27:17.355361627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:8515cd16-6bf3-4cbd-85f6-9d505ed639e2,Namespace:default,Attempt:0,} returns sandbox id \"7c60496d605557dcdb3270943b6d3226b4ed724c559dae50445be6fe30b69196\""
Dec 13 14:27:17.358375 env[1731]: time="2024-12-13T14:27:17.358111499Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Dec 13 14:27:17.675692 env[1731]: time="2024-12-13T14:27:17.675582777Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:27:17.679318 env[1731]: time="2024-12-13T14:27:17.679269703Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:27:17.682135 env[1731]: time="2024-12-13T14:27:17.682094736Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:27:17.684490 env[1731]: time="2024-12-13T14:27:17.684446912Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:27:17.685211 env[1731]: time="2024-12-13T14:27:17.685161501Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\""
Dec 13 14:27:17.687943 env[1731]: time="2024-12-13T14:27:17.687906135Z" level=info msg="CreateContainer within sandbox \"7c60496d605557dcdb3270943b6d3226b4ed724c559dae50445be6fe30b69196\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Dec 13 14:27:17.715580 env[1731]: time="2024-12-13T14:27:17.715532187Z" level=info msg="CreateContainer within sandbox \"7c60496d605557dcdb3270943b6d3226b4ed724c559dae50445be6fe30b69196\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"d5399db03541409df7bfe67bc76614fb9b5ec658ce5dc9ffa2104c9e3e054737\""
Dec 13 14:27:17.716302 env[1731]: time="2024-12-13T14:27:17.716226949Z" level=info msg="StartContainer for \"d5399db03541409df7bfe67bc76614fb9b5ec658ce5dc9ffa2104c9e3e054737\""
Dec 13 14:27:17.740476 systemd[1]: Started cri-containerd-d5399db03541409df7bfe67bc76614fb9b5ec658ce5dc9ffa2104c9e3e054737.scope.
Dec 13 14:27:17.794493 env[1731]: time="2024-12-13T14:27:17.792917225Z" level=info msg="StartContainer for \"d5399db03541409df7bfe67bc76614fb9b5ec658ce5dc9ffa2104c9e3e054737\" returns successfully"
Dec 13 14:27:18.073575 kubelet[2095]: E1213 14:27:18.073503 2095 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:27:18.335583 systemd-networkd[1462]: lxc5fdfad8748cf: Gained IPv6LL
Dec 13 14:27:18.526617 kubelet[2095]: I1213 14:27:18.526578 2095 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=20.198338032 podStartE2EDuration="20.526475951s" podCreationTimestamp="2024-12-13 14:26:58 +0000 UTC" firstStartedPulling="2024-12-13 14:27:17.357355524 +0000 UTC m=+77.217661730" lastFinishedPulling="2024-12-13 14:27:17.685493433 +0000 UTC m=+77.545799649" observedRunningTime="2024-12-13 14:27:18.526114414 +0000 UTC m=+78.386420629" watchObservedRunningTime="2024-12-13 14:27:18.526475951 +0000 UTC m=+78.386782209"
Dec 13 14:27:19.074390 kubelet[2095]: E1213 14:27:19.074329 2095 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:27:20.075474 kubelet[2095]: E1213 14:27:20.075284 2095 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:27:21.004981 kubelet[2095]: E1213 14:27:21.004800 2095 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:27:21.076202 kubelet[2095]: E1213 14:27:21.076125 2095 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:27:22.077205 kubelet[2095]: E1213 14:27:22.077141 2095 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:27:22.697608 env[1731]: time="2024-12-13T14:27:22.697479307Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 14:27:22.704481 env[1731]: time="2024-12-13T14:27:22.704434395Z" level=info msg="StopContainer for \"89832b97db3ef0e531144ca47b26667d3f78280f16be754e8813971dcf953b51\" with timeout 2 (s)"
Dec 13 14:27:22.704977 env[1731]: time="2024-12-13T14:27:22.704895369Z" level=info msg="Stop container \"89832b97db3ef0e531144ca47b26667d3f78280f16be754e8813971dcf953b51\" with signal terminated"
Dec 13 14:27:22.712969 systemd-networkd[1462]: lxc_health: Link DOWN
Dec 13 14:27:22.712979 systemd-networkd[1462]: lxc_health: Lost carrier
Dec 13 14:27:22.831888 systemd[1]: cri-containerd-89832b97db3ef0e531144ca47b26667d3f78280f16be754e8813971dcf953b51.scope: Deactivated successfully.
Dec 13 14:27:22.832252 systemd[1]: cri-containerd-89832b97db3ef0e531144ca47b26667d3f78280f16be754e8813971dcf953b51.scope: Consumed 7.893s CPU time.
Dec 13 14:27:22.881793 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-89832b97db3ef0e531144ca47b26667d3f78280f16be754e8813971dcf953b51-rootfs.mount: Deactivated successfully.
Dec 13 14:27:22.901381 env[1731]: time="2024-12-13T14:27:22.901325826Z" level=info msg="shim disconnected" id=89832b97db3ef0e531144ca47b26667d3f78280f16be754e8813971dcf953b51
Dec 13 14:27:22.901701 env[1731]: time="2024-12-13T14:27:22.901386846Z" level=warning msg="cleaning up after shim disconnected" id=89832b97db3ef0e531144ca47b26667d3f78280f16be754e8813971dcf953b51 namespace=k8s.io
Dec 13 14:27:22.901701 env[1731]: time="2024-12-13T14:27:22.901399808Z" level=info msg="cleaning up dead shim"
Dec 13 14:27:22.912802 env[1731]: time="2024-12-13T14:27:22.912744679Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:27:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3790 runtime=io.containerd.runc.v2\n"
Dec 13 14:27:22.915420 env[1731]: time="2024-12-13T14:27:22.915372177Z" level=info msg="StopContainer for \"89832b97db3ef0e531144ca47b26667d3f78280f16be754e8813971dcf953b51\" returns successfully"
Dec 13 14:27:22.916118 env[1731]: time="2024-12-13T14:27:22.916084839Z" level=info msg="StopPodSandbox for \"6d01257bde97f98174b4fd953f41dcafc834718776cd0c1eae99626edae9926d\""
Dec 13 14:27:22.916274 env[1731]: time="2024-12-13T14:27:22.916160239Z" level=info msg="Container to stop \"93df7fb8078e990efe5a5959848752e7ba0d10d2fcabd6875526e3192e52e90e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:27:22.916334 env[1731]: time="2024-12-13T14:27:22.916271127Z" level=info msg="Container to stop \"4ead03930261d0bf65091eb9302df104ed0fbeb85d893d60ff775856ff9644bd\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:27:22.916334 env[1731]: time="2024-12-13T14:27:22.916293262Z" level=info msg="Container to stop \"89832b97db3ef0e531144ca47b26667d3f78280f16be754e8813971dcf953b51\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:27:22.916334 env[1731]: time="2024-12-13T14:27:22.916312325Z" level=info msg="Container to stop \"e583f2955f7d2cadcd69a4ce3223947bdc9683a1fea900889fc8fb2a09cd135f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:27:22.916522 env[1731]: time="2024-12-13T14:27:22.916328830Z" level=info msg="Container to stop \"825e85d9f33bbae7434f028558cc84564106e599e7ab567a553966a4088f343b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:27:22.921551 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6d01257bde97f98174b4fd953f41dcafc834718776cd0c1eae99626edae9926d-shm.mount: Deactivated successfully.
Dec 13 14:27:22.929518 systemd[1]: cri-containerd-6d01257bde97f98174b4fd953f41dcafc834718776cd0c1eae99626edae9926d.scope: Deactivated successfully.
Dec 13 14:27:22.958311 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6d01257bde97f98174b4fd953f41dcafc834718776cd0c1eae99626edae9926d-rootfs.mount: Deactivated successfully.
Dec 13 14:27:22.967690 env[1731]: time="2024-12-13T14:27:22.967635560Z" level=info msg="shim disconnected" id=6d01257bde97f98174b4fd953f41dcafc834718776cd0c1eae99626edae9926d
Dec 13 14:27:22.968717 env[1731]: time="2024-12-13T14:27:22.968682693Z" level=warning msg="cleaning up after shim disconnected" id=6d01257bde97f98174b4fd953f41dcafc834718776cd0c1eae99626edae9926d namespace=k8s.io
Dec 13 14:27:22.969002 env[1731]: time="2024-12-13T14:27:22.968977677Z" level=info msg="cleaning up dead shim"
Dec 13 14:27:22.980318 env[1731]: time="2024-12-13T14:27:22.980261000Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:27:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3821 runtime=io.containerd.runc.v2\n"
Dec 13 14:27:22.981352 env[1731]: time="2024-12-13T14:27:22.981315196Z" level=info msg="TearDown network for sandbox \"6d01257bde97f98174b4fd953f41dcafc834718776cd0c1eae99626edae9926d\" successfully"
Dec 13 14:27:22.981352 env[1731]: time="2024-12-13T14:27:22.981348012Z" level=info msg="StopPodSandbox for \"6d01257bde97f98174b4fd953f41dcafc834718776cd0c1eae99626edae9926d\" returns successfully"
Dec 13 14:27:23.077860 kubelet[2095]: E1213 14:27:23.077816 2095 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:27:23.100602 kubelet[2095]: I1213 14:27:23.100082 2095 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ccdebb72-4477-4c73-a618-5761dc2e9620-xtables-lock\") pod \"ccdebb72-4477-4c73-a618-5761dc2e9620\" (UID: \"ccdebb72-4477-4c73-a618-5761dc2e9620\") "
Dec 13 14:27:23.100602 kubelet[2095]: I1213 14:27:23.100145 2095 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ccdebb72-4477-4c73-a618-5761dc2e9620-hubble-tls\") pod \"ccdebb72-4477-4c73-a618-5761dc2e9620\" (UID: \"ccdebb72-4477-4c73-a618-5761dc2e9620\") "
Dec 13 14:27:23.100602 kubelet[2095]: I1213 14:27:23.100153 2095 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ccdebb72-4477-4c73-a618-5761dc2e9620-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "ccdebb72-4477-4c73-a618-5761dc2e9620" (UID: "ccdebb72-4477-4c73-a618-5761dc2e9620"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:27:23.100602 kubelet[2095]: I1213 14:27:23.100206 2095 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ccdebb72-4477-4c73-a618-5761dc2e9620-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "ccdebb72-4477-4c73-a618-5761dc2e9620" (UID: "ccdebb72-4477-4c73-a618-5761dc2e9620"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:27:23.100602 kubelet[2095]: I1213 14:27:23.100176 2095 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ccdebb72-4477-4c73-a618-5761dc2e9620-host-proc-sys-net\") pod \"ccdebb72-4477-4c73-a618-5761dc2e9620\" (UID: \"ccdebb72-4477-4c73-a618-5761dc2e9620\") "
Dec 13 14:27:23.100602 kubelet[2095]: I1213 14:27:23.100268 2095 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ccdebb72-4477-4c73-a618-5761dc2e9620-lib-modules\") pod \"ccdebb72-4477-4c73-a618-5761dc2e9620\" (UID: \"ccdebb72-4477-4c73-a618-5761dc2e9620\") "
Dec 13 14:27:23.100602 kubelet[2095]: I1213 14:27:23.100295 2095 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ccdebb72-4477-4c73-a618-5761dc2e9620-cni-path\") pod \"ccdebb72-4477-4c73-a618-5761dc2e9620\" (UID: \"ccdebb72-4477-4c73-a618-5761dc2e9620\") "
Dec 13 14:27:23.100602 kubelet[2095]: I1213 14:27:23.100342 2095 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ccdebb72-4477-4c73-a618-5761dc2e9620-clustermesh-secrets\") pod \"ccdebb72-4477-4c73-a618-5761dc2e9620\" (UID: \"ccdebb72-4477-4c73-a618-5761dc2e9620\") "
Dec 13 14:27:23.100602 kubelet[2095]: I1213 14:27:23.100373 2095 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ccdebb72-4477-4c73-a618-5761dc2e9620-host-proc-sys-kernel\") pod \"ccdebb72-4477-4c73-a618-5761dc2e9620\" (UID: \"ccdebb72-4477-4c73-a618-5761dc2e9620\") "
Dec 13 14:27:23.100602 kubelet[2095]: I1213 14:27:23.100401 2095 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rnw2\" (UniqueName: \"kubernetes.io/projected/ccdebb72-4477-4c73-a618-5761dc2e9620-kube-api-access-6rnw2\") pod \"ccdebb72-4477-4c73-a618-5761dc2e9620\" (UID: \"ccdebb72-4477-4c73-a618-5761dc2e9620\") "
Dec 13 14:27:23.100602 kubelet[2095]: I1213 14:27:23.100444 2095 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ccdebb72-4477-4c73-a618-5761dc2e9620-cilium-run\") pod \"ccdebb72-4477-4c73-a618-5761dc2e9620\" (UID: \"ccdebb72-4477-4c73-a618-5761dc2e9620\") "
Dec 13 14:27:23.100602 kubelet[2095]: I1213 14:27:23.100477 2095 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ccdebb72-4477-4c73-a618-5761dc2e9620-cilium-config-path\") pod \"ccdebb72-4477-4c73-a618-5761dc2e9620\" (UID: \"ccdebb72-4477-4c73-a618-5761dc2e9620\") "
Dec 13 14:27:23.100602 kubelet[2095]: I1213 14:27:23.100530 2095 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ccdebb72-4477-4c73-a618-5761dc2e9620-etc-cni-netd\") pod \"ccdebb72-4477-4c73-a618-5761dc2e9620\" (UID: \"ccdebb72-4477-4c73-a618-5761dc2e9620\") "
Dec 13 14:27:23.100602 kubelet[2095]: I1213 14:27:23.100557 2095 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ccdebb72-4477-4c73-a618-5761dc2e9620-bpf-maps\") pod \"ccdebb72-4477-4c73-a618-5761dc2e9620\" (UID: \"ccdebb72-4477-4c73-a618-5761dc2e9620\") "
Dec 13 14:27:23.100602 kubelet[2095]: I1213 14:27:23.100596 2095 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ccdebb72-4477-4c73-a618-5761dc2e9620-hostproc\") pod \"ccdebb72-4477-4c73-a618-5761dc2e9620\" (UID: \"ccdebb72-4477-4c73-a618-5761dc2e9620\") "
Dec 13 14:27:23.100602 kubelet[2095]: I1213 14:27:23.100618 2095 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ccdebb72-4477-4c73-a618-5761dc2e9620-cilium-cgroup\") pod \"ccdebb72-4477-4c73-a618-5761dc2e9620\" (UID: \"ccdebb72-4477-4c73-a618-5761dc2e9620\") "
Dec 13 14:27:23.101812 kubelet[2095]: I1213 14:27:23.100672 2095 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ccdebb72-4477-4c73-a618-5761dc2e9620-host-proc-sys-net\") on node \"172.31.28.77\" DevicePath \"\""
Dec 13 14:27:23.101812 kubelet[2095]: I1213 14:27:23.100688 2095 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ccdebb72-4477-4c73-a618-5761dc2e9620-xtables-lock\") on node \"172.31.28.77\" DevicePath \"\""
Dec 13 14:27:23.101812 kubelet[2095]: I1213 14:27:23.100712 2095 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ccdebb72-4477-4c73-a618-5761dc2e9620-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "ccdebb72-4477-4c73-a618-5761dc2e9620" (UID: "ccdebb72-4477-4c73-a618-5761dc2e9620"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:27:23.101812 kubelet[2095]: I1213 14:27:23.100759 2095 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ccdebb72-4477-4c73-a618-5761dc2e9620-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "ccdebb72-4477-4c73-a618-5761dc2e9620" (UID: "ccdebb72-4477-4c73-a618-5761dc2e9620"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:27:23.101812 kubelet[2095]: I1213 14:27:23.100778 2095 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ccdebb72-4477-4c73-a618-5761dc2e9620-cni-path" (OuterVolumeSpecName: "cni-path") pod "ccdebb72-4477-4c73-a618-5761dc2e9620" (UID: "ccdebb72-4477-4c73-a618-5761dc2e9620"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:27:23.102195 kubelet[2095]: I1213 14:27:23.102067 2095 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ccdebb72-4477-4c73-a618-5761dc2e9620-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "ccdebb72-4477-4c73-a618-5761dc2e9620" (UID: "ccdebb72-4477-4c73-a618-5761dc2e9620"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:27:23.104204 kubelet[2095]: I1213 14:27:23.102427 2095 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ccdebb72-4477-4c73-a618-5761dc2e9620-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "ccdebb72-4477-4c73-a618-5761dc2e9620" (UID: "ccdebb72-4477-4c73-a618-5761dc2e9620"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:27:23.104204 kubelet[2095]: I1213 14:27:23.102478 2095 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ccdebb72-4477-4c73-a618-5761dc2e9620-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "ccdebb72-4477-4c73-a618-5761dc2e9620" (UID: "ccdebb72-4477-4c73-a618-5761dc2e9620"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:27:23.106680 kubelet[2095]: I1213 14:27:23.106648 2095 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ccdebb72-4477-4c73-a618-5761dc2e9620-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ccdebb72-4477-4c73-a618-5761dc2e9620" (UID: "ccdebb72-4477-4c73-a618-5761dc2e9620"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 14:27:23.111894 kubelet[2095]: I1213 14:27:23.111857 2095 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ccdebb72-4477-4c73-a618-5761dc2e9620-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "ccdebb72-4477-4c73-a618-5761dc2e9620" (UID: "ccdebb72-4477-4c73-a618-5761dc2e9620"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:27:23.112110 kubelet[2095]: I1213 14:27:23.112092 2095 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ccdebb72-4477-4c73-a618-5761dc2e9620-hostproc" (OuterVolumeSpecName: "hostproc") pod "ccdebb72-4477-4c73-a618-5761dc2e9620" (UID: "ccdebb72-4477-4c73-a618-5761dc2e9620"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:27:23.112133 systemd[1]: var-lib-kubelet-pods-ccdebb72\x2d4477\x2d4c73\x2da618\x2d5761dc2e9620-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Dec 13 14:27:23.114294 kubelet[2095]: I1213 14:27:23.114262 2095 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ccdebb72-4477-4c73-a618-5761dc2e9620-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "ccdebb72-4477-4c73-a618-5761dc2e9620" (UID: "ccdebb72-4477-4c73-a618-5761dc2e9620"). InnerVolumeSpecName "hubble-tls".
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:27:23.114415 kubelet[2095]: I1213 14:27:23.114360 2095 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccdebb72-4477-4c73-a618-5761dc2e9620-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "ccdebb72-4477-4c73-a618-5761dc2e9620" (UID: "ccdebb72-4477-4c73-a618-5761dc2e9620"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 14:27:23.118738 kubelet[2095]: I1213 14:27:23.118694 2095 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ccdebb72-4477-4c73-a618-5761dc2e9620-kube-api-access-6rnw2" (OuterVolumeSpecName: "kube-api-access-6rnw2") pod "ccdebb72-4477-4c73-a618-5761dc2e9620" (UID: "ccdebb72-4477-4c73-a618-5761dc2e9620"). InnerVolumeSpecName "kube-api-access-6rnw2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:27:23.201160 kubelet[2095]: I1213 14:27:23.201114 2095 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ccdebb72-4477-4c73-a618-5761dc2e9620-hubble-tls\") on node \"172.31.28.77\" DevicePath \"\"" Dec 13 14:27:23.201160 kubelet[2095]: I1213 14:27:23.201153 2095 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ccdebb72-4477-4c73-a618-5761dc2e9620-lib-modules\") on node \"172.31.28.77\" DevicePath \"\"" Dec 13 14:27:23.201160 kubelet[2095]: I1213 14:27:23.201168 2095 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ccdebb72-4477-4c73-a618-5761dc2e9620-cni-path\") on node \"172.31.28.77\" DevicePath \"\"" Dec 13 14:27:23.201830 kubelet[2095]: I1213 14:27:23.201196 2095 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ccdebb72-4477-4c73-a618-5761dc2e9620-clustermesh-secrets\") 
on node \"172.31.28.77\" DevicePath \"\"" Dec 13 14:27:23.201830 kubelet[2095]: I1213 14:27:23.201209 2095 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ccdebb72-4477-4c73-a618-5761dc2e9620-host-proc-sys-kernel\") on node \"172.31.28.77\" DevicePath \"\"" Dec 13 14:27:23.201830 kubelet[2095]: I1213 14:27:23.201227 2095 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-6rnw2\" (UniqueName: \"kubernetes.io/projected/ccdebb72-4477-4c73-a618-5761dc2e9620-kube-api-access-6rnw2\") on node \"172.31.28.77\" DevicePath \"\"" Dec 13 14:27:23.201830 kubelet[2095]: I1213 14:27:23.201241 2095 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ccdebb72-4477-4c73-a618-5761dc2e9620-cilium-run\") on node \"172.31.28.77\" DevicePath \"\"" Dec 13 14:27:23.201830 kubelet[2095]: I1213 14:27:23.201255 2095 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ccdebb72-4477-4c73-a618-5761dc2e9620-cilium-config-path\") on node \"172.31.28.77\" DevicePath \"\"" Dec 13 14:27:23.201830 kubelet[2095]: I1213 14:27:23.201267 2095 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ccdebb72-4477-4c73-a618-5761dc2e9620-etc-cni-netd\") on node \"172.31.28.77\" DevicePath \"\"" Dec 13 14:27:23.201830 kubelet[2095]: I1213 14:27:23.201280 2095 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ccdebb72-4477-4c73-a618-5761dc2e9620-bpf-maps\") on node \"172.31.28.77\" DevicePath \"\"" Dec 13 14:27:23.201830 kubelet[2095]: I1213 14:27:23.201292 2095 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ccdebb72-4477-4c73-a618-5761dc2e9620-hostproc\") on node \"172.31.28.77\" DevicePath \"\"" Dec 13 14:27:23.201830 kubelet[2095]: 
I1213 14:27:23.201305 2095 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ccdebb72-4477-4c73-a618-5761dc2e9620-cilium-cgroup\") on node \"172.31.28.77\" DevicePath \"\"" Dec 13 14:27:23.312141 systemd[1]: Removed slice kubepods-burstable-podccdebb72_4477_4c73_a618_5761dc2e9620.slice. Dec 13 14:27:23.312304 systemd[1]: kubepods-burstable-podccdebb72_4477_4c73_a618_5761dc2e9620.slice: Consumed 7.992s CPU time. Dec 13 14:27:23.532930 kubelet[2095]: I1213 14:27:23.532749 2095 scope.go:117] "RemoveContainer" containerID="89832b97db3ef0e531144ca47b26667d3f78280f16be754e8813971dcf953b51" Dec 13 14:27:23.547738 env[1731]: time="2024-12-13T14:27:23.546016545Z" level=info msg="RemoveContainer for \"89832b97db3ef0e531144ca47b26667d3f78280f16be754e8813971dcf953b51\"" Dec 13 14:27:23.556162 env[1731]: time="2024-12-13T14:27:23.555973092Z" level=info msg="RemoveContainer for \"89832b97db3ef0e531144ca47b26667d3f78280f16be754e8813971dcf953b51\" returns successfully" Dec 13 14:27:23.556821 kubelet[2095]: I1213 14:27:23.556784 2095 scope.go:117] "RemoveContainer" containerID="4ead03930261d0bf65091eb9302df104ed0fbeb85d893d60ff775856ff9644bd" Dec 13 14:27:23.567965 env[1731]: time="2024-12-13T14:27:23.563372795Z" level=info msg="RemoveContainer for \"4ead03930261d0bf65091eb9302df104ed0fbeb85d893d60ff775856ff9644bd\"" Dec 13 14:27:23.573636 env[1731]: time="2024-12-13T14:27:23.573583023Z" level=info msg="RemoveContainer for \"4ead03930261d0bf65091eb9302df104ed0fbeb85d893d60ff775856ff9644bd\" returns successfully" Dec 13 14:27:23.574065 kubelet[2095]: I1213 14:27:23.574021 2095 scope.go:117] "RemoveContainer" containerID="825e85d9f33bbae7434f028558cc84564106e599e7ab567a553966a4088f343b" Dec 13 14:27:23.578050 env[1731]: time="2024-12-13T14:27:23.577999853Z" level=info msg="RemoveContainer for \"825e85d9f33bbae7434f028558cc84564106e599e7ab567a553966a4088f343b\"" Dec 13 14:27:23.582202 env[1731]: 
time="2024-12-13T14:27:23.582147653Z" level=info msg="RemoveContainer for \"825e85d9f33bbae7434f028558cc84564106e599e7ab567a553966a4088f343b\" returns successfully" Dec 13 14:27:23.582840 kubelet[2095]: I1213 14:27:23.582747 2095 scope.go:117] "RemoveContainer" containerID="e583f2955f7d2cadcd69a4ce3223947bdc9683a1fea900889fc8fb2a09cd135f" Dec 13 14:27:23.584148 env[1731]: time="2024-12-13T14:27:23.584110578Z" level=info msg="RemoveContainer for \"e583f2955f7d2cadcd69a4ce3223947bdc9683a1fea900889fc8fb2a09cd135f\"" Dec 13 14:27:23.588440 env[1731]: time="2024-12-13T14:27:23.588394110Z" level=info msg="RemoveContainer for \"e583f2955f7d2cadcd69a4ce3223947bdc9683a1fea900889fc8fb2a09cd135f\" returns successfully" Dec 13 14:27:23.588733 kubelet[2095]: I1213 14:27:23.588708 2095 scope.go:117] "RemoveContainer" containerID="93df7fb8078e990efe5a5959848752e7ba0d10d2fcabd6875526e3192e52e90e" Dec 13 14:27:23.590258 env[1731]: time="2024-12-13T14:27:23.590216941Z" level=info msg="RemoveContainer for \"93df7fb8078e990efe5a5959848752e7ba0d10d2fcabd6875526e3192e52e90e\"" Dec 13 14:27:23.594218 env[1731]: time="2024-12-13T14:27:23.594156813Z" level=info msg="RemoveContainer for \"93df7fb8078e990efe5a5959848752e7ba0d10d2fcabd6875526e3192e52e90e\" returns successfully" Dec 13 14:27:23.594715 kubelet[2095]: I1213 14:27:23.594690 2095 scope.go:117] "RemoveContainer" containerID="89832b97db3ef0e531144ca47b26667d3f78280f16be754e8813971dcf953b51" Dec 13 14:27:23.595162 env[1731]: time="2024-12-13T14:27:23.595083256Z" level=error msg="ContainerStatus for \"89832b97db3ef0e531144ca47b26667d3f78280f16be754e8813971dcf953b51\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"89832b97db3ef0e531144ca47b26667d3f78280f16be754e8813971dcf953b51\": not found" Dec 13 14:27:23.595367 kubelet[2095]: E1213 14:27:23.595347 2095 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try 
to find container \"89832b97db3ef0e531144ca47b26667d3f78280f16be754e8813971dcf953b51\": not found" containerID="89832b97db3ef0e531144ca47b26667d3f78280f16be754e8813971dcf953b51" Dec 13 14:27:23.595597 kubelet[2095]: I1213 14:27:23.595581 2095 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"89832b97db3ef0e531144ca47b26667d3f78280f16be754e8813971dcf953b51"} err="failed to get container status \"89832b97db3ef0e531144ca47b26667d3f78280f16be754e8813971dcf953b51\": rpc error: code = NotFound desc = an error occurred when try to find container \"89832b97db3ef0e531144ca47b26667d3f78280f16be754e8813971dcf953b51\": not found" Dec 13 14:27:23.595671 kubelet[2095]: I1213 14:27:23.595607 2095 scope.go:117] "RemoveContainer" containerID="4ead03930261d0bf65091eb9302df104ed0fbeb85d893d60ff775856ff9644bd" Dec 13 14:27:23.596003 env[1731]: time="2024-12-13T14:27:23.595937519Z" level=error msg="ContainerStatus for \"4ead03930261d0bf65091eb9302df104ed0fbeb85d893d60ff775856ff9644bd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4ead03930261d0bf65091eb9302df104ed0fbeb85d893d60ff775856ff9644bd\": not found" Dec 13 14:27:23.596177 kubelet[2095]: E1213 14:27:23.596152 2095 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4ead03930261d0bf65091eb9302df104ed0fbeb85d893d60ff775856ff9644bd\": not found" containerID="4ead03930261d0bf65091eb9302df104ed0fbeb85d893d60ff775856ff9644bd" Dec 13 14:27:23.596286 kubelet[2095]: I1213 14:27:23.596215 2095 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4ead03930261d0bf65091eb9302df104ed0fbeb85d893d60ff775856ff9644bd"} err="failed to get container status \"4ead03930261d0bf65091eb9302df104ed0fbeb85d893d60ff775856ff9644bd\": rpc error: code = NotFound desc = an error occurred when try to find 
container \"4ead03930261d0bf65091eb9302df104ed0fbeb85d893d60ff775856ff9644bd\": not found" Dec 13 14:27:23.596286 kubelet[2095]: I1213 14:27:23.596233 2095 scope.go:117] "RemoveContainer" containerID="825e85d9f33bbae7434f028558cc84564106e599e7ab567a553966a4088f343b" Dec 13 14:27:23.596687 env[1731]: time="2024-12-13T14:27:23.596548773Z" level=error msg="ContainerStatus for \"825e85d9f33bbae7434f028558cc84564106e599e7ab567a553966a4088f343b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"825e85d9f33bbae7434f028558cc84564106e599e7ab567a553966a4088f343b\": not found" Dec 13 14:27:23.596806 kubelet[2095]: E1213 14:27:23.596789 2095 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"825e85d9f33bbae7434f028558cc84564106e599e7ab567a553966a4088f343b\": not found" containerID="825e85d9f33bbae7434f028558cc84564106e599e7ab567a553966a4088f343b" Dec 13 14:27:23.596880 kubelet[2095]: I1213 14:27:23.596823 2095 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"825e85d9f33bbae7434f028558cc84564106e599e7ab567a553966a4088f343b"} err="failed to get container status \"825e85d9f33bbae7434f028558cc84564106e599e7ab567a553966a4088f343b\": rpc error: code = NotFound desc = an error occurred when try to find container \"825e85d9f33bbae7434f028558cc84564106e599e7ab567a553966a4088f343b\": not found" Dec 13 14:27:23.596880 kubelet[2095]: I1213 14:27:23.596838 2095 scope.go:117] "RemoveContainer" containerID="e583f2955f7d2cadcd69a4ce3223947bdc9683a1fea900889fc8fb2a09cd135f" Dec 13 14:27:23.597256 env[1731]: time="2024-12-13T14:27:23.597205120Z" level=error msg="ContainerStatus for \"e583f2955f7d2cadcd69a4ce3223947bdc9683a1fea900889fc8fb2a09cd135f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"e583f2955f7d2cadcd69a4ce3223947bdc9683a1fea900889fc8fb2a09cd135f\": not found" Dec 13 14:27:23.597416 kubelet[2095]: E1213 14:27:23.597398 2095 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e583f2955f7d2cadcd69a4ce3223947bdc9683a1fea900889fc8fb2a09cd135f\": not found" containerID="e583f2955f7d2cadcd69a4ce3223947bdc9683a1fea900889fc8fb2a09cd135f" Dec 13 14:27:23.597508 kubelet[2095]: I1213 14:27:23.597430 2095 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e583f2955f7d2cadcd69a4ce3223947bdc9683a1fea900889fc8fb2a09cd135f"} err="failed to get container status \"e583f2955f7d2cadcd69a4ce3223947bdc9683a1fea900889fc8fb2a09cd135f\": rpc error: code = NotFound desc = an error occurred when try to find container \"e583f2955f7d2cadcd69a4ce3223947bdc9683a1fea900889fc8fb2a09cd135f\": not found" Dec 13 14:27:23.597508 kubelet[2095]: I1213 14:27:23.597442 2095 scope.go:117] "RemoveContainer" containerID="93df7fb8078e990efe5a5959848752e7ba0d10d2fcabd6875526e3192e52e90e" Dec 13 14:27:23.597791 env[1731]: time="2024-12-13T14:27:23.597740571Z" level=error msg="ContainerStatus for \"93df7fb8078e990efe5a5959848752e7ba0d10d2fcabd6875526e3192e52e90e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"93df7fb8078e990efe5a5959848752e7ba0d10d2fcabd6875526e3192e52e90e\": not found" Dec 13 14:27:23.597975 kubelet[2095]: E1213 14:27:23.597957 2095 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"93df7fb8078e990efe5a5959848752e7ba0d10d2fcabd6875526e3192e52e90e\": not found" containerID="93df7fb8078e990efe5a5959848752e7ba0d10d2fcabd6875526e3192e52e90e" Dec 13 14:27:23.598052 kubelet[2095]: I1213 14:27:23.597989 2095 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"93df7fb8078e990efe5a5959848752e7ba0d10d2fcabd6875526e3192e52e90e"} err="failed to get container status \"93df7fb8078e990efe5a5959848752e7ba0d10d2fcabd6875526e3192e52e90e\": rpc error: code = NotFound desc = an error occurred when try to find container \"93df7fb8078e990efe5a5959848752e7ba0d10d2fcabd6875526e3192e52e90e\": not found" Dec 13 14:27:23.671534 systemd[1]: var-lib-kubelet-pods-ccdebb72\x2d4477\x2d4c73\x2da618\x2d5761dc2e9620-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6rnw2.mount: Deactivated successfully. Dec 13 14:27:23.671681 systemd[1]: var-lib-kubelet-pods-ccdebb72\x2d4477\x2d4c73\x2da618\x2d5761dc2e9620-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 14:27:24.078279 kubelet[2095]: E1213 14:27:24.078224 2095 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:27:25.079149 kubelet[2095]: E1213 14:27:25.079106 2095 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:27:25.307179 kubelet[2095]: I1213 14:27:25.307138 2095 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="ccdebb72-4477-4c73-a618-5761dc2e9620" path="/var/lib/kubelet/pods/ccdebb72-4477-4c73-a618-5761dc2e9620/volumes" Dec 13 14:27:25.828441 kubelet[2095]: I1213 14:27:25.828405 2095 topology_manager.go:215] "Topology Admit Handler" podUID="bfe2ee49-6fbf-4f01-81c5-e6520a41fe62" podNamespace="kube-system" podName="cilium-operator-5cc964979-2z95v" Dec 13 14:27:25.828643 kubelet[2095]: E1213 14:27:25.828464 2095 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ccdebb72-4477-4c73-a618-5761dc2e9620" containerName="apply-sysctl-overwrites" Dec 13 14:27:25.828643 kubelet[2095]: E1213 14:27:25.828478 2095 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ccdebb72-4477-4c73-a618-5761dc2e9620" 
containerName="mount-bpf-fs" Dec 13 14:27:25.828643 kubelet[2095]: E1213 14:27:25.828487 2095 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ccdebb72-4477-4c73-a618-5761dc2e9620" containerName="mount-cgroup" Dec 13 14:27:25.828643 kubelet[2095]: E1213 14:27:25.828499 2095 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ccdebb72-4477-4c73-a618-5761dc2e9620" containerName="clean-cilium-state" Dec 13 14:27:25.828643 kubelet[2095]: E1213 14:27:25.828508 2095 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ccdebb72-4477-4c73-a618-5761dc2e9620" containerName="cilium-agent" Dec 13 14:27:25.828643 kubelet[2095]: I1213 14:27:25.828532 2095 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccdebb72-4477-4c73-a618-5761dc2e9620" containerName="cilium-agent" Dec 13 14:27:25.837938 systemd[1]: Created slice kubepods-besteffort-podbfe2ee49_6fbf_4f01_81c5_e6520a41fe62.slice. Dec 13 14:27:25.854150 kubelet[2095]: I1213 14:27:25.854105 2095 topology_manager.go:215] "Topology Admit Handler" podUID="da378891-8de6-4882-9052-2655e0998d64" podNamespace="kube-system" podName="cilium-7cxr9" Dec 13 14:27:25.860260 systemd[1]: Created slice kubepods-burstable-podda378891_8de6_4882_9052_2655e0998d64.slice. 
Dec 13 14:27:25.919111 kubelet[2095]: I1213 14:27:25.919063 2095 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hp9t7\" (UniqueName: \"kubernetes.io/projected/bfe2ee49-6fbf-4f01-81c5-e6520a41fe62-kube-api-access-hp9t7\") pod \"cilium-operator-5cc964979-2z95v\" (UID: \"bfe2ee49-6fbf-4f01-81c5-e6520a41fe62\") " pod="kube-system/cilium-operator-5cc964979-2z95v" Dec 13 14:27:25.919111 kubelet[2095]: I1213 14:27:25.919117 2095 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bfe2ee49-6fbf-4f01-81c5-e6520a41fe62-cilium-config-path\") pod \"cilium-operator-5cc964979-2z95v\" (UID: \"bfe2ee49-6fbf-4f01-81c5-e6520a41fe62\") " pod="kube-system/cilium-operator-5cc964979-2z95v" Dec 13 14:27:26.019420 kubelet[2095]: I1213 14:27:26.019365 2095 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/da378891-8de6-4882-9052-2655e0998d64-cilium-ipsec-secrets\") pod \"cilium-7cxr9\" (UID: \"da378891-8de6-4882-9052-2655e0998d64\") " pod="kube-system/cilium-7cxr9" Dec 13 14:27:26.019420 kubelet[2095]: I1213 14:27:26.019421 2095 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/da378891-8de6-4882-9052-2655e0998d64-cilium-config-path\") pod \"cilium-7cxr9\" (UID: \"da378891-8de6-4882-9052-2655e0998d64\") " pod="kube-system/cilium-7cxr9" Dec 13 14:27:26.019653 kubelet[2095]: I1213 14:27:26.019451 2095 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sk6h7\" (UniqueName: \"kubernetes.io/projected/da378891-8de6-4882-9052-2655e0998d64-kube-api-access-sk6h7\") pod \"cilium-7cxr9\" (UID: \"da378891-8de6-4882-9052-2655e0998d64\") " 
pod="kube-system/cilium-7cxr9" Dec 13 14:27:26.019653 kubelet[2095]: I1213 14:27:26.019477 2095 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/da378891-8de6-4882-9052-2655e0998d64-cilium-run\") pod \"cilium-7cxr9\" (UID: \"da378891-8de6-4882-9052-2655e0998d64\") " pod="kube-system/cilium-7cxr9" Dec 13 14:27:26.019653 kubelet[2095]: I1213 14:27:26.019514 2095 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/da378891-8de6-4882-9052-2655e0998d64-cni-path\") pod \"cilium-7cxr9\" (UID: \"da378891-8de6-4882-9052-2655e0998d64\") " pod="kube-system/cilium-7cxr9" Dec 13 14:27:26.019653 kubelet[2095]: I1213 14:27:26.019537 2095 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/da378891-8de6-4882-9052-2655e0998d64-bpf-maps\") pod \"cilium-7cxr9\" (UID: \"da378891-8de6-4882-9052-2655e0998d64\") " pod="kube-system/cilium-7cxr9" Dec 13 14:27:26.019653 kubelet[2095]: I1213 14:27:26.019563 2095 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/da378891-8de6-4882-9052-2655e0998d64-cilium-cgroup\") pod \"cilium-7cxr9\" (UID: \"da378891-8de6-4882-9052-2655e0998d64\") " pod="kube-system/cilium-7cxr9" Dec 13 14:27:26.019653 kubelet[2095]: I1213 14:27:26.019588 2095 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/da378891-8de6-4882-9052-2655e0998d64-etc-cni-netd\") pod \"cilium-7cxr9\" (UID: \"da378891-8de6-4882-9052-2655e0998d64\") " pod="kube-system/cilium-7cxr9" Dec 13 14:27:26.019653 kubelet[2095]: I1213 14:27:26.019614 2095 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/da378891-8de6-4882-9052-2655e0998d64-xtables-lock\") pod \"cilium-7cxr9\" (UID: \"da378891-8de6-4882-9052-2655e0998d64\") " pod="kube-system/cilium-7cxr9" Dec 13 14:27:26.019653 kubelet[2095]: I1213 14:27:26.019645 2095 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/da378891-8de6-4882-9052-2655e0998d64-hubble-tls\") pod \"cilium-7cxr9\" (UID: \"da378891-8de6-4882-9052-2655e0998d64\") " pod="kube-system/cilium-7cxr9" Dec 13 14:27:26.019984 kubelet[2095]: I1213 14:27:26.019697 2095 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/da378891-8de6-4882-9052-2655e0998d64-clustermesh-secrets\") pod \"cilium-7cxr9\" (UID: \"da378891-8de6-4882-9052-2655e0998d64\") " pod="kube-system/cilium-7cxr9" Dec 13 14:27:26.019984 kubelet[2095]: I1213 14:27:26.019748 2095 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/da378891-8de6-4882-9052-2655e0998d64-host-proc-sys-net\") pod \"cilium-7cxr9\" (UID: \"da378891-8de6-4882-9052-2655e0998d64\") " pod="kube-system/cilium-7cxr9" Dec 13 14:27:26.019984 kubelet[2095]: I1213 14:27:26.019779 2095 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/da378891-8de6-4882-9052-2655e0998d64-host-proc-sys-kernel\") pod \"cilium-7cxr9\" (UID: \"da378891-8de6-4882-9052-2655e0998d64\") " pod="kube-system/cilium-7cxr9" Dec 13 14:27:26.019984 kubelet[2095]: I1213 14:27:26.019808 2095 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/da378891-8de6-4882-9052-2655e0998d64-hostproc\") pod \"cilium-7cxr9\" (UID: \"da378891-8de6-4882-9052-2655e0998d64\") " pod="kube-system/cilium-7cxr9" Dec 13 14:27:26.019984 kubelet[2095]: I1213 14:27:26.019837 2095 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/da378891-8de6-4882-9052-2655e0998d64-lib-modules\") pod \"cilium-7cxr9\" (UID: \"da378891-8de6-4882-9052-2655e0998d64\") " pod="kube-system/cilium-7cxr9" Dec 13 14:27:26.079664 kubelet[2095]: E1213 14:27:26.079536 2095 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:27:26.144702 env[1731]: time="2024-12-13T14:27:26.144654898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-2z95v,Uid:bfe2ee49-6fbf-4f01-81c5-e6520a41fe62,Namespace:kube-system,Attempt:0,}" Dec 13 14:27:26.205748 kubelet[2095]: E1213 14:27:26.205723 2095 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 14:27:26.219938 env[1731]: time="2024-12-13T14:27:26.219841955Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:27:26.219938 env[1731]: time="2024-12-13T14:27:26.219891314Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:27:26.220405 env[1731]: time="2024-12-13T14:27:26.219907564Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:27:26.220405 env[1731]: time="2024-12-13T14:27:26.220147253Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c85cefc4d08ec77458deda0672ef44080e5c4d196a3f8f404e3460e768019784 pid=3849 runtime=io.containerd.runc.v2 Dec 13 14:27:26.242399 systemd[1]: Started cri-containerd-c85cefc4d08ec77458deda0672ef44080e5c4d196a3f8f404e3460e768019784.scope. Dec 13 14:27:26.317451 env[1731]: time="2024-12-13T14:27:26.317405917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-2z95v,Uid:bfe2ee49-6fbf-4f01-81c5-e6520a41fe62,Namespace:kube-system,Attempt:0,} returns sandbox id \"c85cefc4d08ec77458deda0672ef44080e5c4d196a3f8f404e3460e768019784\"" Dec 13 14:27:26.320175 env[1731]: time="2024-12-13T14:27:26.320135908Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 14:27:26.335026 env[1731]: time="2024-12-13T14:27:26.334931588Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7cxr9,Uid:da378891-8de6-4882-9052-2655e0998d64,Namespace:kube-system,Attempt:0,}" Dec 13 14:27:26.362089 env[1731]: time="2024-12-13T14:27:26.362001406Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:27:26.362089 env[1731]: time="2024-12-13T14:27:26.362049598Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:27:26.362374 env[1731]: time="2024-12-13T14:27:26.362065234Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:27:26.362374 env[1731]: time="2024-12-13T14:27:26.362308464Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/59baa18b6107e0474d0040b726773b9f7692d7f604ebf30e655cd92b62afa009 pid=3888 runtime=io.containerd.runc.v2 Dec 13 14:27:26.391204 systemd[1]: Started cri-containerd-59baa18b6107e0474d0040b726773b9f7692d7f604ebf30e655cd92b62afa009.scope. Dec 13 14:27:26.440496 env[1731]: time="2024-12-13T14:27:26.440445442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7cxr9,Uid:da378891-8de6-4882-9052-2655e0998d64,Namespace:kube-system,Attempt:0,} returns sandbox id \"59baa18b6107e0474d0040b726773b9f7692d7f604ebf30e655cd92b62afa009\"" Dec 13 14:27:26.458400 env[1731]: time="2024-12-13T14:27:26.458356873Z" level=info msg="CreateContainer within sandbox \"59baa18b6107e0474d0040b726773b9f7692d7f604ebf30e655cd92b62afa009\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 14:27:26.479764 env[1731]: time="2024-12-13T14:27:26.479709907Z" level=info msg="CreateContainer within sandbox \"59baa18b6107e0474d0040b726773b9f7692d7f604ebf30e655cd92b62afa009\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8ac4c48ff98ed399dfb14c95b00ed30a62cb52fd32f3d5256836abde9baafa19\"" Dec 13 14:27:26.481049 env[1731]: time="2024-12-13T14:27:26.481012176Z" level=info msg="StartContainer for \"8ac4c48ff98ed399dfb14c95b00ed30a62cb52fd32f3d5256836abde9baafa19\"" Dec 13 14:27:26.505536 systemd[1]: Started cri-containerd-8ac4c48ff98ed399dfb14c95b00ed30a62cb52fd32f3d5256836abde9baafa19.scope. Dec 13 14:27:26.521818 systemd[1]: cri-containerd-8ac4c48ff98ed399dfb14c95b00ed30a62cb52fd32f3d5256836abde9baafa19.scope: Deactivated successfully. 
Dec 13 14:27:26.554890 env[1731]: time="2024-12-13T14:27:26.554832795Z" level=info msg="shim disconnected" id=8ac4c48ff98ed399dfb14c95b00ed30a62cb52fd32f3d5256836abde9baafa19 Dec 13 14:27:26.554890 env[1731]: time="2024-12-13T14:27:26.554887808Z" level=warning msg="cleaning up after shim disconnected" id=8ac4c48ff98ed399dfb14c95b00ed30a62cb52fd32f3d5256836abde9baafa19 namespace=k8s.io Dec 13 14:27:26.554890 env[1731]: time="2024-12-13T14:27:26.554899278Z" level=info msg="cleaning up dead shim" Dec 13 14:27:26.567771 env[1731]: time="2024-12-13T14:27:26.567482981Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:27:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3948 runtime=io.containerd.runc.v2\ntime=\"2024-12-13T14:27:26Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/8ac4c48ff98ed399dfb14c95b00ed30a62cb52fd32f3d5256836abde9baafa19/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Dec 13 14:27:26.568304 env[1731]: time="2024-12-13T14:27:26.567999324Z" level=error msg="copy shim log" error="read /proc/self/fd/86: file already closed" Dec 13 14:27:26.569514 env[1731]: time="2024-12-13T14:27:26.569098747Z" level=error msg="Failed to pipe stderr of container \"8ac4c48ff98ed399dfb14c95b00ed30a62cb52fd32f3d5256836abde9baafa19\"" error="reading from a closed fifo" Dec 13 14:27:26.574898 env[1731]: time="2024-12-13T14:27:26.574822626Z" level=error msg="Failed to pipe stdout of container \"8ac4c48ff98ed399dfb14c95b00ed30a62cb52fd32f3d5256836abde9baafa19\"" error="reading from a closed fifo" Dec 13 14:27:26.577417 env[1731]: time="2024-12-13T14:27:26.577358283Z" level=error msg="StartContainer for \"8ac4c48ff98ed399dfb14c95b00ed30a62cb52fd32f3d5256836abde9baafa19\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: 
write /proc/self/attr/keycreate: invalid argument: unknown" Dec 13 14:27:26.577938 kubelet[2095]: E1213 14:27:26.577755 2095 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="8ac4c48ff98ed399dfb14c95b00ed30a62cb52fd32f3d5256836abde9baafa19" Dec 13 14:27:26.578070 kubelet[2095]: E1213 14:27:26.578043 2095 kuberuntime_manager.go:1262] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Dec 13 14:27:26.578070 kubelet[2095]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Dec 13 14:27:26.578070 kubelet[2095]: rm /hostbin/cilium-mount Dec 13 14:27:26.578070 kubelet[2095]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-sk6h7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT 
SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-7cxr9_kube-system(da378891-8de6-4882-9052-2655e0998d64): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Dec 13 14:27:26.578320 kubelet[2095]: E1213 14:27:26.578103 2095 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-7cxr9" podUID="da378891-8de6-4882-9052-2655e0998d64" Dec 13 14:27:27.080901 kubelet[2095]: E1213 14:27:27.080570 2095 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:27:27.556072 env[1731]: time="2024-12-13T14:27:27.555785750Z" level=info msg="StopPodSandbox for \"59baa18b6107e0474d0040b726773b9f7692d7f604ebf30e655cd92b62afa009\"" Dec 13 14:27:27.559713 env[1731]: time="2024-12-13T14:27:27.556101490Z" level=info msg="Container to stop \"8ac4c48ff98ed399dfb14c95b00ed30a62cb52fd32f3d5256836abde9baafa19\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:27:27.559236 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-59baa18b6107e0474d0040b726773b9f7692d7f604ebf30e655cd92b62afa009-shm.mount: Deactivated successfully. Dec 13 14:27:27.573282 systemd[1]: cri-containerd-59baa18b6107e0474d0040b726773b9f7692d7f604ebf30e655cd92b62afa009.scope: Deactivated successfully. Dec 13 14:27:27.628652 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-59baa18b6107e0474d0040b726773b9f7692d7f604ebf30e655cd92b62afa009-rootfs.mount: Deactivated successfully. Dec 13 14:27:27.642555 env[1731]: time="2024-12-13T14:27:27.642496616Z" level=info msg="shim disconnected" id=59baa18b6107e0474d0040b726773b9f7692d7f604ebf30e655cd92b62afa009 Dec 13 14:27:27.642983 env[1731]: time="2024-12-13T14:27:27.642561712Z" level=warning msg="cleaning up after shim disconnected" id=59baa18b6107e0474d0040b726773b9f7692d7f604ebf30e655cd92b62afa009 namespace=k8s.io Dec 13 14:27:27.642983 env[1731]: time="2024-12-13T14:27:27.642574577Z" level=info msg="cleaning up dead shim" Dec 13 14:27:27.655218 env[1731]: time="2024-12-13T14:27:27.655062757Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:27:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3980 runtime=io.containerd.runc.v2\n" Dec 13 14:27:27.655607 env[1731]: time="2024-12-13T14:27:27.655570328Z" level=info msg="TearDown network for sandbox \"59baa18b6107e0474d0040b726773b9f7692d7f604ebf30e655cd92b62afa009\" successfully" Dec 13 14:27:27.655607 env[1731]: time="2024-12-13T14:27:27.655601294Z" level=info msg="StopPodSandbox for \"59baa18b6107e0474d0040b726773b9f7692d7f604ebf30e655cd92b62afa009\" returns successfully" Dec 13 14:27:27.837223 kubelet[2095]: I1213 14:27:27.833177 2095 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/da378891-8de6-4882-9052-2655e0998d64-cilium-ipsec-secrets\") pod \"da378891-8de6-4882-9052-2655e0998d64\" (UID: \"da378891-8de6-4882-9052-2655e0998d64\") " Dec 13 
14:27:27.837223 kubelet[2095]: I1213 14:27:27.833246 2095 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/da378891-8de6-4882-9052-2655e0998d64-xtables-lock\") pod \"da378891-8de6-4882-9052-2655e0998d64\" (UID: \"da378891-8de6-4882-9052-2655e0998d64\") " Dec 13 14:27:27.837223 kubelet[2095]: I1213 14:27:27.833281 2095 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/da378891-8de6-4882-9052-2655e0998d64-clustermesh-secrets\") pod \"da378891-8de6-4882-9052-2655e0998d64\" (UID: \"da378891-8de6-4882-9052-2655e0998d64\") " Dec 13 14:27:27.837223 kubelet[2095]: I1213 14:27:27.833306 2095 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/da378891-8de6-4882-9052-2655e0998d64-hostproc\") pod \"da378891-8de6-4882-9052-2655e0998d64\" (UID: \"da378891-8de6-4882-9052-2655e0998d64\") " Dec 13 14:27:27.837223 kubelet[2095]: I1213 14:27:27.833338 2095 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/da378891-8de6-4882-9052-2655e0998d64-host-proc-sys-kernel\") pod \"da378891-8de6-4882-9052-2655e0998d64\" (UID: \"da378891-8de6-4882-9052-2655e0998d64\") " Dec 13 14:27:27.837223 kubelet[2095]: I1213 14:27:27.833367 2095 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/da378891-8de6-4882-9052-2655e0998d64-cni-path\") pod \"da378891-8de6-4882-9052-2655e0998d64\" (UID: \"da378891-8de6-4882-9052-2655e0998d64\") " Dec 13 14:27:27.837223 kubelet[2095]: I1213 14:27:27.833394 2095 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/da378891-8de6-4882-9052-2655e0998d64-cilium-run\") pod 
\"da378891-8de6-4882-9052-2655e0998d64\" (UID: \"da378891-8de6-4882-9052-2655e0998d64\") " Dec 13 14:27:27.837223 kubelet[2095]: I1213 14:27:27.833418 2095 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/da378891-8de6-4882-9052-2655e0998d64-etc-cni-netd\") pod \"da378891-8de6-4882-9052-2655e0998d64\" (UID: \"da378891-8de6-4882-9052-2655e0998d64\") " Dec 13 14:27:27.837223 kubelet[2095]: I1213 14:27:27.833451 2095 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/da378891-8de6-4882-9052-2655e0998d64-cilium-config-path\") pod \"da378891-8de6-4882-9052-2655e0998d64\" (UID: \"da378891-8de6-4882-9052-2655e0998d64\") " Dec 13 14:27:27.837223 kubelet[2095]: I1213 14:27:27.833478 2095 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/da378891-8de6-4882-9052-2655e0998d64-bpf-maps\") pod \"da378891-8de6-4882-9052-2655e0998d64\" (UID: \"da378891-8de6-4882-9052-2655e0998d64\") " Dec 13 14:27:27.837223 kubelet[2095]: I1213 14:27:27.833503 2095 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/da378891-8de6-4882-9052-2655e0998d64-lib-modules\") pod \"da378891-8de6-4882-9052-2655e0998d64\" (UID: \"da378891-8de6-4882-9052-2655e0998d64\") " Dec 13 14:27:27.837223 kubelet[2095]: I1213 14:27:27.833527 2095 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/da378891-8de6-4882-9052-2655e0998d64-cilium-cgroup\") pod \"da378891-8de6-4882-9052-2655e0998d64\" (UID: \"da378891-8de6-4882-9052-2655e0998d64\") " Dec 13 14:27:27.837223 kubelet[2095]: I1213 14:27:27.833555 2095 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" 
(UniqueName: \"kubernetes.io/host-path/da378891-8de6-4882-9052-2655e0998d64-host-proc-sys-net\") pod \"da378891-8de6-4882-9052-2655e0998d64\" (UID: \"da378891-8de6-4882-9052-2655e0998d64\") " Dec 13 14:27:27.837223 kubelet[2095]: I1213 14:27:27.833595 2095 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sk6h7\" (UniqueName: \"kubernetes.io/projected/da378891-8de6-4882-9052-2655e0998d64-kube-api-access-sk6h7\") pod \"da378891-8de6-4882-9052-2655e0998d64\" (UID: \"da378891-8de6-4882-9052-2655e0998d64\") " Dec 13 14:27:27.837223 kubelet[2095]: I1213 14:27:27.833629 2095 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/da378891-8de6-4882-9052-2655e0998d64-hubble-tls\") pod \"da378891-8de6-4882-9052-2655e0998d64\" (UID: \"da378891-8de6-4882-9052-2655e0998d64\") " Dec 13 14:27:27.837223 kubelet[2095]: I1213 14:27:27.833966 2095 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/da378891-8de6-4882-9052-2655e0998d64-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "da378891-8de6-4882-9052-2655e0998d64" (UID: "da378891-8de6-4882-9052-2655e0998d64"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:27:27.838534 kubelet[2095]: I1213 14:27:27.834018 2095 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/da378891-8de6-4882-9052-2655e0998d64-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "da378891-8de6-4882-9052-2655e0998d64" (UID: "da378891-8de6-4882-9052-2655e0998d64"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:27:27.838679 kubelet[2095]: I1213 14:27:27.838646 2095 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/da378891-8de6-4882-9052-2655e0998d64-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "da378891-8de6-4882-9052-2655e0998d64" (UID: "da378891-8de6-4882-9052-2655e0998d64"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:27:27.838804 kubelet[2095]: I1213 14:27:27.838786 2095 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/da378891-8de6-4882-9052-2655e0998d64-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "da378891-8de6-4882-9052-2655e0998d64" (UID: "da378891-8de6-4882-9052-2655e0998d64"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:27:27.838896 kubelet[2095]: I1213 14:27:27.838882 2095 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/da378891-8de6-4882-9052-2655e0998d64-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "da378891-8de6-4882-9052-2655e0998d64" (UID: "da378891-8de6-4882-9052-2655e0998d64"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:27:27.838990 kubelet[2095]: I1213 14:27:27.838975 2095 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/da378891-8de6-4882-9052-2655e0998d64-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "da378891-8de6-4882-9052-2655e0998d64" (UID: "da378891-8de6-4882-9052-2655e0998d64"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:27:27.846973 kubelet[2095]: I1213 14:27:27.846803 2095 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da378891-8de6-4882-9052-2655e0998d64-kube-api-access-sk6h7" (OuterVolumeSpecName: "kube-api-access-sk6h7") pod "da378891-8de6-4882-9052-2655e0998d64" (UID: "da378891-8de6-4882-9052-2655e0998d64"). InnerVolumeSpecName "kube-api-access-sk6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:27:27.847474 systemd[1]: var-lib-kubelet-pods-da378891\x2d8de6\x2d4882\x2d9052\x2d2655e0998d64-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 14:27:27.852074 systemd[1]: var-lib-kubelet-pods-da378891\x2d8de6\x2d4882\x2d9052\x2d2655e0998d64-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsk6h7.mount: Deactivated successfully. Dec 13 14:27:27.856435 kubelet[2095]: I1213 14:27:27.856391 2095 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/da378891-8de6-4882-9052-2655e0998d64-hostproc" (OuterVolumeSpecName: "hostproc") pod "da378891-8de6-4882-9052-2655e0998d64" (UID: "da378891-8de6-4882-9052-2655e0998d64"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:27:27.856558 kubelet[2095]: I1213 14:27:27.856458 2095 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/da378891-8de6-4882-9052-2655e0998d64-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "da378891-8de6-4882-9052-2655e0998d64" (UID: "da378891-8de6-4882-9052-2655e0998d64"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:27:27.856558 kubelet[2095]: I1213 14:27:27.856480 2095 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/da378891-8de6-4882-9052-2655e0998d64-cni-path" (OuterVolumeSpecName: "cni-path") pod "da378891-8de6-4882-9052-2655e0998d64" (UID: "da378891-8de6-4882-9052-2655e0998d64"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:27:27.856558 kubelet[2095]: I1213 14:27:27.856504 2095 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/da378891-8de6-4882-9052-2655e0998d64-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "da378891-8de6-4882-9052-2655e0998d64" (UID: "da378891-8de6-4882-9052-2655e0998d64"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:27:27.856706 kubelet[2095]: I1213 14:27:27.856593 2095 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da378891-8de6-4882-9052-2655e0998d64-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "da378891-8de6-4882-9052-2655e0998d64" (UID: "da378891-8de6-4882-9052-2655e0998d64"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:27:27.856974 kubelet[2095]: I1213 14:27:27.856938 2095 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da378891-8de6-4882-9052-2655e0998d64-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "da378891-8de6-4882-9052-2655e0998d64" (UID: "da378891-8de6-4882-9052-2655e0998d64"). InnerVolumeSpecName "cilium-ipsec-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 14:27:27.858419 kubelet[2095]: I1213 14:27:27.858387 2095 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da378891-8de6-4882-9052-2655e0998d64-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "da378891-8de6-4882-9052-2655e0998d64" (UID: "da378891-8de6-4882-9052-2655e0998d64"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 14:27:27.862546 kubelet[2095]: I1213 14:27:27.862497 2095 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da378891-8de6-4882-9052-2655e0998d64-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "da378891-8de6-4882-9052-2655e0998d64" (UID: "da378891-8de6-4882-9052-2655e0998d64"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 14:27:27.934839 kubelet[2095]: I1213 14:27:27.934791 2095 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/da378891-8de6-4882-9052-2655e0998d64-clustermesh-secrets\") on node \"172.31.28.77\" DevicePath \"\"" Dec 13 14:27:27.934839 kubelet[2095]: I1213 14:27:27.934830 2095 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/da378891-8de6-4882-9052-2655e0998d64-hostproc\") on node \"172.31.28.77\" DevicePath \"\"" Dec 13 14:27:27.934839 kubelet[2095]: I1213 14:27:27.934845 2095 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/da378891-8de6-4882-9052-2655e0998d64-host-proc-sys-kernel\") on node \"172.31.28.77\" DevicePath \"\"" Dec 13 14:27:27.935247 kubelet[2095]: I1213 14:27:27.934858 2095 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/da378891-8de6-4882-9052-2655e0998d64-cni-path\") on node \"172.31.28.77\" DevicePath \"\"" Dec 13 14:27:27.935247 kubelet[2095]: I1213 14:27:27.934872 2095 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/da378891-8de6-4882-9052-2655e0998d64-cilium-run\") on node \"172.31.28.77\" DevicePath \"\"" Dec 13 14:27:27.935247 kubelet[2095]: I1213 14:27:27.934884 2095 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/da378891-8de6-4882-9052-2655e0998d64-etc-cni-netd\") on node \"172.31.28.77\" DevicePath \"\"" Dec 13 14:27:27.935247 kubelet[2095]: I1213 14:27:27.934896 2095 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/da378891-8de6-4882-9052-2655e0998d64-cilium-config-path\") on node \"172.31.28.77\" DevicePath \"\"" Dec 13 14:27:27.935247 kubelet[2095]: I1213 14:27:27.934908 2095 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/da378891-8de6-4882-9052-2655e0998d64-bpf-maps\") on node \"172.31.28.77\" DevicePath \"\"" Dec 13 14:27:27.935247 kubelet[2095]: I1213 14:27:27.934925 2095 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/da378891-8de6-4882-9052-2655e0998d64-lib-modules\") on node \"172.31.28.77\" DevicePath \"\"" Dec 13 14:27:27.935247 kubelet[2095]: I1213 14:27:27.934937 2095 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/da378891-8de6-4882-9052-2655e0998d64-cilium-cgroup\") on node \"172.31.28.77\" DevicePath \"\"" Dec 13 14:27:27.935247 kubelet[2095]: I1213 14:27:27.934949 2095 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/da378891-8de6-4882-9052-2655e0998d64-host-proc-sys-net\") on node \"172.31.28.77\" 
DevicePath \"\"" Dec 13 14:27:27.935247 kubelet[2095]: I1213 14:27:27.934963 2095 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-sk6h7\" (UniqueName: \"kubernetes.io/projected/da378891-8de6-4882-9052-2655e0998d64-kube-api-access-sk6h7\") on node \"172.31.28.77\" DevicePath \"\"" Dec 13 14:27:27.935247 kubelet[2095]: I1213 14:27:27.934976 2095 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/da378891-8de6-4882-9052-2655e0998d64-hubble-tls\") on node \"172.31.28.77\" DevicePath \"\"" Dec 13 14:27:27.935247 kubelet[2095]: I1213 14:27:27.935083 2095 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/da378891-8de6-4882-9052-2655e0998d64-cilium-ipsec-secrets\") on node \"172.31.28.77\" DevicePath \"\"" Dec 13 14:27:27.935247 kubelet[2095]: I1213 14:27:27.935104 2095 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/da378891-8de6-4882-9052-2655e0998d64-xtables-lock\") on node \"172.31.28.77\" DevicePath \"\"" Dec 13 14:27:28.050078 systemd[1]: var-lib-kubelet-pods-da378891\x2d8de6\x2d4882\x2d9052\x2d2655e0998d64-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Dec 13 14:27:28.050494 systemd[1]: var-lib-kubelet-pods-da378891\x2d8de6\x2d4882\x2d9052\x2d2655e0998d64-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Dec 13 14:27:28.081113 kubelet[2095]: E1213 14:27:28.080999 2095 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:27:28.559793 kubelet[2095]: I1213 14:27:28.559760 2095 scope.go:117] "RemoveContainer" containerID="8ac4c48ff98ed399dfb14c95b00ed30a62cb52fd32f3d5256836abde9baafa19" Dec 13 14:27:28.563377 env[1731]: time="2024-12-13T14:27:28.563331229Z" level=info msg="RemoveContainer for \"8ac4c48ff98ed399dfb14c95b00ed30a62cb52fd32f3d5256836abde9baafa19\"" Dec 13 14:27:28.573293 env[1731]: time="2024-12-13T14:27:28.573236202Z" level=info msg="RemoveContainer for \"8ac4c48ff98ed399dfb14c95b00ed30a62cb52fd32f3d5256836abde9baafa19\" returns successfully" Dec 13 14:27:28.573734 systemd[1]: Removed slice kubepods-burstable-podda378891_8de6_4882_9052_2655e0998d64.slice. Dec 13 14:27:28.698747 kubelet[2095]: I1213 14:27:28.698710 2095 topology_manager.go:215] "Topology Admit Handler" podUID="8f37f9f2-7402-4205-8c9c-7127304d979f" podNamespace="kube-system" podName="cilium-ntk74" Dec 13 14:27:28.699040 kubelet[2095]: E1213 14:27:28.698770 2095 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="da378891-8de6-4882-9052-2655e0998d64" containerName="mount-cgroup" Dec 13 14:27:28.699040 kubelet[2095]: I1213 14:27:28.698802 2095 memory_manager.go:354] "RemoveStaleState removing state" podUID="da378891-8de6-4882-9052-2655e0998d64" containerName="mount-cgroup" Dec 13 14:27:28.708045 systemd[1]: Created slice kubepods-burstable-pod8f37f9f2_7402_4205_8c9c_7127304d979f.slice. 
Dec 13 14:27:28.843130 kubelet[2095]: I1213 14:27:28.842531 2095 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8f37f9f2-7402-4205-8c9c-7127304d979f-cni-path\") pod \"cilium-ntk74\" (UID: \"8f37f9f2-7402-4205-8c9c-7127304d979f\") " pod="kube-system/cilium-ntk74" Dec 13 14:27:28.843130 kubelet[2095]: I1213 14:27:28.842588 2095 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8f37f9f2-7402-4205-8c9c-7127304d979f-hubble-tls\") pod \"cilium-ntk74\" (UID: \"8f37f9f2-7402-4205-8c9c-7127304d979f\") " pod="kube-system/cilium-ntk74" Dec 13 14:27:28.843130 kubelet[2095]: I1213 14:27:28.842719 2095 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghz9m\" (UniqueName: \"kubernetes.io/projected/8f37f9f2-7402-4205-8c9c-7127304d979f-kube-api-access-ghz9m\") pod \"cilium-ntk74\" (UID: \"8f37f9f2-7402-4205-8c9c-7127304d979f\") " pod="kube-system/cilium-ntk74" Dec 13 14:27:28.843130 kubelet[2095]: I1213 14:27:28.842754 2095 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8f37f9f2-7402-4205-8c9c-7127304d979f-bpf-maps\") pod \"cilium-ntk74\" (UID: \"8f37f9f2-7402-4205-8c9c-7127304d979f\") " pod="kube-system/cilium-ntk74" Dec 13 14:27:28.843130 kubelet[2095]: I1213 14:27:28.842780 2095 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8f37f9f2-7402-4205-8c9c-7127304d979f-cilium-cgroup\") pod \"cilium-ntk74\" (UID: \"8f37f9f2-7402-4205-8c9c-7127304d979f\") " pod="kube-system/cilium-ntk74" Dec 13 14:27:28.843130 kubelet[2095]: I1213 14:27:28.842807 2095 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8f37f9f2-7402-4205-8c9c-7127304d979f-lib-modules\") pod \"cilium-ntk74\" (UID: \"8f37f9f2-7402-4205-8c9c-7127304d979f\") " pod="kube-system/cilium-ntk74" Dec 13 14:27:28.843130 kubelet[2095]: I1213 14:27:28.842835 2095 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8f37f9f2-7402-4205-8c9c-7127304d979f-cilium-run\") pod \"cilium-ntk74\" (UID: \"8f37f9f2-7402-4205-8c9c-7127304d979f\") " pod="kube-system/cilium-ntk74" Dec 13 14:27:28.843130 kubelet[2095]: I1213 14:27:28.842865 2095 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8f37f9f2-7402-4205-8c9c-7127304d979f-xtables-lock\") pod \"cilium-ntk74\" (UID: \"8f37f9f2-7402-4205-8c9c-7127304d979f\") " pod="kube-system/cilium-ntk74" Dec 13 14:27:28.843130 kubelet[2095]: I1213 14:27:28.842894 2095 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8f37f9f2-7402-4205-8c9c-7127304d979f-cilium-config-path\") pod \"cilium-ntk74\" (UID: \"8f37f9f2-7402-4205-8c9c-7127304d979f\") " pod="kube-system/cilium-ntk74" Dec 13 14:27:28.843130 kubelet[2095]: I1213 14:27:28.842930 2095 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8f37f9f2-7402-4205-8c9c-7127304d979f-cilium-ipsec-secrets\") pod \"cilium-ntk74\" (UID: \"8f37f9f2-7402-4205-8c9c-7127304d979f\") " pod="kube-system/cilium-ntk74" Dec 13 14:27:28.843130 kubelet[2095]: I1213 14:27:28.842958 2095 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/8f37f9f2-7402-4205-8c9c-7127304d979f-host-proc-sys-net\") pod \"cilium-ntk74\" (UID: \"8f37f9f2-7402-4205-8c9c-7127304d979f\") " pod="kube-system/cilium-ntk74" Dec 13 14:27:28.844038 kubelet[2095]: I1213 14:27:28.843170 2095 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8f37f9f2-7402-4205-8c9c-7127304d979f-host-proc-sys-kernel\") pod \"cilium-ntk74\" (UID: \"8f37f9f2-7402-4205-8c9c-7127304d979f\") " pod="kube-system/cilium-ntk74" Dec 13 14:27:28.844038 kubelet[2095]: I1213 14:27:28.843292 2095 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8f37f9f2-7402-4205-8c9c-7127304d979f-clustermesh-secrets\") pod \"cilium-ntk74\" (UID: \"8f37f9f2-7402-4205-8c9c-7127304d979f\") " pod="kube-system/cilium-ntk74" Dec 13 14:27:28.844038 kubelet[2095]: I1213 14:27:28.843325 2095 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8f37f9f2-7402-4205-8c9c-7127304d979f-hostproc\") pod \"cilium-ntk74\" (UID: \"8f37f9f2-7402-4205-8c9c-7127304d979f\") " pod="kube-system/cilium-ntk74" Dec 13 14:27:28.844038 kubelet[2095]: I1213 14:27:28.843354 2095 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8f37f9f2-7402-4205-8c9c-7127304d979f-etc-cni-netd\") pod \"cilium-ntk74\" (UID: \"8f37f9f2-7402-4205-8c9c-7127304d979f\") " pod="kube-system/cilium-ntk74" Dec 13 14:27:29.016156 env[1731]: time="2024-12-13T14:27:29.016044127Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ntk74,Uid:8f37f9f2-7402-4205-8c9c-7127304d979f,Namespace:kube-system,Attempt:0,}" Dec 13 14:27:29.037114 env[1731]: time="2024-12-13T14:27:29.037036714Z" level=info 
msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:27:29.037114 env[1731]: time="2024-12-13T14:27:29.037079119Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:27:29.037114 env[1731]: time="2024-12-13T14:27:29.037094948Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:27:29.037667 env[1731]: time="2024-12-13T14:27:29.037544417Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f0ca832a88b22acaf8c7101249ad5bd8755356ab2e56ebfc292e97dd35f67b3a pid=4009 runtime=io.containerd.runc.v2 Dec 13 14:27:29.079806 systemd[1]: run-containerd-runc-k8s.io-f0ca832a88b22acaf8c7101249ad5bd8755356ab2e56ebfc292e97dd35f67b3a-runc.ZPfFXf.mount: Deactivated successfully. Dec 13 14:27:29.082824 kubelet[2095]: E1213 14:27:29.082421 2095 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:27:29.087042 systemd[1]: Started cri-containerd-f0ca832a88b22acaf8c7101249ad5bd8755356ab2e56ebfc292e97dd35f67b3a.scope. Dec 13 14:27:29.125641 env[1731]: time="2024-12-13T14:27:29.125403801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ntk74,Uid:8f37f9f2-7402-4205-8c9c-7127304d979f,Namespace:kube-system,Attempt:0,} returns sandbox id \"f0ca832a88b22acaf8c7101249ad5bd8755356ab2e56ebfc292e97dd35f67b3a\"" Dec 13 14:27:29.131471 env[1731]: time="2024-12-13T14:27:29.131423580Z" level=info msg="CreateContainer within sandbox \"f0ca832a88b22acaf8c7101249ad5bd8755356ab2e56ebfc292e97dd35f67b3a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 14:27:29.149722 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2020176194.mount: Deactivated successfully. 
Dec 13 14:27:29.161672 env[1731]: time="2024-12-13T14:27:29.161558964Z" level=info msg="CreateContainer within sandbox \"f0ca832a88b22acaf8c7101249ad5bd8755356ab2e56ebfc292e97dd35f67b3a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4048c17f9b231e1e6f20b60fe06698208ec4c53a9ae9cc8d7f8bdceeb9887ad4\""
Dec 13 14:27:29.162965 env[1731]: time="2024-12-13T14:27:29.162939601Z" level=info msg="StartContainer for \"4048c17f9b231e1e6f20b60fe06698208ec4c53a9ae9cc8d7f8bdceeb9887ad4\""
Dec 13 14:27:29.195246 systemd[1]: Started cri-containerd-4048c17f9b231e1e6f20b60fe06698208ec4c53a9ae9cc8d7f8bdceeb9887ad4.scope.
Dec 13 14:27:29.247467 env[1731]: time="2024-12-13T14:27:29.247343735Z" level=info msg="StartContainer for \"4048c17f9b231e1e6f20b60fe06698208ec4c53a9ae9cc8d7f8bdceeb9887ad4\" returns successfully"
Dec 13 14:27:29.274319 systemd[1]: cri-containerd-4048c17f9b231e1e6f20b60fe06698208ec4c53a9ae9cc8d7f8bdceeb9887ad4.scope: Deactivated successfully.
Dec 13 14:27:29.310423 kubelet[2095]: I1213 14:27:29.310157 2095 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="da378891-8de6-4882-9052-2655e0998d64" path="/var/lib/kubelet/pods/da378891-8de6-4882-9052-2655e0998d64/volumes"
Dec 13 14:27:29.323413 env[1731]: time="2024-12-13T14:27:29.323142312Z" level=info msg="shim disconnected" id=4048c17f9b231e1e6f20b60fe06698208ec4c53a9ae9cc8d7f8bdceeb9887ad4
Dec 13 14:27:29.323413 env[1731]: time="2024-12-13T14:27:29.323408029Z" level=warning msg="cleaning up after shim disconnected" id=4048c17f9b231e1e6f20b60fe06698208ec4c53a9ae9cc8d7f8bdceeb9887ad4 namespace=k8s.io
Dec 13 14:27:29.323792 env[1731]: time="2024-12-13T14:27:29.323435106Z" level=info msg="cleaning up dead shim"
Dec 13 14:27:29.334848 env[1731]: time="2024-12-13T14:27:29.334790312Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:27:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4097 runtime=io.containerd.runc.v2\n"
Dec 13 14:27:29.571969 env[1731]: time="2024-12-13T14:27:29.571915601Z" level=info msg="CreateContainer within sandbox \"f0ca832a88b22acaf8c7101249ad5bd8755356ab2e56ebfc292e97dd35f67b3a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 13 14:27:29.592969 env[1731]: time="2024-12-13T14:27:29.592888592Z" level=info msg="CreateContainer within sandbox \"f0ca832a88b22acaf8c7101249ad5bd8755356ab2e56ebfc292e97dd35f67b3a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ba5267554ebf9fbac5d25badb43fd6cdba9a533e7bfa734d9093febd7f3b58f9\""
Dec 13 14:27:29.594025 env[1731]: time="2024-12-13T14:27:29.593981706Z" level=info msg="StartContainer for \"ba5267554ebf9fbac5d25badb43fd6cdba9a533e7bfa734d9093febd7f3b58f9\""
Dec 13 14:27:29.638626 systemd[1]: Started cri-containerd-ba5267554ebf9fbac5d25badb43fd6cdba9a533e7bfa734d9093febd7f3b58f9.scope.
Dec 13 14:27:29.673337 kubelet[2095]: W1213 14:27:29.673286 2095 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podda378891_8de6_4882_9052_2655e0998d64.slice/cri-containerd-8ac4c48ff98ed399dfb14c95b00ed30a62cb52fd32f3d5256836abde9baafa19.scope WatchSource:0}: container "8ac4c48ff98ed399dfb14c95b00ed30a62cb52fd32f3d5256836abde9baafa19" in namespace "k8s.io": not found
Dec 13 14:27:29.712029 env[1731]: time="2024-12-13T14:27:29.711982554Z" level=info msg="StartContainer for \"ba5267554ebf9fbac5d25badb43fd6cdba9a533e7bfa734d9093febd7f3b58f9\" returns successfully"
Dec 13 14:27:29.729094 systemd[1]: cri-containerd-ba5267554ebf9fbac5d25badb43fd6cdba9a533e7bfa734d9093febd7f3b58f9.scope: Deactivated successfully.
Dec 13 14:27:29.781074 env[1731]: time="2024-12-13T14:27:29.781016120Z" level=info msg="shim disconnected" id=ba5267554ebf9fbac5d25badb43fd6cdba9a533e7bfa734d9093febd7f3b58f9
Dec 13 14:27:29.781074 env[1731]: time="2024-12-13T14:27:29.781075541Z" level=warning msg="cleaning up after shim disconnected" id=ba5267554ebf9fbac5d25badb43fd6cdba9a533e7bfa734d9093febd7f3b58f9 namespace=k8s.io
Dec 13 14:27:29.781474 env[1731]: time="2024-12-13T14:27:29.781088482Z" level=info msg="cleaning up dead shim"
Dec 13 14:27:29.794066 env[1731]: time="2024-12-13T14:27:29.794018358Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:27:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4158 runtime=io.containerd.runc.v2\n"
Dec 13 14:27:30.082962 kubelet[2095]: E1213 14:27:30.082887 2095 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:27:30.578148 env[1731]: time="2024-12-13T14:27:30.578103858Z" level=info msg="CreateContainer within sandbox \"f0ca832a88b22acaf8c7101249ad5bd8755356ab2e56ebfc292e97dd35f67b3a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 13 14:27:30.606768 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount766313844.mount: Deactivated successfully.
Dec 13 14:27:30.620277 env[1731]: time="2024-12-13T14:27:30.620217758Z" level=info msg="CreateContainer within sandbox \"f0ca832a88b22acaf8c7101249ad5bd8755356ab2e56ebfc292e97dd35f67b3a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4727bc71a10d1c08b603682436b2fb9e55c54d86d89023f8469362d3dcb3777f\""
Dec 13 14:27:30.620903 env[1731]: time="2024-12-13T14:27:30.620825398Z" level=info msg="StartContainer for \"4727bc71a10d1c08b603682436b2fb9e55c54d86d89023f8469362d3dcb3777f\""
Dec 13 14:27:30.645636 systemd[1]: Started cri-containerd-4727bc71a10d1c08b603682436b2fb9e55c54d86d89023f8469362d3dcb3777f.scope.
Dec 13 14:27:30.697320 env[1731]: time="2024-12-13T14:27:30.697257359Z" level=info msg="StartContainer for \"4727bc71a10d1c08b603682436b2fb9e55c54d86d89023f8469362d3dcb3777f\" returns successfully"
Dec 13 14:27:30.702438 systemd[1]: cri-containerd-4727bc71a10d1c08b603682436b2fb9e55c54d86d89023f8469362d3dcb3777f.scope: Deactivated successfully.
Dec 13 14:27:30.759207 env[1731]: time="2024-12-13T14:27:30.759142166Z" level=info msg="shim disconnected" id=4727bc71a10d1c08b603682436b2fb9e55c54d86d89023f8469362d3dcb3777f
Dec 13 14:27:30.759532 env[1731]: time="2024-12-13T14:27:30.759360835Z" level=warning msg="cleaning up after shim disconnected" id=4727bc71a10d1c08b603682436b2fb9e55c54d86d89023f8469362d3dcb3777f namespace=k8s.io
Dec 13 14:27:30.759532 env[1731]: time="2024-12-13T14:27:30.759380198Z" level=info msg="cleaning up dead shim"
Dec 13 14:27:30.772568 env[1731]: time="2024-12-13T14:27:30.772466472Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:27:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4217 runtime=io.containerd.runc.v2\n"
Dec 13 14:27:31.044305 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4727bc71a10d1c08b603682436b2fb9e55c54d86d89023f8469362d3dcb3777f-rootfs.mount: Deactivated successfully.
Dec 13 14:27:31.084056 kubelet[2095]: E1213 14:27:31.084003 2095 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:27:31.207824 kubelet[2095]: E1213 14:27:31.207792 2095 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 13 14:27:31.589350 env[1731]: time="2024-12-13T14:27:31.589300887Z" level=info msg="CreateContainer within sandbox \"f0ca832a88b22acaf8c7101249ad5bd8755356ab2e56ebfc292e97dd35f67b3a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 14:27:31.628510 env[1731]: time="2024-12-13T14:27:31.628454445Z" level=info msg="CreateContainer within sandbox \"f0ca832a88b22acaf8c7101249ad5bd8755356ab2e56ebfc292e97dd35f67b3a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"77bcdc5da009b70c2a9c6e421c0c17140f1ae9a8f9c55f4699428e38339a2c6b\""
Dec 13 14:27:31.629164 env[1731]: time="2024-12-13T14:27:31.629077337Z" level=info msg="StartContainer for \"77bcdc5da009b70c2a9c6e421c0c17140f1ae9a8f9c55f4699428e38339a2c6b\""
Dec 13 14:27:31.659393 systemd[1]: Started cri-containerd-77bcdc5da009b70c2a9c6e421c0c17140f1ae9a8f9c55f4699428e38339a2c6b.scope.
Dec 13 14:27:31.703049 systemd[1]: cri-containerd-77bcdc5da009b70c2a9c6e421c0c17140f1ae9a8f9c55f4699428e38339a2c6b.scope: Deactivated successfully.
Dec 13 14:27:31.704809 env[1731]: time="2024-12-13T14:27:31.704703330Z" level=info msg="StartContainer for \"77bcdc5da009b70c2a9c6e421c0c17140f1ae9a8f9c55f4699428e38339a2c6b\" returns successfully"
Dec 13 14:27:31.745564 env[1731]: time="2024-12-13T14:27:31.745511425Z" level=info msg="shim disconnected" id=77bcdc5da009b70c2a9c6e421c0c17140f1ae9a8f9c55f4699428e38339a2c6b
Dec 13 14:27:31.745564 env[1731]: time="2024-12-13T14:27:31.745565469Z" level=warning msg="cleaning up after shim disconnected" id=77bcdc5da009b70c2a9c6e421c0c17140f1ae9a8f9c55f4699428e38339a2c6b namespace=k8s.io
Dec 13 14:27:31.745953 env[1731]: time="2024-12-13T14:27:31.745578410Z" level=info msg="cleaning up dead shim"
Dec 13 14:27:31.757233 env[1731]: time="2024-12-13T14:27:31.757162675Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:27:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4273 runtime=io.containerd.runc.v2\n"
Dec 13 14:27:32.043753 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-77bcdc5da009b70c2a9c6e421c0c17140f1ae9a8f9c55f4699428e38339a2c6b-rootfs.mount: Deactivated successfully.
Dec 13 14:27:32.085176 kubelet[2095]: E1213 14:27:32.085122 2095 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:27:32.593483 env[1731]: time="2024-12-13T14:27:32.593435060Z" level=info msg="CreateContainer within sandbox \"f0ca832a88b22acaf8c7101249ad5bd8755356ab2e56ebfc292e97dd35f67b3a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 14:27:32.629862 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount436457455.mount: Deactivated successfully.
Dec 13 14:27:32.636423 env[1731]: time="2024-12-13T14:27:32.636373214Z" level=info msg="CreateContainer within sandbox \"f0ca832a88b22acaf8c7101249ad5bd8755356ab2e56ebfc292e97dd35f67b3a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2a27d766ee78c020ef52b9127d49992ba63177aeb177125a95dcac089e7e4164\""
Dec 13 14:27:32.637520 env[1731]: time="2024-12-13T14:27:32.637452854Z" level=info msg="StartContainer for \"2a27d766ee78c020ef52b9127d49992ba63177aeb177125a95dcac089e7e4164\""
Dec 13 14:27:32.671506 systemd[1]: Started cri-containerd-2a27d766ee78c020ef52b9127d49992ba63177aeb177125a95dcac089e7e4164.scope.
Dec 13 14:27:32.723507 env[1731]: time="2024-12-13T14:27:32.723452498Z" level=info msg="StartContainer for \"2a27d766ee78c020ef52b9127d49992ba63177aeb177125a95dcac089e7e4164\" returns successfully"
Dec 13 14:27:32.792775 kubelet[2095]: W1213 14:27:32.792727 2095 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f37f9f2_7402_4205_8c9c_7127304d979f.slice/cri-containerd-4048c17f9b231e1e6f20b60fe06698208ec4c53a9ae9cc8d7f8bdceeb9887ad4.scope WatchSource:0}: task 4048c17f9b231e1e6f20b60fe06698208ec4c53a9ae9cc8d7f8bdceeb9887ad4 not found: not found
Dec 13 14:27:32.958938 kubelet[2095]: I1213 14:27:32.957717 2095 setters.go:568] "Node became not ready" node="172.31.28.77" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T14:27:32Z","lastTransitionTime":"2024-12-13T14:27:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Dec 13 14:27:33.085845 kubelet[2095]: E1213 14:27:33.085800 2095 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:27:33.495248 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Dec 13 14:27:33.636230 kubelet[2095]: I1213 14:27:33.636165 2095 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-ntk74" podStartSLOduration=5.63611268 podStartE2EDuration="5.63611268s" podCreationTimestamp="2024-12-13 14:27:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:27:33.635921028 +0000 UTC m=+93.496227252" watchObservedRunningTime="2024-12-13 14:27:33.63611268 +0000 UTC m=+93.496418921"
Dec 13 14:27:34.086515 kubelet[2095]: E1213 14:27:34.086468 2095 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:27:35.087605 kubelet[2095]: E1213 14:27:35.087533 2095 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:27:35.381897 systemd[1]: run-containerd-runc-k8s.io-2a27d766ee78c020ef52b9127d49992ba63177aeb177125a95dcac089e7e4164-runc.3wunwv.mount: Deactivated successfully.
Dec 13 14:27:35.905239 kubelet[2095]: W1213 14:27:35.905018 2095 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f37f9f2_7402_4205_8c9c_7127304d979f.slice/cri-containerd-ba5267554ebf9fbac5d25badb43fd6cdba9a533e7bfa734d9093febd7f3b58f9.scope WatchSource:0}: task ba5267554ebf9fbac5d25badb43fd6cdba9a533e7bfa734d9093febd7f3b58f9 not found: not found
Dec 13 14:27:36.090208 kubelet[2095]: E1213 14:27:36.089947 2095 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:27:37.006909 (udev-worker)[4848]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 14:27:37.006942 (udev-worker)[4849]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 14:27:37.016233 systemd-networkd[1462]: lxc_health: Link UP
Dec 13 14:27:37.024786 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Dec 13 14:27:37.024339 systemd-networkd[1462]: lxc_health: Gained carrier
Dec 13 14:27:37.090945 kubelet[2095]: E1213 14:27:37.090896 2095 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:27:37.676713 systemd[1]: run-containerd-runc-k8s.io-2a27d766ee78c020ef52b9127d49992ba63177aeb177125a95dcac089e7e4164-runc.0GA19P.mount: Deactivated successfully.
Dec 13 14:27:37.788457 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3887076308.mount: Deactivated successfully.
Dec 13 14:27:38.094538 kubelet[2095]: E1213 14:27:38.094485 2095 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:27:38.303438 systemd-networkd[1462]: lxc_health: Gained IPv6LL
Dec 13 14:27:39.034042 kubelet[2095]: W1213 14:27:39.033997 2095 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f37f9f2_7402_4205_8c9c_7127304d979f.slice/cri-containerd-4727bc71a10d1c08b603682436b2fb9e55c54d86d89023f8469362d3dcb3777f.scope WatchSource:0}: task 4727bc71a10d1c08b603682436b2fb9e55c54d86d89023f8469362d3dcb3777f not found: not found
Dec 13 14:27:39.094894 kubelet[2095]: E1213 14:27:39.094815 2095 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:27:39.516472 env[1731]: time="2024-12-13T14:27:39.516339972Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:27:39.519117 env[1731]: time="2024-12-13T14:27:39.519077601Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:27:39.520174 env[1731]: time="2024-12-13T14:27:39.519898805Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Dec 13 14:27:39.520798 env[1731]: time="2024-12-13T14:27:39.520760970Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:27:39.525980 env[1731]: time="2024-12-13T14:27:39.525925820Z" level=info msg="CreateContainer within sandbox \"c85cefc4d08ec77458deda0672ef44080e5c4d196a3f8f404e3460e768019784\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Dec 13 14:27:39.561529 env[1731]: time="2024-12-13T14:27:39.561473766Z" level=info msg="CreateContainer within sandbox \"c85cefc4d08ec77458deda0672ef44080e5c4d196a3f8f404e3460e768019784\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"ddabe6ce9f275ddecc023dcb9f4c08dac25b65f9edf73394d4cd7fd5a84aa457\""
Dec 13 14:27:39.564521 env[1731]: time="2024-12-13T14:27:39.564473962Z" level=info msg="StartContainer for \"ddabe6ce9f275ddecc023dcb9f4c08dac25b65f9edf73394d4cd7fd5a84aa457\""
Dec 13 14:27:39.644509 systemd[1]: Started cri-containerd-ddabe6ce9f275ddecc023dcb9f4c08dac25b65f9edf73394d4cd7fd5a84aa457.scope.
Dec 13 14:27:39.771410 env[1731]: time="2024-12-13T14:27:39.771157630Z" level=info msg="StartContainer for \"ddabe6ce9f275ddecc023dcb9f4c08dac25b65f9edf73394d4cd7fd5a84aa457\" returns successfully"
Dec 13 14:27:40.097703 kubelet[2095]: E1213 14:27:40.097654 2095 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:27:40.544137 systemd[1]: run-containerd-runc-k8s.io-ddabe6ce9f275ddecc023dcb9f4c08dac25b65f9edf73394d4cd7fd5a84aa457-runc.7pPyMK.mount: Deactivated successfully.
Dec 13 14:27:41.005179 kubelet[2095]: E1213 14:27:41.005052 2095 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:27:41.098544 kubelet[2095]: E1213 14:27:41.098503 2095 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:27:42.101611 kubelet[2095]: E1213 14:27:42.101569 2095 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:27:42.157636 kubelet[2095]: W1213 14:27:42.157596 2095 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f37f9f2_7402_4205_8c9c_7127304d979f.slice/cri-containerd-77bcdc5da009b70c2a9c6e421c0c17140f1ae9a8f9c55f4699428e38339a2c6b.scope WatchSource:0}: task 77bcdc5da009b70c2a9c6e421c0c17140f1ae9a8f9c55f4699428e38339a2c6b not found: not found
Dec 13 14:27:42.628323 systemd[1]: run-containerd-runc-k8s.io-2a27d766ee78c020ef52b9127d49992ba63177aeb177125a95dcac089e7e4164-runc.J1dgxQ.mount: Deactivated successfully.
Dec 13 14:27:43.102701 kubelet[2095]: E1213 14:27:43.102627 2095 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:27:44.103573 kubelet[2095]: E1213 14:27:44.103533 2095 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:27:45.104712 kubelet[2095]: E1213 14:27:45.104656 2095 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"